Batch Scheduler Rosetta
User Commands | PBS | Slurm | LSF |
---|---|---|---|
Job Submission | qsub Script_File | sbatch Script_File | bsub < Script_File |
Job Deletion | qdel Job_ID | scancel Job_ID | bkill Job_ID |
Job status (by job) | qstat Job_ID | squeue -j Job_ID | bjobs Job_ID |
Job status (by user) | qstat -u User_Name | squeue -u User_Name | bjobs -u User_Name |
Job hold | qhold Job_ID | scontrol hold Job_ID | bstop Job_ID |
Job release | qrls Job_ID | scontrol release Job_ID | bresume Job_ID |
Queue list | qstat -Q | squeue | bqueues |
Node list | pbsnodes -l | sinfo -N OR scontrol show nodes | bhosts |
Cluster status | qstat -a | sinfo | bqueues |
GUI | xpbsmon | sview | xlsf OR xlsbatch |
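
To make the mapping concrete, here is the same submit / status / delete round trip under each scheduler. This is a minimal sketch: the script name `run.sh` and the job ID `12345` are placeholders, not output from a real system.

```bash
# PBS
qsub run.sh            # prints the new job ID, e.g. "12345.server"
qstat 12345            # status of that one job
qdel 12345             # remove it from the queue

# Slurm
sbatch run.sh          # prints "Submitted batch job 12345"
squeue -j 12345
scancel 12345

# LSF -- note the redirection: bsub reads the script on stdin
bsub < run.sh          # prints "Job <12345> is submitted to queue ..."
bjobs 12345
bkill 12345
```
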
Environment | PBS | Slurm | LSF |
---|---|---|---|
Job ID | $PBS_JOBID | $SLURM_JOBID | $LSB_JOBID |
Submit Directory | $PBS_O_WORKDIR | $SLURM_SUBMIT_DIR | $LSB_SUBCWD |
Submit Host | $PBS_O_HOST | $SLURM_SUBMIT_HOST | $LSB_SUB_HOST |
Node List | $PBS_NODEFILE | $SLURM_JOB_NODELIST | $LSB_HOSTS / $LSB_MCPU_HOSTS |
Job Array Index | $PBS_ARRAYID | $SLURM_ARRAY_TASK_ID | $LSB_JOBINDEX |
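
To show these variables in context, the sketch below is a scheduler-agnostic prologue: it tests which job-ID variable is set and echoes where the job came from. It assumes a POSIX shell; only the block for the scheduler that actually launched the job will run.

```bash
#!/bin/bash
# Hypothetical prologue: detect the scheduler by its job-ID variable.
if [ -n "$PBS_JOBID" ]; then                  # PBS
    cd "$PBS_O_WORKDIR"                       # PBS starts jobs in $HOME
    echo "Job $PBS_JOBID submitted from $PBS_O_HOST; nodes:"
    sort -u "$PBS_NODEFILE"                   # file with one host per core
elif [ -n "$SLURM_JOBID" ]; then              # Slurm
    cd "$SLURM_SUBMIT_DIR"                    # usually already the default
    echo "Job $SLURM_JOBID submitted from $SLURM_SUBMIT_HOST; nodes:"
    echo "$SLURM_JOB_NODELIST"                # compressed form, e.g. n[001-004]
elif [ -n "$LSB_JOBID" ]; then                # LSF
    cd "$LSB_SUBCWD"
    echo "Job $LSB_JOBID submitted from $LSB_SUB_HOST; nodes:"
    echo "$LSB_HOSTS"                         # space-separated, one per slot
fi
```
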
Job Specification | PBS | Slurm | LSF |
---|---|---|---|
Script Directive | #PBS | #SBATCH | #BSUB |
Queue | -q Queue_Name | ARL: -p Queue_Name; AFRL and Navy: -q Queue_Name | -q Queue_Name |
Node Count | -l select=N1:ncpus=N2:mpiprocs=N3 (N1 = node count, N2 = max cores per node, N3 = cores to use per node) | -N min[-max] | -n Core_Count -R "span[ptile=Cores_Per_Node]" (node count = Core_Count / Cores_Per_Node) |
Core Count | -l select=N1:ncpus=N2:mpiprocs=N3 (N1 = node count, N2 = max cores per node, N3 = cores to use per node; core count = N1 x N3) | --ntasks=total_cores_in_run | -n Core_Count |
Wall Clock Limit | -l walltime=hh:mm:ss | -t min OR -t days-hh:mm:ss | -W hh:mm |
Standard Output File | -o File_Name | -o File_Name | -o File_Name |
Standard Error File | -e File_Name | -e File_Name | -e File_Name |
Combine stdout/err | -j oe (both to stdout) OR -j eo (both to stderr) | (use -o without -e) | (use -o without -e) |
Copy Environment | -V | --export=ALL\|NONE\|Variable_List | |
Event Notification | -m [a][b][e] | --mail-type=[BEGIN],[END],[FAIL] | -B or -N |
Email Address | -M Email_Address | --mail-user=Email_Address | -u Email_Address |
Job Name | -N Job_Name | --job-name=Job_Name | -J Job_Name |
Job Restart | -r y\|n | --requeue OR --no-requeue (NOTE: configurable default) | -r |
Working Directory | No option; defaults to home directory | --chdir=/Directory/Path (older Slurm: --workdir) | No option; defaults to submission directory |
Resource Sharing | -l place=scatter:excl | --exclusive OR --shared | -x |
Account to charge | -A Project_ID | --account=Project_ID | -P Project_ID |
Tasks per Node | -l select=N1:ncpus=N2:mpiprocs=N3 (N1 = node count, N2 = max cores per node, N3 = cores to use per node) | --ntasks-per-node=count | |
Job Dependency | -W depend=state:Job_ID[:Job_ID...][,state:Job_ID[:Job_ID...]] | --dependency=state:Job_ID | -w done\|exit\|finish |
Job host preference | | --nodelist=nodes AND/OR --exclude=nodes | -m Node_List (e.g., "inf001" or "inf[001-128]") OR -m node_type (e.g., "inference", "training", or "visualization") |
Job Arrays | -J N-M[:step][%Max_Jobs] | --array=N-M[:step] | -J "Array_Name[N-M[:step]][%Max_Jobs]" (Note: the brackets around the index list are literal) |
Generic Resources | -l other=Resource_Spec | --gres=Resource_Spec | |
Licenses | -l app=number Example: -l abaqus=21 (Note: license resource allocation) | -L app:number Example: -L abaqus:21 | -R "rusage[License_Spec]" (Note: brackets are literal) |
Begin Time | -a [[[YYYY]MM]DD]hhmm[.ss] (Note: no delimiters) | --begin=YYYY-MM-DD[Thh:mm[:ss]] | -b [[YYYY:][MM:]DD:]hh:mm |
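
Pulling the directive rows together: each workload manager parses only its own prefix (#PBS, #SBATCH, #BSUB) and treats the other prefixes as ordinary shell comments, so one script can carry all three directive sets. The sketch below requests the same 2-node, 16-cores-per-node MPI job under each scheduler; the queue name, Project_ID, and mpirun line are illustrative assumptions, not site defaults.

```bash
#!/bin/bash
# One script, three directive sets: 2 nodes x 16 cores, 1-hour limit.

#PBS -N mpi_demo
#PBS -q standard
#PBS -A Project_ID
#PBS -l select=2:ncpus=16:mpiprocs=16
#PBS -l walltime=01:00:00
#PBS -j oe                          # merge stderr into stdout

#SBATCH --job-name=mpi_demo
#SBATCH -p standard                 # -q Queue_Name at AFRL/Navy sites
#SBATCH --account=Project_ID
#SBATCH -N 2
#SBATCH --ntasks=32
#SBATCH -t 01:00:00                 # stdout/stderr merge by default

#BSUB -J mpi_demo
#BSUB -q standard
#BSUB -P Project_ID
#BSUB -n 32
#BSUB -R "span[ptile=16]"
#BSUB -W 01:00
#BSUB -o mpi_demo.%J.out            # -o without -e merges the streams

cd "${PBS_O_WORKDIR:-.}"            # PBS starts in $HOME; the others
                                    # start in the submission directory
mpirun -n 32 ./my_mpi_app           # placeholder MPI launch
```
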