Batch Scheduler Rosetta

| User Commands | PBS | Slurm |
|---|---|---|
| Job Submission | qsub Script_File | sbatch Script_File |
| Job Deletion | qdel Job_ID | scancel Job_ID |
| Job status (by job) | qstat Job_ID | squeue -j Job_ID |
| Job status (by user) | qstat -u User_Name | squeue -u User_Name |
| Job hold | qhold Job_ID | scontrol hold Job_ID |
| Job release | qrls Job_ID | scontrol release Job_ID |
| Queue list | qstat -Q | squeue |
| Node list | pbsnodes -l | sinfo -N OR scontrol show nodes |
| Cluster status | qstat -a | sinfo |
| GUI | xpbsmon | sview |
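
Here is the same submit/monitor/cancel cycle on each scheduler, tying the commands above together. This is a minimal sketch: `run.pbs`, `run.slurm`, and the job IDs are placeholders.

```bash
# PBS: submit, check status, cancel
qsub run.pbs          # prints a job ID such as 12345.server
qstat -u "$USER"      # status of all of your jobs
qdel 12345            # delete the job

# Slurm: the equivalent cycle
sbatch run.slurm      # prints "Submitted batch job 67890"
squeue -u "$USER"     # status of all of your jobs
scancel 67890         # cancel the job
```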

| Environment | PBS | Slurm |
|---|---|---|
| Job ID | $PBS_JOBID | $SLURM_JOBID |
| Submit Directory | $PBS_O_WORKDIR | $SLURM_SUBMIT_DIR |
| Submit Host | $PBS_O_HOST | $SLURM_SUBMIT_HOST |
| Node List | $PBS_NODEFILE | $SLURM_JOB_NODELIST |
| Job Array Index | $PBS_ARRAYID | $SLURM_ARRAY_TASK_ID |
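
As a minimal sketch, the Slurm batch script below prints each variable from the table; a PBS script would substitute the corresponding `$PBS_*` names. The job name `env_demo` is a placeholder.

```bash
#!/bin/bash
#SBATCH --job-name=env_demo

# Print the Slurm counterparts of the PBS variables listed above.
echo "Job ID:      $SLURM_JOBID"
echo "Submit dir:  $SLURM_SUBMIT_DIR"
echo "Submit host: $SLURM_SUBMIT_HOST"
echo "Node list:   $SLURM_JOB_NODELIST"
echo "Array index: ${SLURM_ARRAY_TASK_ID:-not an array job}"

# Slurm jobs already start in the submit directory, so this cd is
# redundant; the PBS equivalent (cd $PBS_O_WORKDIR) is required
# because PBS jobs start in $HOME.
cd "$SLURM_SUBMIT_DIR"
```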

| Job Specification | PBS | Slurm |
|---|---|---|
| Script Directive | #PBS | #SBATCH |
| Queue | -q Queue_Name | ARL: -p Queue_Name; AFRL and Navy: -q Queue_Name |
| Node Count | -l select=N1:ncpus=N2:mpiprocs=N3 (N1 = node count, N2 = max cores per node, N3 = cores to use per node) | -N min[-max] |
| Core Count | -l select=N1:ncpus=N2:mpiprocs=N3 (as above; core count = N1 x N3) | --ntasks=total_cores_in_run |
| Wall Clock Limit | -l walltime=hh:mm:ss | -t min OR -t days-hh:mm:ss |
| Standard Output File | -o File_Name | -o File_Name |
| Standard Error File | -e File_Name | -e File_Name |
| Combine stdout/err | -j oe (both to stdout) OR -j eo (both to stderr) | use -o without -e |
| Copy Environment | -V | --export=ALL\|NONE\|Variable_List |
| Event Notification | -m [a][b][e] | --mail-type=[BEGIN],[END],[FAIL] |
| Email Address | -M Email_Address | --mail-user=Email_Address |
| Job Name | -N Job_Name | --job-name=Job_Name |
| Job Restart | -r y\|n | --requeue OR --no-requeue (NOTE: configurable default) |
| Working Directory | No option; defaults to home directory | --chdir=/Directory/Path (formerly --workdir) |
| Resource Sharing | -l place=scatter:excl | --exclusive OR --oversubscribe (formerly --shared) |
| Account to charge | -A Project_ID | --account=Project_ID |
| Tasks per Node | -l select=N1:ncpus=N2:mpiprocs=N3 (N1 = node count, N2 = max cores per node, N3 = cores to use per node) | --ntasks-per-node=count |
| Job Dependency | -W depend=state:Job_ID[:Job_ID...][,state:Job_ID[:Job_ID...]] | --dependency=state:Job_ID |
| Job host preference | | --nodelist=nodes AND/OR --exclude=nodes |
| Job Arrays | -J N-M[:step][%Max_Jobs] | --array=N-M[:step][%Max_Jobs] |
| Generic Resources | -l other=Resource_Spec | --gres=Resource_Spec |
| Licenses | -l app=number (e.g., -l abaqus=21) | -L app:number |
| Begin Time | -a [[[YYYY]MM]DD]hhmm[.ss] (no delimiters) | --begin=YYYY-MM-DD[Thh:mm[:ss]] |
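
Putting the directives together, here is a hypothetical pair of equivalent job scripts: 2 nodes, 36 cores per node, a one-hour wall clock limit, combined stdout/stderr, and begin/end email notification. The queue name, project ID, email address, core counts, and application name are placeholders; check your site's documentation for real values.

```bash
#!/bin/bash
#PBS -N my_job
#PBS -q standard
#PBS -A PROJECT123
#PBS -l select=2:ncpus=36:mpiprocs=36
#PBS -l walltime=01:00:00
#PBS -j oe
#PBS -m be
#PBS -M user@example.com

cd $PBS_O_WORKDIR        # PBS jobs start in $HOME
mpiexec ./my_app         # 72 MPI ranks (2 nodes x 36)
```

A Slurm sketch of the same request:

```bash
#!/bin/bash
#SBATCH --job-name=my_job
#SBATCH -p standard              # or -q at AFRL/Navy sites, per the table
#SBATCH --account=PROJECT123
#SBATCH --nodes=2
#SBATCH --ntasks=72              # 2 nodes x 36 cores
#SBATCH --time=01:00:00
#SBATCH -o my_job.out            # -o alone combines stdout and stderr
#SBATCH --mail-type=BEGIN,END
#SBATCH --mail-user=user@example.com

cd "$SLURM_SUBMIT_DIR"           # redundant but harmless in Slurm
srun ./my_app                    # launches the 72 tasks
```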