Nautilus Quick Start Guide
This document provides a brief summary of the information you need to get started quickly on Nautilus. For more detailed information, see the Nautilus User Guide.
2. Get a Kerberos Ticket
For security purposes, you must have a current Kerberos ticket on your computer before attempting to connect to Nautilus. A Kerberos client kit must be installed on your desktop to enable you to get a Kerberos ticket. Information about installing Kerberos clients on your Windows desktop can be found on the Kerberos & Authentication webpage.
3. Connect to Nautilus
Nautilus can be accessed via Kerberized ssh as follows:
% ssh email@example.com
4. Home, Working, and Center-wide Directories
Each user has file space in the $HOME, $WORKDIR, $WORKDIR2 and $CENTER directories. The $HOME, $WORKDIR, $WORKDIR2 and $CENTER environment variables are predefined for you and point to the appropriate locations in the file systems. You are strongly encouraged to use these variables in your scripts.
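As a sketch of that practice, the script below uses the environment variables instead of hard-coded paths. On Nautilus, $WORKDIR and $CENTER are predefined; the fallback values below are placeholders (an assumption) so the snippet can run anywhere.

```shell
#!/bin/sh
# Use the predefined variables rather than hard-coded paths. On Nautilus these
# are already set; the demo fallbacks below are assumptions for local testing.
WORKDIR="${WORKDIR:-/tmp/demo-workdir}"
CENTER="${CENTER:-/tmp/demo-center}"

# Stage a run directory in scratch space and a results area on the
# center-wide file system.
mkdir -p "$WORKDIR/run01" "$CENTER/results"
echo "run directory:     $WORKDIR/run01"
echo "results directory: $CENTER/results"
```

Because the script never mentions a literal path, it keeps working if the center relocates the underlying file systems.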
Note: $WORKDIR and $WORKDIR2 are "scratch" file systems, and $CENTER is a center-wide file system accessible to all center production machines. None of these file systems is backed up. You are responsible for managing files in your $WORKDIR, $WORKDIR2, and $CENTER directories by backing up files to the archive system and deleting unneeded files. Currently, $WORKDIR and $WORKDIR2 files older than 21 days and $CENTER files older than 120 days are subject to being purged.
If it is determined as part of the normal purge cycle that files in your $WORKDIR or $WORKDIR2 directories must be deleted, you WILL NOT be notified prior to deletion. You are responsible for monitoring your workspace to prevent data loss.
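Since purge deletions come without warning, it helps to audit your workspace periodically. The sketch below uses find to list files untouched for more than 21 days; it creates a throwaway demo directory standing in for $WORKDIR (an assumption, so the snippet runs anywhere), and the file names are invented for illustration.

```shell
# Demo directory standing in for $WORKDIR (assumption for local testing).
demo="${TMPDIR:-/tmp}/purge-demo"
mkdir -p "$demo"
touch -d "30 days ago" "$demo/old_results.dat"   # simulates a stale file
touch "$demo/fresh_results.dat"                  # simulates a recent file

# List files not modified within the last 21 days -- candidates for
# archiving before the purge cycle removes them.
find "$demo" -type f -mtime +21 -print
```

On Nautilus you would point find at "$WORKDIR" itself; note that `-d "30 days ago"` is a GNU touch extension used here only to fabricate an old timestamp.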
5. Transfer Files and Data to Nautilus
File transfers to DSRC systems must be performed using Kerberized versions of the following tools: scp, sftp, and mpscp. For example, the command below uses secure copy (scp) to copy a local file into a destination directory on a Nautilus login node.
% scp local_file firstname.lastname@example.org:/target_dir
For additional information on file transfers to and from Nautilus, see the File Transfers section of the Nautilus User Guide.
6. Submit Jobs to the Batch Queue
Slurm is the workload management system for Nautilus. To submit a batch job, use the following command:
% sbatch [ options ] my_job_script
where my_job_script is the name of the file containing your batch script. For more information on using Slurm or on job scripts, see the Nautilus User Guide, the Nautilus Slurm Guide, or the sample script examples in the $SAMPLES_HOME directory on Nautilus.
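As an illustration, a minimal job script might look like the following. The directive values (project ID, queue name, node and task counts, wall time) are placeholders, not Nautilus defaults; consult the Nautilus Slurm Guide and the queue table for the settings your job actually requires.

```shell
#!/bin/bash
#SBATCH --job-name=my_job          # name shown in the queue listing
#SBATCH --account=Project_ID       # placeholder: your HPCMP project ID
#SBATCH --partition=standard       # placeholder: a queue name from the queue table
#SBATCH --nodes=2                  # placeholder: number of nodes requested
#SBATCH --ntasks-per-node=128      # placeholder: MPI tasks per node
#SBATCH --time=01:00:00            # wall clock limit (HH:MM:SS)

# Run from scratch space, then launch the application with srun.
cd "$WORKDIR/run01"
srun ./my_app
```

Submitting it is then simply `sbatch my_job_script`; Slurm returns the JobID used for monitoring and deletion.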
7. Batch Queues
The following table describes the Slurm queues available on Nautilus:
| Priority | Queue Name | Max Wall Clock Time | Max Cores Per Job | Description |
|----------|------------|---------------------|-------------------|-------------|
| Highest  | urgent     | 24 Hours   | 16,384 | Jobs belonging to DoD HPCMP Urgent Projects |
|          | debug      | 30 Minutes | 10,752 | Time/resource-limited for user testing and debug purposes |
|          | high       | 168 Hours  | 65,536 | Jobs belonging to DoD HPCMP High Priority Projects |
|          | frontier   | 168 Hours  | 65,536 | Jobs belonging to DoD HPCMP Frontier Projects |
|          | standard   | 168 Hours  | 16,384 | Standard jobs |
|          | HIE        | 24 Hours   | 3,072  | Rapid response for interactive work. For more information, see the HPC Interactive Environment (HIE) User Guide. |
| Lowest   | background | 4 Hours    | 4,096  | User jobs that are not charged against the project allocation |
8. Monitoring Your Job
You can monitor your batch jobs on Nautilus using the squeue command.
The squeue command lists all jobs in the queue. The -u username option shows only jobs owned by the given user, as follows:
% squeue -u user1
  JOBID PARTITION     NAME  USER ST    TIME NODES NODELIST(REASON)
 184628   general   mytest user1  R 1:50:59    40 n[0067-0071,0808-0842]
 184626   general slurmJob user1  R 5:53:50    10 nautilus-mla[23-32]
 184612   general mat_test user1  R 8:42:48    44 n[056-0604]
 184624   general  inspect user1  R 6:03:14     1 n0144
Notice that the output contains the JobID for each job. This ID can be used with the qview, squeue, and scancel commands.
To delete a job, use the command scancel jobID.
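Because squeue itself is only available on the cluster, the sketch below captures sample squeue output (taken from the listing above) as a string and extracts the JobID column with awk; each ID could then be handed to scancel.

```shell
# Sample squeue output captured as a string so the sketch runs anywhere.
sample='JOBID PARTITION NAME USER ST TIME NODES NODELIST(REASON)
184628 general mytest user1 R 1:50:59 40 n[0067-0071,0808-0842]
184624 general inspect user1 R 6:03:14 1 n0144'

# Print the JobID of every listed job, skipping the header row. On Nautilus
# you would pipe live output instead, e.g. squeue -u user1 | awk ...
echo "$sample" | awk 'NR > 1 { print $1 }'
```

The printed IDs (184628 and 184624 here) are exactly what scancel expects as its argument.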
9. Archiving Your Work
When your job is finished, you should archive any important data to prevent automatic deletion by the purge scripts.
To copy one or more files to the archive system:
archive put [-C path ] [-D] [-s] file1 [file2 ...]
To copy one or more files from the archive system:
archive get [-C path ] [-s] file1 [file2 ...]
For more information on archiving your files, see the Navy DSRC Archive Guide.
10. Modules
Software modules are a convenient way to set needed environment variables and include necessary directories in your path so that commands for particular applications can be found. Nautilus uses "modules" to initialize your environment with COTS application software, system commands and libraries, compiler suites, environment variables, and Slurm batch system commands.
A number of modules are loaded automatically as soon as you log in. To see the modules that are currently loaded, run module list. To see the entire list of available modules, run module avail. You can modify the configuration of your environment by loading and unloading modules. For complete information on how to do this, see the Modules User Guide.
11. Available Software
A list of software on Nautilus is available on the software page.
12. Advance Reservation Service
A subset of Nautilus's nodes has been set aside for use as part of the Advance Reservation Service (ARS). The ARS allows users to reserve a user-designated number of nodes for a specified number of hours starting at a specific date/time. This service enables users to execute interactive or other time-critical jobs within the batch system environment. The ARS is accessible via most modern web browsers at https://reservation.hpc.mil. Authenticated access is required. The ARS User Guide is available on HPC Centers.