Unclassified Systems

Centennial is currently Up.

Centennial is an SGI ICE XA system located at the ARL DSRC. It has 1,784 standard compute nodes, 32 large-memory compute nodes, and 32 GPU compute nodes (a total of 1,848 compute nodes or 73,920 compute cores). It has 253 TB of memory and is rated at 2.6 peak PFLOPS.

Node Configuration
|                       | Login | Standard | Large-Memory | GPU |
|---|---|---|---|---|
| Total Nodes           | 24 | 1,784 | 32 | 32 |
| Processor             | Intel E5-2698v4 Broadwell | Intel E5-2698v4 Broadwell | Intel E5-2698v4 Broadwell | Intel E5-2698v4 Broadwell |
| Processor Speed       | 2.2 GHz | 2.2 GHz | 2.6 GHz | 2.2 GHz |
| Sockets / Node        | 2 | 2 | 2 | 2 |
| Cores / Node          | 40 | 40 | 40 | 40 |
| Total CPU Cores       | 960 | 71,360 | 1,280 | 1,280 |
| Usable Memory / Node  | 252 GB | 124 GB | 508 GB | 252 GB |
| Accelerators / Node   | None | None | None | 1 |
| Accelerator           | n/a | n/a | n/a | NVIDIA K40c PCIe 3 |
| Memory / Accelerator  | n/a | n/a | n/a | 12 GB |
| Storage on Node       | None | None | None | None |
| Interconnect          | InfiniBand EDR | InfiniBand EDR | InfiniBand EDR | InfiniBand EDR |
| Operating System      | RHEL | RHEL | RHEL | RHEL |

Queue Descriptions and Limits on Centennial
| Priority | Queue Name | Max Wall Clock Time | Max Cores Per Job | Description |
|---|---|---|---|---|
| Highest | debug | 1 Hour | N/A | User diagnostic jobs |
| ↓ | transfer | 48 Hours | 1 | Data transfer jobs |
| ↓ | urgent | 96 Hours | N/A | Designated urgent jobs by DoD HPCMP |
| ↓ | high | 96 Hours | N/A | Designated high-priority projects by service/agency |
| ↓ | frontier | 168 Hours | N/A | Frontier projects only |
| ↓ | cots | 96 Hours | N/A | Abaqus and Fluent jobs |
| ↓ | interactive | 12 Hours | N/A | Interactive jobs |
| ↓ | standard-long | 200 Hours | N/A | ARL DSRC permission required |
| ↓ | standard | 168 Hours | N/A | Normal user jobs |
| Lowest | background | 24 Hours | 4,096 | User jobs that will not be charged against the project allocation |
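
As a rough illustration of how these queue limits translate into a job request, the sketch below assumes a PBS Professional scheduler with qsub available on a login node; the project ID, walltime, and executable are placeholders. It asks for two standard nodes (40 cores each) in the standard queue.

```python
#!/usr/bin/env python3
"""Minimal sketch: submit a 2-node MPI job to Centennial's standard queue.

Assumes PBS Professional is the scheduler and qsub is on the PATH.
PROJECT_ID, the walltime, and ./my_mpi_app are placeholders.
"""
import subprocess
import tempfile

PBS_SCRIPT = """#!/bin/bash
#PBS -N centennial_demo
#PBS -A PROJECT_ID                       # placeholder allocation ID
#PBS -q standard                         # normal user jobs, 168-hour limit
#PBS -l select=2:ncpus=40:mpiprocs=40    # two standard nodes, 40 cores each
#PBS -l walltime=01:00:00
#PBS -j oe

cd "$PBS_O_WORKDIR"
mpiexec ./my_mpi_app                     # placeholder MPI executable
"""

def main() -> None:
    # Write the job script to a file and hand it to qsub.
    with tempfile.NamedTemporaryFile("w", suffix=".pbs", delete=False) as fh:
        fh.write(PBS_SCRIPT)
        script_path = fh.name
    subprocess.run(["qsub", script_path], check=True)

if __name__ == "__main__":
    main()
```
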
Gaffney is currently Up.

Gaffney is an HPE SGI 8600 system located at the Navy DSRC. It has 704 standard compute nodes, 16 large-memory nodes, and 32 GPU-accelerated nodes (a total of 752 compute nodes or 36,096 compute cores). It has 154 TB of memory and is rated at 3.05 peak PFLOPS.

Maintenance
| Date / Time | Details |
|---|---|
| 2023 May 30 13:00 - 17:00 CT | Network Maintenance |

Node Configuration
|                       | Login | Standard | Large-Memory | GPU |
|---|---|---|---|---|
| Total Nodes           | 8 | 704 | 16 | 32 |
| Processor             | Intel 8168 Skylake | Intel 8168 Skylake | Intel 8168 Skylake | Intel 8168 Skylake |
| Processor Speed       | 2.7 GHz | 2.7 GHz | 2.7 GHz | 2.7 GHz |
| Sockets / Node        | 2 | 2 | 2 | 2 |
| Cores / Node          | 48 | 48 | 48 | 48 |
| Total CPU Cores       | 384 | 33,792 | 768 | 1,536 |
| Usable Memory / Node  | 320 GB | 170 GB | 742 GB | 367 GB |
| Accelerators / Node   | None | None | None | 1 |
| Accelerator           | n/a | n/a | n/a | NVIDIA P100 PCIe 3 |
| Memory / Accelerator  | n/a | n/a | n/a | 16 GB |
| Storage on Node       | None | None | 3.2 TB SSD | None |
| Interconnect          | Intel Omni-Path | Intel Omni-Path | Intel Omni-Path | Intel Omni-Path |
| Operating System      | RHEL | RHEL | RHEL | RHEL |

Queue Descriptions and Limits on Gaffney
| Priority | Queue Name | Max Wall Clock Time | Max Cores Per Job | Description |
|---|---|---|---|---|
| Highest | urgent | 24 Hours | 768 | Designated urgent projects by DoD HPCMP |
| ↓ | frontier | 168 Hours | 14,400 | Designated frontier projects by DoD HPCMP |
| ↓ | high | 168 Hours | 15,840 | Designated high-priority jobs by Service/Agency |
| ↓ | debug | 30 Minutes | 2,400 | User diagnostic jobs |
| ↓ | HIE | 24 Hours | 384 | Rapid response for interactive work |
| ↓ | standard | 168 Hours | 8,160 | Normal priority user jobs |
| ↓ | serial | 168 Hours | 1 | Serial user jobs |
| ↓ | gpu | 24 Hours | 384 | GPU-accelerated jobs |
| ↓ | transfer | 48 Hours | N/A | Data transfer jobs |
| ↓ | bigmem | 96 Hours | 720 | Large-memory jobs |
| Lowest | background | 4 Hours | 1,200 | User jobs that will not be charged against the project allocation |
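
As an illustrative sketch only, the snippet below generates the batch directives for a single-node job in the gpu queue, assuming a PBS Professional scheduler and that ngpus is a schedulable resource on the GPU nodes; the project ID is a placeholder.

```python
"""Minimal sketch of a PBS job header for Gaffney's gpu queue.

Assumes PBS Professional; ngpus is assumed to be a schedulable resource
on the GPU nodes, and PROJECT_ID is a placeholder.
"""

def gpu_job_header(project: str = "PROJECT_ID", hours: int = 8) -> str:
    # One GPU node: 48 cores plus a single P100, within the 24-hour queue limit.
    return "\n".join([
        "#!/bin/bash",
        "#PBS -N gaffney_gpu_demo",
        f"#PBS -A {project}",
        "#PBS -q gpu",
        "#PBS -l select=1:ncpus=48:mpiprocs=48:ngpus=1",
        f"#PBS -l walltime={hours:02d}:00:00",
        "#PBS -j oe",
    ])

if __name__ == "__main__":
    print(gpu_job_header())   # inspect the header before handing it to qsub
```
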
Koehr is currently Up.

Koehr is an HPE SGI 8600 system located at the Navy DSRC. It has 704 standard compute nodes, 16 large-memory nodes, and 32 GPU-accelerated nodes (a total of 752 compute nodes or 36,096 compute cores). It has 154 TB of memory and is rated at 3.05 peak PFLOPS.

Maintenance
| Date / Time | Details |
|---|---|
| 2023 May 30 13:00 - 17:00 CT | Network Maintenance |

Node Configuration
|                       | Login | Standard | Large-Memory | GPU |
|---|---|---|---|---|
| Total Nodes           | 8 | 704 | 16 | 32 |
| Processor             | Intel 8168 Skylake | Intel 8168 Skylake | Intel 8168 Skylake | Intel 8168 Skylake |
| Processor Speed       | 2.7 GHz | 2.7 GHz | 2.7 GHz | 2.7 GHz |
| Sockets / Node        | 2 | 2 | 2 | 2 |
| Cores / Node          | 48 | 48 | 48 | 48 |
| Total CPU Cores       | 384 | 33,792 | 768 | 1,536 |
| Usable Memory / Node  | 320 GB | 170 GB | 742 GB | 367 GB |
| Accelerators / Node   | None | None | None | 1 |
| Accelerator           | n/a | n/a | n/a | NVIDIA P100 PCIe 3 |
| Memory / Accelerator  | n/a | n/a | n/a | 16 GB |
| Storage on Node       | None | None | 3.2 TB SSD | None |
| Interconnect          | Intel Omni-Path | Intel Omni-Path | Intel Omni-Path | Intel Omni-Path |
| Operating System      | RHEL | RHEL | RHEL | RHEL |

Queue Descriptions and Limits on Koehr
| Priority | Queue Name | Max Wall Clock Time | Max Cores Per Job | Description |
|---|---|---|---|---|
| Highest | urgent | 24 Hours | 768 | Designated urgent projects by DoD HPCMP |
| ↓ | frontier | 168 Hours | 14,400 | Designated frontier projects by DoD HPCMP |
| ↓ | high | 168 Hours | 14,304 | Designated high-priority jobs by Service/Agency |
| ↓ | debug | 30 Minutes | 2,400 | User diagnostic jobs |
| ↓ | HIE | 24 Hours | 384 | Rapid response for interactive work |
| ↓ | standard | 168 Hours | 8,160 | Normal priority user jobs |
| ↓ | serial | 168 Hours | 1 | Serial user jobs |
| ↓ | gpu | 24 Hours | 384 | GPU-accelerated jobs |
| ↓ | transfer | 48 Hours | N/A | Data transfer jobs |
| ↓ | bigmem | 96 Hours | 720 | Large-memory jobs |
| Lowest | background | 4 Hours | 1,200 | User jobs that will not be charged against the project allocation |
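
Since Koehr's queue layout mirrors Gaffney's, a quick way to confirm the live limits is to ask the scheduler directly; the sketch below assumes PBS Professional with qstat available on a login node, and the queue names follow the table above.

```python
"""Minimal sketch: check Koehr's queue state and limits with qstat.

Assumes PBS Professional and that qstat is available on a login node.
"""
import subprocess

QUEUES = ["standard", "debug", "gpu", "bigmem", "background"]

def show_queue_summary() -> None:
    # `qstat -Q` prints a one-line summary (run limits, queued/running counts)
    # for each named queue.
    subprocess.run(["qstat", "-Q"] + QUEUES, check=True)

if __name__ == "__main__":
    show_queue_summary()
```
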
Mustang is currently Up.

Mustang is an HPE SGI 8600 system located at the AFRL DSRC. It has 1,128 standard compute nodes, 24 large-memory compute nodes, and 24 GPU compute nodes (a total of 1,176 compute nodes or 56,448 compute cores). It has 244 TB of memory and is rated at 4.88 peak PFLOPS.

Node Configuration
|                       | Login | Standard | Large-Memory | GPU |
|---|---|---|---|---|
| Total Nodes           | 12 | 1,128 | 24 | 24 |
| Processor             | Intel 8168 Skylake | Intel 8168 Skylake | Intel 8168 Skylake | Intel 8168 Skylake |
| Processor Speed       | 2.7 GHz | 2.7 GHz | 2.7 GHz | 2.7 GHz |
| Sockets / Node        | 2 | 2 | 2 | 2 |
| Cores / Node          | 48 | 48 | 48 | 48 |
| Total CPU Cores       | 576 | 54,144 | 1,152 | 1,152 |
| Usable Memory / Node  | 380 GB | 180 GB | 744 GB | 372 GB |
| Accelerators / Node   | None | None | None | 1 |
| Accelerator           | n/a | n/a | n/a | NVIDIA P100 PCIe 3 |
| Memory / Accelerator  | n/a | n/a | n/a | 16 GB |
| Storage on Node       | None | None | None | None |
| Interconnect          | Intel Omni-Path | Intel Omni-Path | Intel Omni-Path | Intel Omni-Path |
| Operating System      | RHEL | RHEL | RHEL | RHEL |

Queue Descriptions and Limits on Mustang
| Priority | Queue Name | Max Wall Clock Time | Max Cores Per Job | Description |
|---|---|---|---|---|
| Highest | urgent | 168 Hours | 28,224 | Jobs belonging to DoD HPCMP Urgent Projects |
| ↓ | debug | 1 Hour | 1,152 | User testing |
| ↓ | high | 168 Hours | 28,224 | Jobs belonging to DoD HPCMP High Priority Projects |
| ↓ | frontier | 168 Hours | 28,224 | Jobs belonging to DoD HPCMP Frontier Projects |
| ↓ | standard | 168 Hours | 28,224 | Standard jobs |
| ↓ | HIE | 24 Hours | 96 | Rapid response for interactive work |
| ↓ | transfer | 48 Hours | 1 | Data transfer for user jobs |
| Lowest | background | 120 Hours | 48 | Unrestricted access - no allocation charge |
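
For moving data without tying up compute nodes, the transfer queue accepts single-core jobs; the sketch below writes such a job script, assuming a PBS Professional scheduler, with the project ID, archive path, and copy command as placeholders.

```python
"""Minimal sketch: stage results through Mustang's single-core transfer queue.

Assumes PBS Professional; the archive destination and copy command are
placeholders for the site's actual transfer procedure.
"""
from pathlib import Path

JOB = """#!/bin/bash
#PBS -N stage_results
#PBS -A PROJECT_ID                # placeholder allocation ID
#PBS -q transfer                  # data transfer jobs: 1 core, 48-hour limit
#PBS -l select=1:ncpus=1
#PBS -l walltime=04:00:00
#PBS -j oe

cd "$PBS_O_WORKDIR"
cp -r results/ /path/to/archive/  # placeholder transfer command
"""

if __name__ == "__main__":
    script = Path("stage_results.pbs")
    script.write_text(JOB)
    print(f"wrote {script}; submit with: qsub {script}")
```
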
Narwhal is currently Up.

Narwhal is an HPE Cray EX system located at the Navy DSRC. It has 2,304 standard compute nodes, 26 large-memory nodes, 16 visualization accelerated nodes, 32 1-MLA accelerated nodes, and 32 2-MLA accelerated nodes (a total of 2,410 compute nodes or 308,480 compute cores). It has 640 TB of memory and is rated at 13.5 peak PFLOPS.

Maintenance
| Date / Time | Details |
|---|---|
| 2023 May 30 13:00 - 17:00 CT | Network Maintenance |

Node Configuration
|                       | Login | Standard | Large-Memory | Visualization | MLA 1-GPU | MLA 2-GPU |
|---|---|---|---|---|---|---|
| Total Nodes           | 11 | 2,304 | 26 | 16 | 32 | 32 |
| Processor             | AMD 7H12 Rome | AMD 7H12 Rome | AMD 7H12 Rome | AMD 7H12 Rome | AMD 7H12 Rome | AMD 7H12 Rome |
| Processor Speed       | 2.6 GHz | 2.6 GHz | 2.3 GHz | 2.3 GHz | 2.3 GHz | 2.3 GHz |
| Sockets / Node        | 2 | 2 | 2 | 2 | 2 | 2 |
| Cores / Node          | 128 | 128 | 128 | 128 | 128 | 128 |
| Total CPU Cores       | 1,408 | 294,912 | 3,328 | 2,048 | 4,096 | 4,096 |
| Usable Memory / Node  | 226 GB | 238 GB | 995 GB | 234 GB | 239 GB | 239 GB |
| Accelerators / Node   | None | None | None | 1 | 1 | 2 |
| Accelerator           | n/a | n/a | n/a | NVIDIA V100 PCIe 3 | NVIDIA V100 PCIe 3 | NVIDIA V100 PCIe 3 |
| Memory / Accelerator  | n/a | n/a | n/a | 32 GB | 32 GB | 32 GB |
| Storage on Node       | 880 GB SSD | None | 1.8 TB SSD | None | 880 GB SSD | 880 GB SSD |
| Interconnect          | HPE Slingshot | HPE Slingshot | HPE Slingshot | HPE Slingshot | HPE Slingshot | HPE Slingshot |
| Operating System      | SLES | SLES | SLES | SLES | SLES | SLES |

Queue Descriptions and Limits on Narwhal
| Priority | Queue Name | Max Wall Clock Time | Max Cores Per Job | Description |
|---|---|---|---|---|
| Highest | urgent | 24 Hours | 16,384 | Designated urgent projects by DoD HPCMP |
| ↓ | frontier | 168 Hours | 65,536 | Designated frontier projects by DoD HPCMP |
| ↓ | high | 168 Hours | 16,384 | Designated high-priority projects by Service/Agency |
| ↓ | debug | 30 Minutes | 8,192 | User diagnostic jobs |
| ↓ | HIE | 24 Hours | 3,072 | Rapid response for interactive work |
| ↓ | viz | 24 Hours | 128 | Visualization jobs |
| ↓ | standard | 168 Hours | 32,768 | Normal priority user jobs |
| ↓ | mla | 24 Hours | 128 | Machine Learning Accelerated jobs |
| ↓ | smla | 24 Hours | 128 | Machine Learning Accelerated jobs |
| ↓ | dmla | 24 Hours | 128 | Machine Learning Accelerated jobs |
| ↓ | serial | 168 Hours | 1 | Serial user jobs |
| ↓ | bigmem | 96 Hours | 1,280 | Large-memory jobs |
| ↓ | transfer | 24 Hours | N/A | Data transfer jobs |
| Lowest | background | 4 Hours | 1,024 | User jobs that will not be charged against the project allocation |
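
As a hedged example of using the machine-learning queues, the sketch below submits a one-node, one-GPU job to the mla queue, assuming a PBS Professional scheduler that accepts a job script on standard input and an ngpus resource on the MLA nodes; the project ID and training script are placeholders.

```python
"""Minimal sketch: one-node, one-GPU job for Narwhal's mla queue.

Assumes PBS Professional (qsub reads the job script from stdin) and that
ngpus is a schedulable resource on the MLA nodes; PROJECT_ID and train.py
are placeholders.
"""
import subprocess

JOB = """#!/bin/bash
#PBS -N narwhal_mla_demo
#PBS -A PROJECT_ID                      # placeholder allocation ID
#PBS -q mla                             # machine-learning jobs, 24-hour limit
#PBS -l select=1:ncpus=128:ngpus=1      # one MLA node: 128 cores, one V100
#PBS -l walltime=02:00:00
#PBS -j oe

cd "$PBS_O_WORKDIR"
python3 train.py                        # placeholder training script
"""

if __name__ == "__main__":
    # qsub reads the job script from stdin when no file operand is given.
    subprocess.run(["qsub"], input=JOB, text=True, check=True)
```
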
Onyx is currently Up.

Onyx is a Cray XC40/50 system located at the ERDC DSRC. It has 4,806 standard compute nodes, 8 large-memory compute nodes, 32 GPU compute nodes, 32 Knights Landing (Phi) compute nodes, and 64 Machine Learning Accelerator (MLA) multi-GPGPU nodes (a total of 4,942 compute nodes or 217,128 compute cores). It is rated at 6.06 peak PFLOPS.

Maintenance
| Date / Time | Details |
|---|---|
| 2023 May 22 09:00 - May 27 09:00 CT (Completed) | Software Maintenance |

Node Configuration
|                       | Login | Standard | Large-Memory | KNL | GPU | 2-MLA | 10-MLA |
|---|---|---|---|---|---|---|---|
| Total Nodes           | 12 | 4,806 | 8 | 32 | 32 | 60 | 4 |
| Processor             | Intel E5-2699v4 Broadwell | Intel E5-2699v4 Broadwell | Intel E5-2699v4 Broadwell | Intel 7230 Knights Landing | Intel E5-2699v4 Broadwell | Intel 6148 Skylake | Intel 6148 Skylake |
| Processor Speed       | 2.8 GHz | 2.8 GHz | 2.8 GHz | 1.3 GHz | 2.8 GHz | 2.4 GHz | 2.4 GHz |
| Sockets / Node        | 2 | 2 | 2 | 1 | 1 | 2 | 2 |
| Cores / Node          | 44 | 44 | 44 | 64 | 22 | 40 | 40 |
| Total CPU Cores       | 528 | 211,464 | 352 | 2,048 | 704 | 2,400 | 160 |
| Usable Memory / Node  | 247 GB | 121 GB | 1 TB | 90 GB | 247 GB | 172 GB | 735 GB |
| Accelerators / Node   | None | None | None | None | 1 | 2 | 10 |
| Accelerator           | n/a | n/a | n/a | n/a | NVIDIA P100 PCIe 3 | NVIDIA V100 SXM2 | NVIDIA V100 PCIe 3 |
| Memory / Accelerator  | n/a | n/a | n/a | n/a | 16 GB | 16 GB | 32 GB |
| Storage on Node       | None | None | None | None | None | None | None |
| Interconnect          | Ethernet | Cray Aries | Cray Aries | Cray Aries | Cray Aries | InfiniBand | InfiniBand |
| Operating System      | SLES | CLE | CLE | CLE | CLE | CentOS | CentOS |

Queue Descriptions and Limits on Onyx
| Priority | Queue Name | Max Wall Clock Time | Max Jobs | Min Cores Per Job | Max Cores Per Job | Description |
|---|---|---|---|---|---|---|
| Highest | urgent | 24 Hours | N/A | 22 | 7,260 | Designated urgent jobs by DoD HPCMP |
| ↓ | debug | 1 Hour | 4 | 22 | 11,484 | User testing |
| ↓ | HIE | 24 Hours | 2 | 22 | 110 | Rapid response for interactive work |
| ↓ | frontier_lg | 24 Hours | 2 | 7,261 | 143,968 | Frontier projects only (large jobs) |
| ↓ | frontier_lw | 168 Hours | 15 | 22 | 15,708 | Frontier projects only (long walltime) |
| ↓ | frontier_sm | 48 Hours | 70 | 22 | 7,260 | Frontier projects only (small jobs) |
| ↓ | high_lg | 24 Hours | 2 | 8,449 | 105,820 | Designated high-priority jobs by Service/Agency (large jobs) |
| ↓ | high_lw | 168 Hours | 15 | 22 | 10,824 | Designated high-priority jobs by Service/Agency (long walltime) |
| ↓ | high_sm | 24 Hours | 70 | 22 | 8,448 | Designated high-priority jobs by Service/Agency (small jobs) |
| ↓ | frontier_md | 96 Hours | 2 | 15,709 | 34,540 | Frontier projects only (medium-sized, long walltime) |
| ↓ | standard_lg | 24 Hours | 2 | 7,261 | 105,820 | Normal priority jobs (large jobs) |
| ↓ | standard_lw | 168 Hours | 3 | 22 | 5,808 | Normal priority jobs (long walltime) |
| ↓ | standard_sm | 24 Hours | 70 | 22 | 7,260 | Normal priority jobs (small jobs) |
| ↓ | transfer | 48 Hours | 6 | 1 | 1 | Data transfer jobs; access to long-term storage |
| Lowest | background | 4 Hours | 6 | 22 | 7,260 | Unrestricted access - no allocation charge |
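
Because Onyx queues enforce both minimum and maximum core counts, it helps to check a planned request against the limits before submitting. The sketch below encodes a few rows of the table above as illustrative values (queue policies can change) and tests whether a whole-node request fits.

```python
"""Minimal sketch: check a planned Onyx job against the queue limits above.

The 44 cores per standard node and the queue limits follow the tables in
this listing; treat the values as illustrative, since policies can change.
"""
CORES_PER_NODE = 44

# queue -> (max wall-clock hours, min cores per job, max cores per job)
LIMITS = {
    "debug":       (1,   22,    11_484),
    "standard_sm": (24,  22,    7_260),
    "standard_lg": (24,  7_261, 105_820),
    "standard_lw": (168, 22,    5_808),
    "background":  (4,   22,    7_260),
}

def fits(queue: str, nodes: int, hours: float) -> bool:
    """Return True if `nodes` whole standard nodes for `hours` fit the queue."""
    max_hours, min_cores, max_cores = LIMITS[queue]
    cores = nodes * CORES_PER_NODE
    return hours <= max_hours and min_cores <= cores <= max_cores

if __name__ == "__main__":
    # 100 standard nodes (4,400 cores) for 12 hours fits standard_sm ...
    print(fits("standard_sm", 100, 12))   # True
    # ... but 200 nodes (8,800 cores) exceeds that queue's 7,260-core cap.
    print(fits("standard_sm", 200, 12))   # False
```
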
SCOUT is currently Up.

SCOUT is an IBM Power9 system located at the ARL DSRC. It has 22 training nodes, each with 6 NVIDIA V100 GPUs; 128 inference nodes, each with 4 NVIDIA T4 GPUs; and 2 visualization nodes, each with 2 NVIDIA V100 GPUs (a total of 152 compute nodes or 6,080 cores). It has 45 TB of memory.

Node Configuration
|                       | Login | Training | Inference | Visualization |
|---|---|---|---|---|
| Total Nodes           | 4 | 22 | 128 | 2 |
| Processor             | IBM POWER9 | IBM POWER9 | IBM POWER9 | IBM POWER9 |
| Processor Speed       | 2.55 GHz | 2.55 GHz | 2.55 GHz | 2.55 GHz |
| Sockets / Node        | 2 | 2 | 2 | 2 |
| Cores / Node          | 40 | 40 | 40 | 40 |
| Total CPU Cores       | 160 | 880 | 5,120 | 80 |
| Usable Memory / Node  | 502 GB | 690 GB | 246 GB | 502 GB |
| Accelerators / Node   | None | 6 | 4 | 2 |
| Accelerator           | n/a | NVIDIA V100 PCIe 3 | NVIDIA T4 PCIe 3 | NVIDIA V100 PCIe 3 |
| Memory / Accelerator  | n/a | 32 GB | 16 GB | 16 GB |
| Storage on Node       | 1.5 TB PCIe | 12 TB PCIe | 3.3 TB PCIe | 3.3 TB PCIe |
| Interconnect          | InfiniBand EDR | InfiniBand EDR | InfiniBand EDR | InfiniBand EDR |
| Operating System      | RHEL | RHEL | RHEL | RHEL |

Queue Descriptions and Limits on SCOUT
Priority Queue Name Max Wall Clock Time Max Cores Per Job Description
Highest transfer 48 Hours N/A Data transfer jobs
Down arrow for decreasing priority urgent 96 Hours N/A Designated urgent jobs by DoD HPCMP
debug 1 Hour N/A User diagnostic jobs
high 168 Hours N/A Designated high-priority projects by service/agency
frontier 168 Hours N/A Frontier projects only
HIE 24 Hours N/A Rapid response for interactive work
interactive 12 Hours N/A Interactive jobs
standard 168 Hours N/A Normal user jobs
Lowest background 24 Hours N/A User jobs that will not be charged against the project allocation
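
As an illustrative sketch, the snippet below opens an interactive session on a single training node (40 cores, six V100s) through the interactive queue, assuming a PBS Professional scheduler that supports qsub -I and an ngpus resource on the training nodes; the project ID is a placeholder.

```python
"""Minimal sketch: open an interactive session on one SCOUT training node.

Assumes PBS Professional (qsub -I) and that ngpus is a schedulable resource
on the training nodes; PROJECT_ID is a placeholder.
"""
import subprocess

def interactive_training_node(project: str = "PROJECT_ID") -> None:
    # `qsub -I` blocks until the scheduler starts the job, then attaches a shell.
    subprocess.run([
        "qsub", "-I",
        "-A", project,
        "-q", "interactive",                  # interactive jobs, 12-hour limit
        "-l", "select=1:ncpus=40:ngpus=6",    # one training node, all six V100s
        "-l", "walltime=02:00:00",
    ], check=True)

if __name__ == "__main__":
    interactive_training_node()
```
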
Warhawk is currently running in a degraded state.

Warhawk is an HPE Cray EX system located at the AFRL DSRC. It has 1,024 standard compute nodes, 4 large-memory nodes, 24 1-GPU visualization nodes, and 40 2-GPU Machine-Learning nodes (a total of 1,092 compute nodes or 139,776 compute cores). It has 564 TB of memory and is rated at 6.86 peak PFLOPS.

Maintenance
| Date / Time | Details |
|---|---|
| 2023 Apr 13 11:41 ET - TBD (In Progress) | Degraded State |
| 2023 Jun 05 08:00 - Jun 16 17:00 ET | System Maintenance |

Node Configuration
|                       | Login | Standard | Large-Memory | Visualization | Machine-Learning Accelerated |
|---|---|---|---|---|---|
| Total Nodes           | 7 | 1,024 | 4 | 24 | 40 |
| Processor             | AMD 7H12 Rome | AMD 7H12 Rome | AMD 7H12 Rome | AMD 7H12 Rome | AMD 7H12 Rome |
| Processor Speed       | 2.6 GHz | 2.6 GHz | 2.6 GHz | 2.6 GHz | 2.6 GHz |
| Sockets / Node        | 2 | 2 | 2 | 2 | 2 |
| Cores / Node          | 128 | 128 | 128 | 128 | 128 |
| Total CPU Cores       | 896 | 131,072 | 512 | 3,072 | 5,120 |
| Usable Memory / Node  | 995 GB | 503 GB | 995 GB | 503 GB | 503 GB |
| Accelerators / Node   | None | None | None | 1 | 2 |
| Accelerator           | n/a | n/a | n/a | NVIDIA V100 PCIe 3 | NVIDIA V100 PCIe 3 |
| Memory / Accelerator  | n/a | n/a | n/a | 32 GB | 32 GB |
| Storage on Node       | None | None | None | None | None |
| Interconnect          | Cray Slingshot | Cray Slingshot | Cray Slingshot | Cray Slingshot | Cray Slingshot |
| Operating System      | SLES | SLES | SLES | SLES | SLES |

Queue Descriptions and Limits on Warhawk
| Priority | Queue Name | Max Wall Clock Time | Max Cores Per Job | Description |
|---|---|---|---|---|
| Highest | urgent | 168 Hours | 69,888 | Jobs belonging to DoD HPCMP Urgent Projects |
| ↓ | debug | 1 Hour | 2,816 | User testing |
| ↓ | high | 168 Hours | 69,888 | Jobs belonging to DoD HPCMP High Priority Projects |
| ↓ | frontier | 168 Hours | 69,888 | Jobs belonging to DoD HPCMP Frontier Projects |
| ↓ | standard | 168 Hours | 69,888 | Standard jobs |
| ↓ | HIE | 24 Hours | 256 | Rapid response for interactive work |
| ↓ | transfer | 48 Hours | 1 | Data transfer for user jobs |
| Lowest | background | 120 Hours | 2,816 | Unrestricted access - no allocation charge |
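
Since Warhawk nodes are allocated in 128-core units, a debug request is easiest to size from a target core count; the sketch below rounds up to whole nodes and prints the corresponding PBS directives (PBS Professional is assumed, and the project ID and walltime are placeholders).

```python
"""Minimal sketch: size a Warhawk debug-queue request from a target core count.

Warhawk nodes have 128 cores and the debug queue caps jobs at 2,816 cores
(1-hour wall-clock limit); PROJECT_ID and the walltime are placeholders.
"""
import math

CORES_PER_NODE = 128
DEBUG_MAX_CORES = 2_816

def debug_header(total_cores: int, project: str = "PROJECT_ID") -> str:
    """Round the request up to whole nodes and emit PBS directives for it."""
    if total_cores > DEBUG_MAX_CORES:
        raise ValueError(f"debug queue allows at most {DEBUG_MAX_CORES} cores")
    nodes = math.ceil(total_cores / CORES_PER_NODE)
    return "\n".join([
        f"#PBS -A {project}",
        "#PBS -q debug",
        f"#PBS -l select={nodes}:ncpus={CORES_PER_NODE}:mpiprocs={CORES_PER_NODE}",
        "#PBS -l walltime=00:30:00",
    ])

if __name__ == "__main__":
    print(debug_header(512))   # 512 cores rounds up to 4 nodes
```
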