Unclassified Systems

Centennial is currently Up.

Centennial is an SGI ICE XA system located at the ARL DSRC. It has 1,784 standard compute nodes, 32 large-memory compute nodes, and 32 GPU compute nodes (a total of 1,848 compute nodes or 73,920 compute cores). It has 253 TBytes of memory and is rated at 2.6 peak PFLOPS.
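
The totals in parentheses follow directly from the node counts: every Centennial compute node type has 40 CPU cores per node (see the node configuration table below). The following Python snippet is purely illustrative, just a quick arithmetic check; the variable names are ours, not part of any DSRC tooling.

    # Purely illustrative: verify the Centennial node and core totals quoted above.
    standard_nodes, large_memory_nodes, gpu_nodes = 1784, 32, 32
    cores_per_node = 40  # CPU cores per compute node (GPU cores counted separately)

    total_nodes = standard_nodes + large_memory_nodes + gpu_nodes
    total_cores = total_nodes * cores_per_node
    print(total_nodes, total_cores)  # 1848 73920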


Node Configuration
                       | Login Nodes           | Standard Memory | Large Memory | GPU Accelerated
Total Cores / Nodes    | 960 / 24              | 71,360 / 1,784  | 1,280 / 32   | 1,280 / 32
Operating System       | RHEL (all node types)
Cores/Node             | 40                    | 40              | 40           | 40 + 1 GPU (1 x 2,880 GPU cores)
Core Type              | Intel Xeon E5-2698v4 Broadwell (all node types); GPU nodes add an NVIDIA Tesla K40P
Core Speed             | 2.2 GHz               | 2.2 GHz         | 2.6 GHz      | 2.2 GHz
Memory/Node            | 256 GBytes            | 128 GBytes      | 512 GBytes   | 256 GBytes
Accessible Memory/Node | 252 GBytes            | 124 GBytes      | 508 GBytes   | 252 GBytes
Memory Model           | Shared on node; distributed across the cluster for compute nodes
Interconnect Type      | Ethernet / InfiniBand (all node types)
Queue Descriptions and Limits (queues listed from highest to lowest priority)
Queue Name    | Job Class  | Max Wall Clock Time | Max Cores Per Job | Comments
debug         | Debug      | 1 Hour              | N/A               | User diagnostic jobs
transfer      | N/A        | 48 Hours            | 1                 | Data transfer jobs
urgent        | Urgent     | 168 Hours           | N/A               | Designated urgent jobs by DoD HPCMP
staff         | N/A        | 368 Hours           | N/A               | ARL DSRC staff testing only; system testing and user support
high          | High       | 96 Hours            | N/A               | Designated high-priority projects by service/agency
frontier      | Frontier   | 168 Hours           | N/A               | Frontier projects only
cots          | Standard   | 96 Hours            | N/A               | Abaqus, Fluent, and Cobalt jobs
interactive   | Standard   | 12 Hours            | N/A               | Interactive jobs
standard-long | Standard   | 200 Hours           | N/A               | ARL DSRC permission required
standard      | Standard   | 168 Hours           | N/A               | Normal user jobs
background    | Background | 24 Hours            |                   | User jobs that will not be charged against the project allocation
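
A request that exceeds a queue's published limits is typically rejected by the scheduler, so it can help to check it up front. The sketch below is a minimal, hypothetical Python example: the queue names and limits are copied from the Centennial table above, but the dictionary and helper function are illustrative only and not part of any DSRC tooling.

    # Illustrative sketch: pre-check a job request against Centennial's published
    # queue limits. Limits are copied from the table above; None means the table
    # lists "N/A" (for background, no core limit is listed).
    CENTENNIAL_QUEUES = {
        # queue name: (max wall clock hours, max cores per job)
        "debug":         (1,   None),
        "transfer":      (48,  1),
        "urgent":        (168, None),
        "staff":         (368, None),
        "high":          (96,  None),
        "frontier":      (168, None),
        "cots":          (96,  None),
        "interactive":   (12,  None),
        "standard-long": (200, None),
        "standard":      (168, None),
        "background":    (24,  None),
    }

    def check_request(queue, hours, cores):
        """Return a list of reasons a (queue, walltime, cores) request exceeds the limits."""
        if queue not in CENTENNIAL_QUEUES:
            return ["unknown queue: " + queue]
        max_hours, max_cores = CENTENNIAL_QUEUES[queue]
        problems = []
        if hours > max_hours:
            problems.append(f"walltime {hours}h exceeds the {max_hours}h limit on '{queue}'")
        if max_cores is not None and cores > max_cores:
            problems.append(f"{cores} cores exceeds the {max_cores}-core limit on '{queue}'")
        return problems

    # Example: 170 hours in 'standard' is over the 168-hour limit.
    print(check_request("standard", 170, 4000))
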
Conrad is currently Up.

Conrad is a Cray XC40 system located at the Navy DSRC. It has 1,523 standard compute nodes, 8 large-memory compute nodes, and 168 Xeon Phi compute nodes (a total of 1,699 compute nodes or 51,008 compute cores). It has 208 TBytes of memory and is rated at 2 peak PFLOPS.


Node Configuration
                       | Login Nodes | Standard Memory | Large Memory | Phi Accelerated
Total Cores / Nodes    | 192 / 6     | 48,736 / 1,523  | 256 / 8      | 2,016 / 168
Operating System       | SLES        | Cray Linux Environment (all compute nodes)
Cores/Node             | 32          | 32              | 32           | 12 + 1 Phi (1 x 60 Phi cores)
Core Type              | Intel Xeon E5-2698v3 (login, standard, and large-memory nodes); Intel Xeon E5-2696v2 + Intel Xeon Phi 5120D (Phi nodes)
Core Speed             | 2.3 GHz     | 2.3 GHz         | 2.3 GHz      | 2.4 GHz + 1.05 GHz (Phi)
Memory/Node            | 256 GBytes  | 128 GBytes      | 512 GBytes   | 64 GBytes + 8 GBytes (Phi)
Accessible Memory/Node | 240 GBytes  | 125 GBytes      | 510 GBytes   | 63 GBytes + 7.5 GBytes (Phi)
Memory Model           | Shared on node; distributed across the cluster for compute nodes
Interconnect Type      | Ethernet    | Cray Aries / Dragonfly (all compute nodes)
Queue Descriptions and Limits (queues listed from highest to lowest priority)
Queue Name | Job Class  | Max Wall Clock Time | Max Cores Per Job | Comments
urgent     | Urgent     | 24 Hours            | 768               | Jobs belonging to DoD HPCMP Urgent Projects
frontier   | Frontier   | 168 Hours           | 16,000            | Designated Frontier projects by DoD HPCMP
high       | High       | 168 Hours           | 16,000            | Jobs belonging to DoD HPCMP High Priority Projects
debug      | Debug      | 30 Minutes          | 3,072             | User diagnostic jobs
standard   | Standard   | 168 Hours           | 8,000             | Standard jobs
phi        | N/A        | 168 Hours           | 2,376             | Phi-accelerated jobs
bigmem     | N/A        | 24 Hours            | 224               | Large-memory jobs
transfer   | N/A        | 48 Hours            | N/A               | Data transfer for user jobs
background | Background | 4 Hours             | 2,048             | Unrestricted access - no allocation charge

Copper is currently Up.

Copper is a Cray XE6m Open Research system located at the ERDC DSRC. It has 460 compute nodes (a total of 14,720 compute cores). It has 29 TBytes of memory and is rated at 135 peak TFLOPS.


Maintenance
Date / Time                  | Details
2019 Dec 19 08:00 - 17:00 CT | Archive Maintenance
Node Configuration
                       | Login Nodes        | Compute Nodes
Total Cores / Nodes    | 32 / 2             | 14,720 / 460
Operating System       | SLES               | Cray Linux Environment
Cores/Node             | 16                 | 32
Core Type              | AMD 64-bit Opteron | AMD Interlagos Opteron
Core Speed             | 2.7 GHz            | 2.3 GHz
Memory/Node            | 128 GBytes         | 64 GBytes
Accessible Memory/Node | 8 GBytes           | 60 GBytes
Memory Model           | Shared on node     | Shared on node; distributed across the cluster
Interconnect Type      | Ethernet           | Cray Gemini
Queue Descriptions and Limits (queues listed from highest to lowest priority)
Queue Name | Job Class  | Max Wall Clock Time | Max Cores Per Job | Comments
urgent     | Urgent     | 168 Hours           | 4,096             | Designated urgent jobs by DoD HPCMP
test       | N/A        | 48 Hours            | N/A               | Staff-only testing
debug      | Debug      | 1 Hour              | 512               | User testing
high       | High       | 168 Hours           | 4,096             | Designated high-priority jobs by Service/Agency
frontier   | Frontier   | 168 Hours           | 4,096             | Frontier projects only
standard   | Standard   | 168 Hours           | 4,096             | Normal priority jobs
transfer   | N/A        | 48 Hours            | 1                 | Data transfer for user jobs; access to the long-term storage
background | Background | 4 Hours             | 4,096             | Unrestricted access - no allocation charge; the core limit can be altered depending on the current system load

Excalibur is currently Up.

Excalibur is a Cray XC40 system located at the ARL DSRC. It has 3,098 standard compute nodes, 32 large-memory compute nodes, and 32 GPU compute nodes (a total of 3,162 compute nodes or 101,184 compute cores). It has 421 TBytes of memory and is rated at 3.77 peak PFLOPS.


Node Configuration
                       | Login Nodes           | Standard Memory        | Large Memory           | GPU Accelerated
Total Cores / Nodes    | 512 / 16              | 99,136 / 3,098         | 1,024 / 32             | 1,024 / 32
Operating System       | SLES                  | Cray Linux Environment (all compute nodes)
Cores/Node             | 32                    | 32                     | 32                     | 32 + 1 GPU (1 x 2,880 GPU cores)
Core Type              | Intel Xeon E5-2698 v3 (all node types); GPU nodes add an NVIDIA Tesla K40
Core Speed             | 2.3 GHz               | 2.3 GHz                | 2.3 GHz                | 2.5 GHz
Memory/Node            | 256 GBytes            | 128 GBytes             | 512 GBytes             | 256 GBytes + 12 GBytes (GPU)
Accessible Memory/Node | 2 GBytes              | 126 GBytes             | 508 GBytes             | 252 GBytes
Memory Model           | Shared on node; distributed across the cluster for compute nodes
Interconnect Type      | Ethernet / InfiniBand | Cray Aries / Dragonfly | Cray Aries / Dragonfly | Ethernet / InfiniBand
Queue Descriptions and Limits (queues listed from highest to lowest priority)
Queue Name    | Job Class  | Max Wall Clock Time | Max Cores Per Job | Comments
urgent        | Urgent     | 168 Hours           | N/A               | Designated urgent jobs by DoD HPCMP
high          | High       | 168 Hours           | N/A               | Designated high-priority projects by service/agency
frontier      | Frontier   | 168 Hours           | N/A               | Frontier projects only
standard-long | Standard   | 200 Hours           | 512               | Access available by request
debug         | Debug      | 1 Hour              | N/A               | User diagnostic jobs
standard      | Standard   | 168 Hours           | N/A               | Normal user jobs
transfer      | N/A        | 48 Hours            | 1                 | Data transfer jobs
background    | Background | 24 Hours            | 15,000            | User jobs that will not be charged against the project allocation

Gaffney is currently Up.

Gaffney is an HPE SGI 8600 system located at the Navy DSRC. It has 704 standard compute nodes, 16 large-memory nodes, and 32 GPU-accelerated nodes (a total of 752 compute nodes or 36,096 compute cores). It has 154 TBytes of memory and is rated at 3.05 peak PFLOPS.


Node Configuration
                       | Login Nodes | Standard Memory | Large Memory | GPU Accelerated
Total Cores / Nodes    | 384 / 8     | 33,792 / 704    | 768 / 16     | 1,536 / 32
Operating System       | RHEL (all node types)
Cores/Node             | 48          | 48              | 48           | 48 + 1 GPU (1 x 3,584 GPU cores)
Core Type              | Intel Xeon Platinum 8168 (all node types); GPU nodes add an NVIDIA Tesla P100
Core Speed             | 2.7 GHz (all node types)
Memory/Node            | 384 GBytes  | 192 GBytes      | 768 GBytes   | 384 GBytes + 16 GBytes (GPU)
Accessible Memory/Node | 380 GBytes  | 180 GBytes      | 744 GBytes   | 372 GBytes
Memory Model           | Shared on node; distributed across the cluster for compute nodes
Interconnect Type      | Intel Omni-Path (all node types)
Queue Descriptions and Limits (queues listed from highest to lowest priority)
Queue Name | Job Class  | Max Wall Clock Time | Max Cores Per Job | Comments
urgent     | Urgent     | 24 Hours            | 768               | Designated urgent projects by DoD HPCMP
frontier   | Frontier   | 168 Hours           | 19,200            | Designated frontier projects by DoD HPCMP
high       | High       | 168 Hours           | 15,840            | Designated high-priority jobs by Service/Agency
debug      | Debug      | 30 Minutes          | 2,400             | User diagnostic jobs
standard   | Standard   | 24 Hours            | 8,168             | Normal priority user jobs
gpu        | N/A        | 24 Hours            | 48                | GPU-accelerated jobs
transfer   | N/A        | 48 Hours            | N/A               | Data transfer jobs
bigmem     | N/A        | 96 Hours            | 768               | Large-memory jobs
background | Background | 4 Hours             | 1,200             | User jobs that will not be charged against the project allocation
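
Since each standard Gaffney node has 48 cores, a core request effectively translates into whole nodes. The short Python sketch below is illustrative only: it assumes whole-node allocation, which is common on HPCMP systems but not stated in the table, and checks the rounded request against the standard queue's 8,168-core limit.

    # Illustrative sketch: round a core request up to whole 48-core Gaffney nodes
    # and compare it with the standard queue's published per-job core limit.
    # Assumes whole-node allocation, which is not stated in the table above.
    import math

    CORES_PER_NODE = 48        # standard Gaffney compute node
    STANDARD_MAX_CORES = 8168  # 'standard' queue limit from the table above

    def nodes_for(cores_requested):
        """Smallest whole number of nodes that covers the requested cores."""
        return math.ceil(cores_requested / CORES_PER_NODE)

    cores = 5000
    nodes = nodes_for(cores)
    allocated = nodes * CORES_PER_NODE
    print(f"{cores} cores -> {nodes} nodes ({allocated} cores allocated)")
    print("fits the standard queue:", allocated <= STANDARD_MAX_CORES)
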
Gordon is currently Up.

Gordon is a Cray XC40 system located at the Navy DSRC. It has 1,523 standard compute nodes, 8 large-memory compute nodes, and 168 Xeon Phi compute nodes (a total of 1,699 compute nodes or 51,008 compute cores). It has 208 TBytes of memory and is rated at 2 peak PFLOPS.


Node Configuration
                       | Login Nodes | Standard Memory | Large Memory | Phi Accelerated
Total Cores / Nodes    | 192 / 6     | 48,736 / 1,523  | 256 / 8      | 2,016 / 168
Operating System       | SLES        | Cray Linux Environment (all compute nodes)
Cores/Node             | 32          | 32              | 32           | 12 + 1 Phi (1 x 60 Phi cores)
Core Type              | Intel Xeon E5-2698v3 (login, standard, and large-memory nodes); Intel Xeon E5-2696v2 + Intel Xeon Phi 5120D (Phi nodes)
Core Speed             | 2.3 GHz     | 2.3 GHz         | 2.3 GHz      | 2.4 GHz + 1.05 GHz (Phi)
Memory/Node            | 256 GBytes  | 128 GBytes      | 512 GBytes   | 64 GBytes + 8 GBytes (Phi)
Accessible Memory/Node | 240 GBytes  | 125 GBytes      | 510 GBytes   | 63 GBytes + 7.5 GBytes (Phi)
Memory Model           | Shared on node; distributed across the cluster for compute nodes
Interconnect Type      | Ethernet    | Cray Aries / Dragonfly (all compute nodes)
Queue Descriptions and Limits (queues listed from highest to lowest priority)
Queue Name | Job Class  | Max Wall Clock Time | Max Cores Per Job | Comments
urgent     | Urgent     | 24 Hours            | 768               | Jobs belonging to DoD HPCMP Urgent Projects
frontier   | Frontier   | 168 Hours           | 16,000            | Designated Frontier projects by DoD HPCMP
high       | High       | 168 Hours           | 16,000            | Jobs belonging to DoD HPCMP High Priority Projects
debug      | Debug      | 30 Minutes          | 3,072             | User diagnostic jobs
standard   | Standard   | 168 Hours           | 8,000             | Standard jobs
phi        | N/A        | 168 Hours           | 4,032             | Phi-accelerated jobs
bigmem     | N/A        | 24 Hours            | 224               | Large-memory jobs
transfer   | N/A        | 48 Hours            | N/A               | Data transfer for user jobs
background | Background | 4 Hours             | 768               | Unrestricted access - no allocation charge

Koehr is currently Up.

Koehr is an HPE SGI 8600 system located at the Navy DSRC. It has 704 standard compute nodes, 16 large-memory nodes, and 32 GPU-accelerated nodes (a total of 752 compute nodes or 36,096 compute cores). It has 154 TBytes of memory and is rated at 3.05 peak PFLOPS.


Node Configuration
                       | Login Nodes | Standard Memory | Large Memory | GPU Accelerated
Total Cores / Nodes    | 384 / 8     | 33,792 / 704    | 768 / 16     | 1,536 / 32
Operating System       | RHEL (all node types)
Cores/Node             | 48          | 48              | 48           | 48 + 1 GPU (1 x 3,584 GPU cores)
Core Type              | Intel Xeon Platinum 8168 (all node types); GPU nodes add an NVIDIA Tesla P100
Core Speed             | 2.7 GHz (all node types)
Memory/Node            | 384 GBytes  | 192 GBytes      | 768 GBytes   | 384 GBytes + 16 GBytes (GPU)
Accessible Memory/Node | 380 GBytes  | 180 GBytes      | 744 GBytes   | 372 GBytes
Memory Model           | Shared on node; distributed across the cluster for compute nodes
Interconnect Type      | Intel Omni-Path (all node types)
Queue Descriptions and Limits (queues listed from highest to lowest priority)
Queue Name | Job Class  | Max Wall Clock Time | Max Cores Per Job | Comments
urgent     | Urgent     | 24 Hours            | 768               | Designated urgent projects by DoD HPCMP
frontier   | Frontier   | 168 Hours           | 19,200            | Designated frontier projects by DoD HPCMP
high       | High       | 168 Hours           | 15,840            | Designated high-priority jobs by Service/Agency
debug      | Debug      | 30 Minutes          | 2,400             | User diagnostic jobs
standard   | Standard   | 24 Hours            | 8,168             | Normal priority user jobs
gpu        | N/A        | 24 Hours            | 48                | GPU-accelerated jobs
transfer   | N/A        | 48 Hours            | N/A               | Data transfer jobs
bigmem     | N/A        | 96 Hours            | 768               | Large-memory jobs
background | Background | 4 Hours             | 1,200             | User jobs that will not be charged against the project allocation

Mustang is currently running in a degraded state.

Mustang is an HPE SGI 8600 system located at the AFRL DSRC. It has 1,128 standard compute nodes, 24 large-memory compute nodes, and 24 GPU compute nodes (a total of 1,176 compute nodes or 56,448 compute cores). It has 244 TBytes of memory and is rated at 4.88 peak PFLOPS.


Status: Degraded

Jobs may be negatively affected.

Node Configuration
                       | Login Nodes | Standard Memory | Large Memory | GPU Accelerated
Total Cores / Nodes    | 576 / 12    | 54,144 / 1,128  | 1,152 / 24   | 1,152 / 24
Operating System       | RHEL (all node types)
Cores/Node             | 48          | 48              | 48           | 48 + 1 GPU (1 x 3,584 GPU cores)
Core Type              | Intel Xeon Platinum 8168 (all node types); GPU nodes add an NVIDIA Tesla P100
Core Speed             | 2.7 GHz (all node types)
Memory/Node            | 384 GBytes  | 192 GBytes      | 768 GBytes   | 384 GBytes + 16 GBytes (GPU)
Accessible Memory/Node | 380 GBytes  | 180 GBytes      | 744 GBytes   | 372 GBytes
Memory Model           | Shared on node; distributed across the cluster for compute nodes
Interconnect Type      | Intel Omni-Path (all node types)
Queue Descriptions and Limits (queues listed from highest to lowest priority)
Queue Name | Job Class  | Max Wall Clock Time | Max Cores Per Job | Comments
urgent     | Urgent     | 168 Hours           | 28,224            | Jobs belonging to DoD HPCMP Urgent Projects
debug      | Debug      | 1 Hour              | 1,152             | User testing
high       | High       | 168 Hours           | 28,224            | Jobs belonging to DoD HPCMP High Priority Projects
frontier   | Frontier   | 168 Hours           | 28,224            | Jobs belonging to DoD HPCMP Frontier Projects
standard   | Standard   | 168 Hours           | 28,224            | Standard jobs
HIE        | Debug      | 24 Hours            | 1                 | HPC Interactive Environment
transfer   | N/A        | 48 Hours            | 1                 | Data transfer for user jobs
background | Background | 120 Hours           | 48                | Unrestricted access - no allocation charge

Onyx is currently Up.

Onyx is a Cray XC40/50 system located at the ERDC DSRC. It has 4,810 standard compute nodes, 4 large-memory compute nodes, 32 KNL nodes, and 32 GPU compute nodes (a total of 4,878 compute nodes or 214,568 compute cores). It has 616 TBytes of memory and is rated at 6.06 peak PFLOPS.


Maintenance
Date / Time                  | Details
2019 Dec 19 08:00 - 17:00 CT | Archive Maintenance
Node Configuration
                       | Login Nodes | Standard Memory | Large Memory | KNL        | GPU Accelerated
Total Cores / Nodes    | 264 / 12    | 211,640 / 4,810 | 176 / 4      | 2,048 / 32 | 704 / 32
Operating System       | SLES        | Cray Linux Environment (all compute nodes)
Cores/Node             | 22          | 44              | 44           | 64         | 22 + 1 GPU (1 x 3,584 GPU cores)
Core Type              | Intel Xeon E5-2699v4 Broadwell (login, standard, and large-memory nodes); Intel Xeon Phi 7230 Knights Landing (KNL nodes); Intel Xeon E5-2699v4 Broadwell + NVIDIA Tesla P100 Pascal GPU (GPU nodes)
Core Speed             | 2.8 GHz     | 2.8 GHz         | 2.8 GHz      | 1.3 GHz    | 2.8 GHz
Memory/Node            | 256 GBytes  | 128 GBytes      | 1 TByte      | 96 GBytes  | 256 GBytes + 16 GBytes (GPU)
Accessible Memory/Node | 8 GBytes    | 122 GBytes      | 991 GBytes   | 90 GBytes  | 247 GBytes + 16 GBytes (GPU)
Memory Model           | Shared on node; distributed across the cluster for compute nodes
Interconnect Type      | Ethernet    | Cray Aries (all compute nodes)
Queue Descriptions and Limits (queues listed from highest to lowest priority)
Queue Name  | Job Class  | Max Wall Clock Time | Max Cores Per Job | Comments
urgent      | Urgent     | 24 Hours            | 4,312             | Designated urgent jobs by DoD HPCMP
test        | N/A        | 24 Hours            | N/A               | Staff-only testing
debug       | Debug      | 1 Hour              | 6,820             | User testing
high_lg     | High       | 24 Hours            | 62,876            | Designated high-priority jobs by Service/Agency (large jobs)
high_sm     | High       | 24 Hours            | 4,312             | Designated high-priority jobs by Service/Agency (small jobs)
high_lw     | High       | 168 Hours           | 3,432             | Designated high-priority jobs by Service/Agency (long walltime)
frontier_lg | Frontier   | 24 Hours            | 85,536            | Frontier projects only (large jobs)
frontier_sm | Frontier   | 48 Hours            | 4,312             | Frontier projects only (small jobs)
frontier_lw | Frontier   | 168 Hours           | 9,328             | Frontier projects only (long walltime)
standard_lg | Standard   | 24 Hours            | 62,876            | Normal priority jobs (large jobs)
standard_sm | Standard   | 24 Hours            | 4,312             | Normal priority jobs (small jobs)
standard_lw | Standard   | 168 Hours           | 3,432             | Normal priority jobs (long walltime)
transfer    | N/A        | 48 Hours            | 1                 | Data transfer jobs; access to the long-term storage
background  | Background | 4 Hours             | 4,312             | Unrestricted access - no allocation charge; the core limit can be altered depending on the current system load
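
Onyx splits its high, frontier, and standard queues into _sm (small), _lg (large), and _lw (long-walltime) variants with different limits. The Python sketch below is a hypothetical illustration of how one might pick among the standard_* variants using the limits from the table; the preference order is our own policy for the example, not ERDC guidance.

    # Illustrative sketch: choose an Onyx standard_* queue variant from the
    # published limits. The preference order (small, then large, then long
    # walltime) is a hypothetical policy, not an ERDC rule.
    from typing import Optional

    ONYX_STANDARD_QUEUES = {
        # name: (max wall clock hours, max cores per job), from the table above
        "standard_sm": (24, 4312),
        "standard_lg": (24, 62876),
        "standard_lw": (168, 3432),
    }

    def pick_standard_queue(hours, cores) -> Optional[str]:
        """Return the first standard_* queue whose limits cover the request."""
        for name in ("standard_sm", "standard_lg", "standard_lw"):
            max_hours, max_cores = ONYX_STANDARD_QUEUES[name]
            if hours <= max_hours and cores <= max_cores:
                return name
        return None  # no standard_* variant fits; a different queue is needed

    print(pick_standard_queue(12, 2000))    # standard_sm
    print(pick_standard_queue(12, 20000))   # standard_lg
    print(pick_standard_queue(100, 2000))   # standard_lw
    print(pick_standard_queue(100, 20000))  # None
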
Thunder is currently Up.

Thunder is an SGI ICE X system located at the AFRL DSRC. It has 3,216 standard compute nodes, 4 large-memory compute nodes, 178 Xeon Phi compute nodes, and 178 GPU compute nodes (a total of 3,576 compute nodes or 125,888 compute cores). It has 460 TBytes of memory and is rated at 5.62 peak PFLOPS.


Node Configuration
                       | Login Nodes       | Standard Memory | Large Memory | GPU Accelerated                  | Phi Accelerated
Total Cores / Nodes    | 448 / 16          | 115,776 / 3,216 | 144 / 4      | 4,984 / 178                      | 4,984 / 178
Operating System       | SLES (all node types)
Cores/Node             | 28                | 36              | 36           | 28 + 2 GPU (2 x 2,880 GPU cores) | 28 + 2 Phi (2 x 61 Phi cores)
Core Type              | Intel E5-2699v3 (login, standard, and large-memory nodes); Intel E5-2697v3 + NVIDIA Tesla K40M (GPU nodes); Intel E5-2697v3 + Intel Xeon Phi 7120P (Phi nodes)
Core Speed             | 2.3 GHz           | 2.3 GHz         | 2.3 GHz      | 2.6 GHz                          | 2.6 GHz
Memory/Node            | 128 GBytes        | 128 GBytes      | 768 GBytes   | 128 GBytes + 24 GBytes (GPU)     | 128 GBytes + 16 GBytes (Phi)
Accessible Memory/Node | 126 GBytes        | 126 GBytes      | 766 GBytes   | 126 GBytes                       | 126 GBytes + 15 GBytes (Phi)
Memory Model           | Shared on node; distributed across the cluster for compute nodes
Interconnect Type      | 4x FDR InfiniBand | 4x FDR InfiniBand; Enhanced LX Hypercube (all compute nodes)
Queue Descriptions and Limits (queues listed from highest to lowest priority)
Queue Name | Job Class  | Max Wall Clock Time | Max Cores Per Job | Comments
urgent     | Urgent     | 168 Hours           | 57,888            | Jobs belonging to DoD HPCMP Urgent Projects
debug      | Debug      | 1 Hour              | 2,340             | User testing
frontier   | Frontier   | 168 Hours           | 57,888            | Jobs belonging to the DoD HPCMP Frontier Projects
high       | High       | 168 Hours           | 57,888            | Jobs belonging to DoD HPCMP High Priority Projects
standard   | Standard   | 168 Hours           | 57,888            | Standard jobs
transfer   | N/A        | 48 Hours            | 1                 | Data transfer for user jobs
background | Background | 120 Hours           | 36                | Unrestricted access - no allocation charge