Unclassified Systems

Centennial is an SGI ICE XA system located at the ARL DSRC. It has 1,784 standard compute nodes, 32 large-memory compute nodes, and 32 GPU compute nodes (a total of 1,848 compute nodes or 73,920 compute cores). It has 253 TBytes of memory and is rated at 2.6 peak PFLOPS.
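The core count follows directly from the node counts, since every Centennial compute node type (standard, large-memory, and GPU) has 40 CPU cores. A minimal sketch of that arithmetic in Python, with the figures hard-coded from the description above (GPU cores are not included in the compute-core total):

```python
# Sanity-check Centennial's totals from the per-type node counts.
# Node counts and cores-per-node are taken from the system description;
# only CPU cores are counted (GPU cores are reported separately).
nodes = {"standard": 1_784, "large_memory": 32, "gpu": 32}
cores_per_node = 40

total_nodes = sum(nodes.values())
total_cores = total_nodes * cores_per_node

print(total_nodes)   # 1848 compute nodes
print(total_cores)   # 73920 compute cores
```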
| Date / Time | Details |
|---|---|
| 2021 Feb 16 08:00 - Feb 23 17:00 ET | Hardware Maintenance |
|  | Login Nodes | Standard Memory Compute Nodes | Large Memory Compute Nodes | GPU Accelerated Compute Nodes |
|---|---|---|---|---|
| Total Nodes | 24 | 1,784 | 32 | 32 |
| Total Cores | 960 | 71,360 | 1,280 | 1,280 |
| Operating System | RHEL | RHEL | RHEL | RHEL |
| Cores/Node | 40 | 40 | 40 | 40 + 1 GPU (1 x 2,880 GPU cores) |
| Core Type | Intel Xeon E5-2698v4 Broadwell | Intel Xeon E5-2698v4 Broadwell | Intel Xeon E5-2698v4 Broadwell | Intel Xeon E5-2698v4 Broadwell + NVIDIA Tesla K40P |
| Core Speed | 2.2 GHz | 2.2 GHz | 2.6 GHz | 2.2 GHz |
| Memory/Node | 256 GBytes | 128 GBytes | 512 GBytes | 256 GBytes |
| Accessible Memory/Node | 252 GBytes | 124 GBytes | 508 GBytes | 252 GBytes |
| Memory Model | Shared on node. | Shared on node. Distributed across cluster. | Shared on node. Distributed across cluster. | Shared on node. Distributed across cluster. |
| Interconnect Type | Ethernet / InfiniBand | Ethernet / InfiniBand | Ethernet / InfiniBand | Ethernet / InfiniBand |
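When sizing jobs it is the accessible memory per core, not the nominal memory per node, that usually matters. A short illustrative calculation using the accessible-memory row above (values hard-coded from the table; this is a sketch, not an official sizing rule):

```python
# Accessible memory per core for each Centennial compute node type,
# using the "Accessible Memory/Node" row of the table above.
node_types = {
    # name: (accessible GBytes per node, CPU cores per node)
    "standard":     (124, 40),
    "large_memory": (508, 40),
    "gpu":          (252, 40),
}

for name, (mem_gb, cores) in node_types.items():
    print(f"{name:>12}: {mem_gb / cores:.1f} GBytes per core")
# standard ~3.1, large_memory ~12.7, gpu ~6.3 GBytes per core
```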
| Priority | Queue Name | Max Wall Clock Time | Max Cores Per Job | Comments |
|---|---|---|---|---|
| Highest | debug | 1 Hour | N/A | User diagnostic jobs |
| | transfer | 48 Hours | 1 | Data transfer jobs |
| | urgent | 96 Hours | N/A | Designated urgent jobs by DoD HPCMP |
| | staff | 368 Hours | N/A | ARL DSRC staff testing only. System testing and user support |
| | high | 96 Hours | N/A | Designated high-priority projects by service/agency |
| | frontier | 168 Hours | N/A | Frontier projects only |
| | cots | 96 Hours | N/A | Abaqus, Fluent, and Cobalt jobs |
| | interactive | 12 Hours | N/A | Interactive jobs |
| | standard-long | 200 Hours | N/A | ARL DSRC permission required |
| | standard | 168 Hours | N/A | Normal user jobs |
| Lowest | background | 24 Hours | N/A | User jobs that will not be charged against the project allocation |
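The queue limits above are easy to encode as data if you want to pre-check a job request before submitting it. The sketch below is illustrative only: the queue names and wall-clock limits are copied from the table, it assumes you already know which queues your project may use, and it ignores any site-side policy checks.

```python
# Illustrative pre-check: which Centennial queues could accept a job with a
# given walltime (in hours)?  Limits are copied from the queue table above.
QUEUE_MAX_HOURS = {
    "debug": 1, "transfer": 48, "urgent": 96, "staff": 368, "high": 96,
    "frontier": 168, "cots": 96, "interactive": 12,
    "standard-long": 200, "standard": 168, "background": 24,
}

def candidate_queues(walltime_hours: float) -> list[str]:
    """Return the queues whose wall-clock limit covers the requested time."""
    return [q for q, limit in QUEUE_MAX_HOURS.items() if walltime_hours <= limit]

print(candidate_queues(100))  # ['staff', 'frontier', 'standard-long', 'standard']
```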

Excalibur is a Cray XC40 system located at the ARL DSRC. It has 3,098 standard compute nodes, 32 large-memory compute nodes, and 32 GPU compute nodes (a total of 3,162 compute nodes or 101,184 compute cores). It has 421 TBytes of memory and is rated at 3.77 peak PFLOPS.
|  | Login Nodes | Standard Memory Compute Nodes | Large Memory Compute Nodes | GPU Accelerated Compute Nodes |
|---|---|---|---|---|
| Total Nodes | 16 | 3,098 | 32 | 32 |
| Total Cores | 512 | 99,136 | 1,024 | 1,024 |
| Operating System | SLES | Cray Linux Environment | Cray Linux Environment | Cray Linux Environment |
| Cores/Node | 32 | 32 | 32 | 32 + 1 GPU (1 x 2,880 GPU cores) |
| Core Type | Intel Xeon E5-2698 v3 | Intel Xeon E5-2698 v3 | Intel Xeon E5-2698 v3 | Intel Xeon E5-2698 v3 + NVIDIA Tesla K40 |
| Core Speed | 2.3 GHz | 2.3 GHz | 2.3 GHz | 2.5 GHz |
| Memory/Node | 256 GBytes | 128 GBytes | 512 GBytes | 256 GBytes + 12 GBytes |
| Accessible Memory/Node | 2 GBytes | 126 GBytes | 508 GBytes | 252 GBytes |
| Memory Model | Shared on node. | Shared on node. Distributed across cluster. | Shared on node. Distributed across cluster. | Shared on node. Distributed across cluster. |
| Interconnect Type | Ethernet / InfiniBand | Cray Aries / Dragonfly | Cray Aries / Dragonfly | Ethernet / InfiniBand |
| Priority | Queue Name | Max Wall Clock Time | Max Cores Per Job | Comments |
|---|---|---|---|---|
| Highest | urgent | 168 Hours | N/A | Designated urgent jobs by DoD HPCMP |
| | high | 168 Hours | N/A | Designated high-priority projects by service/agency |
| | frontier | 168 Hours | N/A | Frontier projects only |
| | standard-long | 200 Hours | 512 | Access available by request |
| | debug | 1 Hour | N/A | User diagnostic jobs |
| | standard | 168 Hours | N/A | Normal user jobs |
| | transfer | 48 Hours | 1 | Data transfer jobs |
| Lowest | background | 24 Hours | 15,000 | User jobs that will not be charged against the project allocation |
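Because Excalibur's standard nodes have 32 cores each, a per-job core cap translates directly into a node cap if jobs are allocated whole nodes. A rough sketch of that conversion, using the caps from the queue table above (whole-node allocation is an assumption here):

```python
# Translate Excalibur's per-job core limits into whole-node limits,
# assuming whole-node allocation at 32 cores per standard node.
CORES_PER_NODE = 32

def max_whole_nodes(core_limit: int) -> int:
    """Largest number of complete nodes that fits under a core cap."""
    return core_limit // CORES_PER_NODE

print(max_whole_nodes(15_000))  # background queue -> 468 nodes
print(max_whole_nodes(512))     # standard-long queue -> 16 nodes
```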

Gaffney is an HPE SGI 8600 system located at the Navy DSRC. It has 704 standard compute nodes, 16 large-memory nodes, and 32 GPU-accelerated nodes (a total of 752 compute nodes or 36,096 compute cores). It has 154 TBytes of memory and is rated at 3.05 peak PFLOPS.
|  | Login Nodes | Standard Memory Compute Nodes | Large Memory Compute Nodes | GPU Accelerated Compute Nodes |
|---|---|---|---|---|
| Total Nodes | 8 | 704 | 16 | 32 |
| Total Cores | 384 | 33,792 | 768 | 1,536 |
| Operating System | RHEL | RHEL | RHEL | RHEL |
| Cores/Node | 48 | 48 | 48 | 48 + 1 GPU (1 x 3,584 GPU cores) |
| Core Type | Intel Xeon Platinum 8168 | Intel Xeon Platinum 8168 | Intel Xeon Platinum 8168 | Intel Xeon Platinum 8168 + NVIDIA Tesla P100 |
| Core Speed | 2.7 GHz | 2.7 GHz | 2.7 GHz | 2.7 GHz |
| Memory/Node | 384 GBytes | 192 GBytes | 768 GBytes | 384 GBytes + 16 GBytes |
| Accessible Memory/Node | 380 GBytes | 180 GBytes | 744 GBytes | 372 GBytes |
| Memory Model | Shared on node. | Shared on node. Distributed across cluster. | Shared on node. Distributed across cluster. | Shared on node. Distributed across cluster. |
| Interconnect Type | Intel Omni-Path | Intel Omni-Path | Intel Omni-Path | Intel Omni-Path |
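A common use of the memory figures above is deciding whether a job fits on standard nodes or needs the large-memory nodes (and hence the bigmem queue listed below). A hedged sketch, assuming one MPI rank per core and using the accessible-memory values from the table:

```python
# Decide which Gaffney node type a job's per-rank memory demand fits on,
# assuming one MPI rank per core and the accessible-memory values above.
NODE_TYPES = {          # name: (accessible GBytes per node, cores per node)
    "standard":     (180, 48),
    "large_memory": (744, 48),
}

def node_type_for(gb_per_rank):
    """Return the first node type whose per-core memory covers the demand."""
    for name, (mem, cores) in NODE_TYPES.items():
        if gb_per_rank <= mem / cores:
            return name
    return None  # does not fit even on large-memory nodes at full density

print(node_type_for(3.0))   # standard      (180/48 = 3.75 GBytes/core)
print(node_type_for(10.0))  # large_memory  (744/48 = 15.5 GBytes/core)
```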
| Priority | Queue Name | Max Wall Clock Time | Max Cores Per Job | Comments |
|---|---|---|---|---|
| Highest | urgent | 24 Hours | 768 | Designated urgent projects by DoD HPCMP |
| | frontier | 168 Hours | 14,400 | Designated frontier projects by DoD HPCMP |
| | high | 168 Hours | 15,840 | Designated high-priority jobs by Service/Agency |
| | debug | 30 Minutes | 2,400 | User diagnostic jobs |
| | standard | 168 Hours | 8,160 | Normal priority user jobs |
| | serial | 168 Hours | 1 | Serial user jobs |
| | gpu | 24 Hours | 48 | GPU-accelerated jobs |
| | transfer | 48 Hours | N/A | Data transfer jobs |
| | bigmem | 96 Hours | 288 | Large-memory jobs |
| Lowest | background | 4 Hours | 1,200 | User jobs that will not be charged against the project allocation |
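The queue caps also bound how much allocation a single job can consume. Assuming the project is charged in core-hours, a job's maximum charge is simply max cores x max wall-clock hours; the sketch below illustrates that upper bound for a few Gaffney queues (figures copied from the table, and note the background queue runs free of allocation charge):

```python
# Upper bound on the core-hours a single job could charge in a few Gaffney
# queues, using the caps from the queue table (max cores x max hours).
QUEUES = {          # name: (max cores per job, max wall clock hours)
    "standard":   (8_160, 168),
    "bigmem":     (288, 96),
    "debug":      (2_400, 0.5),
    "background": (1_200, 4),   # not charged against the allocation
}

for name, (cores, hours) in QUEUES.items():
    print(f"{name:>10}: up to {cores * hours:,.0f} core-hours per job")
```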

Koehr is an HPE SGI 8600 system located at the Navy DSRC. It has 704 standard compute nodes, 16 large-memory nodes, and 32 GPU-accelerated nodes (a total of 752 compute nodes or 36,096 compute cores). It has 154 TBytes of memory and is rated at 3.05 peak PFLOPS.
|  | Login Nodes | Standard Memory Compute Nodes | Large Memory Compute Nodes | GPU Accelerated Compute Nodes |
|---|---|---|---|---|
| Total Nodes | 8 | 704 | 16 | 32 |
| Total Cores | 384 | 33,792 | 768 | 1,536 |
| Operating System | RHEL | RHEL | RHEL | RHEL |
| Cores/Node | 48 | 48 | 48 | 48 + 1 GPU (1 x 3,584 GPU cores) |
| Core Type | Intel Xeon Platinum 8168 | Intel Xeon Platinum 8168 | Intel Xeon Platinum 8168 | Intel Xeon Platinum 8168 + NVIDIA Tesla P100 |
| Core Speed | 2.7 GHz | 2.7 GHz | 2.7 GHz | 2.7 GHz |
| Memory/Node | 384 GBytes | 192 GBytes | 768 GBytes | 384 GBytes + 16 GBytes |
| Accessible Memory/Node | 380 GBytes | 180 GBytes | 744 GBytes | 372 GBytes |
| Memory Model | Shared on node. | Shared on node. Distributed across cluster. | Shared on node. Distributed across cluster. | Shared on node. Distributed across cluster. |
| Interconnect Type | Intel Omni-Path | Intel Omni-Path | Intel Omni-Path | Intel Omni-Path |
| Priority | Queue Name | Max Wall Clock Time | Max Cores Per Job | Comments |
|---|---|---|---|---|
| Highest | urgent | 24 Hours | 768 | Designated urgent projects by DoD HPCMP |
| | frontier | 168 Hours | 14,400 | Designated frontier projects by DoD HPCMP |
| | high | 168 Hours | 15,840 | Designated high-priority jobs by Service/Agency |
| | debug | 30 Minutes | 2,400 | User diagnostic jobs |
| | standard | 168 Hours | 8,160 | Normal priority user jobs |
| | serial | 168 Hours | 1 | Serial user jobs |
| | gpu | 24 Hours | 48 | GPU-accelerated jobs |
| | transfer | 48 Hours | N/A | Data transfer jobs |
| | bigmem | 96 Hours | 288 | Large-memory jobs |
| Lowest | background | 4 Hours | 1,200 | User jobs that will not be charged against the project allocation |

Mustang is an HPE SGI 8600 system located at the AFRL DSRC. It has 1,128 standard compute nodes, 24 large-memory compute nodes, and 24 GPU compute nodes (a total of 1,176 compute nodes or 56,448 compute cores). It has 244 TBytes of memory and is rated at 4.88 peak PFLOPS.
|  | Login Nodes | Standard Memory Compute Nodes | Large Memory Compute Nodes | GPU Accelerated Compute Nodes |
|---|---|---|---|---|
| Total Nodes | 12 | 1,128 | 24 | 24 |
| Total Cores | 576 | 54,144 | 1,152 | 1,152 |
| Operating System | RHEL | RHEL | RHEL | RHEL |
| Cores/Node | 48 | 48 | 48 | 48 + 1 GPU (1 x 3,584 GPU cores) |
| Core Type | Intel Xeon Platinum 8168 | Intel Xeon Platinum 8168 | Intel Xeon Platinum 8168 | Intel Xeon Platinum 8168 + NVIDIA Tesla P100 |
| Core Speed | 2.7 GHz | 2.7 GHz | 2.7 GHz | 2.7 GHz |
| Memory/Node | 384 GBytes | 192 GBytes | 768 GBytes | 384 GBytes + 16 GBytes |
| Accessible Memory/Node | 380 GBytes | 180 GBytes | 744 GBytes | 372 GBytes |
| Memory Model | Shared on node. | Shared on node. Distributed across cluster. | Shared on node. Distributed across cluster. | Shared on node. Distributed across cluster. |
| Interconnect Type | Intel Omni-Path | Intel Omni-Path | Intel Omni-Path | Intel Omni-Path |
| Priority | Queue Name | Max Wall Clock Time | Max Cores Per Job | Comments |
|---|---|---|---|---|
| Highest | urgent | 168 Hours | 28,224 | Jobs belonging to DoD HPCMP Urgent Projects |
| | debug | 1 Hour | 1,152 | User testing |
| | high | 168 Hours | 28,224 | Jobs belonging to DoD HPCMP High Priority Projects |
| | frontier | 168 Hours | 28,224 | Jobs belonging to DoD HPCMP Frontier Projects |
| | standard | 168 Hours | 28,224 | Standard jobs |
| | HIE | 24 Hours | 96 | Rapid response for interactive work |
| | transfer | 48 Hours | 1 | Data transfer for user jobs |
| Lowest | background | 120 Hours | 48 | Unrestricted access - no allocation charge |
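Mustang's 28,224-core cap corresponds to exactly half of the machine's 56,448 compute cores, or 588 of its 48-core nodes. A small sketch of that conversion, with figures taken from the tables above and whole-node allocation assumed:

```python
# Convert Mustang's per-job core caps into node counts and machine fraction,
# assuming whole-node allocation at 48 cores per node.
CORES_PER_NODE = 48
TOTAL_COMPUTE_CORES = 56_448   # 1,176 compute nodes x 48 cores

for cap in (28_224, 1_152, 96, 48):
    nodes = cap // CORES_PER_NODE
    share = cap / TOTAL_COMPUTE_CORES
    print(f"{cap:>6,} cores = {nodes:>3} nodes ({share:.1%} of the system)")
# 28,224 cores = 588 nodes (50.0%), 1,152 = 24 nodes, 96 = 2 nodes, 48 = 1 node
```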

Onyx is a Cray XC40/50 system located at the ERDC DSRC. It has 4,810 standard compute nodes, 4 large-memory compute nodes, 32 GPU compute nodes, 32 Knights Landing (Phi) compute nodes, and 64 Machine Learning Accelerator (MLA) multi-GPGPU nodes (a total of 4,942 compute nodes or 217,128 compute cores). It is rated at 6.06 peak PFLOPS.
|  | Login Nodes | Standard Memory | Large Memory | KNL | GPU Accelerated | MLA_2 Accelerated | MLA_10 Accelerated |
|---|---|---|---|---|---|---|---|
| Total Nodes | 12 | 4,810 | 4 | 32 | 32 | 60 | 4 |
| Total Cores | 264 | 211,640 | 176 | 2,048 | 704 | 2,400 | 160 |
| Operating System | SLES | Cray Linux Environment | Cray Linux Environment | Cray Linux Environment | Cray Linux Environment | CentOS | CentOS |
| Cores/Node | 22 | 44 | 44 | 64 | 22 + 1 GPU (1 x 3,584 CUDA cores) | 40 + 2 GPUs (2 x 5,120 CUDA cores) | 40 + 10 GPUs (10 x 5,120 CUDA cores) |
| Core Type | Dual Intel E5-2699v4 Broadwell (22 cores/socket) | Dual Intel E5-2699v4 Broadwell (22 cores/socket) | Dual Intel E5-2699v4 Broadwell (22 cores/socket) | Intel Phi 7230 Knights Landing | Intel E5-2699v4 Broadwell + NVIDIA P100 | Dual Intel 6148 Skylake (20 cores/socket) + NVIDIA V100 SXM2 | Dual Intel 6148 Skylake (20 cores/socket) + NVIDIA V100 PCIe |
| Core Speed | 2.8 GHz | 2.8 GHz | 2.8 GHz | 1.3 GHz | 2.8 GHz | 2.4 GHz | 2.4 GHz |
| Memory/Node | 256 GBytes | 128 GBytes | 1 TByte | 96 GBytes | 256 GBytes + 16 GBytes | 192 GBytes + 2 x 16 GBytes | 768 GBytes + 10 x 32 GBytes |
| Accessible Memory/Node | 8 GBytes | 121 GBytes | 991 GBytes | 90 GBytes | 247 GBytes + 16 GBytes | 172 GBytes + 2 x 16 GBytes | 735 GBytes + 10 x 32 GBytes |
| Memory Model | Shared on node. | Shared on node. Distributed across cluster. | Shared on node. Distributed across cluster. | Shared on node. Distributed across cluster. | Shared on node. Distributed across cluster. | Shared on node. Distributed across cluster. | Shared on node. Distributed across cluster. |
| Interconnect Type | Ethernet | Cray Aries | Cray Aries | Cray Aries | Cray Aries | InfiniBand | InfiniBand |
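The accelerator columns above describe three different GPU node flavours; a quick tally of the per-node figures from the table (a sketch, counting GPUs and their CUDA cores only) shows what they add up to across the system:

```python
# Tally Onyx's GPUs from the per-node figures in the table above.
ACCEL_NODES = {     # name: (node count, GPUs per node, CUDA cores per GPU)
    "gpu (P100)":         (32, 1, 3_584),
    "mla_2 (V100 SXM2)":  (60, 2, 5_120),
    "mla_10 (V100 PCIe)": (4, 10, 5_120),
}

total_gpus = sum(n * g for n, g, _ in ACCEL_NODES.values())
total_cuda = sum(n * g * c for n, g, c in ACCEL_NODES.values())
print(total_gpus)           # 192 GPUs
print(f"{total_cuda:,}")    # 933,888 CUDA cores
```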
| Priority | Queue Name | Max Wall Clock Time | Max Jobs | Min Cores Per Job | Max Cores Per Job | Comments |
|---|---|---|---|---|---|---|
| Highest | urgent | 24 Hours | N/A | 22 | 7,260 | Designated urgent jobs by DoD HPCMP |
| | test | 24 Hours | N/A | 22 | N/A | Staff-only testing |
| | debug | 1 Hour | 4 | 22 | 11,484 | User testing |
| | HIE | 24 Hours | 2 | 22 | 110 | Rapid response for interactive work |
| | frontier_lg | 24 Hours | 2 | 7,261 | 143,968 | Frontier projects only (large jobs) |
| | frontier_lw | 168 Hours | 15 | 22 | 15,708 | Frontier projects only (long walltime) |
| | frontier_sm | 48 Hours | 70 | 22 | 7,260 | Frontier projects only (small jobs) |
| | high_lg | 24 Hours | 2 | 8,449 | 105,820 | Designated high-priority jobs by Service/Agency (large jobs) |
| | high_lw | 168 Hours | 15 | 22 | 10,824 | Designated high-priority jobs by Service/Agency (long walltime) |
| | high_sm | 24 Hours | 70 | 22 | 8,448 | Designated high-priority jobs by Service/Agency (small jobs) |
| | frontier_md | 96 Hours | 2 | 15,709 | 34,540 | Frontier projects only (medium sized, long walltime) |
| | standard_lg | 24 Hours | 2 | 7,261 | 105,820 | Normal priority jobs (large jobs) |
| | standard_lw | 168 Hours | 3 | 22 | 5,808 | Normal priority jobs (long walltime) |
| | standard_sm | 24 Hours | 70 | 22 | 7,260 | Normal priority jobs (small jobs) |
| | transfer | 48 Hours | 6 | 1 | 1 | Data transfer jobs. Access to the long-term storage |
| Lowest | background | 4 Hours | 6 | 22 | 7,260 | Unrestricted access - no allocation charge |
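Unlike the other systems, Onyx splits each priority class into small/large/long-walltime sub-queues with both minimum and maximum core counts, so a job has to match a core range as well as a wall-clock limit. A hedged illustration of that matching for the standard family only (limits copied from the table; real eligibility also depends on project type, the Max Jobs caps, and site policy):

```python
# Pick which Onyx "standard" sub-queues a job's size and walltime fall into.
# Limits are copied from the queue table above; Max Jobs caps are ignored.
STANDARD_QUEUES = {   # name: (min cores, max cores, max wall clock hours)
    "standard_sm": (22, 7_260, 24),
    "standard_lg": (7_261, 105_820, 24),
    "standard_lw": (22, 5_808, 168),
}

def matching_queues(cores: int, hours: float) -> list[str]:
    return [
        name
        for name, (lo, hi, max_h) in STANDARD_QUEUES.items()
        if lo <= cores <= hi and hours <= max_h
    ]

print(matching_queues(4_400, 12))    # ['standard_sm', 'standard_lw']
print(matching_queues(10_000, 20))   # ['standard_lg']
print(matching_queues(4_400, 100))   # ['standard_lw']
```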

SCOUT is an IBM Power9 system located at the ARL DSRC. It has 22 Training nodes, each with 6 NVIDIA V100 GPUs; 128 Inference nodes, each with 4 NVIDIA T4 GPUs; and 2 Visualization nodes, each with 4 NVIDIA V100 GPUs (a total of 152 compute nodes or 6,080 cores). It has 45 TBytes of memory.
|  | Login Nodes | Training Nodes (GPU Accelerated) | Inference Nodes (GPU Accelerated) | Visualization Nodes (GPU Accelerated) |
|---|---|---|---|---|
| Total Nodes | 4 | 22 | 128 | 2 |
| Total Cores | 160 | 880 | 5,120 | 80 |
| Operating System | RHEL | RHEL | RHEL | RHEL |
| Cores/Node | 40 | 40 + 6 GPUs (6 x 5,120 CUDA cores, 6 x 640 Tensor cores) | 40 + 4 GPUs (4 x 2,560 CUDA cores, 4 x 320 Tensor cores) | 40 + 2 GPUs (2 x 5,120 CUDA cores, 2 x 640 Tensor cores) |
| Core Type | IBM Power9 | IBM Power9 + NVIDIA Volta V100 | IBM Power9 + NVIDIA T4 | IBM Power9 + NVIDIA Volta V100 |
| Core Speed | 2.55 GHz | 2.55 GHz | 2.55 GHz | 2.55 GHz |
| Memory/Node | 512 GBytes | 700 GBytes | 256 GBytes | 512 GBytes |
| Accessible Memory/Node | 502 GBytes | 690 GBytes | 246 GBytes | 502 GBytes |
| Memory Model | Shared on node. | Shared on node. Distributed across cluster. | Shared on node. Distributed across cluster. | Shared on node. Distributed across cluster. |
| Interconnect Type | Ethernet / InfiniBand | Ethernet / InfiniBand | Ethernet / InfiniBand | Ethernet / InfiniBand |
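The three SCOUT node types trade GPU count and GPU type against memory, which matters when deciding where a machine-learning workload should run. A small sketch comparing aggregate GPU resources per node, using only the per-node figures from the table above:

```python
# Aggregate GPU resources per SCOUT node type, from the table above.
NODE_TYPES = {   # name: (GPUs per node, CUDA cores per GPU, Tensor cores per GPU)
    "training (V100)":      (6, 5_120, 640),
    "inference (T4)":       (4, 2_560, 320),
    "visualization (V100)": (2, 5_120, 640),
}

for name, (gpus, cuda, tensor) in NODE_TYPES.items():
    print(f"{name:>20}: {gpus * cuda:>6} CUDA cores, "
          f"{gpus * tensor:>5} Tensor cores per node")
# training 30720/3840, inference 10240/1280, visualization 10240/1280
```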
| Priority | Queue Name | Max Wall Clock Time | Max Cores Per Job | Comments |
|---|---|---|---|---|
| Highest | transfer | 48 Hours | N/A | Data transfer jobs |
| | urgent | 96 Hours | N/A | Designated urgent jobs by DoD HPCMP |
| | debug | 1 Hour | N/A | User diagnostic jobs |
| | high | 168 Hours | N/A | Designated high-priority projects by service/agency |
| | frontier | 168 Hours | N/A | Frontier projects only |
| | cots | 96 Hours | N/A | Abaqus and Fluent jobs |
| | HIE | 24 Hours | N/A | Rapid response for interactive work |
| | interactive | 12 Hours | N/A | Interactive jobs |
| | standard | 168 Hours | N/A | Normal user jobs |
| Lowest | background | 24 Hours | N/A | User jobs that will not be charged against the project allocation |