Unclassified Systems

Gaffney is an HPE SGI 8600 system located at the Navy DSRC. It has 704 standard compute nodes, 16 large-memory nodes, and 32 GPU-accelerated nodes (a total of 752 compute nodes or 36,096 compute cores). It has 154 TB of memory and is rated at 3.05 peak PFLOPS.
| Date / Time | Details |
|---|---|
| 2023 Sep 26 13:00 - 17:00 CT (Completed) | Network Maintenance |
| | Login | Standard | Large-Memory | GPU |
|---|---|---|---|---|
| Total Nodes | 8 | 704 | 16 | 32 |
| Processor | Intel 8168 Skylake | Intel 8168 Skylake | Intel 8168 Skylake | Intel 8168 Skylake |
| Processor Speed | 2.7 GHz | 2.7 GHz | 2.7 GHz | 2.7 GHz |
| Sockets / Node | 2 | 2 | 2 | 2 |
| Cores / Node | 48 | 48 | 48 | 48 |
| Total CPU Cores | 384 | 33,792 | 768 | 1,536 |
| Usable Memory / Node | 320 GB | 170 GB | 742 GB | 367 GB |
| Accelerators / Node | None | None | None | 1 |
| Accelerator | n/a | n/a | n/a | NVIDIA P100 PCIe 3 |
| Memory / Accelerator | n/a | n/a | n/a | 16 GB |
| Storage on Node | None | None | 3.2 TB SSD | None |
| Interconnect | Intel Omni-Path | Intel Omni-Path | Intel Omni-Path | Intel Omni-Path |
| Operating System | RHEL | RHEL | RHEL | RHEL |
| Priority | Queue Name | Max Wall Clock Time | Max Cores Per Job | Description |
|---|---|---|---|---|
| Highest | urgent | 24 Hours | 768 | Designated urgent projects by DoD HPCMP |
| | frontier | 168 Hours | 14,400 | Designated frontier projects by DoD HPCMP |
| | high | 168 Hours | 15,840 | Designated high-priority jobs by Service/Agency |
| | debug | 30 Minutes | 2,400 | User diagnostic jobs |
| | HIE | 24 Hours | 384 | Rapid response for interactive work. For more information see the HPC Interactive Environment (HIE) User Guide. |
| | standard | 168 Hours | 8,160 | Normal priority user jobs |
| | serial | 168 Hours | 1 | Serial user jobs |
| | gpu | 24 Hours | 384 | GPU-accelerated jobs |
| | transfer | 48 Hours | N/A | Data transfer for user jobs. See the Navy DSRC Archive Guide, section 5.2. |
| | bigmem | 96 Hours | 720 | Large-memory jobs |
| Lowest | background | 4 Hours | 1,200 | User jobs that will not be charged against the project allocation |
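As a concrete illustration, a job for Gaffney's standard queue might look like the batch script below. This is a sketch only: PBS Pro directives are assumed, `Project_ID` stands in for your HPCMP project allocation code, and `my_app` is a placeholder executable.

```shell
#!/bin/bash
## Sketch of a standard-queue job on Gaffney (PBS Pro assumed).
## Project_ID and my_app are placeholders, not real names.
#PBS -A Project_ID
#PBS -q standard
#PBS -l select=2:ncpus=48:mpiprocs=48   # two 48-core standard nodes = 96 MPI ranks
#PBS -l walltime=24:00:00               # well under the queue's 168-hour cap
#PBS -j oe

cd $PBS_O_WORKDIR
mpiexec_mpt -n 96 ./my_app              # HPE MPT launcher, commonly used on SGI 8600 systems
```

Note that the 8,160-core queue limit corresponds to 170 standard nodes; larger runs require the frontier or high queues.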

Koehr is an HPE SGI 8600 system located at the Navy DSRC. It has 704 standard compute nodes, 16 large-memory nodes, and 32 GPU-accelerated nodes (a total of 752 compute nodes or 36,096 compute cores). It has 154 TB of memory and is rated at 3.05 peak PFLOPS.
| Date / Time | Details |
|---|---|
| 2023 Sep 26 13:00 - 17:00 CT (Completed) | Network Maintenance |
| | Login | Standard | Large-Memory | GPU |
|---|---|---|---|---|
| Total Nodes | 8 | 704 | 16 | 32 |
| Processor | Intel 8168 Skylake | Intel 8168 Skylake | Intel 8168 Skylake | Intel 8168 Skylake |
| Processor Speed | 2.7 GHz | 2.7 GHz | 2.7 GHz | 2.7 GHz |
| Sockets / Node | 2 | 2 | 2 | 2 |
| Cores / Node | 48 | 48 | 48 | 48 |
| Total CPU Cores | 384 | 33,792 | 768 | 1,536 |
| Usable Memory / Node | 320 GB | 170 GB | 742 GB | 367 GB |
| Accelerators / Node | None | None | None | 1 |
| Accelerator | n/a | n/a | n/a | NVIDIA P100 PCIe 3 |
| Memory / Accelerator | n/a | n/a | n/a | 16 GB |
| Storage on Node | None | None | 3.2 TB SSD | None |
| Interconnect | Intel Omni-Path | Intel Omni-Path | Intel Omni-Path | Intel Omni-Path |
| Operating System | RHEL | RHEL | RHEL | RHEL |
| Priority | Queue Name | Max Wall Clock Time | Max Cores Per Job | Description |
|---|---|---|---|---|
| Highest | urgent | 24 Hours | 768 | Designated urgent projects by DoD HPCMP |
| | frontier | 168 Hours | 14,400 | Designated frontier projects by DoD HPCMP |
| | high | 168 Hours | 14,304 | Designated high-priority jobs by Service/Agency |
| | debug | 30 Minutes | 2,400 | User diagnostic jobs |
| | HIE | 24 Hours | 384 | Rapid response for interactive work. For more information see the HPC Interactive Environment (HIE) User Guide. |
| | standard | 168 Hours | 8,160 | Normal priority user jobs |
| | serial | 168 Hours | 1 | Serial user jobs |
| | gpu | 24 Hours | 384 | GPU-accelerated jobs |
| | transfer | 48 Hours | N/A | Data transfer for user jobs. See the Navy DSRC Archive Guide, section 5.2. |
| | bigmem | 96 Hours | 720 | Large-memory jobs |
| Lowest | background | 4 Hours | 1,200 | User jobs that will not be charged against the project allocation |
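For work that needs one of Koehr's 742 GB large-memory nodes, the bigmem queue is the intended path. A sketch follows; PBS Pro is assumed, `Project_ID` is a placeholder, and the node-type chunk selector shown is an assumption (the real selector name is site-specific and documented in the Koehr user guide).

```shell
#!/bin/bash
## Sketch of a bigmem-queue job on Koehr (PBS Pro assumed).
## The 'bigmem=1' chunk selector is a guess at the site's node-type
## resource name; check the system user guide for the actual selector.
#PBS -A Project_ID
#PBS -q bigmem
#PBS -l select=1:ncpus=48:bigmem=1      # one 48-core, 742 GB large-memory node
#PBS -l walltime=96:00:00               # bigmem queue maximum

cd $PBS_O_WORKDIR
./my_memory_hungry_app                  # placeholder serial or threaded executable
```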

Mustang is an HPE SGI 8600 system located at the AFRL DSRC. It has 1,128 standard compute nodes, 24 large-memory compute nodes, and 24 GPU compute nodes (a total of 1,176 compute nodes or 56,448 compute cores). It has 244 TB of memory and is rated at 4.88 peak PFLOPS.
| | Login | Standard | Large-Memory | GPU |
|---|---|---|---|---|
| Total Nodes | 12 | 1,128 | 24 | 24 |
| Processor | Intel 8168 Skylake | Intel 8168 Skylake | Intel 8168 Skylake | Intel 8168 Skylake |
| Processor Speed | 2.7 GHz | 2.7 GHz | 2.7 GHz | 2.7 GHz |
| Sockets / Node | 2 | 2 | 2 | 2 |
| Cores / Node | 48 | 48 | 48 | 48 |
| Total CPU Cores | 576 | 54,144 | 1,152 | 1,152 |
| Usable Memory / Node | 380 GB | 180 GB | 744 GB | 372 GB |
| Accelerators / Node | None | None | None | 1 |
| Accelerator | n/a | n/a | n/a | NVIDIA P100 PCIe 3 |
| Memory / Accelerator | n/a | n/a | n/a | 16 GB |
| Storage on Node | None | None | None | None |
| Interconnect | Intel Omni-Path | Intel Omni-Path | Intel Omni-Path | Intel Omni-Path |
| Operating System | RHEL | RHEL | RHEL | RHEL |
| Priority | Queue Name | Max Wall Clock Time | Max Cores Per Job | Description |
|---|---|---|---|---|
| Highest | urgent | 168 Hours | 28,224 | Jobs belonging to DoD HPCMP Urgent Projects |
| | debug | 1 Hour | 1,152 | User testing |
| | high | 168 Hours | 28,224 | Jobs belonging to DoD HPCMP High Priority Projects |
| | frontier | 168 Hours | 28,224 | Jobs belonging to DoD HPCMP Frontier Projects |
| | standard | 168 Hours | 28,224 | Standard jobs |
| | HIE | 24 Hours | 96 | Rapid response for interactive work. For more information see the HPC Interactive Environment (HIE) User Guide. |
| | transfer | 48 Hours | 1 | Data transfer for user jobs. See the AFRL DSRC Archive Guide, section 5.2. |
| Lowest | background | 120 Hours | 48 | Unrestricted access - no allocation charge |
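Because Mustang's transfer queue is capped at a single core, it is meant for serial archive operations rather than computation. The sketch below assumes PBS directives and uses the HPCMP `archive` utility described in the Archive Guide; `Project_ID` and the working directory are placeholders.

```shell
#!/bin/bash
## Sketch of a transfer-queue job on Mustang for archiving results (PBS assumed).
## Project_ID and my_case are placeholders.
#PBS -A Project_ID
#PBS -q transfer
#PBS -l select=1:ncpus=1                # transfer queue allows only 1 core per job
#PBS -l walltime=12:00:00

cd ${WORKDIR}/my_case
tar -cf results.tar output/             # bundle many small files before archiving
archive put results.tar                 # HPCMP archive utility; see the Archive Guide
```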

Narwhal is an HPE Cray EX system located at the Navy DSRC. It has 2,304 standard compute nodes, 26 large-memory nodes, 16 GPU-accelerated visualization nodes, 32 single-GPU Machine Learning Accelerated (MLA) nodes, and 32 dual-GPU MLA nodes (a total of 2,410 compute nodes or 308,480 compute cores). It has 640 TB of memory and is rated at 13.5 peak PFLOPS.
| Date / Time | Details |
|---|---|
| 2023 Sep 26 13:00 - 17:00 CT (Completed) | Network Maintenance |
| 2023 Oct 02 08:00 - Oct 05 14:00 CT (In Progress) | System Maintenance |
| | Login | Standard | Large-Memory | Visualization | MLA 1-GPU | MLA 2-GPU |
|---|---|---|---|---|---|---|
| Total Nodes | 11 | 2,304 | 26 | 16 | 32 | 32 |
| Processor | AMD 7H12 Rome | AMD 7H12 Rome | AMD 7H12 Rome | AMD 7H12 Rome | AMD 7H12 Rome | AMD 7H12 Rome |
| Processor Speed | 2.6 GHz | 2.6 GHz | 2.6 GHz | 2.6 GHz | 2.6 GHz | 2.6 GHz |
| Sockets / Node | 2 | 2 | 2 | 2 | 2 | 2 |
| Cores / Node | 128 | 128 | 128 | 128 | 128 | 128 |
| Total CPU Cores | 1,408 | 294,912 | 3,328 | 2,048 | 4,096 | 4,096 |
| Usable Memory / Node | 226 GB | 238 GB | 995 GB | 234 GB | 239 GB | 239 GB |
| Accelerators / Node | None | None | None | 1 | 1 | 2 |
| Accelerator | n/a | n/a | n/a | NVIDIA V100 PCIe 3 | NVIDIA V100 PCIe 3 | NVIDIA V100 PCIe 3 |
| Memory / Accelerator | n/a | n/a | n/a | 32 GB | 32 GB | 32 GB |
| Storage on Node | 880 GB SSD | None | 1.8 TB SSD | None | 880 GB SSD | 880 GB SSD |
| Interconnect | HPE Slingshot | HPE Slingshot | HPE Slingshot | HPE Slingshot | HPE Slingshot | HPE Slingshot |
| Operating System | SLES | SLES | SLES | SLES | SLES | SLES |
| Priority | Queue Name | Max Wall Clock Time | Max Cores Per Job | Description |
|---|---|---|---|---|
| Highest | urgent | 24 Hours | 16,384 | Jobs belonging to DoD HPCMP Urgent Projects |
| | frontier | 168 Hours | 65,536 | Jobs belonging to DoD HPCMP Frontier Projects |
| | high | 168 Hours | 32,768 | Jobs belonging to DoD HPCMP High Priority Projects |
| | debug | 30 Minutes | 8,192 | Time/resource-limited for user testing and debug purposes |
| | HIE | 24 Hours | 3,072 | Rapid response for interactive work. For more information see the HPC Interactive Environment (HIE) User Guide. |
| | viz | 24 Hours | 128 | Visualization jobs |
| | standard | 168 Hours | 32,768 | Standard jobs |
| | mla | 24 Hours | 128 | Machine Learning Accelerated jobs |
| | smla | 24 Hours | 128 | Machine Learning Accelerated jobs |
| | dmla | 24 Hours | 128 | Machine Learning Accelerated jobs |
| | serial | 168 Hours | 1 | Serial user jobs |
| | bigmem | 96 Hours | 1,280 | Large-memory jobs |
| | transfer | 48 Hours | N/A | Data transfer for user jobs. See the Navy DSRC Archive Guide, section 5.2. |
| Lowest | background | 4 Hours | 1,024 | User jobs that are not charged against the project allocation |
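A job targeting Narwhal's MLA nodes might be submitted to the mla queue as sketched below. PBS Pro is assumed; `ngpus` is a standard PBS Pro chunk resource, though its exact name on Narwhal is site-configurable. `Project_ID` and `train.py` are placeholders.

```shell
#!/bin/bash
## Sketch of an mla-queue job on Narwhal requesting one V100 node (PBS Pro assumed).
## Project_ID and train.py are placeholders; confirm the GPU resource name locally.
#PBS -A Project_ID
#PBS -q mla
#PBS -l select=1:ncpus=128:ngpus=1      # one 128-core node with a single V100
#PBS -l walltime=24:00:00               # mla queue maximum

cd $PBS_O_WORKDIR
python train.py                         # placeholder ML workload
```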

Nautilus is a Penguin TrueHPC system located at the Navy DSRC. It has 1,304 standard compute nodes, 16 large-memory nodes, 16 visualization accelerated nodes, 32 AI/ML nodes, and 32 High Core Performance nodes (a total of 1,400 compute nodes or 176,128 compute cores). It has 364 TB of memory and is rated at 8.2 peak PFLOPS.
| Date / Time | Details |
|---|---|
| 2023 Sep 26 13:00 - 17:00 CT (Completed) | Network Maintenance |
| | Login | Standard | Large-Memory | Visualization | AI/ML | High Core Performance |
|---|---|---|---|---|---|---|
| Total Nodes | 14 | 1,304 | 16 | 16 | 32 | 32 |
| Processor | AMD 7713 Milan | AMD 7713 Milan | AMD 7713 Milan | AMD 7713 Milan | AMD 7713 Milan | AMD 73F3 Milan |
| Processor Speed | 2 GHz | 2 GHz | 2 GHz | 2 GHz | 2 GHz | 3.4 GHz |
| Sockets / Node | 2 | 2 | 2 | 2 | 2 | 2 |
| Cores / Node | 128 | 128 | 128 | 128 | 128 | 32 |
| Total CPU Cores | 1,792 | 166,912 | 2,048 | 2,048 | 4,096 | 1,024 |
| Usable Memory / Node | 433 GB | 237 GB | 998 GB | 491 GB | 491 GB | 491 GB |
| Accelerators / Node | None | None | None | 1 | 4 | None |
| Accelerator | n/a | n/a | n/a | NVIDIA A40 PCIe 4 | NVIDIA A100 PCIe 4 | n/a |
| Memory / Accelerator | n/a | n/a | n/a | 48 GB | 160 GB | n/a |
| Storage on Node | 1.92 TB NVMe SSD | None | 1.92 TB NVMe SSD | None | 1.92 TB NVMe SSD | None |
| Interconnect | HDR InfiniBand | HDR InfiniBand | HDR InfiniBand | HDR InfiniBand | HDR InfiniBand | HDR InfiniBand |
| Operating System | RHEL | RHEL | RHEL | RHEL | RHEL | RHEL |
| Priority | Queue Name | Max Wall Clock Time | Max Cores Per Job | Description |
|---|---|---|---|---|
| Highest | urgent | 24 Hours | 16,384 | Jobs belonging to DoD HPCMP Urgent Projects |
| | debug | 30 Minutes | 10,752 | Time/resource-limited for user testing and debug purposes |
| | high | 168 Hours | 65,536 | Jobs belonging to DoD HPCMP High Priority Projects |
| | frontier | 168 Hours | 65,536 | Jobs belonging to DoD HPCMP Frontier Projects |
| | standard | 168 Hours | 16,384 | Standard jobs |
| | HIE | 24 Hours | 3,072 | Rapid response for interactive work. For more information see the HPC Interactive Environment (HIE) User Guide. |
| Lowest | background | 4 Hours | 4,096 | User jobs that are not charged against the project allocation |
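The HIE queue exists for rapid interactive turnaround rather than batch work, so it is typically reached with an interactive `qsub`. A sketch, assuming PBS Pro on Nautilus; `Project_ID` is a placeholder:

```shell
# Sketch of an interactive HIE session on Nautilus (PBS Pro assumed).
# -I requests an interactive shell; -X forwards X11 for graphical tools.
# Project_ID is a placeholder for your allocation code.
qsub -I -X -A Project_ID -q HIE -l select=1:ncpus=128 -l walltime=4:00:00
```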

Onyx is a Cray XC40/50 system located at the ERDC DSRC. It has 4,806 standard compute nodes, 8 large-memory compute nodes, 32 GPU compute nodes, 32 Knights Landing (Phi) compute nodes, and 64 Machine Learning Accelerator (MLA) multi-GPGPU nodes (a total of 4,942 compute nodes or 217,128 compute cores). It is rated at 6.06 peak PFLOPS.
| | Login | Standard | Large-Memory | KNL | GPU | 2-MLA | 10-MLA |
|---|---|---|---|---|---|---|---|
| Total Nodes | 12 | 4,806 | 8 | 32 | 32 | 60 | 4 |
| Processor | Intel E5-2699v4 Broadwell | Intel E5-2699v4 Broadwell | Intel E5-2699v4 Broadwell | Intel 7230 Knights Landing | Intel E5-2699v4 Broadwell | Intel 6148 Skylake | Intel 6148 Skylake |
| Processor Speed | 2.8 GHz | 2.8 GHz | 2.8 GHz | 1.3 GHz | 2.8 GHz | 2.4 GHz | 2.4 GHz |
| Sockets / Node | 2 | 2 | 2 | 1 | 1 | 2 | 2 |
| Cores / Node | 44 | 44 | 44 | 64 | 22 | 40 | 40 |
| Total CPU Cores | 528 | 211,464 | 352 | 2,048 | 704 | 2,400 | 160 |
| Usable Memory / Node | 247 GB | 121 GB | 1 TB | 90 GB | 247 GB | 172 GB | 735 GB |
| Accelerators / Node | None | None | None | None | 1 | 2 | 10 |
| Accelerator | n/a | n/a | n/a | n/a | NVIDIA P100 PCIe 3 | NVIDIA V100 SXM2 | NVIDIA V100 PCIe 3 |
| Memory / Accelerator | n/a | n/a | n/a | n/a | 16 GB | 16 GB | 32 GB |
| Storage on Node | None | None | None | None | None | None | None |
| Interconnect | Ethernet | Cray Aries | Cray Aries | Cray Aries | Cray Aries | InfiniBand | InfiniBand |
| Operating System | SLES | CLE | CLE | CLE | CLE | CentOS | CentOS |
| Priority | Queue Name | Max Wall Clock Time | Max Jobs | Min Cores Per Job | Max Cores Per Job | Description |
|---|---|---|---|---|---|---|
| Highest | urgent | 24 Hours | N/A | 22 | 7,260 | Designated urgent jobs by DoD HPCMP |
| | debug | 1 Hour | 4 | 22 | 11,484 | User testing |
| | HIE | 24 Hours | 2 | 22 | 110 | Rapid response for interactive work. For more information see the HPC Interactive Environment (HIE) User Guide. |
| | high_lg | 24 Hours | 2 | 8,449 | 105,820 | Designated high-priority jobs by Service/Agency (large jobs) |
| | high_lw | 168 Hours | 15 | 22 | 10,824 | Designated high-priority jobs by Service/Agency (long walltime) |
| | high_sm | 24 Hours | 70 | 22 | 8,448 | Designated high-priority jobs by Service/Agency (small jobs) |
| | frontier_lg | 24 Hours | 2 | 7,261 | 143,968 | Frontier projects only (large jobs) |
| | frontier_lw | 168 Hours | 15 | 22 | 15,708 | Frontier projects only (long walltime) |
| | frontier_sm | 48 Hours | 70 | 22 | 7,260 | Frontier projects only (small jobs) |
| | frontier_md | 96 Hours | 2 | 15,709 | 34,540 | Frontier projects only (medium sized, long walltime) |
| | standard_lg | 24 Hours | 2 | 7,261 | 105,820 | Normal priority jobs (large jobs) |
| | standard_lw | 168 Hours | 3 | 22 | 5,808 | Normal priority jobs (long walltime) |
| | standard_sm | 24 Hours | 70 | 22 | 7,260 | Normal priority jobs (small jobs) |
| | transfer | 48 Hours | 6 | 1 | 1 | Data transfer for user jobs. See the ERDC DSRC Archive Guide, section 5.2. |
| Lowest | background | 4 Hours | 6 | 22 | 7,260 | Unrestricted access - no allocation charge |
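On Cray XC systems such as Onyx, compute-node executables are launched through Cray's ALPS `aprun` rather than a generic MPI launcher. A sketch of a small standard-queue job, assuming PBS on Onyx; `Project_ID` and `my_app` are placeholders:

```shell
#!/bin/bash
## Sketch of a standard_sm job on Onyx (PBS + Cray ALPS assumed).
## Note the queue's 22-core minimum and 7,260-core maximum.
#PBS -A Project_ID
#PBS -q standard_sm
#PBS -l select=2:ncpus=44:mpiprocs=44   # 88 cores, inside the 22-7,260 window
#PBS -l walltime=12:00:00

cd $PBS_O_WORKDIR
aprun -n 88 ./my_app                    # aprun is the Cray ALPS launcher on XC systems
```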

SCOUT is an IBM Power9 system located at the ARL DSRC. It has 22 Training nodes, each with 6 NVIDIA V100 GPUs; 128 Inference nodes, each with 4 NVIDIA T4 GPUs; and 2 Visualization nodes, each with 2 NVIDIA V100 GPUs (a total of 152 compute nodes or 6,080 compute cores). It has 45 TB of memory.
| | Login | Training | Inference | Visualization |
|---|---|---|---|---|
| Total Nodes | 4 | 22 | 128 | 2 |
| Processor | IBM POWER9 | IBM POWER9 | IBM POWER9 | IBM POWER9 |
| Processor Speed | 2.55 GHz | 2.55 GHz | 2.55 GHz | 2.55 GHz |
| Sockets / Node | 2 | 2 | 2 | 2 |
| Cores / Node | 40 | 40 | 40 | 40 |
| Total CPU Cores | 160 | 880 | 5,120 | 80 |
| Usable Memory / Node | 502 GB | 690 GB | 246 GB | 502 GB |
| Accelerators / Node | None | 6 | 4 | 2 |
| Accelerator | n/a | NVIDIA V100 PCIe 3 | NVIDIA T4 PCIe 3 | NVIDIA V100 PCIe 3 |
| Memory / Accelerator | n/a | 32 GB | 16 GB | 16 GB |
| Storage on Node | 1.4 TB PCIe | 12 TB PCIe | 2.1 TB PCIe | 5.9 TB PCIe |
| Interconnect | InfiniBand EDR | InfiniBand EDR | InfiniBand EDR | InfiniBand EDR |
| Operating System | RHEL | RHEL | RHEL | RHEL |
| Priority | Queue Name | Max Wall Clock Time | Max Cores Per Job | Description |
|---|---|---|---|---|
| Highest | transfer | 48 Hours | N/A | Data transfer for user jobs. See the ARL DSRC Archive Guide, section 5.2. |
| | urgent | 96 Hours | N/A | Designated urgent jobs by DoD HPCMP |
| | debug | 1 Hour | N/A | User diagnostic jobs |
| | high | 168 Hours | N/A | Designated high-priority projects by service/agency |
| | frontier | 168 Hours | N/A | Frontier projects only |
| | HIE | 24 Hours | N/A | Rapid response for interactive work. For more information see the HPC Interactive Environment (HIE) User Guide. |
| | interactive | 12 Hours | N/A | Interactive jobs |
| | standard | 168 Hours | N/A | Normal user jobs |
| Lowest | background | 24 Hours | N/A | User jobs that will not be charged against the project allocation |
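A training run on one of SCOUT's V100-equipped Training nodes might be submitted as follows. This is a sketch under the assumption that SCOUT uses PBS like other HPCMP systems; `Project_ID` and `train.py` are placeholders.

```shell
#!/bin/bash
## Sketch of a standard-queue training job on SCOUT (PBS assumed).
## A Training node exposes 40 POWER9 cores and 6 V100 GPUs.
#PBS -A Project_ID
#PBS -q standard
#PBS -l select=1:ncpus=40
#PBS -l walltime=48:00:00

cd $PBS_O_WORKDIR
python train.py                         # placeholder workload; staging data onto the
                                        # node's local 12 TB PCIe storage can help I/O
```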

Warhawk is an HPE Cray EX system located at the AFRL DSRC. It has 1,024 standard compute nodes, 4 large-memory nodes, 24 1-GPU visualization nodes, and 40 2-GPU Machine-Learning nodes (a total of 1,092 compute nodes or 139,776 compute cores). It has 564 TB of memory and is rated at 6.86 peak PFLOPS.
| Date / Time | Details |
|---|---|
| 2023 Oct 16 08:00 - Oct 27 17:00 ET | System Maintenance |
| | Login | Standard | Large-Memory | Visualization | Machine-Learning Accelerated |
|---|---|---|---|---|---|
| Total Nodes | 7 | 1,024 | 4 | 24 | 40 |
| Processor | AMD 7H12 Rome | AMD 7H12 Rome | AMD 7H12 Rome | AMD 7H12 Rome | AMD 7H12 Rome |
| Processor Speed | 2.6 GHz | 2.6 GHz | 2.6 GHz | 2.6 GHz | 2.6 GHz |
| Sockets / Node | 2 | 2 | 2 | 2 | 2 |
| Cores / Node | 128 | 128 | 128 | 128 | 128 |
| Total CPU Cores | 896 | 131,072 | 512 | 3,072 | 5,120 |
| Usable Memory / Node | 995 GB | 503 GB | 995 GB | 503 GB | 503 GB |
| Accelerators / Node | None | None | None | 1 | 2 |
| Accelerator | n/a | n/a | n/a | NVIDIA V100 PCIe 3 | NVIDIA V100 PCIe 3 |
| Memory / Accelerator | n/a | n/a | n/a | 32 GB | 32 GB |
| Storage on Node | None | None | None | None | None |
| Interconnect | Cray Slingshot | Cray Slingshot | Cray Slingshot | Cray Slingshot | Cray Slingshot |
| Operating System | SLES | SLES | SLES | SLES | SLES |
| Priority | Queue Name | Max Wall Clock Time | Max Cores Per Job | Description |
|---|---|---|---|---|
| Highest | urgent | 168 Hours | 69,888 | Jobs belonging to DoD HPCMP Urgent Projects |
| | debug | 1 Hour | 2,816 | Time/resource-limited for user testing and debug purposes |
| | high | 168 Hours | 69,888 | Jobs belonging to DoD HPCMP High Priority Projects |
| | frontier | 168 Hours | 69,888 | Jobs belonging to DoD HPCMP Frontier Projects |
| | standard | 168 Hours | 69,888 | Standard jobs |
| | HIE | 24 Hours | 256 | Rapid response for interactive work. For more information see the HPC Interactive Environment (HIE) User Guide. |
| | transfer | 48 Hours | 1 | Data transfer for user jobs. See the AFRL DSRC Archive Guide, section 5.2. |
| Lowest | background | 120 Hours | 2,816 | User jobs that are not charged against the project allocation |
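Warhawk's background queue trades priority for cost: jobs wait longer but consume no allocation. A sketch, assuming PBS Pro; `Project_ID` and `my_app` are placeholders, and the MPI launcher name varies by site (generic `mpiexec` shown).

```shell
#!/bin/bash
## Sketch of a background-queue job on Warhawk (PBS Pro assumed). Background
## jobs run at lowest priority but are not charged to the project allocation.
#PBS -A Project_ID
#PBS -q background
#PBS -l select=1:ncpus=128:mpiprocs=128
#PBS -l walltime=120:00:00              # background queue allows up to 120 hours

cd $PBS_O_WORKDIR
mpiexec -n 128 ./my_app                 # launcher name varies by site; mpiexec shown
```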