Systems Summary
The HPCMP provides a variety of supercomputing platforms with an array of installed software to enable multidisciplinary computational science and engineering. Multiple vendor platforms, system architectures, programming environments, memory complements, and disk subsystems provide a range of choices for matching your code to the appropriate HPC system. Our systems are summarized below.
Allocated
| System | Site | Vendor / Model | Node Type | OS | Processor | Processor Speed | Total Nodes | Cores per Node | Total Cores | Useable Memory per Node | Peak PFLOPS | Scheduler | End of Life |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Gaffney | NAVY | HPE SGI 8600 | Login | RHEL | Intel 8168 Skylake | 2.7 GHz | 8 | 48 | 384 | 320 GB | 3.05 | Portable Batch System (PBS) | 12/10/2023 |
| | | | Standard | RHEL | Intel 8168 Skylake | 2.7 GHz | 704 | 48 | 33,792 | 170 GB | | | |
| | | | Large-Memory | RHEL | Intel 8168 Skylake | 2.7 GHz | 16 | 48 | 768 | 742 GB | | | |
| | | | GPU | RHEL | Intel 8168 Skylake | 2.7 GHz | 32 | 48 + 1 NVIDIA P100 PCIe 3 | 1,536 | 367 GB | | | |
| Koehr | NAVY | HPE SGI 8600 | Login | RHEL | Intel 8168 Skylake | 2.7 GHz | 8 | 48 | 384 | 320 GB | 3.05 | Portable Batch System (PBS) | 12/10/2023 |
| | | | Standard | RHEL | Intel 8168 Skylake | 2.7 GHz | 704 | 48 | 33,792 | 170 GB | | | |
| | | | Large-Memory | RHEL | Intel 8168 Skylake | 2.7 GHz | 16 | 48 | 768 | 742 GB | | | |
| | | | GPU | RHEL | Intel 8168 Skylake | 2.7 GHz | 32 | 48 + 1 NVIDIA P100 PCIe 3 | 1,536 | 367 GB | | | |
| Mustang | AFRL | HPE SGI 8600 | Login | RHEL | Intel 8168 Skylake | 2.7 GHz | 12 | 48 | 576 | 380 GB | 4.87 | Portable Batch System (PBS) | 05/30/2024 |
| | | | Standard | RHEL | Intel 8168 Skylake | 2.7 GHz | 1,128 | 48 | 54,144 | 180 GB | | | |
| | | | Large-Memory | RHEL | Intel 8168 Skylake | 2.7 GHz | 24 | 48 | 1,152 | 744 GB | | | |
| | | | GPU | RHEL | Intel 8168 Skylake | 2.7 GHz | 24 | 48 + 1 NVIDIA P100 PCIe 3 | 1,152 | 372 GB | | | |
| Narwhal | NAVY | HPE Cray EX | Login | SLES | AMD 7H12 Rome | 2.6 GHz | 11 | 128 | 1,408 | 226 GB | 13.6 | Portable Batch System (PBS) | 05/06/2026 |
| | | | Standard | SLES | AMD 7H12 Rome | 2.6 GHz | 2,304 | 128 | 294,912 | 238 GB | | | |
| | | | Large-Memory | SLES | AMD 7H12 Rome | 2.6 GHz | 26 | 128 | 3,328 | 995 GB | | | |
| | | | Visualization | SLES | AMD 7H12 Rome | 2.6 GHz | 16 | 128 + 1 NVIDIA V100 PCIe 3 | 2,048 | 234 GB | | | |
| | | | MLA 1-GPU | SLES | AMD 7H12 Rome | 2.6 GHz | 32 | 128 + 1 NVIDIA V100 PCIe 3 | 4,096 | 239 GB | | | |
| | | | MLA 2-GPU | SLES | AMD 7H12 Rome | 2.6 GHz | 32 | 128 + 2 NVIDIA V100 PCIe 3 | 4,096 | 239 GB | | | |
| Nautilus | NAVY | Penguin TrueHPC | Login | RHEL | AMD 7713 Milan | 2.0 GHz | 14 | 128 | 1,792 | 433 GB | 8.2 | Slurm Workload Manager (Slurm) | 04/01/2028 |
| | | | Standard | RHEL | AMD 7713 Milan | 2.0 GHz | 1,304 | 128 | 166,912 | 237 GB | | | |
| | | | Large-Memory | RHEL | AMD 7713 Milan | 2.0 GHz | 16 | 128 | 2,048 | 998 GB | | | |
| | | | Visualization | RHEL | AMD 7713 Milan | 2.0 GHz | 16 | 128 + 1 NVIDIA A40 PCIe 4 | 2,048 / 16 | 491 GB | | | |
| | | | AI/ML | RHEL | AMD 7713 Milan | 2.0 GHz | 32 | 128 + 4 NVIDIA A100 PCIe 4 | 4,096 / 128 | 491 GB | | | |
| | | | High Core Performance | RHEL | AMD 73F3 Milan | 3.4 GHz | 32 | 32 | 1,024 | 491 GB | | | |
| Onyx | ERDC | Cray XC40/50 | Login | SLES | Intel E5-2699v4 Broadwell | 2.8 GHz | 12 | 44 | 528 | 247 GB | 6.06 | Portable Batch System (PBS) | 02/18/2024 |
| | | | Standard | CLE | Intel E5-2699v4 Broadwell | 2.8 GHz | 4,806 | 44 | 211,464 | 121 GB | | | |
| | | | Large-Memory | CLE | Intel E5-2699v4 Broadwell | 2.8 GHz | 8 | 44 | 352 | 1 TB | | | |
| | | | KNL | CLE | Intel 7230 Knights Landing | 1.3 GHz | 32 | 64 | 2,048 | 90 GB | | | |
| | | | GPU | CLE | Intel E5-2699v4 Broadwell | 2.8 GHz | 32 | 22 + 1 NVIDIA P100 PCIe 3 | 704 | 247 GB | | | |
| | | | 2-MLA | CentOS | Intel 6148 Skylake | 2.4 GHz | 60 | 40 + 2 NVIDIA V100 SXM2 | 2,400 | 172 GB | | | |
| | | | 10-MLA | CentOS | Intel 6148 Skylake | 2.4 GHz | 4 | 40 + 10 NVIDIA V100 PCIe 3 | 160 | 735 GB | | | |
| SCOUT | ARL | IBM POWER9 | Login | RHEL | IBM POWER9 | 2.55 GHz | 4 | 40 | 160 | 502 GB | 1.2 | IBM Spectrum Load Sharing Facility (LSF) | 12/25/2024 |
| | | | Training | RHEL | IBM POWER9 | 2.55 GHz | 22 | 40 + 6 NVIDIA V100 PCIe 3 | 880 | 690 GB | | | |
| | | | Inference | RHEL | IBM POWER9 | 2.55 GHz | 128 | 40 + 4 NVIDIA T4 PCIe 3 | 5,120 | 246 GB | | | |
| | | | Visualization | RHEL | IBM POWER9 | 2.55 GHz | 2 | 40 + 2 NVIDIA V100 PCIe 3 | 80 | 502 GB | | | |
| Warhawk | AFRL | HPE Cray EX | Login | SLES | AMD 7H12 Rome | 2.6 GHz | 7 | 128 | 896 | 995 GB | 6.8 | Portable Batch System (PBS) | 08/08/2026 |
| | | | Standard | SLES | AMD 7H12 Rome | 2.6 GHz | 1,024 | 128 | 131,072 | 503 GB | | | |
| | | | Large-Memory | SLES | AMD 7H12 Rome | 2.6 GHz | 4 | 128 | 512 | 995 GB | | | |
| | | | Visualization | SLES | AMD 7H12 Rome | 2.6 GHz | 24 | 128 + 1 NVIDIA V100 PCIe 3 | 3,072 | 503 GB | | | |
| | | | Machine-Learning Accelerated | SLES | AMD 7H12 Rome | 2.6 GHz | 40 | 128 + 2 NVIDIA V100 PCIe 3 | 5,120 | 503 GB | | | |
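The Peak PFLOPS column can be roughly cross-checked from the other columns, since theoretical peak is total cores × clock rate × floating-point operations per cycle per core. A minimal sketch, assuming 16 double-precision FLOPs per cycle per core for the AMD 7H12 "Rome" (two 256-bit FMA units, an assumption not stated in the table), applied to Narwhal's standard partition:

```python
def peak_pflops(total_cores: int, clock_ghz: float, flops_per_cycle: int = 16) -> float:
    """Theoretical peak in PFLOPS: cores x clock (Hz) x FLOPs/cycle.

    flops_per_cycle=16 assumes a dual 256-bit FMA pipeline (AMD Rome);
    other microarchitectures differ, so adjust per processor.
    """
    return total_cores * clock_ghz * 1e9 * flops_per_cycle / 1e15

# Narwhal standard partition, values from the table: 294,912 cores at 2.6 GHz
print(f"{peak_pflops(294_912, 2.6):.1f} PFLOPS")  # ~12.3 PFLOPS
```

The standard partition alone accounts for roughly 12.3 of Narwhal's listed 13.6 PFLOPS; the remainder comes from the login, large-memory, visualization, and GPU-equipped MLA nodes.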
Non-Allocated
| System | Site | Vendor / Model | Node Type | OS | Processor | Processor Speed | Total Nodes | Cores per Node | Total Cores | Useable Memory per Node | Peak PFLOPS | Scheduler | End of Life |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Builder | MHPCC | Aspen Systems Gigabyte R282-Z93 | Login/GPU | RHEL 8.5 | AMD 7742 Rome | 2.25 GHz | 1 | 128 + 2 NVIDIA V100 | 128 | 1 TB | 4.6 | None | 09/30/2025 |
| Reef | MHPCC | Aspen Systems Supermicro Server | Login | RHEL 7 | Intel 6248 Cascade Lake | 2.5 GHz | 1 | 38 | 40 | 640 GB | N/A | Slurm Workload Manager (Slurm) | 09/14/2025 |
| | | | Standard | RHEL 7 | Intel 6248 Cascade Lake | 2.5 GHz | 5 | 38 | 200 | 640 GB | | | |
| | | | Tesla V100 | RHEL 7 | Intel 6248 Cascade Lake | 2.5 GHz | 9 | 38 + 2 NVIDIA V100 PCIe 3 | 360 | 640 GB | | | |
| | | | NVIDIA RTX 8000 | RHEL 7 | Intel 6248 Cascade Lake | 2.5 GHz | 2 | 38 + 2 NVIDIA RTX 8000 PCIe 3 | 80 | 640 GB | | | |
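Nautilus and Reef use Slurm rather than PBS, so jobs are submitted with `sbatch` instead of `qsub`. A minimal batch-script sketch for a Slurm-scheduled system follows; the account string and executable name are hypothetical placeholders, and real partition, account, and walltime limits are site-specific, so consult the system's user guide before submitting.

```shell
#!/bin/bash
# Minimal Slurm batch script sketch (hypothetical values throughout).
#SBATCH --job-name=example
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=128   # matches the 128 cores/node on Nautilus standard nodes
#SBATCH --time=01:00:00
#SBATCH --account=MY_PROJECT_ID # hypothetical allocation account

srun ./my_mpi_app               # hypothetical MPI executable
```

Submit with `sbatch script.sh` and monitor with `squeue -u $USER`.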
Upcoming
Site | Estimated Date | Description |
---|---|---|
ARL DSRC | 1st Quarter, FY 2024 | Jean will contain 57,696 Intel XEON "Cascade Lake Advanced Performance" compute cores and 280 NVIDIA Ampere A100 General-Purpose Graphics Processing Units (GPGPUs), interconnected with a 200 Gigabit per second InfiniBand network and supported by 323 terabytes of memory and 12.5 petabytes of usable non-volatile memory express (NVMe)-based solid state storage. |
ARL DSRC | 2nd Quarter, FY 2024 | Kay will contain 48,480 Intel XEON "Cascade Lake Advanced Performance" compute cores and 76 NVIDIA Ampere A100 General-Purpose Graphics Processing Units (GPGPUs), interconnected with a 200 Gigabit per second InfiniBand network and supported by 240 terabytes of memory and 10 petabytes of usable non-volatile memory express (NVMe)-based solid state storage. |
ERDC DSRC | 2nd Quarter, FY 2024 | Wheat will contain 890 Intel 9242 Cascade Lake AP compute nodes with 96 cores per node (85,440 cores): 770 standard compute nodes with 384 GB of memory per node; 16 visualization nodes with 768 GB of memory and one NVIDIA A100 (Ampere) GPU each; 96 AI/ML nodes with 384 GB of memory and one NVIDIA A100 (Ampere) GPU each; and 8 large-memory nodes with 2 TB of memory each. The /home file system will have 720 TB of usable NVMe storage, and the /app file system will have 360 TB. |
AFRL DSRC | 1st Quarter, FY 2024 | Raider will contain 189,440 total compute cores composed of future-generation AMD EPYC processors, plus 152 NVIDIA Ampere A100 GPUs, interconnected by a 200 Gigabit per second Mellanox HDR-200 InfiniBand network and supported by 26 PB of usable Data Direct Networks storage and 455 TB of system memory. |
ARL DSRC | 2nd Quarter, FY 2024 | Ruth, an HPE Cray EX system, will contain 640 standard compute nodes with dual 96-core AMD EPYC Genoa CPUs and 384 GB of memory; 24 large-memory nodes with dual 96-core AMD EPYC Genoa CPUs and 1,536 GB of memory plus 19.2 TB of on-node self-encrypting drive (SED) SSD storage; 16 AI/ML nodes with one 64-core AMD Trento CPU with 512 GB of memory plus 7.68 TB of on-node SED SSD storage, and four AMD MI250X GPGPUs, each with 128 GB of high-bandwidth memory (HBM); 24 visualization nodes with dual 64-core AMD EPYC Milan CPUs with 512 GB of memory and 1 NVIDIA A40 GPU with 48 GB of HBM; and 64 high core performance nodes with dual 16-core AMD EPYC Milan CPUs and 512 GB of memory. Two home file systems will provide a combined 1.5 TB of usable storage; the work1 file system will provide 13.4 PB of HDD-based usable storage; and the work2 file system will provide 1.5 PB of NVMe-based usable storage. The system will utilize a 200 Gbps Slingshot interconnect. |
ARL DSRC | 2nd Quarter, FY 2024 | Marlyn, an HPE Cray EX system, will contain 288 standard compute nodes with dual 96-core AMD EPYC Genoa CPUs and 384 GB of memory; 12 large-memory nodes with dual 96-core AMD EPYC Genoa CPUs and 1,536 GB of memory plus 19.2 TB of on-node self-encrypting drive (SED) SSD storage; and 12 visualization nodes with dual 64-core AMD EPYC Milan CPUs with 512 GB of memory and 1 NVIDIA A40 GPU with 48 GB of HBM. Two home file systems will provide a combined 1.5 TB of usable storage; the work1 file system will provide 9.4 PB of HDD-based usable storage; and the work2 file system will provide 1 PB of NVMe-based usable storage. The system will utilize a 200 Gbps Slingshot interconnect. |
ERDC DSRC | 2nd Quarter, FY 2024 | Carpenter, an HPE Cray EX system, will contain 1,440 standard compute nodes with dual 96-core AMD EPYC Genoa CPUs and 384 GB of memory; 4 large-memory nodes with dual 96-core AMD EPYC Genoa CPUs and 3,072 GB of memory plus 9.6 TB of on-node SSD storage; and 8 visualization nodes with dual 64-core AMD EPYC Milan CPUs with 512 GB of memory and 1 NVIDIA A40 GPU with 48 GB of HBM. The home file system will provide 980 TB of NVMe-based usable storage, and the work1 file system will provide 4 PB of NVMe-based usable storage. The system will utilize a 200 Gbps Slingshot interconnect. |
Navy DSRC | Late FY 2024 | Blueback, an HPE Cray EX4000 system, will contain 256,512 total compute cores composed of AMD EPYC Genoa processors, plus 128 AMD MI300A Accelerator Processing Units (APUs) and 24 NVIDIA L40 GPGPUs, connected by a 200 Gigabit per second Cray Slingshot-11 interconnect and supported by 20 PB of usable Cray ClusterStor E1000 Lustre storage (including 2 PB of NVMe-based solid state storage) and 538 TB of system memory. |
AFRL DSRC | Late FY 2024 | An 8.7 petaFLOP HPE Cray XD system with 884 compute nodes containing 169,782 AMD EPYC "Genoa" compute cores, 64 NVIDIA H100 General-Purpose Graphics Processing Units (GPGPUs), and 16 NVIDIA L40 GPGPUs, interconnected with a 200 Gigabit per second Cray Slingshot network and supported by 738.9 TB of memory and 19.4 PB of usable disk storage. |
AFRL DSRC | Late FY 2024 | A 3.3 petaFLOP HPE Cray XD system with 300 compute nodes containing 57,600 AMD EPYC "Genoa" compute cores, 32 NVIDIA H100 General-Purpose Graphics Processing Units (GPGPUs), and 2 NVIDIA L40 GPGPUs, interconnected with a 200 Gigabit per second Cray Slingshot network and supported by 245.7 TB of memory and 5.9 PB of usable disk storage. |