Systems Summary
The HPCMP provides a variety of supercomputing platforms with an array of installed software to enable multidisciplinary computational science and engineering. Multiple vendor platforms, system architectures, programming environments, memory complements, and disk subsystems provide a variety of choices for matching your code to the appropriate HPC system. Our systems are summarized below.
Allocated
System | Site | Vendor / Model | Node Type | OS | Processor | Processor Speed | Total Nodes | Cores per Node | Total Cores | Useable Memory per Node | Peak PFLOPS | Scheduler | End of Life
---|---|---|---|---|---|---|---|---|---|---|---|---|---
Carpenter | ERDC | HPE Cray EX4000 | Login | SLES 15 | AMD 9654 Genoa | 2.4 GHz | 10 | 192 | 1,920 | 8 GB | 17.65 | PBS | 02/2028
 | | | Standard | SLES 15 | AMD 9654 Genoa | 2.4 GHz | 1,632 | 192 | 313,344 | 349 GB | | |
 | | | Large-Memory | SLES 15 | AMD 9654 Genoa | 2.4 GHz | 4 | 192 | 768 | 2.973 TB | | |
 | | | Visualization | SLES 15 | AMD 7713 Milan | 2.0 GHz | 8 | 128 +1 NVIDIA A40 PCIe 4 | 1,024 | 467 GB | | |
Narwhal | NAVY | HPE Cray EX | Login | SLES | AMD 7H12 Rome | 2.6 GHz | 11 | 128 | 1,408 | 226 GB | 13.6 | PBS | 03/31/2029
 | | | Standard | SLES | AMD 7H12 Rome | 2.6 GHz | 2,304 | 128 | 294,912 | 238 GB | | |
 | | | Large-Memory | SLES | AMD 7H12 Rome | 2.6 GHz | 26 | 128 | 3,328 | 995 GB | | |
 | | | Visualization | SLES | AMD 7H12 Rome | 2.6 GHz | 16 | 128 +1 NVIDIA V100 PCIe 3 | 2,048 | 234 GB | | |
 | | | Single-GPU MLA | SLES | AMD 7H12 Rome | 2.6 GHz | 32 | 128 +1 NVIDIA V100 PCIe 3 | 4,096 | 239 GB | | |
 | | | Dual-GPU MLA | SLES | AMD 7H12 Rome | 2.6 GHz | 32 | 128 +2 NVIDIA V100 PCIe 3 | 4,096 | 239 GB | | |
Nautilus | NAVY | Penguin Computing TrueHPC | Login | RHEL | AMD 7713 Milan | 2.0 GHz | 14 | 128 | 1,792 | 433 GB | 8.5 | Slurm | 04/01/2028
 | | | Standard | RHEL | AMD 7713 Milan | 2.0 GHz | 1,384 | 128 | 166,912 | 231 GB | | |
 | | | Large-Memory | RHEL | AMD 7713 Milan | 2.0 GHz | 16 | 128 | 2,048 | 998 GB | | |
 | | | Visualization | RHEL | AMD 7713 Milan | 2.0 GHz | 16 | 128 +1 NVIDIA A40 PCIe 4 | 2,048 / 16 | 491 GB | | |
 | | | AI/ML | RHEL | AMD 7713 Milan | 2.0 GHz | 32 | 128 +4 NVIDIA A100 SXM 4 | 4,096 / 128 | 491 GB | | |
 | | | High Core Performance | RHEL | AMD 73F3 Milan | 3.4 GHz | 32 | 32 | 1,024 | 491 GB | | |
Raider | AFRL | Penguin Computing TrueHPC | Login | RHEL | AMD 7713 Milan | 2.0 GHz | 6 | 128 +1 NVIDIA A40 PCIe 4 | 768 | 503 GB | 9 | Slurm | 12/15/2028
 | | | Login-viz | RHEL | AMD 7713 Milan | 2.0 GHz | 4 | 128 +1 NVIDIA A100 SXM 4 | 512 | 503 GB | | |
 | | | Standard | RHEL | AMD 7713 Milan | 2.0 GHz | 1,480 | 128 | 189,440 | 251 GB | | |
 | | | Large-Memory | RHEL | AMD 7713 Milan | 2.0 GHz | 8 | 128 | 1,024 | 2.0 TB | | |
 | | | Visualization | RHEL | AMD 7713 Milan | 2.0 GHz | 24 | 128 +1 NVIDIA A40 PCIe 4 | 3,072 | 503 GB | | |
 | | | MLA | RHEL | AMD 7713 Milan | 2.0 GHz | 32 | 128 +4 NVIDIA A100 SXM 4 | 4,096 | 503 GB | | |
 | | | High Clock | RHEL | AMD 73F3 Milan | 3.4 GHz | 64 | 32 | 2,048 | 503 GB | | |
 | | | Transfer | RHEL | AMD 7713 Milan | 2.0 GHz | 2 | 128 | 256 | 503 GB | | |
SCOUT | ARL | IBM POWER9 | Login | RHEL | IBM POWER9 | 2.55 GHz | 4 | 40 | 160 | 502 GB | 1.2 | LSF | 12/25/2024
 | | | Training | RHEL | IBM POWER9 | 2.55 GHz | 22 | 40 +6 NVIDIA V100 PCIe 3 | 880 | 690 GB | | |
 | | | Inference | RHEL | IBM POWER9 | 2.55 GHz | 128 | 40 +4 NVIDIA T4 PCIe 3 | 5,120 | 246 GB | | |
 | | | Visualization | RHEL | IBM POWER9 | 2.55 GHz | 2 | 40 +2 NVIDIA V100 PCIe 3 | 80 | 502 GB | | |
Warhawk | AFRL | HPE Cray EX | Login | SLES | AMD 7H12 Rome | 2.6 GHz | 7 | 128 | 896 | 995 GB | 6.8 | PBS | 08/08/2026
 | | | Standard | SLES | AMD 7H12 Rome | 2.6 GHz | 1,024 | 128 | 131,072 | 503 GB | | |
 | | | Large-Memory | SLES | AMD 7H12 Rome | 2.6 GHz | 4 | 128 | 512 | 995 GB | | |
 | | | Visualization | SLES | AMD 7H12 Rome | 2.6 GHz | 24 | 128 +1 NVIDIA V100 PCIe 3 | 3,072 | 503 GB | | |
 | | | Machine-Learning Accelerated | SLES | AMD 7H12 Rome | 2.6 GHz | 40 | 128 +2 NVIDIA V100 PCIe 3 | 5,120 | 503 GB | | |
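The Scheduler column determines how batch jobs are submitted: PBS on the HPE Cray EX systems (`qsub`), Slurm on the Penguin Computing TrueHPC systems (`sbatch`), and LSF on SCOUT. As a minimal sketch only — the project ID, queue/partition names, and executable below are placeholders, not actual site values, and the MPI launcher varies by system:

```shell
#!/bin/bash
# --- PBS sketch (e.g., Carpenter, Narwhal, Warhawk); submit with: qsub job.pbs ---
#PBS -A MY_PROJECT_ID                     # allocation to charge (placeholder)
#PBS -q standard                          # queue name (site-specific)
#PBS -l select=2:ncpus=128:mpiprocs=128   # 2 nodes, 128 MPI ranks per node
#PBS -l walltime=01:00:00
cd "$PBS_O_WORKDIR"
mpiexec ./my_app                          # launcher varies by site

# --- Equivalent Slurm sketch (e.g., Nautilus, Raider); submit with: sbatch job.sh ---
#SBATCH --account=MY_PROJECT_ID           # allocation to charge (placeholder)
#SBATCH --partition=standard              # partition name (site-specific)
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=128
#SBATCH --time=01:00:00
srun ./my_app
```

Note that node-local core counts differ across systems (e.g., 192 cores/node on Carpenter vs. 128 on Narwhal), so per-node task counts in job scripts are not portable between machines.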
Non-Allocated
System | Site | Vendor / Model | Node Type | OS | Processor | Processor Speed | Total Nodes | Cores per Node | Total Cores | Useable Memory per Node | Peak PFLOPS | Scheduler | End of Life
---|---|---|---|---|---|---|---|---|---|---|---|---|---
Builder | MHPCC | Aspen Systems Gigabyte R282-Z93 | Login/GPU | RHEL 8.5 | AMD 7742 Rome | 2.25 GHz | 1 | 128 +2 NVIDIA V100 | 128 | 1 TB | 4.6 | None | 09/30/2025
Reef | MHPCC | Aspen Systems Supermicro Server | Login | RHEL 7 | Intel 6248 Cascade Lake | 2.5 GHz | 1 | 38 | 40 | 640 GB | N/A | Slurm | 09/14/2025
 | | | Standard | RHEL 7 | Intel 6248 Cascade Lake | 2.5 GHz | 5 | 38 | 200 | 640 GB | | |
 | | | Tesla V100 | RHEL 7 | Intel 6248 Cascade Lake | 2.5 GHz | 9 | 38 +2 NVIDIA V100 PCIe 3 | 360 | 640 GB | | |
 | | | NVIDIA RTX 8000 | RHEL 7 | Intel 6248 Cascade Lake | 2.5 GHz | 2 | 38 +2 NVIDIA RTX 8000 PCIe 3 | 80 | 640 GB | | |
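The Peak PFLOPS figures in the tables above follow the standard theoretical-peak formula. As a sketch only: the 16 double-precision FLOPs/cycle value below assumes AMD Rome's two 256-bit FMA units, and the published peaks also count GPU and non-standard node types, so exact agreement with the tables is not expected.

```latex
% Theoretical peak = nodes x cores/node x clock rate x FLOPs per core per cycle
\[
  R_{\text{peak}} = N_{\text{nodes}} \times C_{\text{node}} \times f_{\text{clock}} \times F_{\text{cycle}}
\]
% Example: Warhawk's standard partition alone (assuming 16 DP FLOPs/cycle):
\[
  1{,}024 \times 128 \times 2.6\ \text{GHz} \times 16 \approx 5.45\ \text{PFLOPS}
\]
```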
Upcoming
Site | Estimated Date | Description |
---|---|---|
ARL DSRC | 4th Quarter, FY 2024 | Ruth, an HPE Cray EX system, will contain 640 standard compute nodes with dual 96-core AMD EPYC Genoa CPUs and 384 GB of memory; 24 large-memory nodes with dual 96-core AMD EPYC Genoa CPUs and 1,536 GB of memory plus 19.2 TB of on-node self-encrypting drive (SED) SSD storage; 16 AI/ML nodes with one 64-core AMD Trento CPU with 512 GB of memory plus 7.68 TB of on-node SED SSD storage, and four AMD MI250X GPGPUs, each with 128 GB of high-bandwidth memory (HBM); 24 visualization nodes with dual 64-core AMD EPYC Milan CPUs with 512 GB of memory and 1 NVIDIA A40 GPU with 48 GB of GDDR6 memory; and 64 high core performance nodes with dual 16-core AMD EPYC Milan CPUs and 512 GB of memory. Two home file systems will provide a combined 1.5 TB of usable storage; the work1 file system will provide 13.4 PB of HDD-based usable storage; and the work2 file system will provide 1.5 PB of NVMe-based usable storage. The system will utilize a 200 Gbps Slingshot interconnect. |
ARL DSRC | 4th Quarter, FY 2024 | Marlyn, an HPE Cray EX system, will contain 288 standard compute nodes with dual 96-core AMD EPYC Genoa CPUs and 384 GB of memory; 12 large-memory nodes with dual 96-core AMD EPYC Genoa CPUs and 1,536 GB of memory plus 19.2 TB of on-node self-encrypting drive (SED) SSD storage; and 12 visualization nodes with dual 64-core AMD EPYC Milan CPUs with 512 GB of memory and 1 NVIDIA A40 GPU with 48 GB of GDDR6 memory. Two home file systems will provide a combined 1.5 TB of usable storage; the work1 file system will provide 9.4 PB of HDD-based usable storage; and the work2 file system will provide 1 PB of NVMe-based usable storage. The system will utilize a 200 Gbps Slingshot interconnect. |
ARL DSRC | 1st Quarter, FY 2025 | Jean will contain 57,696 Intel Xeon "Cascade Lake Advanced Performance" compute cores and 280 NVIDIA Ampere A100 General-Purpose Graphics Processing Units (GPGPUs), interconnected with a 200 Gigabit per second InfiniBand network and supported by 323 terabytes of memory and 12.5 petabytes of usable non-volatile memory express (NVMe)-based solid state storage. |
ARL DSRC | 1st Quarter, FY 2025 | Kay will contain 48,480 Intel Xeon "Cascade Lake Advanced Performance" compute cores and 76 NVIDIA Ampere A100 General-Purpose Graphics Processing Units (GPGPUs), interconnected with a 200 Gigabit per second InfiniBand network and supported by 240 terabytes of memory and 10 petabytes of usable non-volatile memory express (NVMe)-based solid state storage. |
AFRL DSRC | 1st Quarter, FY 2025 | An 8.7 petaFLOP HPE Cray XD system named "Flyer" with 884 compute nodes containing 169,782 AMD EPYC "Genoa" compute cores, 64 NVIDIA H100 General-Purpose Graphics Processing Units (GPGPUs), and 16 NVIDIA L40 GPGPUs, interconnected with a 200 Gigabit per second Cray Slingshot network and supported by 738.9 TB of memory, and 19.4 PB of usable disk storage. |
AFRL DSRC | 1st Quarter, FY 2025 | A 3.3 petaFLOP HPE Cray XD system called "Raven" with 300 compute nodes containing 57,600 AMD EPYC "Genoa" compute cores, 32 NVIDIA H100 General-Purpose Graphics Processing Units (GPGPUs), and 2 NVIDIA L40 GPGPUs, interconnected with a 200 Gigabit per second Cray Slingshot network and supported by 245.7 TB of memory, and 5.9 PB of usable disk storage. |
ERDC DSRC | 1st Quarter, FY 2025 | Wheat will contain 890 Intel 9242 Cascade Lake AP compute nodes with 96 cores per node (85,440 cores). 770 nodes will be standard compute nodes, with 384 GB of memory per node; 16 will be visualization nodes, with 768 GB of memory and one NVIDIA A100 (Ampere) GPU each; 96 will be AI/ML nodes, with 384 GB of memory and one NVIDIA A100 (Ampere) GPU each; and 8 will be large-memory nodes, with 2 TB of memory each. The /home file system will have 720 TB of usable NVMe storage, and the /app file system will have 360 TB. |
Navy DSRC | 1st Quarter, FY 2025 | Blueback will be an HPE Cray EX4000 system with 256,512 total compute cores composed of AMD EPYC Genoa processors, plus 128 AMD MI300A Accelerated Processing Units (APUs) and 24 NVIDIA L40 GPGPUs, connected by a 200 Gigabit per second Cray Slingshot-11 interconnect and supported by 20 PB of usable Cray ClusterStor E1000 Lustre storage, including 2 PB of NVMe-based solid state storage, and 538 TB of system memory. |
AFRL DSRC | 2nd Quarter, FY 2025 | Spitfire, a Dell system, will contain 600 standard compute nodes with dual 84-core AMD EPYC Genoa CPUs and 768 GB of memory; 25 AI/ML nodes with dual 32-core Intel Xeon Sapphire Rapids CPUs, 512 GB of memory, 3.84 TB of on-node NVMe storage, and 4 NVIDIA H100 GPGPUs with 80 GB of high-bandwidth memory; and 25 visualization nodes with dual 84-core AMD EPYC Genoa CPUs, 768 GB of memory, and 1 NVIDIA A16 GPGPU with 64 GB of GDDR6 memory. Spitfire will include a 9 PB /work Weka filesystem and a 600 TB /home Weka filesystem. The system will utilize a 200 Gbps NVIDIA NDR InfiniBand interconnect. |
AFRL DSRC | 2nd Quarter, FY 2025 | Hurricane, a Dell system, will contain 24 standard compute nodes with dual 84-core AMD EPYC Genoa CPUs and 768 GB of memory; 1 AI/ML node with dual 32-core Intel Xeon Sapphire Rapids CPUs, 512 GB of memory, 3.84 TB of on-node NVMe storage, and 4 NVIDIA H100 GPGPUs with 80 GB of high-bandwidth memory; and 1 visualization node with dual 84-core AMD EPYC Genoa CPUs, 768 GB of memory, and 1 NVIDIA A16 GPGPU with 64 GB of GDDR6 memory. Hurricane will include a 1 PB /work Weka filesystem and a 30 TB /home Weka filesystem. The system will utilize a 200 Gbps NVIDIA NDR InfiniBand interconnect. |
AFRL DSRC | 2nd Quarter, FY 2025 | Cobra, a Dell system, will contain 24 standard compute nodes with dual 84-core AMD EPYC Genoa CPUs and 768 GB of memory; 1 AI/ML node with dual 32-core Intel Xeon Sapphire Rapids CPUs, 512 GB of memory, 3.84 TB of on-node NVMe storage, and 4 NVIDIA H100 GPGPUs with 80 GB of high-bandwidth memory; and 1 visualization node with dual 84-core AMD EPYC Genoa CPUs, 768 GB of memory, and 1 NVIDIA A16 GPGPU with 64 GB of GDDR6 memory. Cobra will include a 1 PB /work Weka filesystem and a 30 TB /home Weka filesystem. The system will utilize a 200 Gbps NVIDIA NDR InfiniBand interconnect. |
MHPCC DSRC | 2nd Quarter, FY 2025 | Crux is a coalition adaptable technology transition computing system featuring 240 standard nodes, each with dual 2.25-GHz, 84-core AMD Genoa processors (168 cores/node) and 768 GB RAM; 10 AI/ML nodes, each with dual 2.8-GHz, 32-core Intel Sapphire Rapids processors (64 cores/node), 512 GB RAM, four H100/80GB NVLink GPUs, and 3.84 TB of local NVMe SSD; and 10 visualization nodes with dual 2.25-GHz, 84-core AMD Genoa processors (168 cores/node), 768 GB RAM, and one A16/64GB PCIe GPU. The system also includes five login or landing zone nodes with dual 2.25-GHz, 84-core AMD Genoa processors (168 cores/node), 768 GB RAM, and 3.84 TB of local NVMe SSD, with one reserved for HPCMP and DSRC use featuring dual 2.8-GHz, 32-core Intel Sapphire Rapids processors (64 cores/node), 512 GB RAM, four H100/80GB NVLink GPUs, and 3.84 TB of local NVMe SSD. Crux provides a 5.2-petabyte, all-NVMe, high-bandwidth, high-IOPS Weka parallel file system for user data. |
ARL DSRC | 3rd Quarter, FY 2025 | Fran will be an HPE Cray EX4000 system with 173,184 total compute cores composed of AMD EPYC Genoa processors, plus 12 NVIDIA L40S GPUs, connected by a 200 Gigabit per second Cray Slingshot-11 interconnect and supported by 25 PB of usable Lustre storage, including 4 PB of NVMe-based solid state storage, and 431 TB of system memory. |
ERDC DSRC | 3rd Quarter, FY 2025 | An HPE Cray EX4000 system with 212,736 total compute cores composed of AMD EPYC Genoa processors, plus 12 NVIDIA L40S GPUs, connected by a 200 Gigabit per second Cray Slingshot-11 interconnect and supported by 3 PB of usable Lustre storage and 438 TB of system memory. |