Systems Summary
The HPCMP provides a variety of supercomputing platforms with an array of installed software to enable multidisciplinary computational science and engineering. Multiple vendor platforms, system architectures, programming environments, memory complements, and disk subsystems offer a range of choices for matching your code to the appropriate HPC system. Our systems are summarized below.
Allocated
| System | Site | Vendor / Model | Node Type | OS | Processor | Processor Speed | Total Nodes | Cores per Node | Total Cores | Usable Memory per Node | Peak PFLOPS | Scheduler | End of Life |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Centennial | ARL | SGI ICE XA | Login | RHEL | Intel E5-2698v4 Broadwell | 2.2 GHz | 24 | 40 | 960 | 252 GB | 2.6 | PBS | 6/30/2023 |
| | | | Standard | RHEL | Intel E5-2698v4 Broadwell | 2.2 GHz | 1,784 | 40 | 71,360 | 124 GB | | | |
| | | | Large-Memory | RHEL | Intel E5-2698v4 Broadwell | 2.6 GHz | 32 | 40 | 1,280 | 508 GB | | | |
| | | | GPU | RHEL | Intel E5-2698v4 Broadwell | 2.2 GHz | 32 | 40 + 1 NVIDIA K40c (PCIe 3) | 1,280 | 252 GB | | | |
| Gaffney | NAVY | HPE SGI 8600 | Login | RHEL | Intel 8168 Skylake | 2.7 GHz | 8 | 48 | 384 | 320 GB | 3.05 | PBS | 01/01/2024 |
| | | | Standard | RHEL | Intel 8168 Skylake | 2.7 GHz | 704 | 48 | 33,792 | 170 GB | | | |
| | | | Large-Memory | RHEL | Intel 8168 Skylake | 2.7 GHz | 16 | 48 | 768 | 742 GB | | | |
| | | | GPU | RHEL | Intel 8168 Skylake | 2.7 GHz | 32 | 48 + 1 NVIDIA P100 (PCIe 3) | 1,536 | 367 GB | | | |
| Koehr | NAVY | HPE SGI 8600 | Login | RHEL | Intel 8168 Skylake | 2.7 GHz | 8 | 48 | 384 | 320 GB | 3.05 | PBS | 01/01/2024 |
| | | | Standard | RHEL | Intel 8168 Skylake | 2.7 GHz | 704 | 48 | 33,792 | 170 GB | | | |
| | | | Large-Memory | RHEL | Intel 8168 Skylake | 2.7 GHz | 16 | 48 | 768 | 742 GB | | | |
| | | | GPU | RHEL | Intel 8168 Skylake | 2.7 GHz | 32 | 48 + 1 NVIDIA P100 (PCIe 3) | 1,536 | 367 GB | | | |
| Mustang | AFRL | HPE SGI 8600 | Login | RHEL | Intel 8168 Skylake | 2.7 GHz | 12 | 48 | 576 | 380 GB | 4.87 | PBS | 12/08/2023 |
| | | | Standard | RHEL | Intel 8168 Skylake | 2.7 GHz | 1,128 | 48 | 54,144 | 180 GB | | | |
| | | | Large-Memory | RHEL | Intel 8168 Skylake | 2.7 GHz | 24 | 48 | 1,152 | 744 GB | | | |
| | | | GPU | RHEL | Intel 8168 Skylake | 2.7 GHz | 24 | 48 + 1 NVIDIA P100 (PCIe 3) | 1,152 | 372 GB | | | |
| Narwhal | NAVY | HPE Cray EX | Login | SLES | AMD 7H12 Rome | 2.6 GHz | 11 | 128 | 1,408 | 226 GB | 13.6 | PBS | 05/20/2026 |
| | | | Standard | SLES | AMD 7H12 Rome | 2.6 GHz | 2,304 | 128 | 294,912 | 238 GB | | | |
| | | | Large-Memory | SLES | AMD 7H12 Rome | 2.3 GHz | 26 | 128 | 3,328 | 995 GB | | | |
| | | | Visualization | SLES | AMD 7H12 Rome | 2.3 GHz | 16 | 128 + 1 NVIDIA V100 (PCIe 3) | 2,048 | 234 GB | | | |
| | | | MLA 1-GPU | SLES | AMD 7H12 Rome | 2.3 GHz | 32 | 128 + 1 NVIDIA V100 (PCIe 3) | 4,096 | 239 GB | | | |
| | | | MLA 2-GPU | SLES | AMD 7H12 Rome | 2.3 GHz | 32 | 128 + 2 NVIDIA V100 (PCIe 3) | 4,096 | 239 GB | | | |
| Onyx | ERDC | Cray XC40/50 | Login | SLES | Intel E5-2699v4 Broadwell | 2.8 GHz | 12 | 44 | 528 | 247 GB | 6.06 | PBS | 11/18/2023 |
| | | | Standard | CLE | Intel E5-2699v4 Broadwell | 2.8 GHz | 4,806 | 44 | 211,464 | 121 GB | | | |
| | | | Large-Memory | CLE | Intel E5-2699v4 Broadwell | 2.8 GHz | 8 | 44 | 352 | 1 TB | | | |
| | | | KNL | CLE | Intel 7230 Knights Landing | 1.3 GHz | 32 | 64 | 2,048 | 90 GB | | | |
| | | | GPU | CLE | Intel E5-2699v4 Broadwell | 2.8 GHz | 32 | 22 + 1 NVIDIA P100 (PCIe 3) | 704 | 247 GB | | | |
| | | | 2-MLA | CentOS | Intel 6148 Skylake | 2.4 GHz | 60 | 40 + 2 NVIDIA V100 (SXM2) | 2,400 | 172 GB | | | |
| | | | 10-MLA | CentOS | Intel 6148 Skylake | 2.4 GHz | 4 | 40 + 10 NVIDIA V100 (PCIe 3) | 160 | 735 GB | | | |
| SCOUT | ARL | IBM POWER9 | Login | RHEL | IBM POWER9 | 2.55 GHz | 4 | 40 | 160 | 502 GB | 1.2 | LSF | 12/25/2024 |
| | | | Training | RHEL | IBM POWER9 | 2.55 GHz | 22 | 40 + 6 NVIDIA V100 (PCIe 3) | 880 | 690 GB | | | |
| | | | Inference | RHEL | IBM POWER9 | 2.55 GHz | 128 | 40 + 4 NVIDIA T4 (PCIe 3) | 5,120 | 246 GB | | | |
| | | | Visualization | RHEL | IBM POWER9 | 2.55 GHz | 2 | 40 + 2 NVIDIA V100 (PCIe 3) | 80 | 502 GB | | | |
| Warhawk | AFRL | HPE Cray EX | Login | SLES | AMD 7H12 Rome | 2.6 GHz | 7 | 128 | 896 | 995 GB | 6.8 | PBS | 05/20/2026 |
| | | | Standard | SLES | AMD 7H12 Rome | 2.6 GHz | 1,024 | 128 | 131,072 | 503 GB | | | |
| | | | Large-Memory | SLES | AMD 7H12 Rome | 2.6 GHz | 4 | 128 | 512 | 995 GB | | | |
| | | | Visualization | SLES | AMD 7H12 Rome | 2.6 GHz | 24 | 128 + 1 NVIDIA V100 (PCIe 3) | 3,072 | 503 GB | | | |
| | | | Machine-Learning Accelerated | SLES | AMD 7H12 Rome | 2.6 GHz | 40 | 128 + 2 NVIDIA V100 (PCIe 3) | 5,120 | 503 GB | | | |
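The "Peak PFLOPS" column is the theoretical peak of a system, which for the CPU partitions follows from nodes × cores per node × clock × FLOPs per cycle. As an illustrative sketch (not an official HPCMP formula): the double-precision FLOPs-per-cycle figures below are assumptions based on the FMA width of each microarchitecture, and GPU and service nodes are excluded, so the results only approximate the published totals.

```python
# Rough sanity check of the "Peak PFLOPS" column:
#   peak = nodes x cores_per_node x clock x FLOPs/cycle
# FLOPs-per-cycle values are assumptions (2 FMA units x vector width x 2 ops),
# and GPU contributions are ignored, so these are approximations only.

FLOPS_PER_CYCLE = {
    "broadwell": 16,  # AVX2: 2 FMA units x 4 doubles x 2 ops/FMA
    "skylake": 32,    # AVX-512: 2 FMA units x 8 doubles x 2 ops/FMA
    "rome": 16,       # AVX2-class FMA throughput
}

def peak_pflops(nodes: int, cores_per_node: int, ghz: float, arch: str) -> float:
    """Theoretical double-precision peak of a CPU partition, in PFLOPS."""
    return nodes * cores_per_node * ghz * 1e9 * FLOPS_PER_CYCLE[arch] / 1e15

# Mustang's standard partition (1,128 nodes x 48 Skylake cores at 2.7 GHz)
# accounts for most of the published 4.87 PFLOPS; the remainder comes from
# the login, large-memory, and GPU partitions.
print(f"Mustang standard partition: {peak_pflops(1_128, 48, 2.7, 'skylake'):.2f} PFLOPS")
```

The same arithmetic applied to Centennial's standard partition (1,784 nodes × 40 Broadwell cores at 2.2 GHz) gives about 2.5 PFLOPS against the published 2.6, again with the smaller partitions making up the difference.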
Non-Allocated
| System | Site | Vendor / Model | Node Type | OS | Processor | Processor Speed | Total Nodes | Cores per Node | Total Cores | Usable Memory per Node | Peak PFLOPS | Scheduler | End of Life |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Builder | MHPCC | Aspen Systems Gigabyte R282-Z93 | Login/GPU | RHEL 8.5 | AMD 7742 Rome | 2.25 GHz | 1 | 128 + 2 NVIDIA V100 | 128 | 1 TB | 4.6 | None | 09/30/2025 |
| Reef | MHPCC | Aspen Systems Supermicro Server | Login | RHEL 7 | Intel 6248 Cascade Lake | 2.5 GHz | 1 | 38 | 40 | 640 GB | N/A | Slurm | N/A |
| | | | Standard | RHEL 7 | Intel 6248 Cascade Lake | 2.5 GHz | 5 | 38 | 200 | 640 GB | | | |
| | | | Tesla V100 | RHEL 7 | Intel 6248 Cascade Lake | 2.5 GHz | 9 | 38 + 2 NVIDIA V100 (PCIe 3) | 360 | 640 GB | | | |
| | | | NVIDIA RTX 8000 | RHEL 7 | Intel 6248 Cascade Lake | 2.5 GHz | 2 | 38 + 2 NVIDIA RTX 8000 (PCIe 3) | 80 | 640 GB | | | |
Upcoming
Site | Estimated Date | Description |
---|---|---|
ARL DSRC | 2nd Quarter, FY 2023 | Jean will contain 57,696 Intel Xeon "Cascade Lake Advanced Performance" compute cores and 280 NVIDIA Ampere A100 General-Purpose Graphics Processing Units (GPGPUs), interconnected with a 200-gigabit-per-second InfiniBand network and supported by 323 terabytes of memory and 12.5 petabytes of usable non-volatile memory express (NVMe)-based solid-state storage. |
ARL DSRC | 2nd Quarter, FY 2023 | Kay will contain 48,480 Intel Xeon "Cascade Lake Advanced Performance" compute cores and 76 NVIDIA Ampere A100 General-Purpose Graphics Processing Units (GPGPUs), interconnected with a 200-gigabit-per-second InfiniBand network and supported by 240 terabytes of memory and 10 petabytes of usable non-volatile memory express (NVMe)-based solid-state storage. |
ERDC DSRC | 2nd Quarter, FY 2023 | Wheat will contain 890 Intel 9242 Cascade Lake AP compute nodes with 96 cores per node (85,440 cores). 770 nodes will be standard compute nodes, with 384 GB of memory per node; 16 will be visualization nodes, with 768 GB of memory and one NVIDIA A100 (Ampere) GPU each; 96 will be AI/ML nodes, with 384 GB of memory and one NVIDIA A100 (Ampere) GPU each; 8 will be large-memory nodes, with 2 TB of memory each. The /home file system will have 720 TB of usable NVMe storage, and the /app file system will have 360 TB. |
Navy DSRC | 2nd Quarter, FY 2023 | Nautilus will contain 176,128 total compute cores comprised of future generation AMD EPYC processors, and 144 NVIDIA Ampere A100 GPUs interconnected by a 200 gigabit per second Mellanox HDR-200 InfiniBand network, and supported by 26 PB of usable Data Direct Networks storage, including over 4 PB of NVMe-based solid-state storage, and 382 TB of system memory. |
AFRL DSRC | 2nd Quarter, FY 2023 | Raider will contain 189,440 total compute cores comprised of future generation AMD EPYC processors, and 152 NVIDIA Ampere A100 GPUs interconnected by a 200 gigabit per second Mellanox HDR-200 InfiniBand network, and supported by 26 PB of usable Data Direct Networks storage and 455 TB of system memory. |
ARL DSRC | 4th Quarter, FY 2023 | Ruth, an HPE Cray EX system, will contain 640 standard compute nodes with dual 96-core AMD EPYC Genoa CPUs and 384 GB of memory; 24 large-memory nodes with dual 96-core AMD EPYC Genoa CPUs and 1,536 GB of memory plus 19.2 TB of on-node self-encrypting drive (SED) SSD storage; 16 AI/ML nodes with one 64-core AMD Trento CPU with 512 GB of memory plus 7.68 TB of on-node SED SSD storage, and four AMD MI250X GPGPUs, each with 128 GB of high-bandwidth memory (HBM); 24 visualization nodes with dual 64-core AMD EPYC Milan CPUs with 512 GB of memory and 1 NVIDIA A40 GPU with 48 GB of memory; and 64 high core performance nodes with dual 16-core AMD EPYC Milan CPUs and 512 GB of memory. Two home file systems will provide a combined 1.5 TB of usable storage; the work1 file system will provide 13.4 PB of HDD-based usable storage; and the work2 file system will provide 1.5 PB of NVMe-based usable storage. The system will utilize a 200 Gbps Slingshot interconnect. |
ARL DSRC | 4th Quarter, FY 2023 | Marlyn, an HPE Cray EX system, will contain 288 standard compute nodes with dual 96-core AMD EPYC Genoa CPUs and 384 GB of memory; 12 large-memory nodes with dual 96-core AMD EPYC Genoa CPUs and 1,536 GB of memory plus 19.2 TB of on-node self-encrypting drive (SED) SSD storage; and 12 visualization nodes with dual 64-core AMD EPYC Milan CPUs with 512 GB of memory and 1 NVIDIA A40 GPU with 48 GB of memory. Two home file systems will provide a combined 1.5 TB of usable storage; the work1 file system will provide 9.4 PB of HDD-based usable storage; and the work2 file system will provide 1 PB of NVMe-based usable storage. The system will utilize a 200 Gbps Slingshot interconnect. |
ERDC DSRC | 4th Quarter, FY 2023 | Carpenter, an HPE Cray EX system, will contain 1,440 standard compute nodes with dual 96-core AMD EPYC Genoa CPUs and 384 GB of memory; 4 large-memory nodes with dual 96-core AMD EPYC Genoa CPUs and 3,072 GB of memory plus 9.6 TB of on-node SSD storage; and 8 visualization nodes with dual 64-core AMD EPYC Milan CPUs with 512 GB of memory and 1 NVIDIA A40 GPU with 48 GB of memory. The home file system will provide 980 TB of NVMe-based usable storage, and the work1 file system will provide 4 PB of NVMe-based usable storage. The system will utilize a 200 Gbps Slingshot interconnect. |
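The upcoming-system descriptions quote both per-category node counts and system totals. As an illustrative consistency check (using only the Wheat figures quoted above), the per-category breakdown sums to the stated totals:

```python
# Cross-check of the Wheat node and core counts quoted in the table above.
standard, visualization, ai_ml, large_memory = 770, 16, 96, 8
cores_per_node = 96

total_nodes = standard + visualization + ai_ml + large_memory
total_cores = total_nodes * cores_per_node

print(total_nodes)  # 890 nodes, matching the stated count
print(total_cores)  # 85440 cores, matching the stated 85,440
```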