Systems Summary

The HPCMP provides a variety of supercomputing platforms with an array of installed software to enable multidisciplinary computational science and engineering. Multiple vendor platforms, system architectures, programming environments, memory complements, and disk subsystems provide a range of choices for matching your code to the appropriate HPC system. Our systems are summarized below.

Upcoming

Implementation Schedule
Site | Estimated Date | Description
ARL DSRC 4th Quarter, FY 2024 Ruth, an HPE Cray EX system, will contain 640 standard compute nodes with dual 96-core AMD EPYC Genoa CPUs and 384 GB of memory; 24 large-memory nodes with dual 96-core AMD EPYC Genoa CPUs and 1,536 GB of memory plus 19.2 TB of on-node self-encrypting drive (SED) SSD storage; 16 AI/ML nodes with one 64-core AMD Trento CPU with 512 GB of memory plus 7.68 TB of on-node SED SSD storage, and four AMD MI250X GPGPUs, each with 128 GB of high-bandwidth memory (HBM); 24 visualization nodes with dual 64-core AMD EPYC Milan CPUs with 512 GB of memory and one NVIDIA A40 GPU with 48 GB of GDDR6 memory; and 64 high core performance nodes with dual 16-core AMD EPYC Milan CPUs and 512 GB of memory. Two home file systems will provide a combined 1.5 TB of usable storage; the work1 file system will provide 13.4 PB of HDD-based usable storage; and the work2 file system will provide 1.5 PB of NVMe-based usable storage. The system will utilize a 200 Gbps Slingshot interconnect.
ARL DSRC 4th Quarter, FY 2024 Marlyn, an HPE Cray EX system, will contain 288 standard compute nodes with dual 96-core AMD EPYC Genoa CPUs and 384 GB of memory; 12 large-memory nodes with dual 96-core AMD EPYC Genoa CPUs and 1,536 GB of memory plus 19.2 TB of on-node self-encrypting drive (SED) SSD storage; and 12 visualization nodes with dual 64-core AMD EPYC Milan CPUs with 512 GB of memory and one NVIDIA A40 GPU with 48 GB of GDDR6 memory. Two home file systems will provide a combined 1.5 TB of usable storage; the work1 file system will provide 9.4 PB of HDD-based usable storage; and the work2 file system will provide 1 PB of NVMe-based usable storage. The system will utilize a 200 Gbps Slingshot interconnect.
ARL DSRC 1st Quarter, FY 2025 Jean will contain 57,696 Intel Xeon "Cascade Lake Advanced Performance" compute cores and 280 NVIDIA Ampere A100 General-Purpose Graphics Processing Units (GPGPUs), interconnected with a 200 Gigabit per second InfiniBand network and supported by 323 terabytes of memory, and 12.5 petabytes of usable non-volatile memory express (NVMe)-based solid state storage.
ARL DSRC 1st Quarter, FY 2025 Kay will contain 48,480 Intel Xeon "Cascade Lake Advanced Performance" compute cores and 76 NVIDIA Ampere A100 General-Purpose Graphics Processing Units (GPGPUs), interconnected with a 200 Gigabit per second InfiniBand network and supported by 240 terabytes of memory, and 10 petabytes of usable non-volatile memory express (NVMe)-based solid state storage.
AFRL DSRC 1st Quarter, FY 2025 An 8.7 petaFLOP HPE Cray XD system named "Flyer" with 884 compute nodes containing 169,782 AMD EPYC "Genoa" compute cores, 64 NVIDIA H100 General-Purpose Graphics Processing Units (GPGPUs), and 16 NVIDIA L40 GPGPUs, interconnected with a 200 Gigabit per second Cray Slingshot network and supported by 738.9 TB of memory, and 19.4 PB of usable disk storage.
AFRL DSRC 1st Quarter, FY 2025 A 3.3 petaFLOP HPE Cray XD system called "Raven" with 300 compute nodes containing 57,600 AMD EPYC "Genoa" compute cores, 32 NVIDIA H100 General-Purpose Graphics Processing Units (GPGPUs), and 2 NVIDIA L40 GPGPUs, interconnected with a 200 Gigabit per second Cray Slingshot network and supported by 245.7 TB of memory, and 5.9 PB of usable disk storage.
ERDC DSRC 1st Quarter, FY 2025 Wheat will contain 890 Intel 9242 Cascade Lake AP compute nodes with 96 cores per node (85,440 cores). 770 nodes will be standard compute nodes, with 384 GB of memory per node; 16 will be visualization nodes, with 768 GB of memory and one NVIDIA A100 (Ampere) GPU each; 96 will be AI/ML nodes, with 384 GB of memory and one NVIDIA A100 (Ampere) GPU each; 8 will be large-memory nodes, with 2 TB of memory each. The /home file system will have 720 TB of usable NVMe storage, and the /app file system will have 360 TB.
Navy DSRC 1st Quarter, FY 2025 Blueback, an HPE Cray EX4000 system, will provide 256,512 total compute cores composed of AMD EPYC Genoa processors, 128 AMD MI300A Accelerator Processing Units (APUs), and 24 NVIDIA L40 GPGPUs, connected by a 200 Gigabit per second Cray Slingshot-11 interconnect and supported by 20 PB of usable Cray ClusterStor E1000 Lustre storage, including 2 PB of NVMe-based solid state storage, and 538 TB of system memory.
AFRL DSRC 2nd Quarter, FY 2025 Spitfire, a Dell system, will contain 600 standard compute nodes with dual 84-core AMD EPYC Genoa CPUs and 768 GB of memory; 25 AI/ML nodes with dual 32-core Intel Xeon Sapphire Rapids CPUs, 512 GB of memory, 3.84 TB of on-node NVMe storage, and 4 NVIDIA H100 GPGPUs with 80 GB of high bandwidth memory; and 25 visualization nodes with dual 84-core AMD EPYC Genoa CPUs, 768 GB of memory, and one NVIDIA A16 GPGPU with 64 GB of GDDR6 memory. Spitfire will include a 9 PB /work Weka filesystem and a 600 TB /home Weka filesystem. The system will utilize a 200 Gbps NVIDIA NDR InfiniBand interconnect.
AFRL DSRC 2nd Quarter, FY 2025 Hurricane, a Dell system, will contain 24 standard compute nodes with dual 84-core AMD EPYC Genoa CPUs and 768 GB of memory; one AI/ML node with dual 32-core Intel Xeon Sapphire Rapids CPUs, 512 GB of memory, 3.84 TB of on-node NVMe storage, and 4 NVIDIA H100 GPGPUs with 80 GB of high bandwidth memory; and one visualization node with dual 84-core AMD EPYC Genoa CPUs, 768 GB of memory, and one NVIDIA A16 GPGPU with 64 GB of GDDR6 memory. Hurricane will include a 1 PB /work Weka filesystem and a 30 TB /home Weka filesystem. The system will utilize a 200 Gbps NVIDIA NDR InfiniBand interconnect.
AFRL DSRC 2nd Quarter, FY 2025 Cobra, a Dell system, will contain 24 standard compute nodes with dual 84-core AMD EPYC Genoa CPUs and 768 GB of memory; one AI/ML node with dual 32-core Intel Xeon Sapphire Rapids CPUs, 512 GB of memory, 3.84 TB of on-node NVMe storage, and 4 NVIDIA H100 GPGPUs with 80 GB of high bandwidth memory; and one visualization node with dual 84-core AMD EPYC Genoa CPUs, 768 GB of memory, and one NVIDIA A16 GPGPU with 64 GB of GDDR6 memory. Cobra will include a 1 PB /work Weka filesystem and a 30 TB /home Weka filesystem. The system will utilize a 200 Gbps NVIDIA NDR InfiniBand interconnect.
MHPCC DSRC 2nd Quarter, FY 2025 Crux is a coalition adaptable technology transition computing system featuring 240 standard nodes, each with dual 2.25-GHz, 84-core AMD Genoa processors (168 cores/node) and 768 GB RAM; 10 AI/ML nodes, each with dual 2.8-GHz, 32-core Intel Sapphire Rapids processors (64 cores/node), 512 GB RAM, four H100/80GB NVLink GPUs, and 3.84 TB local NVMe SSD; and 10 visualization nodes with dual 2.25-GHz, 84-core AMD Genoa processors (168 cores/node), 768 GB RAM, and one A16/64GB PCIe GPU. The system also includes five login or landing zone nodes with dual 2.25-GHz, 84-core AMD Genoa processors (168 cores/node), 768 GB RAM, and 3.84 TB local NVMe SSD, with one reserved for HPCMP and DSRC use featuring dual 2.8-GHz, 32-core Intel Sapphire Rapids processors (64 cores/node), 512 GB RAM, four H100/80GB NVLink GPUs, and 3.84 TB local NVMe SSD. Crux provides a 5.2 petabyte, all-NVMe, high-bandwidth, high-IOPS WEKA parallel file system for user data.
ARL DSRC 3rd Quarter, FY 2025 Fran, an HPE Cray EX4000 system, will provide 173,184 total compute cores composed of AMD EPYC Genoa processors and 12 NVIDIA L40S GPUs, connected by a 200 Gigabit per second Cray Slingshot-11 interconnect and supported by 25 PB of usable Lustre storage, including 4 PB of NVMe-based solid state storage, and 431 TB of system memory.
ERDC DSRC 3rd Quarter, FY 2025 An HPE Cray EX4000 system with 212,736 total compute cores composed of AMD EPYC Genoa processors and 12 NVIDIA L40S GPUs, connected by a 200 Gigabit per second Cray Slingshot-11 interconnect and supported by 3 PB of usable Lustre storage and 438 TB of system memory.
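The per-node figures in the schedule above can be multiplied out to estimate a system's total CPU core count. As a worked example, the following sketch aggregates the Ruth entry (ARL DSRC, 4th Quarter FY 2024); the node counts and cores-per-node are copied from that entry, and the computed total is illustrative arithmetic, not an official figure.

```python
# Illustrative arithmetic only: node counts and per-node CPU core counts
# are taken from the Ruth entry in the implementation schedule above.
ruth_nodes = {
    # node type: (node count, CPU cores per node)
    "standard":              (640, 2 * 96),  # dual 96-core AMD EPYC Genoa
    "large-memory":          (24,  2 * 96),  # dual 96-core AMD EPYC Genoa
    "AI/ML":                 (16,  64),      # one 64-core AMD Trento
    "visualization":         (24,  2 * 64),  # dual 64-core AMD EPYC Milan
    "high core performance": (64,  2 * 16),  # dual 16-core AMD EPYC Milan
}

# Sum cores across all node types.
total_cores = sum(count * cores for count, cores in ruth_nodes.values())
print(total_cores)  # 133632 CPU cores across Ruth's compute partitions
```

The same pattern applies to any entry in the schedule: list each node type with its count and cores per node, then sum the products.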