Non-Allocated Systems

The HPCMP provides a limited number of HPC systems for non-allocated use. These systems are typically experimental or unique in nature and do not include a full complement of application software; they do, however, give users an opportunity to explore nontraditional architectures free of the constraints of allocated use.

Builder

Builder is a single-node Aspen Systems Linux Gigabyte server intended primarily as a platform for building and sharing Singularity software containers. The system is populated with two 64-core AMD 7742 processors (128 cores in total) running the RHEL 8 operating system, two NVIDIA V100 graphics processing units, and 1,024 GB of memory, with no user-accessible swap space. Builder uses Intel Ethernet for its networking: 40GbE (XL710) carries I/O traffic to the Center-Wide File System, and Gigabit Ethernet (I350) provides the DREN connection. An AVAGO MegaRAID controller manages the local file system, which provides 130 TB of disk storage.
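
Since Builder's primary purpose is building and sharing Singularity containers, a minimal sketch of that workflow is shown below. The definition-file contents, package choices, and file names are illustrative, not a site-provided recipe; depending on site policy, an unprivileged build may also need a flag such as --fakeroot.

    # mytool.def -- a minimal Singularity definition file (illustrative)
    Bootstrap: docker
    From: ubuntu:22.04

    %post
        # install whatever toolchain the application needs
        apt-get update && apt-get install -y build-essential

    %runscript
        exec "$@"

Building the image and verifying GPU visibility (the --nv flag exposes the node's V100 GPUs inside the container):

    singularity build mytool.sif mytool.def
    singularity exec --nv mytool.sif nvidia-smi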


Node Configuration (Login/GPU node)
Total Nodes: 1
Processor: AMD 7742 Rome
Processor Speed: 2.25 GHz
Sockets / Node: 2
Cores / Node: 128
Total CPU Cores: 128
Usable Memory / Node: 1 TB
Accelerators / Node: 2
Accelerator: NVIDIA V100
Memory / Accelerator: 32 GB
Storage on Node: 130 TB SAS HDD array
Interconnect: Intel XL710 40GbE (QSFP+) to the Center-Wide File System; Intel I350 Gigabit Ethernet to DREN
Operating System: RHEL 8.5

Queue Information

Builder does not employ a job scheduler, so there are no queues.
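
Because there is no scheduler, work runs directly from an interactive shell on the node; for example, standard commands can be used to inspect the hardware before a container build:

    lscpu | grep "Model name"   # confirm the two AMD 7742 processors
    nvidia-smi                  # inspect the two V100 GPUs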

Coral

Coral is an Aspen Systems Linux cluster with both x86_64 and ARM nodes, enabling direct architecture comparisons. It features 8-way NVIDIA A100 GPU nodes, AMD MI100 and MI250 GPU nodes, and DPU nodes for accelerated computing. With standard and large-memory configurations, an HDR InfiniBand interconnect, and a Weka parallel file system, Coral supports diverse HPC workloads that require batch scheduling and parallel processing, and it serves as a testbed for evaluating ARM-based HPC.
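
Because Coral mixes x86_64 and aarch64 nodes, a common pattern is to build the same code natively on each architecture and compare. A minimal sketch, assuming GCC is available on both node types (the source file name and tuning flags are illustrative):

    # Build the benchmark natively on whichever node type the shell is on.
    arch=$(uname -m)             # "x86_64" on Intel/AMD nodes, "aarch64" on Ampere Altra
    case "$arch" in
      x86_64)  cflags="-O3 -march=native" ;;
      aarch64) cflags="-O3 -mcpu=neoverse-n1" ;;  # Altra Q64/Q80 cores are Neoverse N1
    esac
    gcc $cflags -o bench.$arch bench.c
    ./bench.$arch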


Maintenance
| Date / Time | Details |
| 2026 Mar 02 07:30 - Mar 03 00:00 HI | (Completed) System Maintenance |
Node Configuration
| | Login | A100 GPU | AMD MI100 GPU | AMD MI250 GPU | ARM + A100 + DPU | ARM | DPU |
| Total Nodes | 2 | 8 | 2 | 2 | 4 | 4 | 2 |
| Processor | Intel 6326 Ice Lake | Intel 6338 Ice Lake | AMD 7513 Milan | AMD 7513 Milan | Ampere Altra Q80 | Ampere Altra Q64 | Intel 6338 Ice Lake |
| Processor Speed | 2.9 GHz | 2.0 GHz | 2.6 GHz | 2.6 GHz | 3.0 GHz | 3.0 GHz | 2.0 GHz |
| Sockets / Node | 2 | 2 | 2 | 2 | 1 | 2 | 2 |
| Cores / Node | 32 | 64 | 64 | 64 | 80 | 128 | 64 |
| Total CPU Cores | 64 | 512 | 128 | 128 | 320 | 512 | 128 |
| L3 Cache | N/A | 48 MB | 128 MB | 128 MB | 32 MB | 32 MB | 48 MB |
| Usable Memory / Node | 480 GB | 2 TB | 1 TB | 1 TB | 480 GB | 480 GB | 480 GB |
| Accelerators / Node | 1 | 8 | 8 | 4 | 2 | N/A | N/A |
| Accelerator | NVIDIA A40 | NVIDIA A100 SXM4 (NVLINK) / NVIDIA A100 PCIe | AMD MI100 | AMD MI250 | NVIDIA A100 | N/A | N/A |
| Memory / Accelerator | 48 GB | 80 GB | 32 GB | 128 GB | 80 GB | N/A | N/A |
| Storage on Node | 3.84 TB NVMe SSD | 3.84 TB NVMe SSD | 3.84 TB NVMe SSD | 3.84 TB NVMe SSD | 6 TB SAS | 3.84 TB NVMe SSD | 3.84 TB NVMe SSD |
| Interconnect | HDR InfiniBand (all node types) |
| Operating System | RHEL 8 (all node types) |
| Workspace | 240 TB WekaIO (all NVMe) + 200 TB object HDD tier |

Node names: coral1, coral2 (login); cr2cn1-cr2cn4 and cr3cn5-cr3cn8 (A100 GPU); cr4mi1-cr4mi4 (MI100/MI250 GPU); cr2arm1-cr2arm4 (ARM + A100 + DPU); cr4arm5-cr4arm8 (ARM); cr1dpu1, cr1dpu2 (DPU).
Queue Descriptions and Limits on Coral
Queues are listed in decreasing order of priority.
| Priority | Queue Name | Max Wall Clock Time | Max Cores Per Job | Max Queued Per User | Max Running Per User | Description |
| Highest | debug | 30 Minutes | N/A | N/A | N/A | Time/resource-limited for user testing and debug purposes |
| | standard | 7 Days | N/A | N/A | N/A | Standard jobs |
| Lowest | transfer | 2 Days | N/A | N/A | N/A | Data transfer for user jobs. Not charged against project allocation. See the AFRL DSRC Archive Guide, section 5.2. |
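
The queues above are batch queues. A minimal job-script sketch, assuming a PBS-style scheduler (the select/ncpus resource syntax, project ID, and executable name are illustrative and site-dependent):

    #!/bin/bash
    #PBS -q standard              # queue from the table above (7-day wall-clock limit)
    #PBS -l select=1:ncpus=64     # e.g., one A100 GPU node's worth of cores
    #PBS -l walltime=01:00:00
    #PBS -A MY_PROJECT            # hypothetical project/allocation ID
    cd $PBS_O_WORKDIR             # run from the directory the job was submitted from
    ./my_app                      # hypothetical executable

A script like this would be submitted with qsub; the transfer queue is reserved for data-staging work such as archive retrievals.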
Reef

Reef is an Aspen Systems Linux cluster. The login and compute nodes are populated with Intel 2.5-GHz Cascade Lake processors. Reef has 5 CPU-only and 11 GPU compute nodes. Each compute node has two 20-core processors sharing 768 GB of DDR4 memory (650 GB usable), with no user-accessible swap space. Reef has 109 TB (formatted) of disk storage and is intended to be used as a batch-scheduled HPC system.

Note: the configuration of Reef is subject to change without notice.


Node Configuration
| | Login | Standard | Tesla V100 | NVIDIA RTX 8000 |
| Total Nodes | 1 | 5 | 9 | 2 |
| Processor | Intel 6248 Cascade Lake | Intel 6248 Cascade Lake | Intel 6248 Cascade Lake | Intel 6248 Cascade Lake |
| Processor Speed | 2.5 GHz | 2.5 GHz | 2.5 GHz | 2.5 GHz |
| Sockets / Node | 2 | 2 | 2 | 2 |
| Cores / Node | 38 | 38 | 38 | 38 |
| Total CPU Cores | 40 | 200 | 360 | 80 |
| Usable Memory / Node | 650 GB | 650 GB | 650 GB | 650 GB |
| Accelerators / Node | None | None | 2 | 2 |
| Accelerator | N/A | N/A | NVIDIA V100 PCIe 3 | NVIDIA RTX 8000 PCIe 3 |
| Memory / Accelerator | N/A | N/A | 32 GB | 48 GB |
| Storage on Node | None | None | None | None |
| Interconnect | Mellanox HDR100/EDR InfiniBand (all node types) |
| Operating System | RHEL 7 (all node types) |
Queue Descriptions and Limits on Reef
| Queue Name | Max Wall Clock Time | Max Jobs | Min Cores Per Job | Max Cores Per Job | Max Queued Per User | Max Running Per User | Description |
| standard | None | N/A | 1 | 190 | N/A | N/A | 5 non-GPU compute nodes |
| tesla | None | None | 1 | 342 | N/A | N/A | 9 GPU compute nodes (dual Tesla V100) |
| rtx | None | None | 1 | 76 | N/A | N/A | 2 GPU compute nodes (dual Quadro RTX 8000) |
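
As on Coral, work on Reef is submitted through batch queues. A minimal sketch for the tesla queue, again assuming a PBS-style scheduler (the resource syntax, project ID, and executable name are illustrative):

    #!/bin/bash
    #PBS -q tesla                 # dual-V100 nodes; 342-core per-job ceiling
    #PBS -l select=1:ncpus=38     # 38 schedulable cores per node, per the table above
    #PBS -l walltime=04:00:00
    #PBS -A MY_PROJECT            # hypothetical project/allocation ID
    cd $PBS_O_WORKDIR
    ./my_gpu_app                  # hypothetical CUDA-enabled executable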