Non-Allocated Systems

The HPCMP provides a limited number of HPC systems for non-allocated use. These systems are typically experimental or unique in nature and do not include a full complement of application software; however, they give users an opportunity to explore nontraditional architectures free of the constraints of allocated use.

Builder

Builder is a single-node Aspen Systems Linux Gigabyte server intended primarily to provide a platform for building and sharing Singularity software containers. The system is populated with AMD 7742 (Rome) processors and NVIDIA V100 graphics processing units. Builder uses Ethernet for I/O traffic: a 40-GbE connection to the Center-Wide File System (CWFS) and a Gigabit Ethernet connection to DREN. An AVAGO MegaRAID controller manages its local file system, which provides 130 TB of disk storage. Builder is equipped with two 64-core processors (128 total cores) running the RHEL 8 operating system, two GPUs, and 1 TB of memory, with no user-accessible swap space.


Node Configuration (Login/GPU node)
Total Nodes: 1
Processor: AMD 7742 (Rome)
Processor Speed: 2.25 GHz
Sockets / Node: 2
Cores / Node: 128
Total CPU Cores: 128
Usable Memory / Node: 1 TB
Accelerators / Node: 2
Accelerator: NVIDIA V100
Memory / Accelerator: 32 GB
Storage on Node: 130 TB SAS HDD array
Interconnect: Intel XL710 40 GbE (QSFP+) to CWFS; Intel I350 Gigabit Ethernet to DREN
Operating System: RHEL 8.5

Queue Information

Builder does not employ a job scheduler, so there are no queues.
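
Because there is no scheduler, container builds run interactively from the login shell. The sketch below shows the kind of workflow Builder is intended for, assuming Singularity's standard command-line interface; the definition-file contents, file names, and the availability of unprivileged (--fakeroot) builds are illustrative assumptions, not site-confirmed settings.

    # mycontainer.def -- hypothetical definition file
    Bootstrap: docker
    From: ubuntu:22.04

    %post
        # install whatever the application needs inside the image
        apt-get update && apt-get install -y python3

    %runscript
        exec python3 "$@"

    # build the image, then verify the container can see the node's V100 GPUs
    singularity build --fakeroot mycontainer.sif mycontainer.def
    singularity exec --nv mycontainer.sif nvidia-smi

The --nv flag bind-mounts the host's NVIDIA driver stack into the container, so an image built and tested against Builder's V100s can then be copied to another system and run there.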

Coral

Coral is an Aspen Systems Linux cluster with 2 login nodes and, for job execution, 14 x86_64 and 8 ARM compute nodes. Coral uses 200-Gbps HDR InfiniBand as its high-speed interconnect for MPI messages and I/O traffic, and the Weka file system to manage its parallel file system. Coral's login nodes use Intel Xeon Gold 6326 processors with 489 GB of usable memory. Its Standard compute nodes use Ampere Altra processors, each node containing 489 GB of usable shared memory. Its Large-Memory compute nodes use Intel Xeon Gold 6338 processors, each node containing 1.9 TB of usable shared memory. Coral is intended as a batch-scheduled HPC system.

Note: the configuration of Coral is subject to change without notice.


Node Configuration
Login (2 nodes): Intel Xeon Gold 6326, 2.9 GHz; 2 sockets/node; 16 cores/node (32 total CPU cores); 489 GB usable memory/node; no accelerators; 3.5 TB NVMe on node; 200 Gbps HDR InfiniBand; RHEL 8
Standard (4 nodes): Ampere Altra, 3.0 GHz; 2 sockets/node; 32 cores/node (128 total CPU cores); 489 GB usable memory/node; no accelerators; 3.5 TB NVMe on node; 200 Gbps HDR InfiniBand; RHEL 8
Large-Memory (8 nodes): Intel Xeon Gold 6338, 2.0 GHz; 2 sockets/node; 32 cores/node (256 total CPU cores); 1.9 TB usable memory/node; no accelerators; 3.5 TB NVMe on node; 200 Gbps HDR InfiniBand; RHEL 8
GPU (8 nodes): Intel Xeon Gold 6338, 2.0 GHz; 2 sockets/node; 32 cores/node (256 total CPU cores); 1.9 TB usable memory/node; no accelerators listed; 3.5 TB NVMe on node; 200 Gbps HDR InfiniBand; RHEL 8
GPU (4 nodes): AMD EPYC 7513, 2.6 GHz; 2 sockets/node; 32 cores/node (128 total CPU cores); 981 GB usable memory/node; no accelerators listed; 3.5 TB NVMe on node; 200 Gbps HDR InfiniBand; RHEL 8
GPU (4 nodes): Ampere Altra, 3.0 GHz; 1 socket/node; 80 cores/node (320 total CPU cores); 489 GB usable memory/node; no accelerators listed; 5 TB NVMe on node; 200 Gbps HDR InfiniBand; RHEL 8
DPU (2 nodes): Intel Xeon Gold 6338, 2.0 GHz; 2 sockets/node; 32 cores/node (64 total CPU cores); 489 GB usable memory/node; 1 NVIDIA InfiniBand BlueField-2 (Rev1) with 16 GB memory; 3.5 TB NVMe on node; 200 Gbps HDR InfiniBand; RHEL 8
DPU (4 nodes): Ampere Altra, 3.0 GHz; 1 socket/node; 80 cores/node (320 total CPU cores); 489 GB usable memory/node; 2 NVIDIA InfiniBand BlueField-2 (Rev1) with 16 GB memory each; 5 TB NVMe on node; 200 Gbps HDR InfiniBand; RHEL 8
Queue Descriptions and Limits on Coral (highest to lowest priority)
debug: Max Wall Clock Time 30 Minutes; Max Cores Per Job 1; Max Queued Per User N/A; Max Running Per User N/A. Time/resource-limited for user testing and debug purposes.
standard: Max Wall Clock Time 7 Days; Max Cores Per Job 7; Max Queued Per User N/A; Max Running Per User N/A. Standard jobs.
transfer: Max Wall Clock Time 2 Days; Max Cores Per Job 1; Max Queued Per User N/A; Max Running Per User N/A. Data transfer for user jobs; not charged against project allocation. See the AFRL DSRC Archive Guide, Section 5.2.
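
To make these limits concrete, below is a minimal sketch of a batch submission to the debug queue. It assumes a PBS-style scheduler, which this page does not actually name for Coral; the script name, executable, and resource-selection keywords are illustrative assumptions.

    #!/bin/bash
    #PBS -N smoke_test
    #PBS -q debug                 # time/resource-limited queue for testing
    #PBS -l select=1:ncpus=1      # a single core, within the debug queue's per-job cap
    #PBS -l walltime=00:30:00     # the debug queue's maximum wall clock time

    cd "$PBS_O_WORKDIR"           # run from the directory the job was submitted from
    ./my_test_binary              # hypothetical executable

A request that exceeds a queue's caps (for example, a walltime beyond 30 minutes here) is typically rejected at submission time, so matching these directives to the table above avoids needless queue errors.
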
Reef

Reef is an Aspen Systems Linux cluster. The login and compute nodes are populated with Intel 2.5-GHz Cascade Lake processors. Reef has 5 CPU-only and 11 GPU compute nodes. Each compute node has two 20-core processors sharing 768 GB of DDR4 memory (650 GB usable), with no user-accessible swap space. Reef has 109 TB (formatted) of disk storage and is intended to be used as a batch-scheduled HPC system.

Note: the configuration of Reef is subject to change without notice.


Node Configuration
Login (1 node): Intel 6248 Cascade Lake, 2.5 GHz; 2 sockets/node; 38 cores/node (40 total CPU cores); 650 GB usable memory/node; no accelerators; no storage on node; Mellanox HDR100/EDR InfiniBand; RHEL 7
Standard (5 nodes): Intel 6248 Cascade Lake, 2.5 GHz; 2 sockets/node; 38 cores/node (200 total CPU cores); 650 GB usable memory/node; no accelerators; no storage on node; Mellanox HDR100/EDR InfiniBand; RHEL 7
Tesla V100 (9 nodes): Intel 6248 Cascade Lake, 2.5 GHz; 2 sockets/node; 38 cores/node (360 total CPU cores); 650 GB usable memory/node; 2 NVIDIA V100 (PCIe 3) with 32 GB memory each; no storage on node; Mellanox HDR100/EDR InfiniBand; RHEL 7
NVIDIA RTX 8000 (2 nodes): Intel 6248 Cascade Lake, 2.5 GHz; 2 sockets/node; 38 cores/node (80 total CPU cores); 650 GB usable memory/node; 2 NVIDIA RTX 8000 (PCIe 3) with 48 GB memory each; no storage on node; Mellanox HDR100/EDR InfiniBand; RHEL 7
Queue Descriptions and Limits on Reef
standard: Max Wall Clock Time None; Max Jobs N/A; Min Cores Per Job 1; Max Cores Per Job 190; Max Queued Per User N/A; Max Running Per User N/A. 5 non-GPU compute nodes.
tesla: Max Wall Clock Time None; Max Jobs None; Min Cores Per Job 1; Max Cores Per Job 342; Max Queued Per User N/A; Max Running Per User N/A. 9 GPU compute nodes (dual Tesla V100).
rtx: Max Wall Clock Time None; Max Jobs None; Min Cores Per Job 1; Max Cores Per Job 76; Max Queued Per User N/A; Max Running Per User N/A. 2 GPU compute nodes (dual Quadro RTX 8000).
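
As with Coral, a job script makes the queue limits concrete. Below is a hedged sketch for the tesla queue, again assuming a PBS-style scheduler (not named on this page); the executable and walltime choice are illustrative.

    #!/bin/bash
    #PBS -N v100_job
    #PBS -q tesla                 # routes to the dual-V100 compute nodes
    #PBS -l select=1:ncpus=38     # one node's worth of cores (the 342-core cap / 9 nodes)
    #PBS -l walltime=24:00:00     # the queue sets no maximum, so pick a sensible bound

    cd "$PBS_O_WORKDIR"
    nvidia-smi                    # confirm both V100s are visible to the job
    ./my_gpu_binary               # hypothetical GPU executable

Requesting cores in multiples of 38 keeps a job aligned to whole nodes, which matters on GPU nodes where the accelerators are attached per node rather than per core.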