Non-Allocated Systems
The HPCMP provides a limited number of HPC systems for non-allocated use. These systems are typically experimental or unique in nature and do not include a full complement of application software; however, they give users an opportunity to explore nontraditional architectures while operating free of the constraints of allocated use.
Builder is a single-node Aspen Systems Linux Gigabyte server intended primarily as a platform for building and sharing Singularity software containers. The system is populated with AMD EPYC 7742 processors and NVIDIA V100 graphics processing units. Builder uses Intel Gigabit Ethernet as its high-speed network for I/O traffic, and an AVAGO MegaRAID controller manages its local file system, which provides 130 TB of disk storage. Builder is equipped with two 64-core processors (128 cores total), two GPUs, and 1,024 GB of memory, with no user-accessible swap space, and runs the RHEL 8 operating system.
| Date / Time | Details |
|---|---|
| 2025 Nov 19 07:30 - Nov 20 00:00 HI | System Maintenance |
| | Login/GPU |
|---|---|
| Total Nodes | 1 |
| Processor | AMD 7742 Rome |
| Processor Speed | 2.25 GHz |
| Sockets / Node | 2 |
| Cores / Node | 128 |
| Total CPU Cores | 128 |
| Usable Memory / Node | 1 TB |
| Accelerators / Node | 2 |
| Accelerator | NVIDIA V100 |
| Memory / Accelerator | 32 GB |
| Storage on Node | 130 TB SAS HDD Array |
| Interconnect | Intel XL710 40 GbE QSFP+ to CWFS; Intel I350 Gigabit Ethernet to DREN |
| Operating System | RHEL 8.5 |
Queue Information
Builder does not employ a job scheduler, so there are no queues.
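Because there is no scheduler, container builds are run interactively from the command line. The sketch below is a minimal illustration of that workflow, assuming the `singularity` command is on your PATH and that a scratch directory such as `~/containers` is writable; the file names, definition contents, and `--fakeroot` option are hypothetical and depend on site configuration.

```python
#!/usr/bin/env python3
"""Minimal sketch: build a Singularity image interactively on Builder.

Assumptions (not taken from this page): `singularity` is on PATH, fakeroot
builds are permitted, and ~/containers is a writable scratch location.
"""
import pathlib
import subprocess

workdir = pathlib.Path.home() / "containers"   # hypothetical location
workdir.mkdir(exist_ok=True)

# A minimal definition file that bootstraps from a Docker Hub base image.
definition = """\
Bootstrap: docker
From: ubuntu:22.04

%post
    apt-get update && apt-get install -y build-essential

%runscript
    echo "Hello from a container built on Builder"
"""
(workdir / "hello.def").write_text(definition)

# Build the image; with no job scheduler, this runs directly on the node.
subprocess.run(
    ["singularity", "build", "--fakeroot",
     str(workdir / "hello.sif"), str(workdir / "hello.def")],
    check=True,
)
```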
Coral is an Aspen Systems Linux Cluster with x86_64 and ARM nodes, enabling architecture comparisons. It features 8-way A100 GPUs, AMD MI100/MI250 GPUs, and DPUs for accelerated computing. With standard and large-memory configurations, InfiniBand interconnect, and a Weka parallel file system, Coral supports diverse HPC tasks requiring batch scheduling and parallel processing. It serves as a testbed for ARM-based HPC evaluation.
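Because Coral mixes x86_64 and ARM (aarch64) nodes, a common first step is confirming which architecture each MPI rank landed on. The sketch below assumes an MPI stack and the `mpi4py` package are available (neither is confirmed by this page) and would be launched with something like `mpiexec -n 8 python arch_report.py`.

```python
#!/usr/bin/env python3
"""Minimal sketch: report the CPU architecture seen by each MPI rank on Coral."""
import platform
import socket

from mpi4py import MPI  # assumes mpi4py is installed on the system

comm = MPI.COMM_WORLD
info = (comm.Get_rank(), socket.gethostname(), platform.machine())

# Gather one (rank, host, arch) tuple per rank onto rank 0 and print a summary,
# making x86_64 vs aarch64 node assignments easy to compare side by side.
for rank, host, arch in sorted(comm.gather(info, root=0) or []):
    print(f"rank {rank:3d}  host {host:15s}  arch {arch}")
```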
| Date / Time | Details |
|---|---|
| 2025 Nov 24 07:30 - Nov 25 00:00 HI | System Maintenance |
| | Login | Standard | Large-Memory | GPU | GPU | GPU | DPU | DPU |
|---|---|---|---|---|---|---|---|---|
| Total Nodes | 2 | 4 | 8 | 8 | 4 | 4 | 2 | 4 |
| Processor | Intel Xeon Gold 6326 | Ampere Altra | Intel Xeon Gold 6338 | Intel Xeon Gold 6338 | AMD EPYC 7513 | Ampere Altra | Intel Xeon Gold 6338 | Ampere Altra |
| Processor Speed | 2.9 GHz | 3.0 GHz | 2.0 GHz | 2.0 GHz | 2.6 GHz | 3.0 GHz | 2.0 GHz | 3.0 GHz |
| Sockets / Node | 2 | 2 | 2 | 2 | 2 | 1 | 2 | 1 |
| Cores / Node | 16 | 32 | 32 | 32 | 32 | 80 | 32 | 80 |
| Total CPU Cores | 32 | 128 | 256 | 256 | 128 | 320 | 64 | 320 |
| Usable Memory / Node | 489 GB | 489 GB | 1.9 TB | 1.9 TB | 981 GB | 489 GB | 489 GB | 489 GB |
| Accelerators / Node | None | None | None | None | None | None | 1 | 2 |
| Accelerator | N/A | N/A | N/A | N/A | N/A | N/A | NVIDIA InfiniBand BlueField-2 (Rev 1) | NVIDIA InfiniBand BlueField-2 (Rev 1) |
| Memory / Accelerator | N/A | N/A | N/A | N/A | N/A | N/A | 16 GB | 16 GB |
| Storage on Node | 3.5 TB NVMe | 3.5 TB NVMe | 3.5 TB NVMe | 3.5 TB NVMe | 3.5 TB NVMe | 5 TB NVMe | 3.5 TB NVMe | 5 TB NVMe |
| Interconnect | 200 Gbps HDR InfiniBand | 200 Gbps HDR InfiniBand | 200 Gbps HDR InfiniBand | 200 Gbps HDR InfiniBand | 200 Gbps HDR InfiniBand | 200 Gbps HDR InfiniBand | 200 Gbps HDR InfiniBand | 200 Gbps HDR InfiniBand |
| Operating System | RHEL 8 | RHEL 8 | RHEL 8 | RHEL 8 | RHEL 8 | RHEL 8 | RHEL 8 | RHEL 8 |
| Priority | Queue Name | Max Wall Clock Time | Max Cores Per Job | Max Queued Per User | Max Running Per User | Description |
|---|---|---|---|---|---|---|
| Highest | debug | 30 Minutes | N/A | N/A | N/A | Time/resource-limited for user testing and debug purposes |
| | standard | 7 Days | N/A | N/A | N/A | Standard jobs |
| Lowest | transfer | 2 Days | N/A | N/A | N/A | Data transfer for user jobs. Not charged against project allocation. See the AFRL DSRC Archive Guide, section 5.2. |
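As an illustration of how these queues might be used, the sketch below submits a short test job to the `debug` queue within its 30-minute limit. It assumes a PBS-style scheduler with a `qsub` command, which this page does not explicitly name; the `select` line and the `ACCOUNT_ID` placeholder are likewise assumptions to adapt to your project.

```python
#!/usr/bin/env python3
"""Minimal sketch: submit a short test job to Coral's debug queue."""
import pathlib
import subprocess

# Hypothetical PBS-style job script; resource values are illustrative only.
job_script = """\
#!/bin/bash
#PBS -q debug
#PBS -l walltime=00:30:00
#PBS -l select=1:ncpus=32
#PBS -A ACCOUNT_ID
cd $PBS_O_WORKDIR
echo "debug job ran on $(hostname)"
"""

script_path = pathlib.Path("debug_test.pbs")
script_path.write_text(job_script)

# qsub prints the new job ID on success.
result = subprocess.run(["qsub", str(script_path)],
                        capture_output=True, text=True, check=True)
print("submitted:", result.stdout.strip())
```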
Reef is an Aspen Systems Linux Cluster. The login and compute nodes are populated with Intel 2.5-GHz Cascade Lake processors. Reef has 5 CPU-only and 11 GPU compute nodes. Each compute node has two 20-core processors sharing 768 GB of DDR4 memory, with no user-accessible swap space. Reef has 109 TB (formatted) of disk storage and is intended to be used as a batch-scheduled HPC system.
Note: the configuration of Reef is subject to change without notice.
| Date / Time | Details |
|---|---|
| 2025 Nov 17 07:30 - Nov 28 00:00 HI | System Maintenance |
| | Login | Standard | Tesla V100 | NVIDIA RTX 8000 |
|---|---|---|---|---|
| Total Nodes | 1 | 5 | 9 | 2 |
| Processor | Intel 6248 Cascade Lake | Intel 6248 Cascade Lake | Intel 6248 Cascade Lake | Intel 6248 Cascade Lake |
| Processor Speed | 2.5 GHz | 2.5 GHz | 2.5 GHz | 2.5 GHz |
| Sockets / Node | 2 | 2 | 2 | 2 |
| Cores / Node | 38 | 38 | 38 | 38 |
| Total CPU Cores | 40 | 200 | 360 | 80 |
| Usable Memory / Node | 650 GB | 650 GB | 650 GB | 650 GB |
| Accelerators / Node | None | None | 2 | 2 |
| Accelerator | N/A | N/A | NVIDIA V100 PCIe 3 | NVIDIA RTX 8000 PCIe 3 |
| Memory / Accelerator | N/A | N/A | 32 GB | 48 GB |
| Storage on Node | None | None | None | None |
| Interconnect | Mellanox HDR100, EDR InfiniBand | Mellanox HDR100, EDR InfiniBand | Mellanox HDR100, EDR InfiniBand | Mellanox HDR100, EDR InfiniBand |
| Operating System | RHEL 7 | RHEL 7 | RHEL 7 | RHEL 7 |
| Queue Name | Max Wall Clock Time | Max Jobs | Min Cores Per Job | Max Cores Per Job | Max Queued Per User | Max Running Per User | Description |
|---|---|---|---|---|---|---|---|
| standard | None | N/A | 1 | 190 | N/A | N/A | 5 Non-GPU Compute nodes |
| tesla | None | None | 1 | 342 | N/A | N/A | 9 GPU Compute nodes (Dual-Tesla V100) |
| rtx | None | None | 1 | 76 | N/A | N/A | 2 GPU Compute nodes (Dual-Quadro RTX 8000) |
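Once a job lands in the `tesla` or `rtx` queue, it can be useful to confirm which accelerators are actually visible. The sketch below assumes the NVIDIA driver's `nvidia-smi` utility is present on Reef's GPU nodes (not stated on this page) and prints the name and memory of each visible device, which should correspond to the V100 32-GB or RTX 8000 48-GB entries in the table above.

```python
#!/usr/bin/env python3
"""Minimal sketch: list the GPUs visible to a Reef batch job."""
import shutil
import subprocess

if shutil.which("nvidia-smi") is None:
    # CPU-only (standard queue) nodes will take this branch.
    print("nvidia-smi not found: this looks like a CPU-only node")
else:
    # Query the name and total memory of each visible GPU.
    out = subprocess.run(
        ["nvidia-smi", "--query-gpu=name,memory.total", "--format=csv,noheader"],
        capture_output=True, text=True, check=True,
    )
    for line in out.stdout.strip().splitlines():
        print("GPU:", line)
```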