Unclassified Systems
Barfoot is an HPE Cray EX4000 system located at the ERDC DSRC. It has 1,092 standard compute nodes, 4 large-memory nodes, and 12 1-GPU visualization nodes (a total of 212,736 compute cores). It has 382 TB of usable memory and is rated at 8.2 peak PFLOPS.
| | Login | Standard | Large-Memory | Visualization |
|---|---|---|---|---|
| Total Nodes | 10 | 1,092 | 4 | 12 |
| Processor | AMD 9654 Genoa | AMD 9654 Genoa | AMD 9654 Genoa | AMD 9654 Genoa |
| Processor Speed | 2.4 GHz | 2.4 GHz | 2.4 GHz | 2.4 GHz |
| Sockets / Node | 2 | 2 | 2 | 2 |
| Cores / Node | 192 | 192 | 192 | 192 |
| Total CPU Cores | 1,920 | 209,664 | 768 | 2,304 |
| Usable Memory / Node | 8 GB | 335 GB | 2,050 GB | 700 GB |
| Accelerators / Node | None | None | None | 1 |
| Accelerator | N/A | N/A | N/A | NVIDIA L40S PCIe 5 |
| Memory / Accelerator | N/A | N/A | N/A | 48 GB |
| Storage on Node | 985 GB NVMe SSD | None | 10 TB NVMe SSD | None |
| Interconnect | Ethernet | HPE Slingshot | HPE Slingshot | HPE Slingshot |
| Operating System | SLES 15 | SLES 15 | SLES 15 | SLES 15 |
| Priority | Queue Name | Max Wall Clock Time | Max Cores Per Job | Description |
|---|---|---|---|---|
| Highest | urgent | 168 Hours | 9,408 | Jobs belonging to DoD HPCMP Urgent Projects |
| | debug^ | 1 Hour | 13,824 | Time/resource-limited for user testing and debug purposes |
| | HIE | 24 Hours | 192 | Rapid response for interactive work. For more information see the HPC Interactive Environment (HIE) User Guide. |
| | high_lw | 168 Hours | 7,488 | Long-walltime jobs belonging to DoD HPCMP High Priority Projects |
| | high_lg | 24 Hours | 100,032 | Large jobs belonging to DoD HPCMP High Priority Projects |
| | high_sm | 24 Hours | 9,408 | Small jobs belonging to DoD HPCMP High Priority Projects |
| | frontier_lw | 168 Hours | 7,488 | Long-walltime jobs belonging to DoD HPCMP Frontier Projects |
| | frontier_lg | 24 Hours | 100,032 | Large jobs belonging to DoD HPCMP Frontier Projects |
| | frontier_sm | 24 Hours | 9,408 | Small jobs belonging to DoD HPCMP Frontier Projects |
| | standard_lw | 168 Hours | 7,488 | Long-walltime standard jobs |
| | standard_lg | 24 Hours | 100,032 | Large standard jobs |
| | standard_sm | 24 Hours | 9,408 | Small standard jobs |
| | serial | 168 Hours | 1 | Single-core serial jobs. 1 core per hour charged to project allocation. |
| | transfer | 48 Hours | 1 | Data transfer for user jobs. Not charged against project allocation. See the ERDC DSRC Archive Guide, section 5.2. |
| Lowest | background^^ | 4 Hours | 9,408 | User jobs that are not charged against the project allocation |
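The `_sm`, `_lg`, and `_lw` suffixes split each priority level by job size and walltime. The Python sketch below applies the limits from the table above to route a hypothetical job; the routing rule and the `pick_standard_queue` helper are assumptions made for illustration, not scheduler policy, so confirm queue limits on the system itself.

```python
# Illustrative only: limits copied from the Barfoot standard queues above.
# The routing rule (long-walltime vs. large vs. small) is an assumption,
# not official scheduler policy.

BARFOOT_STANDARD_QUEUES = {
    "standard_lw": {"max_hours": 168, "max_cores": 7_488},
    "standard_lg": {"max_hours": 24, "max_cores": 100_032},
    "standard_sm": {"max_hours": 24, "max_cores": 9_408},
}

def pick_standard_queue(cores: int, hours: float) -> str:
    """Pick a Barfoot standard queue for a job needing `cores` cores for `hours` hours."""
    if hours > 24:
        queue = "standard_lw"      # long-walltime variant
    elif cores > 9_408:
        queue = "standard_lg"      # large-job variant
    else:
        queue = "standard_sm"      # small-job variant
    limits = BARFOOT_STANDARD_QUEUES[queue]
    if hours > limits["max_hours"] or cores > limits["max_cores"]:
        raise ValueError(f"request exceeds {queue} limits: {limits}")
    return queue

print(pick_standard_queue(cores=1_536, hours=12))    # standard_sm
print(pick_standard_queue(cores=20_000, hours=8))    # standard_lg
print(pick_standard_queue(cores=3_840, hours=96))    # standard_lw
```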
Blueback is an HPE Cray EX4000 system located at the Navy DSRC. It has 1,280 standard compute nodes, 16 large-memory nodes, 32 AI/ML nodes, and 24 visualization nodes (a total of 1,352 compute nodes or 256,512 compute cores).
| | Login | Standard | Large-Memory | Visualization | AI/ML | Transfer |
|---|---|---|---|---|---|---|
| Total Nodes | 12 | 1,280 | 16 | 24 | 32 | 6 |
| Processor | AMD 9654 Genoa | AMD 9654 Genoa | AMD 9654 Genoa | AMD 9654 Genoa | AMD Instinct MI300A APU | AMD 9654 Genoa |
| Processor Speed | 2.4 GHz | 2.4 GHz | 2.4 GHz | 2.4 GHz | 2.4 GHz | 2.4 GHz |
| Sockets / Node | 2 | 2 | 2 | 2 | 2 | 2 |
| Cores / Node | 192 | 192 | 192 | 192 | 96 | 192 |
| Total CPU Cores | 2,304 | 245,760 | 3,072 | 4,608 | 3,072 | 1,152 |
| Usable Memory / Node | 719 GB | 360 GB | 1.4 TB | 733 GB | 478 GB | 703 GB |
| Accelerators / Node | None | None | None | 1 | 4 | None |
| Accelerator | N/A | N/A | N/A | NVIDIA L40 | AMD MI300A | N/A |
| Memory / Accelerator | N/A | N/A | N/A | 48 GB | 128 GB | N/A |
| Storage on Node | None | None | 3.2 TB NVMe | None | 2 x 3.84 TB NVMe | None |
| Interconnect | HPE Slingshot | HPE Slingshot | HPE Slingshot | HPE Slingshot | HPE Slingshot | HPE Slingshot |
| Operating System | SLES | SLES | SLES | SLES | SLES | SLES |
| Priority | Queue Name | Max Wall Clock Time | Max Cores Per Job | Description |
|---|---|---|---|---|
| Highest | urgent | 24 Hours | 24,756 | Jobs belonging to DoD HPCMP Urgent Projects |
| | debug | 30 Minutes | 16,128 | Time/resource-limited for user testing and debug purposes |
| | HIE | 24 Hours | 4,608 | Rapid response for interactive work. For more information see the HPC Interactive Environment (HIE) User Guide. |
| | frontier | 168 Hours | 98,304 | Jobs belonging to DoD HPCMP Frontier Projects |
| | high | 168 Hours | 98,304 | Jobs belonging to DoD HPCMP High Priority Projects |
| | serial | 168 Hours | 1 | Single-core serial jobs. 1 core per hour charged to project allocation. |
| | standard | 168 Hours | 49,152 | Standard jobs |
| | transfer | 48 Hours | 1 | Data transfer for user jobs. Not charged against project allocation. See the Navy DSRC Archive Guide, section 5.2. |
| Lowest | background | 4 Hours | 6,144 | User jobs that are not charged against the project allocation |
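Blueback's standard compute nodes have 192 cores each, so a core request maps to whole nodes before it is checked against a queue's Max Cores Per Job limit. Below is a minimal Python sketch of that arithmetic, assuming whole-node allocation; the actual placement policy is defined by the system's scheduler and user guide.

```python
import math

CORES_PER_NODE = 192   # Blueback standard compute node, from the table above

def nodes_for(cores: int, max_cores: int) -> int:
    """Round a core request up to whole 192-core nodes and check it against a
    queue's Max Cores Per Job limit. Whole-node allocation is an assumption
    here; the real placement policy is defined by the system's scheduler."""
    nodes = math.ceil(cores / CORES_PER_NODE)
    if nodes * CORES_PER_NODE > max_cores:
        raise ValueError(
            f"{nodes} nodes ({nodes * CORES_PER_NODE} cores) exceeds "
            f"the queue limit of {max_cores} cores"
        )
    return nodes

# Example: a 10,000-core request in the standard queue (max 49,152 cores)
print(nodes_for(10_000, max_cores=49_152))   # 53 nodes (10,176 cores)
```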
Carpenter is an HPE Cray EX4000 system located at the ERDC DSRC. It has 1,632 standard compute nodes, 4 large-memory nodes, and 8 GPU-accelerated visualization nodes (a total of 313,344 compute cores). It has 585 TB of usable memory and is rated at 17.65 peak PFLOPS.
| | Login | Standard | Large-Memory | Visualization |
|---|---|---|---|---|
| Total Nodes | 10 | 1,632 | 4 | 8 |
| Processor | AMD 9654 Genoa | AMD 9654 Genoa | AMD 9654 Genoa | AMD 7713 Milan |
| Processor Speed | 2.4 GHz | 2.4 GHz | 2.4 GHz | 2.0 GHz |
| Sockets / Node | 2 | 2 | 2 | 2 |
| Cores / Node | 192 | 192 | 192 | 128 |
| Total CPU Cores | 1,920 | 313,344 | 768 | 1,024 |
| Usable Memory / Node | 8 GB | 349 GB | 2.973 TB | 467 GB |
| Accelerators / Node | None | None | None | 1 |
| Accelerator | N/A | N/A | N/A | NVIDIA A40 PCIe 4 |
| Memory / Accelerator | N/A | N/A | N/A | 48 GB |
| Storage on Node | 1.3 TB NVMe SSD | None | 8.8 TB NVMe SSD | None |
| Interconnect | Ethernet | HPE Slingshot | HPE Slingshot | HPE Slingshot |
| Operating System | SLES 15 | SLES 15 | SLES 15 | SLES 15 |
| Priority | Queue Name | Max Wall Clock Time | Max Cores Per Job | Max Queued Per User | Max Running Per User | Description |
|---|---|---|---|---|---|---|
| Highest | urgent | 24 Hours | 9,408 | N/A | N/A | Jobs belonging to DoD HPCMP Urgent Projects |
| | debug^ | 1 Hour | 13,824 | N/A | 2 | Time/resource-limited for user testing and debug purposes |
| | HIE | 24 Hours | 192 | 1 | 1 | Rapid response for interactive work. For more information see the HPC Interactive Environment (HIE) User Guide. |
| | high_lw | 168 Hours | 7,488 | N/A | 3 | Long-walltime jobs belonging to DoD HPCMP High Priority Projects |
| | high_lg | 24 Hours | 100,032 | N/A | 2 | Large jobs belonging to DoD HPCMP High Priority Projects |
| | high_sm | 24 Hours | 9,408 | N/A | 17 | Small jobs belonging to DoD HPCMP High Priority Projects |
| | frontier_lw | 168 Hours | 7,488 | N/A | 3 | Long-walltime jobs belonging to DoD HPCMP Frontier Projects |
| | frontier_lg | 24 Hours | 100,032 | N/A | 2 | Large jobs belonging to DoD HPCMP Frontier Projects |
| | frontier_sm | 24 Hours | 9,408 | N/A | 17 | Small jobs belonging to DoD HPCMP Frontier Projects |
| | standard_lw | 168 Hours | 7,488 | N/A | 3 | Long-walltime standard jobs |
| | standard_lg | 24 Hours | 100,032 | N/A | 2 | Large standard jobs |
| | standard_sm | 24 Hours | 9,408 | N/A | 17 | Small standard jobs |
| | serial | 168 Hours | 1 | N/A | 10 | Single-core serial jobs. 1 core per hour charged to project allocation. |
| | transfer | 48 Hours | 1 | N/A | 10 | Data transfer for user jobs. Not charged against project allocation. See the ERDC DSRC Archive Guide, section 5.2. |
| Lowest | background^^ | 4 Hours | 9,408 | N/A | 3 | User jobs that are not charged against the project allocation |
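As the queue descriptions above note, serial jobs are charged 1 core per hour, while background and transfer jobs are not charged at all. The Python sketch below gives a rough core-hour estimate under the assumption that charged queues bill cores multiplied by wall-clock hours; actual accounting (for example, whole-node charging) follows the HPCMP allocation policy, not this sketch.

```python
# Rough core-hour estimate based on the Carpenter queue descriptions above.
# Assumptions: charged queues bill cores x wall-clock hours, the serial queue
# bills exactly 1 core, and background/transfer jobs are free. Real accounting
# follows the HPCMP allocation policy, not this sketch.

def core_hours_charged(queue: str, cores: int, hours: float) -> float:
    if queue in ("background", "transfer"):
        return 0.0           # not charged against the project allocation
    if queue == "serial":
        return hours         # 1 core per hour charged
    return cores * hours

print(core_hours_charged("standard_sm", cores=1_920, hours=24))  # 46,080 core-hours
print(core_hours_charged("serial", cores=1, hours=100))          # 100 core-hours
print(core_hours_charged("background", cores=9_408, hours=4))    # 0.0
```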
Jean is a Liqid system located at the ARL DSRC. It has 494 standard compute nodes, 24 large-memory nodes, 36 AI/ML nodes, and 24 visualization nodes.
| | Login | Standard | Large-Memory | Visualization | AI/ML | Transfer |
|---|---|---|---|---|---|---|
| Total Nodes | 9 | 494 | 24 | 24 | 29 | 2 |
| Processor | Intel 9242 Cascade Lake | Intel 9242 Cascade Lake | Intel 9242 Cascade Lake | Intel 9242 Cascade Lake | Intel 9242 Cascade Lake | Intel 9242 Cascade Lake |
| Processor Speed | 2.3 GHz | 2.3 GHz | 2.3 GHz | 2.3 GHz | 2.3 GHz | 2.3 GHz |
| Sockets / Node | 2 | 2 | 2 | 2 | 2 | 2 |
| Cores / Node | 96^ | 96^ | 96^ | 96^ | 96^ | 96^ |
| Total CPU Cores | 864 | 47,424 | 2,304 | 2,304 | 2,784 | 192 |
| Usable Memory / Node | 748 GB | 385 GB or 757 GB | 1,546 GB | 772 GB | 772 GB | 772 GB |
| Accelerators / Node | 1 or 4 | None | None | 1 | 4 | None |
| Accelerator | NVIDIA A100 PCIe4 | N/A | N/A | NVIDIA A100 PCIe 4 | NVIDIA A100 PCIe 4 | N/A |
| Memory / Accelerator | 40 GB | N/A | N/A | 40 GB | 40 GB | N/A |
| Storage on Node | 7 TB NVMe SSD | None | 14 TB NVMe SSD | 7 TB NVMe SSD | 7 TB NVMe SSD | N/A |
| Interconnect | HDR InfiniBand | HDR InfiniBand | HDR InfiniBand | HDR InfiniBand | HDR Infiniband | HDR InfiniBand |
| Operating System | RHEL 8 | RHEL 8 | RHEL 8 | RHEL 8 | RHEL 8 | RHEL 8 |
| Priority | Queue Name | Max Wall Clock Time | Max Cores Per Job | Max Queued Per User | Max Running Per User | Description |
|---|---|---|---|---|---|---|
| Highest | urgent | 96 Hours | 52,992^ | N/A | N/A | Jobs belonging to DoD HPCMP Urgent Projects |
| | HIE | 24 Hours | 1,152^ | N/A | N/A | Rapid response for interactive work. For more information see the HPC Interactive Environment (HIE) User Guide. |
| | transfer | 48 Hours | 192^ | N/A | N/A | Data transfer for user jobs. Not charged against project allocation. See the ARL DSRC Archive Guide, section 5.2. |
| | debug^^ | 1 Hour | 52,992^ | N/A | N/A | Time/resource-limited for user testing and debug purposes |
| | high | 168 Hours | 52,992^ | N/A | N/A | Jobs belonging to DoD HPCMP High Priority Projects |
| | frontier | 168 Hours | 52,992^ | N/A | N/A | Jobs belonging to DoD HPCMP Frontier Projects |
| | cots | 96 Hours | 52,992^ | N/A | N/A | Jobs using COTS applications |
| | interactive | 12 Hours | 52,992^ | N/A | N/A | Rapid response for interactive work. |
| | standard | 168 Hours | 52,992^ | N/A | N/A | Standard jobs |
| | standard-long | 336 Hours | 52,992^ | N/A | N/A | Standard jobs needing additional wall time |
| | background^^^ | 4 Hours | 52,992^ | N/A | N/A | User jobs that are not charged against the project allocation |
| Lowest | serial | 96 Hours | 192^ | N/A | N/A | User jobs that are not charged against the project allocation |
Narwhal is an HPE Cray EX system located at the Navy DSRC. It has 2,304 standard compute nodes, 26 large-memory nodes, 16 visualization accelerated nodes, 32 Single-GPU MLA accelerated nodes, and 32 Dual-GPU MLA accelerated nodes (a total of 2,410 compute nodes or 308,480 compute cores). It has 640 TB of memory and is rated at 13.5 peak PFLOPS.
| | Login | Standard | Large-Memory | Visualization | Single-GPU MLA | Dual-GPU MLA |
|---|---|---|---|---|---|---|
| Total Nodes | 11 | 2,304 | 26 | 16 | 32 | 32 |
| Processor | AMD 7H12 Rome | AMD 7H12 Rome | AMD 7H12 Rome | AMD 7H12 Rome | AMD 7H12 Rome | AMD 7H12 Rome |
| Processor Speed | 2.6 GHz | 2.6 GHz | 2.6 GHz | 2.6 GHz | 2.6 GHz | 2.6 GHz |
| Sockets / Node | 2 | 2 | 2 | 2 | 2 | 2 |
| Cores / Node | 128 | 128 | 128 | 128 | 128 | 128 |
| Total CPU Cores | 1,408 | 294,912 | 3,328 | 2,048 | 4,096 | 4,096 |
| Usable Memory / Node | 226 GB | 238 GB | 995 GB | 234 GB | 239 GB | 239 GB |
| Accelerators / Node | None | None | None | 1 | 1 | 2 |
| Accelerator | N/A | N/A | N/A | NVIDIA V100 PCIe 3 | NVIDIA V100 PCIe 3 | NVIDIA V100 PCIe 3 |
| Memory / Accelerator | N/A | N/A | N/A | 32 GB | 32 GB | 32 GB |
| Storage on Node | 880 GB SSD | None | 1.8 TB SSD | None | 880 GB SSD | 880 GB SSD |
| Interconnect | HPE Slingshot | HPE Slingshot | HPE Slingshot | HPE Slingshot | HPE Slingshot | HPE Slingshot |
| Operating System | SLES | SLES | SLES | SLES | SLES | SLES |
| Priority | Queue Name | Max Wall Clock Time | Max Cores Per Job | Max Queued Per User | Max Running Per User | Description |
|---|---|---|---|---|---|---|
| Highest | urgent | 24 Hours | 16,384 | | | Jobs belonging to DoD HPCMP Urgent Projects |
| | debug | 30 Minutes | 8,192 | | | Time/resource-limited for user testing and debug purposes |
| | HIE | 24 Hours | 3,072 | | | Rapid response for interactive work. For more information see the HPC Interactive Environment (HIE) User Guide. |
| | frontier | 168 Hours | 16,384 | | | Jobs belonging to DoD HPCMP Frontier Projects |
| | high | 168 Hours | 16,384 | | | Jobs belonging to DoD HPCMP High Priority Projects |
| | serial | 168 Hours | 1 | | | Single-core serial jobs. 1 core per hour charged to project allocation. |
| | standard | 168 Hours | 16,384 | | | Standard jobs |
| | transfer | 48 Hours | 1 | | | Data transfer for user jobs. Not charged against project allocation. See the Navy DSRC Archive Guide, section 5.2. |
| Lowest | background | 4 Hours | 8,192 | | | User jobs that are not charged against the project allocation |
Nautilus is a Penguin Computing TrueHPC system located at the Navy DSRC. It has 1,384 standard compute nodes, 16 large-memory nodes, 16 visualization accelerated nodes, 32 AI/ML nodes, and 32 High Core Performance nodes (a total of 1,480 compute nodes or 186,368 compute cores). It has 386 TB of memory and is rated at 8.5 peak PFLOPS.
| | Login | Standard | Large-Memory | Visualization | AI/ML | High Core Performance |
|---|---|---|---|---|---|---|
| Total Nodes | 14 | 1,384 | 16 | 16 | 32 | 32 |
| Processor | AMD 7713 Milan | AMD 7713 Milan | AMD 7713 Milan | AMD 7713 Milan | AMD 7713 Milan | AMD 73F3 Milan |
| Processor Speed | 2 GHz | 2 GHz | 2 GHz | 2 GHz | 2 GHz | 3.4 GHz |
| Sockets / Node | 2 | 2 | 2 | 2 | 2 | 2 |
| Cores / Node | 128 | 128 | 128 | 128 | 128 | 32 |
| Total CPU Cores | 1,792 | 177,152 | 2,048 | 2,048 | 4,096 | 1,024 |
| Usable Memory / Node | 433 GB | 231 GB | 998 GB | 491 GB | 491 GB | 491 GB |
| Accelerators / Node | None | None | None | 1 | 4 | None |
| Accelerator | N/A | N/A | N/A | NVIDIA A40 PCIe 4 | NVIDIA A100 SXM 4 | N/A |
| Memory / Accelerator | N/A | N/A | N/A | 48 GB | 40 GB | N/A |
| Storage on Node | 49 GB NVMe SSD | None | 49 GB NVMe SSD | None | 49 GB NVMe SSD | None |
| Interconnect | HDR InfiniBand | HDR InfiniBand | HDR InfiniBand | HDR InfiniBand | HDR InfiniBand | HDR InfiniBand |
| Operating System | RHEL | RHEL | RHEL | RHEL | RHEL | RHEL |
| Priority | Queue Name | Max Wall Clock Time | Max Cores Per Job | Max Queued Per User | Max Running Per User | Description |
|---|---|---|---|---|---|---|
| Highest | urgent | 24 Hours | 16,384 | N/A | 10 | Jobs belonging to DoD HPCMP Urgent Projects |
| | debug | 30 Minutes | 10,752 | N/A | 4 | Time/resource-limited for user testing and debug purposes |
| | HIE | 24 Hours | 3,072 | N/A | 3 | Rapid response for interactive work. For more information see the HPC Interactive Environment (HIE) User Guide. |
| | frontier | 168 Hours | 65,536 | N/A | 50 | Jobs belonging to DoD HPCMP Frontier Projects |
| | high | 168 Hours | 65,536 | N/A | 50 | Jobs belonging to DoD HPCMP High Priority Projects |
| | serial | 168 Hours | 1 | N/A | N/A | Single-core serial jobs. 1 core per hour charged to project allocation. |
| | standard | 168 Hours | 16,384 | N/A | 150 | Standard jobs |
| | transfer | 48 Hours | 1 | N/A | N/A | Data transfer for user jobs. Not charged against project allocation. See the Navy DSRC Archive Guide, section 5.2. |
| Lowest | background | 4 Hours | 4,096 | N/A | 10 | User jobs that are not charged against the project allocation |
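Nautilus attaches a per-user Max Running limit to several queues. The Python sketch below is a hypothetical pre-submission check against those limits; in practice the count of currently running jobs would come from the scheduler's own job listing rather than being passed in by hand.

```python
# Hypothetical pre-submission check against the "Max Running Per User" column
# above. In practice the running-job count would come from the scheduler's
# own job listing; here it is simply passed in.

NAUTILUS_MAX_RUNNING = {
    "urgent": 10,
    "debug": 4,
    "HIE": 3,
    "frontier": 50,
    "high": 50,
    "standard": 150,
    "background": 10,
}

def can_submit(queue: str, running_now: int) -> bool:
    """Return True if one more job in `queue` would stay within the per-user limit."""
    limit = NAUTILUS_MAX_RUNNING.get(queue)
    if limit is None:
        return True          # serial/transfer list no per-user running limit
    return running_now + 1 <= limit

print(can_submit("debug", running_now=4))      # False: the 4-job limit is already reached
print(can_submit("standard", running_now=20))  # True
```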
Raider is a Penguin Computing TrueHPC system located at the AFRL DSRC. It has 1,480 standard compute nodes, 8 large-memory nodes, 24 visualization nodes, 32 MLA nodes, and 64 High Clock nodes (a total of 199,680 compute cores). It has 447 TB of memory and is rated at 9 peak PFLOPS.
| | Login | Login-viz | Standard | Large-Memory | Visualization | MLA | High Clock | Transfer |
|---|---|---|---|---|---|---|---|---|
| Total Nodes | 6 | 4 | 1,480 | 8 | 24 | 32 | 64 | 2 |
| Processor | AMD 7713 Milan | AMD 7713 Milan | AMD 7713 Milan | AMD 7713 Milan | AMD 7713 Milan | AMD 7713 Milan | AMD 73F3 Milan | AMD 7713 Milan |
| Processor Speed | 2.0 GHz | 2.0 GHz | 2.0 GHz | 2.0 GHz | 2.0 GHz | 2.0 GHz | 3.4 GHz | 2.0 GHz |
| Sockets / Node | 2 | 2 | 2 | 2 | 2 | 2 | 2 | 2 |
| Cores / Node | 128 | 128 | 128 | 128 | 128 | 128 | 32 | 128 |
| Total CPU Cores | 768 | 512 | 189,440 | 1,024 | 3,072 | 4,096 | 2,048 | 256 |
| Usable Memory / Node | 503 GB | 503 GB | 251 GB | 2.0 TB | 503 GB | 503 GB | 503 GB | 503 GB |
| Accelerators / Node | 1 | 1 | None | None | 1 | 4 | None | None |
| Accelerator | NVIDIA A40 PCIe 4 | NVIDIA A100 SXM 4 | N/A | N/A | NVIDIA A40 PCIe 4 | NVIDIA A100 SXM 4 | N/A | N/A |
| Memory / Accelerator | 45 GB | 40 GB | N/A | N/A | 45 GB | 40 GB | N/A | N/A |
| Storage on Node | 960 GB NVMe SSD | 960 GB NVMe SSD | 1.91 TB NVMe SSD | 7.68 TB NVMe SSD | None | 3.84 TB NVMe SSD | None | None |
| Interconnect | HDR InfiniBand | HDR InfiniBand | HDR InfiniBand | HDR InfiniBand | HDR InfiniBand | HDR InfiniBand | HDR InfiniBand | HDR InfiniBand |
| Operating System | RHEL | RHEL | RHEL | RHEL | RHEL | RHEL | RHEL | RHEL |
| Priority | Queue Name | Max Wall Clock Time | Max Cores Per Job | Max Queued Per User | Max Running Per User | Description |
|---|---|---|---|---|---|---|
| Highest | urgent | 168 Hours | 92,160 | N/A | N/A | Jobs belonging to DoD HPCMP Urgent Projects |
| | debug | 1 Hour | 3,840 | 15 | 4 | Time/resource-limited for user testing and debug purposes |
| | high | 168 Hours | 92,160 | N/A | N/A | Jobs belonging to DoD HPCMP High Priority Projects |
| | frontier | 168 Hours | 92,160 | N/A | N/A | Jobs belonging to DoD HPCMP Frontier Projects |
| | standard | 168 Hours | 92,160 | N/A | N/A | Standard jobs |
| | HIE | 24 Hours | 256 | 2 | 2 | Rapid response for interactive work. For more information see the HPC Interactive Environment (HIE) User Guide. |
| | transfer | 48 Hours | 1 | N/A | 12 | Data transfer for user jobs. Not charged against project allocation. See the AFRL DSRC Archive Guide, section 5.2. |
| Lowest | background | 24 Hours | 3,840 | 35 | 10 | User jobs that are not charged against the project allocation |
Ruth is an HPE Cray EX system located at the ARL DSRC. It has 640 standard compute nodes, 24 large-memory nodes, 64 high-performance nodes, 16 AI/ML nodes, and 24 visualization nodes.
| | Login | Standard | Large-Memory | High Performance | AI/ML | Visualization | Transfer |
|---|---|---|---|---|---|---|---|
| Total Nodes | 12 | 640 | 24 | 64 | 16 | 24 | 4 |
| Processor | AMD 9654 Genoa | AMD 9654 Genoa | AMD 9654 Genoa | AMD 73F3 Milan | AMD 7A53 Trento | AMD 7713 Milan | AMD 9654 Genoa |
| Processor Speed | 2.4 GHz | 2.4 GHz | 2.4 GHz | 3.5 GHz | 2.0 GHz | 2.0 GHz | 2.4 GHz |
| Sockets / Node | 2 | 2 | 2 | 2 | 1 | 2 | 2 |
| Cores / Node | 192 | 192 | 192 | 32 | 64 | 128 | 64 |
| Total CPU Cores | 2,304 | 122,880 | 4,608 | 2,048 | 1,024 | 3,072 | 256 |
| Usable Memory / Node | 780 GB | 360 GB | 1,500 GB | 500 GB | 500 GB | 500 GB | 780 GB |
| Accelerators / Node | None | None | None | None | 4 | 1 | None |
| Accelerator | N/A | N/A | N/A | N/A | AMD MI250X Infinity | NVIDIA A40 PCIe 4 | N/A |
| Memory / Accelerator | N/A | N/A | N/A | N/A | 128 GB HBM2e Memory | 48 GB | N/A |
| Storage on Node | None | None | 18 TB NVMe SSD | None | 3.5 TB NVMe SSD | None | 35 TB NVMe SSD |
| Interconnect | Ethernet | HPE Slingshot | HPE Slingshot | HPE Slingshot | HPE Slingshot | HPE Slingshot | HPE Slingshot |
| Operating System | SLES 15 | SLES 15 | SLES 15 | SLES 15 | SLES 15 | SLES 15 | SLES 15 |
| Priority | Queue Name | Max Wall Clock Time | Max Cores Per Job | Max Queued Per User | Max Running Per User | Description |
|---|---|---|---|---|---|---|
| Highest | urgent | 24 Hours | N/A | N/A | N/A | Jobs belonging to DoD HPCMP Urgent Projects |
| | transfer | 48 Hours | N/A | N/A | 3 | Data transfer for user jobs. Not charged against project allocation. See the ARL DSRC Archive Guide, section 5.2. |
| | debug | 1 Hour | 16,384 | N/A | 2 | Time/resource-limited for user testing and debug purposes |
| | high | 168 Hours | 100,000 | N/A | N/A | Jobs belonging to DoD HPCMP High Priority Projects |
| | frontier | 168 Hours | N/A | N/A | 42 | Jobs belonging to DoD HPCMP Frontier Projects |
| | cots | 96 Hours | N/A | N/A | 20 | Jobs using COTS applications |
| | HIE | 24 Hours | N/A | N/A | 2 | Rapid response for interactive work. For more information see the HPC Interactive Environment (HIE) User Guide. |
| | interactive | 12 Hours | N/A | N/A | 3 | Rapid response for interactive work. |
| | standard | 168 Hours | 100,000 | N/A | N/A | Standard jobs |
| | standard-long | 200 Hours | 4,096 | N/A | 2 | Standard jobs needing additional wall time |
| Lowest | background^ | 24 Hours | 10,000 | N/A | 2 | User jobs that are not charged against the project allocation |
Warhawk is an HPE Cray EX system located at the AFRL DSRC. It has 1,024 standard compute nodes, 4 large-memory nodes, 24 1-GPU visualization nodes, and 40 2-GPU Machine-Learning nodes (a total of 1,092 compute nodes or 139,776 compute cores). It has 564 TB of memory and is rated at 6.86 peak PFLOPS.
| | Login | Standard | Large-Memory | Visualization | Machine-Learning Accelerated |
|---|---|---|---|---|---|
| Total Nodes | 7 | 1,024 | 4 | 24 | 40 |
| Processor | AMD 7H12 Rome | AMD 7H12 Rome | AMD 7H12 Rome | AMD 7H12 Rome | AMD 7H12 Rome |
| Processor Speed | 2.6 GHz | 2.6 GHz | 2.6 GHz | 2.6 GHz | 2.6 GHz |
| Sockets / Node | 2 | 2 | 2 | 2 | 2 |
| Cores / Node | 128 | 128 | 128 | 128 | 128 |
| Total CPU Cores | 896 | 131,072 | 512 | 3,072 | 5,120 |
| Usable Memory / Node | 995 GB | 503 GB | 995 GB | 503 GB | 503 GB |
| Accelerators / Node | None | None | None | 1 | 2 |
| Accelerator | N/A | N/A | N/A | NVIDIA V100 PCIe 3 | NVIDIA V100 PCIe 3 |
| Memory / Accelerator | N/A | N/A | N/A | 32 GB | 32 GB |
| Storage on Node | None | None | None | None | None |
| Interconnect | Cray Slingshot | Cray Slingshot | Cray Slingshot | Cray Slingshot | Cray Slingshot |
| Operating System | SLES | SLES | SLES | SLES | SLES |
| Priority | Queue Name | Max Wall Clock Time | Max Cores Per Job | Max Queued Per User | Max Running Per User | Description |
|---|---|---|---|---|---|---|
| Highest | urgent | 168 Hours | 69,888 | N/A | N/A | Jobs belonging to DoD HPCMP Urgent Projects |
| | debug | 1 Hour | 2,816 | 15 | 4 | Time/resource-limited for user testing and debug purposes |
| | high | 168 Hours | 69,888 | N/A | N/A | Jobs belonging to DoD HPCMP High Priority Projects |
| | frontier | 168 Hours | 69,888 | N/A | N/A | Jobs belonging to DoD HPCMP Frontier Projects |
| | standard | 168 Hours | 69,888 | N/A | N/A | Standard jobs |
| | HIE | 24 Hours | 256 | N/A | 2 | Rapid response for interactive work. For more information see the HPC Interactive Environment (HIE) User Guide. |
| | transfer | 48 Hours | 1 | N/A | 12 | Data transfer for user jobs. Not charged against project allocation. See the AFRL DSRC Archive Guide, section 5.2. |
| Lowest | background | 24 Hours | 2,816 | N/A | 35 | User jobs that are not charged against the project allocation |
Wheat is a Liqid system located at the ERDC DSRC. It has 770 standard compute nodes, 7 large-memory nodes, 16 1-GPU visualization nodes, 68 4-GPU MLA nodes, and 28 6-GPU MLA nodes (a total of 81,502 compute cores). It has 376 TB of usable memory and is rated at 4.2 peak PFLOPS.
| | Login | Standard | Large-Memory | Visualization | 4-GPU MLA | 6-GPU MLA |
|---|---|---|---|---|---|---|
| Total Nodes | 6 | 770 | 7 | 16 | 64 | 32 |
| Processor | Intel 9242 Cascade Lake | Intel 9242 Cascade Lake | Intel 9242 Cascade Lake | Intel 9242 Cascade Lake | Intel 9242 Cascade Lake | Intel 9242 Cascade Lake |
| Processor Speed | 2.3 GHz | 2.3 GHz | 2.3 GHz | 2.3 GHz | 2.3 GHz | 2.3 GHz |
| Sockets / Node | 2 | 2 | 2 | 2 | 2 | 2 |
| Cores / Node | 92 | 92 | 92 | 92 | 92 | 92 |
| Total CPU Cores | 552 | 70,840 | 644 | 1,472 | 5,888 | 2,944 |
| Usable Memory / Node | 8 GB | 356 GB | 3 TB | 731 GB | 731 GB | 731 GB |
| Accelerators / Node | 1 | None | 4 | 1 | 4 | 6 |
| Accelerator | NVIDIA A100 | N/A | NVIDIA A100 | NVIDIA A100 | NVIDIA A100 | NVIDIA A100 |
| Memory / Accelerator | 40 GB | N/A | 40 GB | 40 GB | 40 GB | 40 GB |
| Storage on Node | None | None | None | None | None | None |
| Interconnect | Ethernet | InfiniBand | InfiniBand | InfiniBand | InfiniBand | InfiniBand |
| Operating System | RHEL 8 | RHEL 8 | RHEL 8 | RHEL 8 | RHEL 8 | RHEL 8 |
| Priority | Queue Name | Max Wall Clock Time | Max Cores Per Job | Description |
|---|---|---|---|---|
| Highest | urgent | 168 Hours | 2,392 | Jobs belonging to DoD HPCMP Urgent Projects |
| | debug^ | 1 Hour | 3,680 | Time/resource-limited for user testing and debug purposes |
| | HIE | 24 Hours | 92 | Rapid response for interactive work. For more information see the HPC Interactive Environment (HIE) User Guide. |
| | high_lw | 168 Hours | 1,932 | Long-walltime jobs belonging to DoD HPCMP High Priority Projects |
| | high_lg | 24 Hours | 35,420 | Large jobs belonging to DoD HPCMP High Priority Projects |
| | high_sm | 24 Hours | 2,932 | Small jobs belonging to DoD HPCMP High Priority Projects |
| | frontier_lw | 168 Hours | 1,932 | Long-walltime jobs belonging to DoD HPCMP Frontier Projects |
| | frontier_lg | 24 Hours | 35,420 | Large jobs belonging to DoD HPCMP Frontier Projects |
| | frontier_sm | 24 Hours | 2,932 | Small jobs belonging to DoD HPCMP Frontier Projects |
| | standard_lw | 168 Hours | 1,932 | Long-walltime standard jobs |
| | standard_lg | 24 Hours | 35,420 | Large standard jobs |
| | standard_sm | 24 Hours | 2,932 | Small standard jobs |
| | transfer | 48 Hours | 1 | Data transfer for user jobs. Not charged against project allocation. See the ERDC DSRC Archive Guide, section 5.2. |
| Lowest | background^^ | 4 Hours | 48 | User jobs that are not charged against the project allocation |