Unclassified Systems

Carpenter is an HPE Cray EX4000 system located at the ERDC DSRC. It has 1,632 standard compute nodes, 4 large-memory nodes, and 8 GPU nodes (a total of 313,344 compute cores). It has 585 TB of usable memory and is rated at 17.65 peak PFLOPS.
| | Login | Standard | Large-Memory | Visualization |
---|---|---|---|---|
Total Nodes | 10 | 1,632 | 4 | 8 |
Processor | AMD 9654 Genoa | AMD 9654 Genoa | AMD 9654 Genoa | AMD 7713 Milan |
Processor Speed | 2.4 GHz | 2.4 GHz | 2.4 GHz | 2.0 GHz |
Sockets / Node | 2 | 2 | 2 | 2 |
Cores / Node | 192 | 192 | 192 | 128 |
Total CPU Cores | 1,920 | 313,344 | 768 | 1,024 |
Usable Memory / Node | 8 GB | 349 GB | 2.973 TB | 467 GB |
Accelerators / Node | None | None | None | 1 |
Accelerator | N/A | N/A | N/A | NVIDIA A40 PCIe 4 |
Memory / Accelerator | N/A | N/A | N/A | 48 GB |
Storage on Node | 1.3 TB NVMe SSD | None | 8.8 TB NVMe SSD | None |
Interconnect | Ethernet | HPE Slingshot | HPE Slingshot | HPE Slingshot |
Operating System | SLES 15 | SLES 15 | SLES 15 | SLES 15 |
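
As a rough cross-check (counting 1 TB as 1,000 GB and ignoring the ten login nodes), the 585 TB of usable memory quoted above follows from the per-node values in the table:

$$
1{,}632 \times 349\ \text{GB} + 4 \times 2{,}973\ \text{GB} + 8 \times 467\ \text{GB} = 569{,}568 + 11{,}892 + 3{,}736 = 585{,}196\ \text{GB} \approx 585\ \text{TB}
$$
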
Priority | Queue Name | Max Wall Clock Time | Max Cores Per Job | Max Queued Per User | Max Running Per User | Description |
---|---|---|---|---|---|---|
Highest | urgent | 24 Hours | 9,408 | N/A | N/A | Jobs belonging to DoD HPCMP Urgent Projects |
| | debug* | 1 Hour | 13,824 | N/A | 2 | Time/resource-limited for user testing and debug purposes |
| | HIE | 24 Hours | 192 | 1 | 1 | Rapid response for interactive work. For more information see the HPC Interactive Environment (HIE) User Guide. |
| | high_lw | 168 Hours | 7,488 | N/A | 3 | Long-walltime jobs belonging to DoD HPCMP High Priority Projects |
| | high_lg | 24 Hours | 100,032 | N/A | 2 | Large jobs belonging to DoD HPCMP High Priority Projects |
| | high_sm | 24 Hours | 9,408 | N/A | 17 | Small jobs belonging to DoD HPCMP High Priority Projects |
| | frontier_lw | 168 Hours | 7,488 | N/A | 3 | Long-walltime jobs belonging to DoD HPCMP Frontier Projects |
| | frontier_lg | 24 Hours | 100,032 | N/A | 2 | Large jobs belonging to DoD HPCMP Frontier Projects |
| | frontier_sm | 24 Hours | 9,408 | N/A | 17 | Small jobs belonging to DoD HPCMP Frontier Projects |
| | standard_lw | 168 Hours | 7,488 | N/A | 3 | Long-walltime standard jobs |
| | standard_lg | 24 Hours | 100,032 | N/A | 2 | Large standard jobs |
| | standard_sm | 24 Hours | 9,408 | N/A | 17 | Small standard jobs |
| | serial | 168 Hours | 1 | N/A | 10 | Single-core serial jobs. 1 core per hour charged to project allocation. |
| | transfer | 48 Hours | 1 | N/A | 10 | Data transfer for user jobs. Not charged against project allocation. See the ERDC DSRC Archive Guide, section 5.2. |
Lowest | background** | 4 Hours | 9,408 | N/A | 3 | User jobs that are not charged against the project allocation |
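
For users who script their submissions, the sketch below shows one way the limits in the table above could be checked before calling the batch system. It is a minimal illustration, not DSRC-provided tooling: the queue names and their walltime and core caps are copied from the table, while the `check_request` helper itself is hypothetical.

```python
import math

# Walltime / core limits copied from the Carpenter queue table above
# (hours, max cores per job). Subset only; illustration, not an official list.
CARPENTER_QUEUES = {
    "standard_sm": (24, 9_408),
    "standard_lg": (24, 100_032),
    "standard_lw": (168, 7_488),
    "debug": (1, 13_824),
    "HIE": (24, 192),
    "background": (4, 9_408),
}

CORES_PER_STANDARD_NODE = 192  # 2 x AMD 9654 Genoa (96 cores each)


def check_request(queue: str, cores: int, hours: float) -> int:
    """Validate a request against the table limits and return the number
    of whole 192-core standard nodes it would occupy."""
    max_hours, max_cores = CARPENTER_QUEUES[queue]
    if hours > max_hours:
        raise ValueError(f"{queue}: {hours} h exceeds the {max_hours} h limit")
    if cores > max_cores:
        raise ValueError(f"{queue}: {cores} cores exceeds the {max_cores}-core limit")
    return math.ceil(cores / CORES_PER_STANDARD_NODE)


if __name__ == "__main__":
    # A 9,408-core, 12-hour request fits standard_sm and spans 49 nodes.
    print(check_request("standard_sm", cores=9_408, hours=12))
```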

Narwhal is an HPE Cray EX system located at the Navy DSRC. It has 2,304 standard compute nodes, 26 large-memory nodes, 16 visualization accelerated nodes, 32 Single-GPU MLA accelerated nodes, and 32 Dual-GPU MLA accelerated nodes (a total of 2,410 compute nodes or 308,480 compute cores). It has 640 TB of memory and is rated at 13.5 peak PFLOPS.
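
As a quick cross-check, the node and core totals in the previous sentence follow from the per-type counts, with 128 cores on every compute node:

$$
2{,}304 + 26 + 16 + 32 + 32 = 2{,}410\ \text{nodes}, \qquad 2{,}410 \times 128 = 308{,}480\ \text{cores}
$$
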
| | Login | Standard | Large-Memory | Visualization | Single-GPU MLA | Dual-GPU MLA |
---|---|---|---|---|---|---|
Total Nodes | 11 | 2,304 | 26 | 16 | 32 | 32 |
Processor | AMD 7H12 Rome | AMD 7H12 Rome | AMD 7H12 Rome | AMD 7H12 Rome | AMD 7H12 Rome | AMD 7H12 Rome |
Processor Speed | 2.6 GHz | 2.6 GHz | 2.6 GHz | 2.6 GHz | 2.6 GHz | 2.6 GHz |
Sockets / Node | 2 | 2 | 2 | 2 | 2 | 2 |
Cores / Node | 128 | 128 | 128 | 128 | 128 | 128 |
Total CPU Cores | 1,408 | 294,912 | 3,328 | 2,048 | 4,096 | 4,096 |
Usable Memory / Node | 226 GB | 238 GB | 995 GB | 234 GB | 239 GB | 239 GB |
Accelerators / Node | None | None | None | 1 | 1 | 2 |
Accelerator | N/A | N/A | N/A | NVIDIA V100 PCIe 3 | NVIDIA V100 PCIe 3 | NVIDIA V100 PCIe 3 |
Memory / Accelerator | N/A | N/A | N/A | 32 GB | 32 GB | 32 GB |
Storage on Node | 880 GB SSD | None | 1.8 TB SSD | None | 880 GB SSD | 880 GB SSD |
Interconnect | HPE Slingshot | HPE Slingshot | HPE Slingshot | HPE Slingshot | HPE Slingshot | HPE Slingshot |
Operating System | SLES | SLES | SLES | SLES | SLES | SLES |
Priority | Queue Name | Max Wall Clock Time | Max Cores Per Job | Max Queued Per User | Max Running Per User | Description |
---|---|---|---|---|---|---|
Highest | urgent | 24 Hours | 16,384 | N/A | 100 | Jobs belonging to DoD HPCMP Urgent Projects |
| | frontier | 168 Hours | 65,536 | N/A | 100 | Jobs belonging to DoD HPCMP Frontier Projects |
| | high | 168 Hours | 32,768 | N/A | 100 | Jobs belonging to DoD HPCMP High Priority Projects |
| | debug | 30 Minutes | 8,192 | N/A | 4 | Time/resource-limited for user testing and debug purposes |
| | HIE | 24 Hours | 3,072 | N/A | 1 | Rapid response for interactive work. For more information see the HPC Interactive Environment (HIE) User Guide. |
| | viz | 24 Hours | 128 | N/A | 8 | Visualization jobs |
| | standard | 168 Hours | 32,768 | N/A | 100 | Standard jobs |
| | mla | 24 Hours | 128 | N/A | 8 | Machine Learning Accelerated jobs that require a GPU node; PBS assigns the next available smla (1-GPU) or dmla (2-GPU) node. |
| | smla | 24 Hours | 128 | N/A | 8 | Machine Learning Accelerated jobs that require an smla (Single-GPU MLA) node. |
| | dmla | 24 Hours | 128 | N/A | 8 | Machine Learning Accelerated jobs that require a dmla (Dual-GPU MLA) node. |
| | serial | 168 Hours | 1 | N/A | 26 | Single-core serial jobs. 1 core per hour charged to project allocation. |
| | bigmem | 96 Hours | 1,280 | N/A | 2 | Large-memory jobs |
| | transfer | 48 Hours | 1 | N/A | 10 | Data transfer for user jobs. Not charged against project allocation. See the Navy DSRC Archive Guide, section 5.2. |
Lowest | background | 4 Hours | 1,024 | N/A | 10 | User jobs that are not charged against the project allocation |
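
Since the mla row above notes that PBS places jobs on the next available GPU node, a minimal PBS Pro header for a single-node mla job might look like the sketch below. This is an illustration under stated assumptions, not the Navy DSRC's prescribed template: the project identifier is a placeholder, and any site-specific chunk attributes (for example, a GPU selector) are deliberately omitted; the Narwhal user documentation governs the directives actually required.

```python
# Sketch only: assemble a minimal PBS Pro header for Narwhal's mla queue.
# The queue name, its 24-hour walltime cap, and the 128 cores per node come
# from the tables above; everything else is a placeholder or assumption.

def mla_header(project_id: str, hours: int = 24, ncpus: int = 128) -> str:
    if hours > 24:
        raise ValueError("the mla queue caps wall clock time at 24 hours")
    return "\n".join([
        "#!/bin/bash",
        f"#PBS -A {project_id}",            # project/allocation ID (placeholder)
        "#PBS -q mla",                      # PBS assigns an smla or dmla node
        f"#PBS -l select=1:ncpus={ncpus}",  # one 128-core node; no GPU selector shown
        f"#PBS -l walltime={hours:02d}:00:00",
    ])


if __name__ == "__main__":
    print(mla_header("PROJECT_ID"))
```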

Nautilus is a Penguin Computing TrueHPC system located at the Navy DSRC. It has 1,384 standard compute nodes, 16 large-memory nodes, 16 visualization accelerated nodes, 32 AI/ML nodes, and 32 High Core Performance nodes (a total of 1,480 compute nodes or 186,368 compute cores). It has 386 TB of memory and is rated at 8.5 peak PFLOPS.
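
Because the 32 High Core Performance nodes have 32 cores each, while every other compute node type has 128, the stated core total breaks down as:

$$
(1{,}384 + 16 + 16 + 32) \times 128 + 32 \times 32 = 185{,}344 + 1{,}024 = 186{,}368\ \text{cores}
$$
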
| | Login | Standard | Large-Memory | Visualization | AI/ML | High Core Performance |
---|---|---|---|---|---|---|
Total Nodes | 14 | 1,384 | 16 | 16 | 32 | 32 |
Processor | AMD 7713 Milan | AMD 7713 Milan | AMD 7713 Milan | AMD 7713 Milan | AMD 7713 Milan | AMD 73F3 Milan |
Processor Speed | 2 GHz | 2 GHz | 2 GHz | 2 GHz | 2 GHz | 3.4 GHz |
Sockets / Node | 2 | 2 | 2 | 2 | 2 | 2 |
Cores / Node | 128 | 128 | 128 | 128 | 128 | 32 |
Total CPU Cores | 1,792 | 166,912 | 2,048 | 2,048 (16 GPUs) | 4,096 (128 GPUs) | 1,024 |
Usable Memory / Node | 433 GB | 231 GB | 998 GB | 491 GB | 491 GB | 491 GB |
Accelerators / Node | None | None | None | 1 | 4 | None |
Accelerator | N/A | N/A | N/A | NVIDIA A40 PCIe 4 | NVIDIA A100 SXM 4 | N/A |
Memory / Accelerator | N/A | N/A | N/A | 48 GB | 40 GB | N/A |
Storage on Node | 49 GB NVMe SSD | None | 49 GB NVMe SSD | None | 49 GB NVMe SSD | None |
Interconnect | HDR InfiniBand | HDR InfiniBand | HDR InfiniBand | HDR InfiniBand | HDR InfiniBand | HDR InfiniBand |
Operating System | RHEL | RHEL | RHEL | RHEL | RHEL | RHEL |
Priority | Queue Name | Max Wall Clock Time | Max Cores Per Job | Max Queued Per User | Max Running Per User | Description |
---|---|---|---|---|---|---|
Highest | urgent | 24 Hours | 16,384 | N/A | 10 | Jobs belonging to DoD HPCMP Urgent Projects |
| | debug | 30 Minutes | 10,752 | N/A | 4 | Time/resource-limited for user testing and debug purposes |
| | HIE | 24 Hours | 3,072 | N/A | 3 | Rapid response for interactive work. For more information see the HPC Interactive Environment (HIE) User Guide. |
| | frontier | 168 Hours | 65,536 | N/A | 50 | Jobs belonging to DoD HPCMP Frontier Projects |
| | high | 168 Hours | 65,536 | N/A | 50 | Jobs belonging to DoD HPCMP High Priority Projects |
| | serial | 168 Hours | 1 | N/A | N/A | Single-core serial jobs. 1 core per hour charged to project allocation. |
| | standard | 168 Hours | 16,384 | N/A | 150 | Standard jobs |
| | transfer | 48 Hours | 1 | N/A | N/A | Data transfer for user jobs. Not charged against project allocation. See the Navy DSRC Archive Guide, section 5.2. |
Lowest | background | 4 Hours | 4,096 | N/A | 10 | User jobs that are not charged against the project allocation |

Raider is a Penguin Computing TrueHPC system located at the AFRL DSRC. It has 1,480 standard compute nodes, 8 large-memory nodes, 24 visualization nodes, 32 MLA nodes, and 64 High Clock nodes (a total of 199,680 compute cores). It has 447 TB of memory and is rated at 9 peak PFLOPS.
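
The core total likewise combines two per-node core counts, 128 for most node types and 32 on the High Clock nodes:

$$
(1{,}480 + 8 + 24 + 32) \times 128 + 64 \times 32 = 197{,}632 + 2{,}048 = 199{,}680\ \text{cores}
$$
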
| | Login | Login-viz | Standard | Large-Memory | Visualization | MLA | High Clock | Transfer |
---|---|---|---|---|---|---|---|---|
Total Nodes | 6 | 4 | 1,480 | 8 | 24 | 32 | 64 | 2 |
Processor | AMD 7713 Milan | AMD 7713 Milan | AMD 7713 Milan | AMD 7713 Milan | AMD 7713 Milan | AMD 7713 Milan | AMD 73F3 Milan | AMD 7713 Milan |
Processor Speed | 2.0 GHz | 2.0 GHz | 2.0 GHz | 2.0 GHz | 2.0 GHz | 2.0 GHz | 3.4 GHz | 2.0 GHz |
Sockets / Node | 2 | 2 | 2 | 2 | 2 | 2 | 2 | 2 |
Cores / Node | 128 | 128 | 128 | 128 | 128 | 128 | 32 | 128 |
Total CPU Cores | 768 | 512 | 189,440 | 1,024 | 3,072 | 4,096 | 2,048 | 256 |
Usable Memory / Node | 503 GB | 503 GB | 251 GB | 2.0 TB | 503 GB | 503 GB | 503 GB | 503 GB |
Accelerators / Node | 1 | 1 | None | None | 1 | 4 | None | None |
Accelerator | NVIDIA A40 PCIe 4 | NVIDIA A100 SXM 4 | N/A | N/A | NVIDIA A40 PCIe 4 | NVIDIA A100 SXM 4 | N/A | N/A |
Memory / Accelerator | 45 GB | 40 GB | N/A | N/A | 45 GB | 40 GB | N/A | N/A |
Storage on Node | 960 GB NVMe SSD | 960 GB NVMe SSD | 1.91 TB NVMe SSD | 7.68 TB NVMe SSD | None | 3.84 TB NVMe SSD | None | None |
Interconnect | HDR InfiniBand | HDR InfiniBand | HDR InfiniBand | HDR InfiniBand | HDR InfiniBand | HDR InfiniBand | HDR InfiniBand | HDR InfiniBand |
Operating System | RHEL | RHEL | RHEL | RHEL | RHEL | RHEL | RHEL | RHEL |
Priority | Queue Name | Max Wall Clock Time | Max Cores Per Job | Max Queued Per User | Max Running Per User | Description |
---|---|---|---|---|---|---|
Highest | urgent | 168 Hours | 92,160 | N/A | N/A | Jobs belonging to DoD HPCMP Urgent Projects |
| | debug | 1 Hour | 3,840 | 15 | 4 | Time/resource-limited for user testing and debug purposes |
| | high | 168 Hours | 92,160 | N/A | N/A | Jobs belonging to DoD HPCMP High Priority Projects |
| | frontier | 168 Hours | 92,160 | N/A | N/A | Jobs belonging to DoD HPCMP Frontier Projects |
| | standard | 168 Hours | 92,160 | N/A | N/A | Standard jobs |
| | HIE | 24 Hours | 256 | 2 | 2 | Rapid response for interactive work. For more information see the HPC Interactive Environment (HIE) User Guide. |
| | transfer | 48 Hours | 1 | N/A | 12 | Data transfer for user jobs. Not charged against project allocation. See the AFRL DSRC Archive Guide, section 5.2. |
Lowest | background | 24 Hours | 3,840 | 35 | 10 | User jobs that are not charged against the project allocation |

Ruth is an HPE Cray EX system located at the ARL DSRC. It has 640 standard compute nodes, 24 large-memory nodes, 64 High Performance nodes, 16 AI/ML nodes, and 24 visualization nodes.
| | Login | Standard | Large-Memory | High Performance | AIML | Visualization | Transfer |
---|---|---|---|---|---|---|---|
Total Nodes | 12 | 640 | 24 | 64 | 16 | 24 | 4 |
Processor | AMD 9654 Genoa | AMD 9654 Genoa | AMD 9654 Genoa | AMD 73F3 Milan | AMD 7A53 Trento | AMD 7713 Milan | AMD 9654 Genoa |
Processor Speed | 2.4 GHz | 2.4 GHz | 2.4 GHz | 3.5 GHz | 2.0 GHz | 2.0 GHz | 2.4 GHz |
Sockets / Node | 2 | 2 | 2 | 2 | 1 | 2 | 2 |
Cores / Node | 192 | 192 | 192 | 32 | 64 | 128 | 192 |
Total CPU Cores | 2,304 | 122,880 | 4,608 | 2,048 | 1,024 | 3,072 | 768 |
Usable Memory / Node | 780 GB | 360 GB | 1,500 GB | 500 GB | 500 GB | 500 GB | 780 GB |
Accelerators / Node | None | None | None | None | 4 | 1 | None |
Accelerator | N/A | N/A | N/A | N/A | AMD MI250X Infinity | NVIDIA A40 PCIe 4 | N/A |
Memory / Accelerator | N/A | N/A | N/A | N/A | 128 GB HBM2e Memory | 48 GB | N/A |
Storage on Node | None | None | 18 TB NVMe SSD | None | 3.5 TB NVMe SSD | None | 35 TB NVMe SSD |
Interconnect | Ethernet | HPE Slingshot | HPE Slingshot | HPE Slingshot | HPE Slingshot | HPE Slingshot | HPE Slingshot |
Operating System | SLES 15 | SLES 15 | SLES 15 | SLES 15 | SLES 15 | SLES 15 | SLES 15 |
Priority | Queue Name | Max Wall Clock Time | Max Cores Per Job | Max Queued Per User | Max Running Per User | Description |
---|---|---|---|---|---|---|
Highest | urgent | 24 Hours | N/A | N/A | N/A | Jobs belonging to DoD HPCMP Urgent Projects |
| | transfer | 48 Hours | N/A | N/A | 3 | Data transfer for user jobs. Not charged against project allocation. See the ARL DSRC Archive Guide, section 5.2. |
| | debug | 1 Hour | 16,384 | N/A | 2 | Time/resource-limited for user testing and debug purposes |
| | high | 168 Hours | 100,000 | N/A | 100,000 | Jobs belonging to DoD HPCMP High Priority Projects |
| | frontier | 168 Hours | N/A | N/A | 42 | Jobs belonging to DoD HPCMP Frontier Projects |
| | cots | 96 Hours | 512 | N/A | 20 | Jobs using commercial off-the-shelf (COTS) application software |
| | HIE | 24 Hours | 768 | N/A | 2 | Rapid response for interactive work. For more information see the HPC Interactive Environment (HIE) User Guide. |
| | interactive | 12 Hours | N/A | N/A | 3 | Rapid response for interactive work. |
| | staff | 168 Hours | N/A | N/A | N/A | Jobs belonging to DoD HPCMP Staff Projects |
| | standard | 168 Hours | 100,000 | N/A | N/A | Standard jobs |
| | standard-long | 200 Hours | 4,096 | N/A | 2 | Standard jobs needing additional wall time |
Lowest | background* | 24 Hours | 10,000 | N/A | 2 | User jobs that are not charged against the project allocation |
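
One practical wrinkle on Ruth is the trade-off between standard (168 hours, up to 100,000 cores) and standard-long (200 hours, but capped at 4,096 cores). The helper below encodes that choice using the values from the table; it is purely illustrative and not an ARL-provided tool.

```python
# Illustrative only: pick between Ruth's standard and standard-long queues
# using the walltime and core caps from the table above.
STANDARD_MAX_HOURS, STANDARD_MAX_CORES = 168, 100_000
STANDARD_LONG_MAX_HOURS, STANDARD_LONG_MAX_CORES = 200, 4_096


def pick_standard_queue(hours: float, cores: int) -> str:
    if hours <= STANDARD_MAX_HOURS and cores <= STANDARD_MAX_CORES:
        return "standard"
    if hours <= STANDARD_LONG_MAX_HOURS and cores <= STANDARD_LONG_MAX_CORES:
        return "standard-long"
    raise ValueError("request exceeds both standard and standard-long limits")


print(pick_standard_queue(hours=192, cores=2_048))  # -> standard-long
```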

Warhawk is an HPE Cray EX system located at the AFRL DSRC. It has 1,024 standard compute nodes, 4 large-memory nodes, 24 1-GPU visualization nodes, and 40 2-GPU Machine-Learning nodes (a total of 1,092 compute nodes or 139,776 compute cores). It has 564 TB of memory and is rated at 6.86 peak PFLOPS.
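
As with the other systems, the stated totals follow directly from the per-type node counts, each Warhawk compute node having 128 cores:

$$
1{,}024 + 4 + 24 + 40 = 1{,}092\ \text{nodes}, \qquad 1{,}092 \times 128 = 139{,}776\ \text{cores}
$$
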
| | Login | Standard | Large-Memory | Visualization | Machine-Learning Accelerated |
---|---|---|---|---|---|
Total Nodes | 7 | 1,024 | 4 | 24 | 40 |
Processor | AMD 7H12 Rome | AMD 7H12 Rome | AMD 7H12 Rome | AMD 7H12 Rome | AMD 7H12 Rome |
Processor Speed | 2.6 GHz | 2.6 GHz | 2.6 GHz | 2.6 GHz | 2.6 GHz |
Sockets / Node | 2 | 2 | 2 | 2 | 2 |
Cores / Node | 128 | 128 | 128 | 128 | 128 |
Total CPU Cores | 896 | 131,072 | 512 | 3,072 | 5,120 |
Usable Memory / Node | 995 GB | 503 GB | 995 GB | 503 GB | 503 GB |
Accelerators / Node | None | None | None | 1 | 2 |
Accelerator | N/A | N/A | N/A | NVIDIA V100 PCIe 3 | NVIDIA V100 PCIe 3 |
Memory / Accelerator | N/A | N/A | N/A | 32 GB | 32 GB |
Storage on Node | None | None | None | None | None |
Interconnect | Cray Slingshot | Cray Slingshot | Cray Slingshot | Cray Slingshot | Cray Slingshot |
Operating System | SLES | SLES | SLES | SLES | SLES |
Priority | Queue Name | Max Wall Clock Time | Max Cores Per Job | Max Queued Per User | Max Running Per User | Description |
---|---|---|---|---|---|---|
Highest | urgent | 168 Hours | 69,888 | N/A | N/A | Jobs belonging to DoD HPCMP Urgent Projects |
| | debug | 1 Hour | 2,816 | 15 | 4 | Time/resource-limited for user testing and debug purposes |
| | high | 168 Hours | 69,888 | N/A | N/A | Jobs belonging to DoD HPCMP High Priority Projects |
| | frontier | 168 Hours | 69,888 | N/A | N/A | Jobs belonging to DoD HPCMP Frontier Projects |
| | standard | 168 Hours | 69,888 | N/A | N/A | Standard jobs |
| | HIE | 24 Hours | 256 | N/A | 2 | Rapid response for interactive work. For more information see the HPC Interactive Environment (HIE) User Guide. |
| | transfer | 48 Hours | 1 | N/A | 12 | Data transfer for user jobs. Not charged against project allocation. See the AFRL DSRC Archive Guide, section 5.2. |
Lowest | background | 24 Hours | 2,816 | N/A | 35 | User jobs that are not charged against the project allocation |