Unclassified Systems

Armstrong is currently Up.

Armstrong is a Cray XC30 System located at the Navy DSRC. Armstrong has 1,215 compute nodes each with 24 cores (29,160 total compute cores), 77.7 TBytes of memory, and is rated at 786 peak TFLOPS.
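
The headline totals quoted for each system on this page follow directly from the node counts in the tables below. A quick illustrative check in Python (not part of the original page) for the Armstrong figures:

    # Illustrative arithmetic only; values are taken from the description above.
    standard_nodes = 1215
    cores_per_node = 24
    print(standard_nodes * cores_per_node)   # 29160 total compute cores

    # The 77.7 TBytes figure matches the standard compute nodes alone at
    # 64 GBytes each (an assumption; the page does not state the basis).
    print(standard_nodes * 64 / 1000)        # 77.76 TBytes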

Node Configuration
Login Nodes: 6 nodes, 144 cores; SUSE Linux; 24 cores/node; Intel Xeon E5-2697v2 at 2.7 GHz; 256 GBytes memory/node (240 GBytes accessible); memory shared on node; Ethernet interconnect.
Standard Memory Compute Nodes: 1,215 nodes, 29,160 cores; Cray Linux Environment; 24 cores/node; Intel Xeon E5-2697v2 at 2.7 GHz; 64 GBytes memory/node (61 GBytes accessible); memory shared on node, distributed across the cluster; Cray Aries interconnect.
Large Memory Compute Nodes: 8 nodes, 192 cores; Cray Linux Environment; 24 cores/node; Intel Xeon E5-2697v2 at 2.7 GHz; 128 GBytes memory/node (126 GBytes accessible); memory shared on node, distributed across the cluster; Cray Aries interconnect.
Phi Accelerator Nodes: 124 nodes, 1,240 cores; Cray Linux Environment; 10 cores + 1 Phi per node; Intel Xeon E5-2670v2 at 2.5 GHz plus Intel Xeon Phi 5120D at 1.05 GHz; 64 + 8 GBytes memory/node (61 + 7.5 GBytes accessible); memory shared on node, distributed across the cluster; Cray Aries interconnect.
Queue Descriptions and Limits
Queues are listed from highest to lowest priority.
Queue Name | Job Class | Max Wall Clock Time | Max Cores Per Job | Comments
urgent | Urgent | 24 Hours | 2,400 | Jobs belonging to DoD HPCMP Urgent Projects
frontier-short | Frontier | 24 Hours | 14,400 | Designated Frontier projects by DoD HPCMP
frontier | Frontier | 168 Hours | 7,200 | Designated Frontier projects by DoD HPCMP
high | High | 168 Hours | 7,200 | Jobs belonging to DoD HPCMP High Priority Projects
debug | Debug | 30 Minutes | 1,536 | User diagnostic jobs
standard | Standard | 168 Hours | 4,800 | Standard jobs
phi | N/A | 168 Hours | 1,240 | Phi-accelerated jobs
bigmem | N/A | 24 Hours | 192 | Large-memory jobs
transfer | N/A | 24 Hours | 1 | Data transfer for user jobs
background | Background | 4 Hours | 768 | Unrestricted access - no allocation charge
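
Queue limits such as these are easy to encode for a pre-submission sanity check. A minimal sketch, assuming Python and using the Armstrong limits above (the dictionary and function names are illustrative, not a site-provided tool):

    # Armstrong queue limits from the table above: (max hours, max cores per job).
    ARMSTRONG_QUEUES = {
        "urgent":         (24,  2400),
        "frontier-short": (24,  14400),
        "frontier":       (168, 7200),
        "high":           (168, 7200),
        "debug":          (0.5, 1536),
        "standard":       (168, 4800),
        "phi":            (168, 1240),
        "bigmem":         (24,  192),
        "transfer":       (24,  1),
        "background":     (4,   768),
    }

    def fits_queue(queue, hours, cores):
        """Return True if a request stays within the published limits."""
        max_hours, max_cores = ARMSTRONG_QUEUES[queue]
        return hours <= max_hours and cores <= max_cores

    print(fits_queue("standard", 48, 4800))   # True
    print(fits_queue("urgent", 48, 4800))     # False: over both the 24-hour and 2,400-core caps
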
Conrad is currently Up.

Conrad is a Cray XC40 System located at the Navy DSRC. Conrad has 1,523 compute nodes each with 32 cores (48,736 total compute cores), 207 TBytes of memory, and is rated at 2 peak PFLOPS.

Maintenance
Date / Time | Details
2016 Feb 02 09:00 - 11:00 CT (Completed) | System Maintenance
Node Configuration
Login Nodes: 6 nodes, 192 cores; SUSE Linux; 32 cores/node; Intel Xeon E5-2698v3 at 2.3 GHz; 256 GBytes memory/node (240 GBytes accessible); memory shared on node; Ethernet interconnect.
Standard Memory Compute Nodes: 1,523 nodes, 48,736 cores; Cray Linux Environment; 32 cores/node; Intel Xeon E5-2698v3 at 2.3 GHz; 128 GBytes memory/node (125 GBytes accessible); memory shared on node, distributed across the cluster; Cray Aries / Dragonfly interconnect.
Large Memory Compute Nodes: 8 nodes, 256 cores; Cray Linux Environment; 32 cores/node; Intel Xeon E5-2698v3 at 2.3 GHz; 512 GBytes memory/node (510 GBytes accessible); memory shared on node, distributed across the cluster; Cray Aries / Dragonfly interconnect.
Phi Accelerator Nodes: 168 nodes, 1,680 cores; Cray Linux Environment; 10 cores + 1 Phi per node; Intel Xeon E5-2696v2 at 2.4 GHz plus Intel Xeon Phi 5120D at 1.05 GHz; 64 + 8 GBytes memory/node (63 + 7.5 GBytes accessible); memory shared on node, distributed across the cluster; Cray Aries / Dragonfly interconnect.
Queue Descriptions and Limits
Queues are listed from highest to lowest priority.
Queue Name | Job Class | Max Wall Clock Time | Max Cores Per Job | Comments
urgent | Urgent | 24 Hours | 768 | Jobs belonging to DoD HPCMP Urgent Projects
frontier | Frontier | 168 Hours | 16,000 | Designated Frontier projects by DoD HPCMP
high | High | 168 Hours | 16,000 | Jobs belonging to DoD HPCMP High Priority Projects
debug | Debug | 30 Minutes | 3,072 | User diagnostic jobs
standard | Standard | 168 Hours | 8,000 | Standard jobs
phi | N/A | 168 Hours | 2,376 | Phi-accelerated jobs
bigmem | N/A | 24 Hours | 224 | Large-memory jobs
transfer | N/A | 24 Hours | N/A | Data transfer for user jobs
background | Background | 4 Hours | 2,048 | Unrestricted access - no allocation charge
Copper is currently Up.

Copper is a Cray XE6m Open Research System located at the ERDC DSRC. Copper has 460 compute nodes with 32 cores each (14,720 total compute cores), 29.9 TBytes of memory, and is rated at 138 peak TFLOPS.

Node Configuration
Login Nodes: 2 nodes, 32 cores; SLES 11; 16 cores/node; AMD 64-bit Opteron at 2.7 GHz; 128 GBytes memory/node (8 GBytes accessible); memory shared on node; Ethernet interconnect.
Compute Nodes: 460 nodes, 14,720 cores; Cray Linux Environment; 32 cores/node; AMD Interlagos Opteron at 2.3 GHz; 64 GBytes memory/node (60 GBytes accessible); memory shared on node, distributed across the cluster; Cray Gemini interconnect.
Queue Descriptions and Limits
Queues are listed from highest to lowest priority.
Queue Name | Job Class | Max Wall Clock Time | Max Cores Per Job | Comments
urgent | Urgent | 168 Hours | 4,096 | Designated urgent jobs by DoD HPCMP
test | N/A | 48 Hours | N/A | Staff-only testing
debug | Debug | 1 Hour | 512 | User testing
high | High | 168 Hours | 4,096 | Designated high-priority jobs by Service/Agency
challenge | Challenge | 168 Hours | 4,096 | Challenge projects only
standard | Standard | 168 Hours | 4,096 | Normal priority jobs
transfer | N/A | 24 Hours | 1 | Data transfer for user jobs; access to the long-term storage
background | Background | 4 Hours | 4,096 | Unrestricted access - no allocation charge; number of cores can be altered depending on the current system load
Excalibur is currently Up.

Excalibur is a Cray XC40 System located at the ARL DSRC. Excalibur has 3,098 compute nodes each with 32 cores (99,136 total compute cores), 399 TBytes of memory, and is rated at 3.7 peak PFLOPS.

Node Configuration
Login Nodes: 16 nodes, 512 cores; SUSE Linux; 32 cores/node; Intel Xeon E5-2698 v3 at 2.3 GHz; 256 GBytes memory/node (2 GBytes accessible); memory shared on node; Ethernet / InfiniBand interconnect.
Standard Memory Compute Nodes: 3,098 nodes, 99,136 cores; Cray Linux Environment; 32 cores/node; Intel Xeon E5-2698 v3 at 2.5 GHz; 128 GBytes memory/node (126 GBytes accessible); memory shared on node, distributed across the cluster; Cray Aries / Dragonfly interconnect.
Large Memory Compute Nodes: 32 nodes, 1,024 cores; Cray Linux Environment; 32 cores/node; Intel Xeon E5-2698 v3 at 2.5 GHz; 512 GBytes memory/node (508 GBytes accessible); memory shared on node, distributed across the cluster; Cray Aries / Dragonfly interconnect.
GPU Accelerator Nodes: 32 cores + 1 GPU per node; Cray Linux Environment; Intel Xeon E5-2698 v3 at 2.5 GHz plus NVIDIA Tesla K40; 256 + 12 GBytes memory/node (252 GBytes accessible); memory shared on node, distributed across the cluster; Ethernet / InfiniBand interconnect.
Queue Descriptions and Limits
Queues are listed from highest to lowest priority.
Queue Name | Job Class | Max Wall Clock Time | Max Cores Per Job | Comments
urgent | Urgent | 168 Hours | N/A | Designated urgent jobs by DoD HPCMP
high | High | 168 Hours | N/A | Designated high-priority projects by Service/Agency
frontier | Frontier | 168 Hours | N/A | Frontier projects only
standard-long | Standard | 200 Hours | 512 | Access available by request
debug | Debug | 1 Hour | N/A | User diagnostic jobs
standard | Standard | 168 Hours | N/A | Normal user jobs
transfer | N/A | 48 Hours | 1 | Data transfer jobs
background | Background | 120 Hours | 15,000 | User jobs that will not be charged against the project allocation
Garnet is currently Up.

Garnet is a Cray XE6 located at the ERDC DSRC. Garnet has 4,716 compute nodes each with 32 cores (150,912 total compute cores), 301 TBytes of memory, and is rated at 1.5 peak PFLOPS.

Maintenance
Date / Time | Details
2016 Feb 04 00:00 - Feb 06 00:00 CT (Completed) | Archive Maintenance
Node Configuration
Login Nodes: 8 nodes, 128 cores; SLES 11; 16 cores/node; AMD 64-bit Opteron at 2.7 GHz; 128 GBytes memory/node (8 GBytes accessible); memory shared on node; Ethernet interconnect.
Compute Nodes: 4,716 nodes, 150,912 cores; Cray Linux Environment; 32 cores/node; AMD Interlagos Opteron at 2.5 GHz; 64 GBytes memory/node (60 GBytes accessible); memory shared on node, distributed across the cluster; Cray Gemini interconnect.
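
When sizing jobs it is the accessible memory per core, not the nominal memory per node, that matters. An illustrative calculation (assuming Python) from the Garnet compute-node figures above:

    # Garnet compute nodes: 60 GBytes accessible memory, 32 cores per node.
    accessible_gbytes = 60
    cores = 32
    print(accessible_gbytes / cores)   # 1.875 GBytes per core when fully packed

    # A code needing ~4 GBytes per MPI rank would have to under-populate nodes,
    # e.g. run at most 60 // 4 = 15 ranks per node (illustrative arithmetic only).
    print(60 // 4)                     # 15
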
Queue Descriptions and Limits
Queues are listed from highest to lowest priority.
Queue Name | Job Class | Max Wall Clock Time | Max Cores Per Job | Comments
urgent | Urgent | 24 Hours | 5,120 | Designated urgent jobs by DoD HPCMP
test | N/A | 24 Hours | N/A | Staff-only testing
debug | Debug | 1 Hour | 8,192 | User testing
high_lg | High | 24 Hours | 102,400 | Designated high-priority jobs by Service/Agency (large jobs)
high_sm | High | 24 Hours | 5,120 | Designated high-priority jobs by Service/Agency (small jobs)
high_lw | High | 168 Hours | 2,048 | Designated high-priority jobs by Service/Agency (long walltime)
frontier_lg | Frontier | 24 Hours | 102,400 | Frontier projects only (large jobs)
frontier_sm | Frontier | 24 Hours | 5,120 | Frontier projects only (small jobs)
frontier_lw | Frontier | 168 Hours | 2,048 | Frontier projects only (long walltime)
challenge_lg | Challenge | 24 Hours | 102,400 | Challenge projects only (large jobs)
challenge_sm | Challenge | 24 Hours | 5,120 | Challenge projects only (small jobs)
challenge_lw | Challenge | 168 Hours | 2,048 | Challenge projects only (long walltime)
standard_lg | Standard | 24 Hours | 102,400 | Normal priority jobs (large jobs)
standard_sm | Standard | 24 Hours | 5,120 | Normal priority jobs (small jobs)
standard_lw | Standard | 168 Hours | 2,048 | Normal priority jobs (long walltime)
transfer | N/A | 12 Hours | 1 | Data transfer jobs; access to the long-term storage
background | Background | 4 Hours | 5,120 | Unrestricted access - no allocation charge; number of cores can be altered depending on the current system load
Gordon is currently Up.

Gordon is a Cray XC40 System located at the Navy DSRC. Gordon has 1,523 compute nodes each with 32 cores (48,736 total compute cores), 207 TBytes of memory, and is rated at 2 peak PFLOPS.

Node Configuration
Login Nodes: 6 nodes, 192 cores; SUSE Linux; 32 cores/node; Intel Xeon E5-2698v3 at 2.3 GHz; 256 GBytes memory/node (240 GBytes accessible); memory shared on node; Ethernet interconnect.
Standard Memory Compute Nodes: 1,523 nodes, 48,736 cores; Cray Linux Environment; 32 cores/node; Intel Xeon E5-2698v3 at 2.3 GHz; 128 GBytes memory/node (125 GBytes accessible); memory shared on node, distributed across the cluster; Cray Aries / Dragonfly interconnect.
Large Memory Compute Nodes: 8 nodes, 256 cores; Cray Linux Environment; 32 cores/node; Intel Xeon E5-2698v3 at 2.3 GHz; 512 GBytes memory/node (510 GBytes accessible); memory shared on node, distributed across the cluster; Cray Aries / Dragonfly interconnect.
Phi Accelerator Nodes: 168 nodes, 1,680 cores; Cray Linux Environment; 10 cores + 1 Phi per node; Intel Xeon E5-2696v2 at 2.4 GHz plus Intel Xeon Phi 5120D at 1.05 GHz; 64 + 8 GBytes memory/node (63 + 7.5 GBytes accessible); memory shared on node, distributed across the cluster; Cray Aries / Dragonfly interconnect.
Queue Descriptions and Limits
Queues are listed from highest to lowest priority.
Queue Name | Job Class | Max Wall Clock Time | Max Cores Per Job | Comments
urgent | Urgent | 24 Hours | 768 | Jobs belonging to DoD HPCMP Urgent Projects
frontier | Frontier | 168 Hours | 16,000 | Designated Frontier projects by DoD HPCMP
high | High | 168 Hours | 16,000 | Jobs belonging to DoD HPCMP High Priority Projects
debug | Debug | 30 Minutes | 3,072 | User diagnostic jobs
standard | Standard | 168 Hours | 8,000 | Standard jobs
phi | N/A | 168 Hours | 4,032 | Phi-accelerated jobs
bigmem | N/A | 24 Hours | 224 | Large-memory jobs
transfer | N/A | 24 Hours | N/A | Data transfer for user jobs
background | Background | 4 Hours | 768 | Unrestricted access - no allocation charge
Haise is currently Up.

Haise is an IBM iDataPlex located at the Navy DSRC. Haise has 1,220 compute nodes each with 16 cores (19,520 total compute cores), 39.04 TBytes of memory, and is rated at 435 peak TFLOPS.

Node Configuration
Login Nodes: 2 nodes, 32 cores; RHEL 6; 16 cores/node; Intel Xeon Sandy Bridge at 2.6 GHz; 64 GBytes memory/node (8 GBytes accessible); memory shared on node; 10 Gigabit Ethernet interconnect.
Standard Memory Compute Nodes: 1,220 nodes, 19,520 cores; RHEL 6; 16 cores/node; Intel Xeon Sandy Bridge E5-2670 at 2.6 GHz; 32 GBytes memory/node (27 GBytes accessible); memory shared on node, distributed across the cluster; FDR-10 InfiniBand interconnect.
Large Memory Compute Nodes: 4 nodes, 64 cores; RHEL 6; 16 cores/node; Intel Xeon Sandy Bridge E5-2670 at 2.6 GHz; 256 GBytes memory/node (247 GBytes accessible); memory shared on node, distributed across the cluster; FDR-10 InfiniBand interconnect.
Phi Accelerator Nodes: 12 nodes, 192 cores; RHEL 6; 16 cores + 2 Phi per node; Intel Xeon Sandy Bridge E5-2670 at 2.6 GHz plus Intel Xeon Phi 5110P at 1.053 GHz; 64 + 8 GBytes memory/node (59 + 7.5 GBytes accessible); memory shared on node, distributed across the cluster; FDR-10 InfiniBand interconnect.
Queue Descriptions and Limits
Queues are listed from highest to lowest priority.
Queue Name | Job Class | Max Wall Clock Time | Max Cores Per Job | Comments
urgent | Urgent | 24 Hours | 4,096 | Designated urgent projects by DoD HPCMP
frontier | Frontier | 168 Hours | 6,144 | Frontier projects only
high | High | 168 Hours | 6,144 | Designated high-priority jobs by Service/Agency
debug | Debug | 30 Minutes | 1,024 | User diagnostic jobs
standard | Standard | 168 Hours | 4,096 | Normal user jobs
phi | N/A | 24 Hours | 192 (+1,440 Phi) | Phi-accelerated jobs
bigmem | N/A | 24 Hours | 32 | Large-memory jobs
transfer | N/A | 24 Hours | 1 | Data transfer jobs
background | Background | 4 Hours | 512 | User jobs that will not be charged against the project allocation
Kilrain is currently Up.

Kilrain is an IBM iDataPlex located at the Navy DSRC. Kilrain has 1,220 compute nodes each with 16 cores (19,520 total compute cores), 39.04 TBytes of memory, and is rated at 435 peak TFLOPS.

Node Configuration
Login Nodes: 2 nodes, 32 cores; RHEL 6; 16 cores/node; Intel Xeon Sandy Bridge at 2.6 GHz; 64 GBytes memory/node (8 GBytes accessible); memory shared on node; 10 Gigabit Ethernet interconnect.
Standard Memory Compute Nodes: 1,220 nodes, 19,520 cores; RHEL 6; 16 cores/node; Intel Xeon Sandy Bridge E5-2670 at 2.6 GHz; 32 GBytes memory/node (27 GBytes accessible); memory shared on node, distributed across the cluster; FDR-10 InfiniBand interconnect.
Large Memory Compute Nodes: 4 nodes, 64 cores; RHEL 6; 16 cores/node; Intel Xeon Sandy Bridge E5-2670 at 2.6 GHz; 256 GBytes memory/node (247 GBytes accessible); memory shared on node, distributed across the cluster; FDR-10 InfiniBand interconnect.
Phi Accelerator Nodes: 12 nodes, 192 cores; RHEL 6; 16 cores + 2 Phi per node; Intel Xeon Sandy Bridge E5-2670 at 2.6 GHz plus Intel Xeon Phi 5110P at 1.053 GHz; 64 + 8 GBytes memory/node (59 + 7.5 GBytes accessible); memory shared on node, distributed across the cluster; FDR-10 InfiniBand interconnect.
Queue Descriptions and Limits
Queues are listed from highest to lowest priority.
Queue Name | Job Class | Max Wall Clock Time | Max Cores Per Job | Comments
urgent | Urgent | 24 Hours | 4,096 | Designated urgent projects by DoD HPCMP
frontier | Frontier | 168 Hours | 6,144 | Frontier projects only
high | High | 168 Hours | 6,144 | Designated high-priority jobs by Service/Agency
debug | Debug | 30 Minutes | 1,024 | User diagnostic jobs
standard | Standard | 168 Hours | 4,096 | Normal user jobs
phi | N/A | 24 Hours | 192 (+1,440 Phi) | Phi-accelerated jobs
bigmem | N/A | 24 Hours | 32 | Large-memory jobs
transfer | N/A | 24 Hours | 1 | Data transfer jobs
background | Background | 4 Hours | 512 | User jobs that will not be charged against the project allocation
Lightning is currently Up.

Lightning is a Cray XC30 System located at the AFRL DSRC. Lightning has 2,370 compute nodes each with 24 cores (56,880 total compute cores), 144.95 TBytes of memory, and is rated at 1,281 peak TFLOPS.

Maintenance
Date / Time | Details
2016 Feb 03 08:49 - 17:30 ET (Completed) | DREN Outage
Node Configuration
Login Nodes: 12 nodes, 288 cores; SUSE Linux; 24 cores/node; Intel Xeon E5-2697v2 at 2.7 GHz; 256 GBytes memory/node (240 GBytes accessible); memory shared on node; Ethernet interconnect.
Standard Memory Compute Nodes: 2,370 nodes, 56,880 cores; Cray Linux Environment; 24 cores/node; Intel Xeon E5-2697v2 at 2.7 GHz; 64 GBytes memory/node (63 GBytes accessible); memory shared on node, distributed across the cluster; Cray Aries interconnect.
GPU Accelerator Nodes: 32 nodes, 320 cores; Cray Linux Environment; 10 cores + 1 GPU per node; Intel Xeon E5-2670v2 at 2.5 GHz plus NVIDIA Tesla K40; 32 + 12 GBytes memory/node (30 GBytes accessible); memory shared on node, distributed across the cluster; Cray Aries interconnect.
Queue Descriptions and Limits
Queues are listed from highest to lowest priority.
Queue Name | Job Class | Max Wall Clock Time | Max Cores Per Job | Comments
urgent | Urgent | 168 Hours | 28,056 | Jobs belonging to DoD HPCMP Urgent Projects
debug | Debug | 1 Hour / 1/2 Hour | 576 / 1,152 | User testing; in the debug queue, you may use 576 cores for 1 hour or 1,152 cores for 1/2 hour
high | High | 168 Hours | 28,056 | Jobs belonging to DoD HPCMP High Priority Projects
challenge | Challenge | 168 Hours | 28,056 | Jobs belonging to DoD HPCMP Challenge Projects
standard | Standard | 168 Hours | 28,056 | Standard jobs
transfer | N/A | 12 Hours | 1 | Data transfer for user jobs
background | Background | 120 Hours | 32 | Unrestricted access - no allocation charge
Predator is currently Up.

Predator is an SGI UV System located at the AFRL DSRC. Predator has 1 compute node with 1,004 cores.

Maintenance
Date / Time | Details
2016 Feb 03 08:49 - 17:30 ET (Completed) | DREN Outage
Node Configuration
Login Nodes: 1 node, 8 cores; RHEL; 8 cores/node; Intel E5 Sandy Bridge at 2.7 GHz; 4 GBytes memory/core (3.5 GBytes accessible); memory shared on node; NUMAlink v6 interconnect.
Compute Nodes: 1 node, 1,004 cores; RHEL; 1,004 cores/node; Intel E5 Sandy Bridge at 2.7 GHz; 4 GBytes memory/core (3.5 GBytes accessible); memory shared on node; NUMAlink v6 interconnect.
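
Because Predator is a single shared-memory node, its usable memory is best estimated as accessible memory per core times total cores. An illustrative check (assuming Python) from the figures above:

    # Predator compute node: 1,004 cores, 3.5 GBytes accessible memory per core.
    cores = 1004
    accessible_per_core = 3.5
    print(cores * accessible_per_core / 1000)   # ~3.5 TBytes of shared, accessible memory
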
Queue Descriptions and Limits
Queues are listed from highest to lowest priority.
Queue Name | Job Class | Max Wall Clock Time | Max Cores Per Job | Comments
urgent | Urgent | 168 Hours | 508 | Jobs belonging to DoD HPCMP Urgent Projects
debug | Debug | 1 Hour / 1/2 Hour | 16 / 32 | User testing; in the debug queue, you may use 16 cores for 1 hour or 32 cores for 1/2 hour
high | High | 168 Hours | 508 | Jobs belonging to DoD HPCMP High Priority Projects
challenge | Challenge | 168 Hours | 508 | Jobs belonging to DoD HPCMP Challenge Projects
standard | Standard | 168 Hours | 508 | Standard jobs
transfer | N/A | 12 Hours | 1 | Data transfer for user jobs
background | Background | 120 Hours | 32 | Unrestricted access - no allocation charge
Riptide is currently Up.

Riptide is an IBM iDataPlex located at the MHPCC DSRC. Riptide has 756 compute nodes each with 16 cores (12,096 total compute cores), 24.2 TBytes of memory, and is rated at 251.6 peak TFLOPS.

Maintenance
Date / Time | Details
2016 Feb 18 22:00 - Feb 19 03:00 HI | Network Maintenance
2016 Feb 24 07:30 - 17:00 HI | Software Maintenance
2016 Mar 23 07:30 - 17:00 HI | Software Maintenance
2016 Apr 27 07:30 - 17:00 HI | Software Maintenance
2016 May 25 07:30 - 17:00 HI | Software Maintenance
2016 Jun 22 07:30 - 17:00 HI | Software Maintenance
2016 Jul 27 07:30 - 17:00 HI | Software Maintenance
2016 Aug 24 07:30 - 17:00 HI | Software Maintenance
2016 Sep 28 07:30 - 17:00 HI | Software Maintenance
Node Configuration
Login Nodes: 4 nodes, 64 cores; RHEL 6; 16 cores/node; Intel Xeon Sandy Bridge at 2.6 GHz; 64 GBytes memory/node (8 GBytes accessible); Mellanox SX6036 FDR/10 InfiniBand interconnect.
Compute Nodes: 756 nodes, 12,096 cores; RHEL 6; 16 cores/node; Intel Xeon Sandy Bridge at 2.6 GHz; 32 GBytes memory/node (27 GBytes accessible); distributed memory model; Mellanox SX6036 FDR/10 InfiniBand interconnect.
Queue Descriptions and Limits
Queues are listed from highest to lowest priority.
Queue Name | Job Class | Max Wall Clock Time | Max Cores Per Job | Comments
transfer | N/A | 24 Hours | N/A | Data transfer for user jobs
urgent | Urgent | 24 Hours | 2,296 | Jobs belonging to DoD HPCMP Urgent Projects
debug | Debug | 30 Minutes | 1,024 | User diagnostic jobs
high | High | 168 Hours | 2,296 | Jobs belonging to DoD HPCMP High Priority Projects
challenge | Challenge | 168 Hours | 2,296 | Jobs belonging to DoD HPCMP Challenge Projects
standard | Standard | 168 Hours | 2,296 | Standard jobs
background | Background | 4 Hours | 768 | User jobs that will not be charged against the project allocation
Shepard is currently Up.

Shepard is a Cray XC30 System located at the Navy DSRC. Shepard has 1,183 compute nodes each with 24 cores (28,632 total compute cores), 77.7 TBytes of memory, and is rated at 817 peak TFLOPS.

Node Configuration
Login Nodes: 6 nodes, 144 cores; SUSE Linux; 24 cores/node; Intel Xeon E5-2697v2 at 2.7 GHz; 256 GBytes memory/node (240 GBytes accessible); memory shared on node; Ethernet interconnect.
Standard Memory Compute Nodes: 1,183 nodes, 28,632 cores; Cray Linux Environment; 24 cores/node; Intel Xeon E5-2697v2 at 2.7 GHz; 64 GBytes memory/node (61 GBytes accessible); memory shared on node, distributed across the cluster; Cray Aries interconnect.
Large Memory Compute Nodes: 8 nodes, 192 cores; Cray Linux Environment; 24 cores/node; Intel Xeon E5-2697v2 at 2.7 GHz; 128 GBytes memory/node (126 GBytes accessible); memory shared on node, distributed across the cluster; Cray Aries interconnect.
Phi Accelerator Nodes: 124 nodes, 1,240 cores; Cray Linux Environment; 10 cores + 1 Phi per node; Intel Xeon E5-2670v2 at 2.5 GHz plus Intel Xeon Phi 5120D at 1.05 GHz; 64 + 8 GBytes memory/node (61 + 7.5 GBytes accessible); memory shared on node, distributed across the cluster; Cray Aries interconnect.
GPU Accelerator Nodes: 32 nodes, 320 cores; Cray Linux Environment; 10 cores + 1 GPU per node; Intel Xeon E5-2670v2 at 2.5 GHz plus NVIDIA Tesla K40; 32 + 12 GBytes memory/node (30 GBytes accessible); memory shared on node, distributed across the cluster; Cray Aries interconnect.
Queue Descriptions and Limits
Queues are listed from highest to lowest priority.
Queue Name | Job Class | Max Wall Clock Time | Max Cores Per Job | Comments
urgent | Urgent | 24 Hours | 768 | Jobs belonging to DoD HPCMP Urgent Projects
frontier-short | Frontier | 24 Hours | 14,400 | Jobs belonging to DoD HPCMP Frontier Projects
frontier | Frontier | 168 Hours | 7,200 | Jobs belonging to DoD HPCMP Frontier Projects
high | High | 168 Hours | 7,200 | Jobs belonging to DoD HPCMP High Priority Projects
debug | Debug | 30 Minutes | 1,536 | User diagnostic jobs
standard | Standard | 168 Hours | 4,800 | Standard jobs
phi | N/A | 168 Hours | 1,240 | Phi-accelerated jobs
gpu | N/A | 8 Hours | 320 | CUDA-enabled jobs
bigmem | N/A | 24 Hours | 192 | Large-memory jobs
transfer | N/A | 24 Hours | 1 | Data transfer for user jobs
background | Background | 4 Hours | 512 | Unrestricted access - no allocation charge
Spirit is currently Up.

Spirit is an SGI ICE X System located at the AFRL DSRC. Spirit has 4,590 compute nodes each with 16 cores (73,440 total compute cores), 146.88 TBytes of memory, and is rated at 1.5 peak PFLOPS.

Maintenance
Date / Time | Details
2016 Feb 03 08:49 - 17:30 ET (Completed) | DREN Outage
2016 Apr 18 08:00 - Apr 22 17:00 ET | System Maintenance
Node Configuration
Login Nodes: 8 nodes, 128 cores; RHEL 6; 16 cores/node; Intel Xeon Sandy Bridge at 2.6 GHz; 64 GBytes memory/node (62 GBytes accessible); memory shared on node; FDR 14x InfiniBand interconnect.
Compute Nodes: 4,590 nodes, 73,440 cores; RHEL 6; 16 cores/node; Intel Xeon Sandy Bridge at 2.6 GHz; 32 GBytes memory/node (30 GBytes accessible); memory shared on node, distributed across the cluster; FDR 14x InfiniBand interconnect (Enhanced LX Hypercube topology).
Queue Descriptions and Limits
Queues are listed from highest to lowest priority.
Queue Name | Job Class | Max Wall Clock Time | Max Cores Per Job | Comments
urgent | Urgent | 168 Hours | 36,720 | Jobs belonging to DoD HPCMP Urgent Projects
debug | Debug | 1 Hour / 1/2 Hour | 736 / 1,472 | User testing; in the debug queue, you may use 736 cores for 1 hour or 1,472 cores for 1/2 hour
high | High | 168 Hours | 36,720 | Jobs belonging to DoD HPCMP High Priority Projects
challenge | Challenge | 168 Hours | 36,720 | Jobs belonging to DoD HPCMP Challenge Projects
standard | Standard | 168 Hours | 36,720 | Standard jobs
transfer | N/A | 12 Hours | 1 | Data transfer for user jobs
background | Background | 120 Hours | 32 | Unrestricted access - no allocation charge
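
The Spirit debug queue trades cores for walltime (736 cores for 1 hour, 1,472 cores for a 1/2 hour). A small illustrative helper, assuming Python (the function name is hypothetical, not a site-provided tool):

    def spirit_debug_core_limit(walltime_hours):
        """Max cores allowed in Spirit's debug queue for a given walltime."""
        if walltime_hours <= 0.5:
            return 1472
        if walltime_hours <= 1.0:
            return 736
        raise ValueError("debug jobs on Spirit are limited to 1 hour")

    print(spirit_debug_core_limit(0.5))   # 1472
    print(spirit_debug_core_limit(1.0))   # 736
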
Thunder is currently Up.

Thunder is an SGI ICE X System located at the AFRL DSRC. Thunder has 3,216 standard compute nodes each with 36 cores (115,776 standard compute cores), 442.37 TBytes of memory, and is rated at 5.62 peak PFLOPS.

Maintenance
Date / Time | Details
2016 Feb 03 08:49 - 17:30 ET (Completed) | DREN Outage
2016 Mar 21 08:00 - Mar 25 17:00 ET | System Maintenance
Node Configuration
Login Nodes: 16 nodes, 448 cores; SLES; 28 cores/node; Intel E5-2699v3 at 2.3 GHz; 128 GBytes memory/node (126 GBytes accessible); memory shared on node; FDR 14x InfiniBand interconnect.
Standard Memory Compute Nodes: 3,216 nodes, 115,776 cores; SLES; 36 cores/node; Intel E5-2699v3 at 2.3 GHz; 128 GBytes memory/node (126 GBytes accessible); memory shared on node, distributed across the cluster; FDR 14x InfiniBand interconnect (Enhanced LX Hypercube topology).
Large Memory Compute Nodes: 4 nodes, 144 cores; SLES; 36 cores/node; Intel E5-2699v3 at 2.3 GHz; 768 GBytes memory/node (766 GBytes accessible); memory shared on node, distributed across the cluster; FDR 14x InfiniBand interconnect (Enhanced LX Hypercube topology).
GPU Accelerator Nodes: 178 nodes, 4,984 cores; SLES; 28 cores + 2 GPUs per node; Intel E5-2697v3 at 2.6 GHz plus NVIDIA Tesla K40P; 128 + 12 GBytes memory/node (126 GBytes accessible); memory shared on node, distributed across the cluster; FDR 14x InfiniBand interconnect (Enhanced LX Hypercube topology).
Phi Accelerator Nodes: 28 cores + 1 Phi per node; SLES; Intel E5-2697v3 at 2.6 GHz plus Intel Xeon Phi 7120P; 128 + 16 GBytes memory/node (126 + 15 GBytes accessible); memory shared on node, distributed across the cluster; FDR 14x InfiniBand interconnect (Enhanced LX Hypercube topology).
Queue Descriptions and Limits
Queues are listed from highest to lowest priority.
Queue Name | Job Class | Max Wall Clock Time | Max Cores Per Job | Comments
urgent | Urgent | 168 Hours | 57,888 | Jobs belonging to DoD HPCMP Urgent Projects
debug | Debug | 1 Hour | 2,340 | User testing
frontier | Frontier | 168 Hours | 57,888 | Jobs belonging to DoD HPCMP Frontier Projects
high | High | 168 Hours | 57,888 | Jobs belonging to DoD HPCMP High Priority Projects
challenge | Challenge | 168 Hours | 57,888 | Jobs belonging to DoD HPCMP Challenge Projects
standard | Standard | 168 Hours | 57,888 | Standard jobs
transfer | N/A | 24 Hours | 1 | Data transfer for user jobs
background | Background | 120 Hours | 500 | Unrestricted access - no allocation charge
Topaz is currently Up.

Topaz is an SGI ICE X System located at the ERDC DSRC. Topaz has 3,456 standard compute nodes each with 36 cores (124,416 total compute cores), 442.37 TBytes of memory, and is rated at 4.62 peak PFLOPS.

Maintenance
Date / Time | Details
2016 Feb 04 00:00 - Feb 06 00:00 CT (Completed) | Archive Maintenance
Node Configuration
Login Nodes: 7 nodes, 196 cores; SLES; 28 cores/node; Intel Xeon E5-2697v3 Haswell at 2.6 GHz; 128 GBytes memory/node (8 GBytes accessible); memory shared on node; FDR 14x InfiniBand interconnect.
Standard Memory Compute Nodes: 3,456 nodes, 124,416 cores; SLES; 36 cores/node; Intel Xeon E5-2699v3 Haswell at 2.3 GHz; 128 GBytes memory/node (117 GBytes accessible); memory shared on node, distributed across the cluster; FDR 14x InfiniBand interconnect (Hypercube topology).
Large Memory Compute Nodes: 4 nodes, 128 cores; SLES; 32 cores/node; Intel Xeon E5-4620v2 Ivy Bridge at 2.6 GHz; 1 TByte memory/node (990 GBytes accessible); memory shared on node, distributed across the cluster; FDR 14x InfiniBand interconnect (Hypercube topology).
GPU Accelerator Nodes: 32 nodes, 896 cores; SLES; 28 cores + 1 GPU per node; Intel Xeon E5-2697v3 Haswell at 2.6 GHz plus NVIDIA Tesla K40P; 128 + 12 GBytes memory/node (117 GBytes accessible); memory shared on node, distributed across the cluster; FDR 14x InfiniBand interconnect (Hypercube topology).
Queue Descriptions and Limits
Queues are listed from highest to lowest priority.
Queue Name | Job Class | Max Wall Clock Time | Max Cores Per Job | Comments
urgent | Urgent | 24 Hours | 4,266 | Designated urgent jobs by DoD HPCMP
test | N/A | 48 Hours | N/A | Staff-only testing
debug | Debug | 1 Hour | 6,822 | User testing
frontier_lg | Frontier | 24 Hours | 85,320 | Frontier projects only (large jobs)
frontier_sm | Frontier | 24 Hours | 4,266 | Frontier projects only (small jobs)
frontier_lw | Frontier | 168 Hours | 9,216 | Frontier projects only (long walltime)
high_lg | High | 24 Hours | 62,208 | Designated high-priority jobs by Service/Agency (large jobs)
high_sm | High | 24 Hours | 4,266 | Designated high-priority jobs by Service/Agency (small jobs)
high_lw | High | 168 Hours | 3,384 | Designated high-priority jobs by Service/Agency (long walltime)
challenge_lg | Challenge | 24 Hours | 62,208 | Challenge projects only (large jobs)
challenge_sm | Challenge | 24 Hours | 4,266 | Challenge projects only (small jobs)
challenge_lw | Challenge | 168 Hours | 3,384 | Challenge projects only (long walltime)
standard_lg | Standard | 24 Hours | 62,208 | Normal priority jobs (large jobs)
standard_sm | Standard | 24 Hours | 4,266 | Normal priority jobs (small jobs)
standard_lw | Standard | 168 Hours | 3,384 | Normal priority jobs (long walltime)
transfer | N/A | 24 Hours | 1 | Data transfer jobs; access to the long-term storage
background | Background | 4 Hours | 4,266 | Unrestricted access - no allocation charge; number of cores can be altered depending on the current system load
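
Topaz splits each project class into _lg (large), _sm (small), and _lw (long-walltime) variants. A minimal sketch, assuming Python, of choosing the right "standard" variant from the limits above (the helper name is hypothetical, not a site-provided tool):

    def topaz_standard_queue(hours, cores):
        """Pick the Topaz standard queue variant that fits a request."""
        if hours <= 24 and cores <= 4266:
            return "standard_sm"
        if hours <= 24 and cores <= 62208:
            return "standard_lg"
        if hours <= 168 and cores <= 3384:
            return "standard_lw"
        raise ValueError("request exceeds all Topaz standard queue limits")

    print(topaz_standard_queue(12, 10000))   # standard_lg
    print(topaz_standard_queue(100, 2048))   # standard_lw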