Unclassified Systems

Copper is currently Up.

Copper is a Cray XE6m Open Research System located at the ERDC DSRC. Copper has 460 compute nodes with 32 cores each (14720 total compute cores), 29.9 TBytes of memory, and is rated at 138 peak TFLOPS.

Node Configuration
                          Login Nodes                 Compute Nodes
Total Cores | Nodes       32 | 2                      14720 | 460
Operating System          SLES 11                     CLE
Cores/Node                16                          32
Core Type                 AMD 64-bit Opteron          AMD Interlagos Opteron
Core Speed                2.7 GHz                     2.3 GHz
Memory/Node               128 GBytes                  64 GBytes
Accessible Memory/Node    8 GBytes                    62 GBytes
Memory Model              Shared on node.             Shared on node. Distributed across cluster.
Interconnect Type         Ethernet                    Cray Gemini

Queue Descriptions and Limits
Priority  Queue Name     Job Class   Max Wall Clock Time   Max Cores Per Job  Comments
Highest   test           N/A         N/A                   N/A                Staff-only testing
          urgent         Urgent      168 Hours             4096               Designated urgent jobs by DoD HPCMP
          debug*         Debug       1 Hour                512                User testing
          high           High        168 Hours             4096               Designated high-priority jobs by Service/Agency
          challenge      Challenge   168 Hours             4096               Challenge projects only
          standard       Standard    168 Hours             4096               Normal priority jobs
          transfer       N/A         12 Hours              1                  Data transfer for user jobs; provides access to long-term storage.
Lowest    background**   Background  4 Hours               4096               Unrestricted access; no allocation charge. The number of cores may be altered depending on current system load.

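Jobs on these systems are typically allocated whole compute nodes, so a core request is effectively rounded up to a multiple of the cores-per-node figure above. The sketch below is a minimal, unofficial illustration of that arithmetic using Copper's numbers (32 cores per compute node; standard queue limits of 4096 cores and 168 hours); the helper names are hypothetical and not part of any DSRC tooling.

    import math

    CORES_PER_NODE = 32          # Copper compute nodes (see table above)
    STANDARD_MAX_CORES = 4096    # standard queue, max cores per job
    STANDARD_MAX_HOURS = 168     # standard queue, max wall clock time

    def nodes_needed(cores):
        """Whole compute nodes required to supply the requested cores."""
        return math.ceil(cores / CORES_PER_NODE)

    def fits_standard_queue(cores, hours):
        """Rough check of a request against the standard queue limits above."""
        return cores <= STANDARD_MAX_CORES and hours <= STANDARD_MAX_HOURS

    print(nodes_needed(1000))             # 32 nodes (1024 cores allocated)
    print(fits_standard_queue(1000, 96))  # True
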
Garnet is currently Up.

Garnet is a Cray XE6 located at the ERDC DSRC. Garnet has 4716 compute nodes each with 32 cores (150912 total compute cores), 301 TBytes of memory, and is rated at 1.5 peak PFLOPS.

Maintenance
Date / Time                     Details
2014 Jul 29 08:00 - 20:00 CT    System Maintenance

Node Configuration
                          Login Nodes                 Compute Nodes
Total Cores | Nodes       128 | 8                     150912 | 4716
Operating System          SLES 11                     CLE
Cores/Node                16                          32
Core Type                 AMD 64-bit Opteron          AMD Interlagos Opteron
Core Speed                2.7 GHz                     2.5 GHz
Memory/Node               128 GBytes                  64 GBytes
Accessible Memory/Node    8 GBytes                    60 GBytes
Memory Model              Shared on node.             Shared on node. Distributed across cluster.
Interconnect Type         Ethernet                    Cray Gemini

Queue Descriptions and Limits
Priority  Queue Name     Job Class   Max Wall Clock Time   Max Cores Per Job  Comments
Highest   urgent         Urgent      24 Hours              5,120              Designated urgent jobs by DoD HPCMP
          test           N/A         24 Hours              N/A                Staff-only testing
          debug          Debug       1 Hour                8,192              User testing
          high_lg        High        24 Hours              102,400            Designated high-priority jobs by Service/Agency (large jobs)
          high_sm        High        24 Hours              5,120              Designated high-priority jobs by Service/Agency (small jobs)
          high_lw        High        168 Hours             2,048              Designated high-priority jobs by Service/Agency (long walltime)
          challenge_lg   Challenge   24 Hours              102,400            Challenge projects only (large jobs)
          challenge_sm   Challenge   24 Hours              5,120              Challenge projects only (small jobs)
          challenge_lw   Challenge   168 Hours             2,048              Challenge projects only (long walltime)
          standard_lg    Standard    24 Hours              102,400            Normal priority jobs (large jobs)
          standard_sm    Standard    24 Hours              5,120              Normal priority jobs (small jobs)
          standard_lw    Standard    168 Hours             2,048              Normal priority jobs (long walltime)
          transfer       N/A         12 Hours              1                  Data transfer jobs; provides access to long-term storage.
Lowest    background     Background  4 Hours               5,120              Unrestricted access; no allocation charge. The number of cores may be altered depending on current system load.

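Garnet splits its high, challenge, and standard queues into large (_lg), small (_sm), and long-walltime (_lw) variants with different core and walltime caps. The fragment below is a hypothetical illustration of picking a standard-priority queue from the limits in the table above; it is not an official selection rule, and real submissions should follow current DSRC guidance.

    def pick_standard_queue(cores, hours):
        """Pick a Garnet standard-priority queue from the limits tabulated above."""
        if hours <= 24 and cores <= 5120:
            return "standard_sm"      # up to 5,120 cores for 24 hours
        if hours <= 24 and cores <= 102400:
            return "standard_lg"      # up to 102,400 cores for 24 hours
        if hours <= 168 and cores <= 2048:
            return "standard_lw"      # up to 2,048 cores for 168 hours
        raise ValueError("request exceeds the standard queue limits listed above")

    print(pick_standard_queue(4096, 12))    # standard_sm
    print(pick_standard_queue(50000, 20))   # standard_lg
    print(pick_standard_queue(1024, 120))   # standard_lw
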
Haise is currently Up.

Haise is an IBM iDataPlex located at the Navy DSRC. Haise has 1220 compute nodes each with 16 cores (19520 total compute cores), 39.04 TBytes of memory, and is rated at 435 peak TFLOPS.

Node Configuration
                          Login Nodes                 Compute Nodes
                                                      Standard Memory     Large Memory        Accelerator
Total Cores | Nodes       32 | 2                      19520 | 1220        64 | 4              228 | 12
Operating System          RHEL 6                      RHEL 6              RHEL 6              RHEL 6
Cores/Node                16                          16                  16                  16 + 120 MIC
Core Type                 Intel Xeon Sandy Bridge     Intel Xeon Sandy Bridge E5-2670 (accelerator nodes add Intel Xeon Phi 5110P)
Core Speed                2.6 GHz                     2.6 GHz             2.6 GHz             2.6 GHz + 1.053 GHz
Memory/Node               64 GBytes                   32 GBytes           256 GBytes          64 GBytes + 8 GBytes
Accessible Memory/Node    8 GBytes                    27 GBytes           247 GBytes          59 GBytes + 7.5 GBytes
Memory Model              Shared on node.             Shared on node. Distributed across cluster.
Interconnect Type         10 GigEthernet              FDR-10 InfiniBand
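
The headline 39.04-TByte memory figure in the Haise summary appears to count the standard-memory compute nodes alone (1220 nodes at 32 GBytes each); a quick, illustrative check of that arithmetic in Python:

    nodes, gbytes_per_node = 1220, 32        # standard-memory compute nodes (table above)
    print(nodes * gbytes_per_node / 1000)    # 39.04 TBytes, matching the summary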

Queue Descriptions and Limits
Priority  Queue Name     Job Class   Max Wall Clock Time   Max Cores Per Job  Comments
Highest   urgent         Urgent      24 Hours              4096               Designated urgent projects by DoD HPCMP
          high           High        168 Hours             6144               Designated high-priority jobs by Service/Agency
          challenge      Challenge   168 Hours             6144               Challenge projects only
          special        N/A         24 Hours              4096               Access available by request
          debug          Debug       30 Minutes            1024               User diagnostic jobs
          standard       Standard    168 Hours             4096               Non-challenge user jobs
          share          N/A         48 Hours              1                  Serial jobs
          transfer       N/A         12 Hours              1                  Data transfer jobs
Lowest    background     Background  4 Hours               512                User jobs that will not be charged against the project allocation

Harold is currently Up.

Harold is an SGI Altix ICE 8200 located at the ARL DSRC. Harold has 1344 compute nodes each with 8 cores (10752 total compute cores), 32.26 TBytes of memory, and is rated at 109.3 peak TFLOPS.

Node Configuration
                          Login Nodes                 Compute Nodes
Total Cores | Nodes       64 | 8                      10752 | 1344
Operating System          SLES 11                     SLES 11
Cores/Node                8                           8
Core Type                 Intel Xeon Nehalem-EP       Intel Xeon Nehalem-EP
Core Speed                2.8 GHz                     2.8 GHz
Memory/Node               48 GBytes                   24 GBytes
Accessible Memory/Node    2 GBytes                    20 GBytes
Memory Model              Shared on node.             Shared on node. Distributed across cluster.
Interconnect Type         4x DDR Infiniband           4x DDR Infiniband

Queue Descriptions and Limits
Priority  Queue Name     Job Class   Max Wall Clock Time   Max Cores Per Job  Comments
Highest   debug          Debug       1 Hour                256                User diagnostic jobs
          transfer       N/A         24 Hours              1                  Data transfer for user jobs
          urgent         Urgent      96 Hours              N/A                Designated urgent jobs by DoD HPCMP
          staff          N/A         368 Hours             N/A                ARL DSRC staff only; system testing and user support
          high           High        96 Hours              N/A                Designated high-priority jobs by DoD HPCMP
          challenge      Challenge   168 Hours             N/A                Challenge projects only
          cots           Standard    96 Hours              N/A                Abaqus, Fluent, and Cobalt jobs
          interactive    Standard    12 Hours              N/A                Interactive jobs
          standard-long  Standard    200 Hours             N/A                ARL DSRC permission required
          standard       Standard    96 Hours              N/A                Non-Challenge user jobs
Lowest    background     Background  24 Hours              N/A                User jobs that will not be charged against the project allocation.

Kilrain is currently Up.

Kilrain is an IBM iDataPlex located at the Navy DSRC. Kilrain has 1220 compute nodes each with 16 cores (19520 total compute cores), 39.04 TBytes of memory, and is rated at 435 peak TFLOPS.

Node Configuration
                          Login Nodes                 Compute Nodes
                                                      Standard Memory     Large Memory        Accelerator
Total Cores | Nodes       32 | 2                      19520 | 1220        64 | 4              228 | 12
Operating System          RHEL 6                      RHEL 6              RHEL 6              RHEL 6
Cores/Node                16                          16                  16                  16 + 120 MIC
Core Type                 Intel Xeon Sandy Bridge     Intel Xeon Sandy Bridge E5-2670 (accelerator nodes add Intel Xeon Phi 5110P)
Core Speed                2.6 GHz                     2.6 GHz             2.6 GHz             2.6 GHz + 1.053 GHz
Memory/Node               64 GBytes                   32 GBytes           256 GBytes          64 GBytes + 8 GBytes
Accessible Memory/Node    8 GBytes                    27 GBytes           247 GBytes          59 GBytes + 7.5 GBytes
Memory Model              Shared on node.             Shared on node. Distributed across cluster.
Interconnect Type         10 GigEthernet              FDR-10 InfiniBand

Queue Descriptions and Limits
Priority  Queue Name     Job Class   Max Wall Clock Time   Max Cores Per Job  Comments
Highest   urgent         Urgent      24 Hours              4096               Designated urgent projects by DoD HPCMP
          high           High        168 Hours             6144               Designated high-priority jobs by Service/Agency
          challenge      Challenge   168 Hours             6144               Challenge projects only
          special        N/A         24 Hours              4096               Access available by request
          debug          Debug       30 Minutes            1024               User diagnostic jobs
          standard       Standard    168 Hours             4096               Non-challenge user jobs
          share          N/A         48 Hours              1                  Serial jobs
          transfer       N/A         12 Hours              1                  Data transfer jobs
Lowest    background     Background  4 Hours               512                User jobs that will not be charged against the project allocation

Pershing is currently Up.

Pershing is an IBM iDataPlex located at the ARL DSRC. Pershing has 1260 compute nodes each with 16 cores (20160 total compute cores), 45.7 TBytes of memory, and is rated at 420 peak TFLOPS.

Node Configuration
                          Login Nodes                 Compute Nodes
                                                      Standard Memory     Large Memory
Total Cores | Nodes       128 | 8                     17472 | 1092        2688 | 168
Operating System          RHEL 6                      RHEL 6              RHEL 6
Cores/Node                16                          16                  16
Core Type                 Intel Xeon Sandy Bridge     Intel Xeon Sandy Bridge
Core Speed                2.6 GHz                     2.6 GHz             2.6 GHz
Memory/Node               32 GBytes                   32 GBytes           64 GBytes
Accessible Memory/Node    2 GBytes                    29 GBytes           56 GBytes
Memory Model              Shared on node.             Shared on node. Distributed across cluster.
Interconnect Type         10 Gigabit Ethernet         FDR-10 Infiniband

Queue Descriptions and Limits
Priority  Queue Name     Job Class   Max Wall Clock Time   Max Cores Per Job  Comments
Highest   debug          Debug       1 Hour                512                User diagnostic jobs
          transfer       N/A         24 Hours              1                  Data transfer for user jobs
          urgent         Urgent      96 Hours              N/A                Designated urgent jobs by DoD HPCMP
          staff          N/A         368 Hours             N/A                ARL DSRC staff only; system testing and user support
          high           High        96 Hours              N/A                Designated high-priority jobs by DoD HPCMP
          challenge      Challenge   168 Hours             N/A                Challenge projects only
          cots           Standard    96 Hours              N/A                Abaqus, Fluent, and Cobalt jobs
          interactive    Standard    12 Hours              N/A                Interactive jobs
          standard-long  Standard    200 Hours             N/A                ARL DSRC permission required
          standard       Standard    96 Hours              N/A                Non-Challenge user jobs
Lowest    background     Background  24 Hours              N/A                User jobs that will not be charged against the project allocation.

Predator is currently Up.

Predator is an SGI UV System located at the AFRL DSRC. Predator has 1 compute node with 1004 cores.

Node Configuration
                          Login Nodes                 Compute Nodes
Total Cores | Nodes       8 | 1                       1004 | 1
Operating System          RHEL                        RHEL
Cores/Node                8                           1004
Core Type                 Intel E5 Sandy Bridge       Intel E5 Sandy Bridge
Core Speed                2.7 GHz                     2.7 GHz
Memory/Core               4 GBytes                    4 GBytes
Accessible Memory/Core    3.5 GBytes                  3.5 GBytes
Memory Model              Shared on node.             Shared on node.
Interconnect Type         Numalink v6                 Numalink v6

Queue Descriptions and Limits
Priority  Queue Name     Job Class   Max Wall Clock Time   Max Cores Per Job  Comments
Highest   urgent         Urgent      168 Hours             508                Jobs belonging to DoD HPCMP Urgent Projects
          debug          Debug       1 Hour / 1/2 Hour     16 / 32            User testing; in the debug queue, you may use 16 cores for 1 hour or 32 cores for 1/2 hour
          high           High        168 Hours             508                Jobs belonging to DoD HPCMP High Priority Projects
          challenge      Challenge   168 Hours             508                Jobs belonging to the DoD HPCMP Challenge Projects
          standard       Standard    168 Hours             508                Standard jobs
          transfer       N/A         12 Hours              1                  Data transfer for user jobs.
Lowest    background     Background  120 Hours             32                 Unrestricted access; no allocation charge

Riptide is currently Up.

Riptide is an IBM iDataPlex located at the MHPCC DSRC. Riptide has 756 compute nodes each with 16 cores (12096 total compute cores), 24.2 TBytes of memory, and is rated at 251.6 peak TFLOPS.

Maintenance
Date / Time                          Details
2014 Jul 23 08:00 - 12:00 HI         Archive Maintenance (Completed)
2014 Aug 01 08:00 - Aug 04 00:00 HI  System Maintenance
2014 Aug 27 08:00 - 17:00 HI         System Maintenance
2014 Oct 01 00:00 - 17:00 HI         System Maintenance

Node Configuration
                          Login Nodes                 Compute Nodes
Total Cores | Nodes       64 | 4                      12096 | 756
Operating System          RHEL 6                      RHEL 6
Cores/Node                16                          16
Core Type                 Intel Xeon Sandy Bridge     Intel Xeon Sandy Bridge
Core Speed                2.6 GHz                     2.6 GHz
Memory/Node               64 GBytes                   32 GBytes
Memory Model                                          Distributed
Interconnect Type         Mellanox SX6036 FDR/10 Infiniband

Queue Descriptions and Limits
Priority  Queue Name     Job Class   Max Wall Clock Time   Max Cores Per Job  Comments
Highest   transfer       N/A         12 Hours              N/A                Data transfer for user jobs
          urgent         Urgent      24 Hours              2296               Jobs belonging to DoD HPCMP Urgent Projects
          debug          Debug       30 Minutes            1024               User diagnostic jobs
          high           High        168 Hours             2296               Jobs belonging to DoD HPCMP High Priority Projects
          challenge      Challenge   168 Hours             2296               Jobs belonging to the DoD HPCMP Challenge Projects
          standard       Standard    168 Hours             2296               Standard jobs
Lowest    background     Background  4 Hours               768                User jobs that will not be charged against the project allocation.

Spirit is currently Up.

Spirit is an SGI ICE X System located at the AFRL DSRC. Spirit has 4590 compute nodes each with 16 cores (73440 total compute cores), 146.88 TBytes of memory, and is rated at 1.5 peak PFLOPS.

Node Configuration
                          Login Nodes                 Compute Nodes
Total Cores | Nodes       128 | 8                     73440 | 4590
Operating System          RHEL 6                      RHEL 6
Cores/Node                16                          16
Core Type                 Intel Xeon Sandy Bridge     Intel Xeon Sandy Bridge
Core Speed                2.6 GHz                     2.6 GHz
Memory/Node               64 GBytes                   32 GBytes
Accessible Memory/Node    62 GBytes                   30 GBytes
Memory Model              Shared on node.             Shared on node. Distributed across cluster.
Interconnect Type         FDR 14x Infiniband          FDR 14x Infiniband; Enhanced LX Hypercube

Queue Descriptions and Limits
Priority  Queue Name     Job Class   Max Wall Clock Time   Max Cores Per Job  Comments
Highest   urgent         Urgent      168 Hours             36720              Jobs belonging to DoD HPCMP Urgent Projects
          debug          Debug       1 Hour / 1/2 Hour     736 / 1472         User testing; in the debug queue, you may use 736 cores for 1 hour or 1472 cores for 1/2 hour
          high           High        168 Hours             36720              Jobs belonging to DoD HPCMP High Priority Projects
          challenge      Challenge   168 Hours             36720              Jobs belonging to the DoD HPCMP Challenge Projects
          standard       Standard    168 Hours             36720              Standard jobs
          transfer       N/A         12 Hours              1                  Data transfer for user jobs.
Lowest    background     Background  120 Hours             32                 Unrestricted access; no allocation charge

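Spirit's debug queue trades job size for walltime: up to 736 cores for 1 hour, or up to 1472 cores for half an hour, on 16-core compute nodes. The sketch below is a hypothetical helper, not DSRC software; it maps a core request onto whole nodes and onto one of those two debug tiers.

    import math

    CORES_PER_NODE = 16   # Spirit compute nodes (see table above)

    def debug_tier(cores):
        """Return (nodes, max walltime in hours) for a Spirit debug-queue request."""
        nodes = math.ceil(cores / CORES_PER_NODE)
        if cores <= 736:
            return nodes, 1.0     # up to 736 cores for 1 hour
        if cores <= 1472:
            return nodes, 0.5     # up to 1472 cores for 1/2 hour
        raise ValueError("the debug queue allows at most 1472 cores per job")

    print(debug_tier(512))    # (32, 1.0)
    print(debug_tier(1200))   # (75, 0.5)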