Non-Allocated Systems

The HPCMP provides a limited number of HPC systems for non-allocated use. These systems are typically experimental or unique in nature and do not include a full complement of application software; however, they give users an opportunity to explore nontraditional architectures free of the constraints of allocated use.

Builder

Builder is a single-node Aspen Systems Linux Gigabyte server intended primarily as a platform for building and sharing Singularity software containers. The node is populated with two 64-core AMD 7742 (Rome) processors (128 cores total), two NVIDIA V100 graphics processing units, and 1,024 GB of memory, with no user-accessible swap space, and runs the RHEL 8 operating system. I/O traffic travels over Intel Ethernet: 40 GbE to the center-wide file system (cwfs) and Gigabit Ethernet to DREN. An AVAGO MegaRAID controller manages Builder's local file system, which provides 130 TB of disk storage.
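As a sketch of the intended container workflow (the definition file contents, image name, and package choices below are illustrative assumptions, not a site-provided recipe), a build on Builder might look like:

```shell
# Write a minimal Singularity definition file (contents are an example only).
cat > lolcow.def <<'EOF'
Bootstrap: docker
From: ubuntu:22.04

%post
    apt-get update && apt-get install -y cowsay

%runscript
    /usr/games/cowsay "built on Builder"
EOF

# On Builder (assuming singularity is on PATH):
#   singularity build lolcow.sif lolcow.def
#   singularity run lolcow.sif
```

The resulting `.sif` image file is a single portable file that can be copied to other HPC systems and run there.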


Maintenance
| Date / Time | Details |
| 2024 Nov 20 07:30 - Nov 21 00:00 | HI System Maintenance |
Node Configuration
Login/GPU
Total Nodes: 1
Processor: AMD 7742 Rome
Processor Speed: 2.25 GHz
Sockets / Node: 2
Cores / Node: 128
Total CPU Cores: 128
Usable Memory / Node: 1 TB
Accelerators / Node: 2
Accelerator: NVIDIA V100
Memory / Accelerator: 32 GB
Storage on Node: 130 TB SAS HDD Array
Interconnect: XL710 for 40GbE QSFP+ to cwfs, I350 Gigabit Network Connection to DREN
Operating System: RHEL 8.5

Queue Information

Builder does not employ a job scheduler, so there are no queues.
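With no scheduler, commands run directly in the login shell rather than through a batch script. A quick sanity check of the available resources (the container image name below is hypothetical) might be:

```shell
# Inspect the node directly -- no qsub or batch submission on Builder.
nproc      # number of CPU cores visible to the shell
free -g    # memory in GiB

# A GPU-enabled container run (assumes Singularity's --nv flag; image name is hypothetical):
#   singularity exec --nv mycontainer.sif nvidia-smi
```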

Reef

Reef is an Aspen Systems Linux Cluster. The login and compute nodes are populated with Intel 2.5-GHz Cascade Lake processors. Reef has 5 CPU-only and 11 GPU compute nodes. Each compute node has two 20-core processors sharing 768 GB of DDR4 memory, with no user-accessible swap space. Reef has 109 TB (formatted) of disk storage and is intended to be used as a batch-scheduled HPC system.

Note: the configuration of Reef is subject to change without notice.


Node Configuration
| | Login | Standard | Tesla V100 | NVIDIA RTX 8000 |
| Total Nodes | 1 | 5 | 9 | 2 |
| Processor | Intel 6248 Cascade Lake | Intel 6248 Cascade Lake | Intel 6248 Cascade Lake | Intel 6248 Cascade Lake |
| Processor Speed | 2.5 GHz | 2.5 GHz | 2.5 GHz | 2.5 GHz |
| Sockets / Node | 2 | 2 | 2 | 2 |
| Cores / Node | 38 | 38 | 38 | 38 |
| Total CPU Cores | 40 | 200 | 360 | 80 |
| Usable Memory / Node | 650 GB | 650 GB | 650 GB | 650 GB |
| Accelerators / Node | None | None | 2 | 2 |
| Accelerator | N/A | N/A | NVIDIA V100 PCIe 3 | NVIDIA RTX 8000 PCIe 3 |
| Memory / Accelerator | N/A | N/A | 32 GB | 48 GB |
| Storage on Node | None | None | None | None |
| Interconnect | Mellanox HDR100, EDR InfiniBand | Mellanox HDR100, EDR InfiniBand | Mellanox HDR100, EDR InfiniBand | Mellanox HDR100, EDR InfiniBand |
| Operating System | RHEL 7 | RHEL 7 | RHEL 7 | RHEL 7 |
Queue Descriptions and Limits on Reef
| Queue Name | Max Wall Clock Time | Max Jobs | Min Cores Per Job | Max Cores Per Job | Max Queued Per User | Max Running Per User | Description |
| standard | None | N/A | 1 | 190 | N/A | N/A | 5 Non-GPU Compute nodes |
| tesla | None | None | 1 | 342 | N/A | N/A | 9 GPU Compute nodes (Dual-Tesla V100) |
| rtx | None | None | 1 | 76 | N/A | N/A | 2 GPU Compute nodes (Dual-Quadro RTX 8000) |
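For illustration, a batch script targeting the tesla queue might look like the sketch below. The queue name comes from the table above; the PBS directives, resource-selection syntax, walltime, and payload are assumptions to adapt to your job and the site's scheduler documentation.

```shell
#!/bin/bash
# Hypothetical PBS batch script for Reef's "tesla" queue (dual-V100 nodes).
#PBS -q tesla
#PBS -N gpu_job
#PBS -l walltime=04:00:00
#PBS -l select=1:ncpus=38:ngpus=2   # assumed resource syntax; check site docs

cd "${PBS_O_WORKDIR:-.}"            # PBS sets this to the submission directory
echo "Job running on $(hostname)"
# Example payload (hypothetical container image):
#   singularity exec --nv image.sif ./my_gpu_app
```

The script would be submitted with `qsub`; since the queues impose no maximum wall-clock time, the walltime request is simply what the job needs.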