Non-Allocated Systems

The HPCMP provides a limited number of HPC systems for non-allocated use. These systems are typically experimental or unique in nature and do not include a full complement of application software; however, they give users an opportunity to explore nontraditional architectures free of the constraints of allocated use.

Builder is currently Up.

Builder is a single-node Aspen Systems Gigabyte Linux server intended primarily as a platform for building and sharing Singularity software containers. The system is equipped with two 64-core AMD 7742 processors (128 cores total) and two NVIDIA V100 graphics processing units, and it runs the RHEL 8 operating system with 1 TB (1,024 GB) of memory and no user-accessible swap space. Builder uses Intel Gigabit Ethernet as its high-speed network for I/O traffic, and an AVAGO MegaRAID controller manages its local file system, which provides 130 TB of disk storage.

More Info

Maintenance
Date / Time                        Details
2024 Apr 17 07:30 - Apr 18 00:00   HI System Maintenance
Node Configuration
                       Login/GPU
Total Nodes            1
Processor              AMD 7742 Rome
Processor Speed        2.25 GHz
Sockets / Node         2
Cores / Node           128
Total CPU Cores        128
Usable Memory / Node   1 TB
Accelerators / Node    2
Accelerator            NVIDIA V100
Memory / Accelerator   32 GB
Storage on Node        130 TB SAS HDD Array
Interconnect           XL710 for 40GbE QSFP+ to cwfs, I350 Gigabit Network Connection to DREN
Operating System       RHEL 8.5

Queue Information

Builder does not employ a job scheduler, so there are no queues.
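Because there is no scheduler, container builds run directly and interactively on Builder. A minimal sketch of building and testing a Singularity container might look like the following; the definition file contents, base image, and package choices are illustrative assumptions, not taken from Builder's documentation:

```shell
# Hypothetical Singularity definition file; the base image and the
# packages installed in %post are illustrative only.
cat > mydef.def <<'EOF'
Bootstrap: docker
From: ubuntu:22.04

%post
    apt-get update && apt-get install -y build-essential
EOF

# Build the container image. The --fakeroot flag lets unprivileged
# users build on a shared system; whether it is needed depends on
# how Singularity is configured on Builder.
singularity build --fakeroot mycontainer.sif mydef.def

# Sanity-check the image before copying it to an allocated system.
singularity exec mycontainer.sif gcc --version
```

The resulting `.sif` file is a single portable image that can then be transferred to and run on other HPC systems.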

Reef is currently Up.

Reef is an Aspen Systems Linux cluster. The login and compute nodes are populated with Intel 2.5-GHz Cascade Lake processors. Reef has 5 CPU-only and 11 GPU compute nodes. Each compute node has two 20-core processors sharing 768 GB of DDR4 memory, with no user-accessible swap space. Reef has 109 TB (formatted) of disk storage and is intended to be used as a batch-scheduled HPC system.

Note: the configuration of Reef is subject to change without notice.

More Info

Maintenance
Date / Time                        Details
2024 Apr 17 07:30 - Apr 18 00:00   HI System Maintenance
Node Configuration
                       Login                     Standard                  Tesla V100                NVIDIA RTX 8000
Total Nodes            1                         5                         9                         2
Processor              Intel 6248 Cascade Lake   Intel 6248 Cascade Lake   Intel 6248 Cascade Lake   Intel 6248 Cascade Lake
Processor Speed        2.5 GHz                   2.5 GHz                   2.5 GHz                   2.5 GHz
Sockets / Node         2                         2                         2                         2
Cores / Node           38                        38                        38                        38
Total CPU Cores        40                        200                       360                       80
Usable Memory / Node   640 GB                    640 GB                    640 GB                    640 GB
Accelerators / Node    None                      None                      2                         2
Accelerator            n/a                       n/a                       NVIDIA V100 PCIe 3        NVIDIA RTX 8000 PCIe 3
Memory / Accelerator   n/a                       n/a                       32 GB                     48 GB
Storage on Node        None                      None                      None                      None
Interconnect           Mellanox HDR100, EDR InfiniBand (all node types)
Operating System       RHEL 7                    RHEL 7                    RHEL 7                    RHEL 7
Queue Descriptions and Limits on Reef
Queue Name   Max Wall Clock Time   Max Cores Per Job   Description
debug        30 Minutes            608                 CPU, RTX, and Tesla nodes
all          1 Hour                608                 CPU, RTX, and Tesla nodes
standard     7 Days                190                 Five non-GPU compute nodes
tesla        7 Days                342                 Nine dual-Tesla V100 nodes
rtx          7 Days                76                  Two dual-Quadro RTX 8000 nodes
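As a batch-scheduled system, jobs on Reef are submitted to one of the queues above rather than run interactively. A minimal job script sketch, assuming a PBS-style scheduler (common on HPCMP systems) and targeting the tesla queue, might look like the following; the job name, project ID, resource-selection syntax, and application name are placeholders, not taken from Reef's documentation:

```shell
#!/bin/bash
# Hypothetical PBS batch script for Reef's "tesla" queue. The account
# string (PROJECT_ID), select syntax, and executable are illustrative
# assumptions; check the system's user guide for the exact directives.
#PBS -N gpu_job
#PBS -q tesla
#PBS -l select=1:ncpus=38:ngpus=2
#PBS -l walltime=01:00:00
#PBS -A PROJECT_ID

# Run from the directory the job was submitted from.
cd $PBS_O_WORKDIR
./my_gpu_app
```

The script would be submitted with `qsub job.pbs`, and the requested 38 cores per node matches the schedulable cores shown in the queue limits above (e.g., 9 Tesla nodes × 38 cores = 342).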