Utility Server (US) Quick Reference

The Utility Server is intended for pre- and post-processing, remote data analysis, job management, and medium-term (30-day) storage. The US provides an interface between the user, the HPC systems, and the archive.

The US should not be used as an HPC batch engine. Misuse or abuse of this unallocated system for that purpose could result in disciplinary action.

Login

Log into the Utility Server using ssh, replacing site with the site name (afrl, arl, erdc, mhpcc, ors, or navo):

% ssh us.site.hpc.mil
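
For example, to log into the ERDC Utility Server:

% ssh us.erdc.hpc.mil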

Available Disk Space

$HOME - 10-GByte quota per user.

$WORKDIR - 100-TByte quota per user with 30-day limit.

$CENTER - 200-TByte quota per user with 30-day limit.
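
These locations are available as environment variables in your login shell. A quick check, using only standard shell commands:

% echo $HOME $WORKDIR $CENTER
% cd $WORKDIR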

Check quota

show_usage - Check your quota on the US.

show_usage -s - Check your quota on each machine at the center.

Available Node Types

Compute Nodes

AFRL, ARL, ERDC, Navy (44 nodes)

  • 2X AMD Opteron 6134, 2.3 GHz, 16 cores (704 total)
  • 128 GBytes DDR3, 1.333 GHz, 8 GBytes/core

Open Research Systems (28 compute nodes)

  • 2X AMD Opteron 2.3 GHz, 16 cores (448 total)
  • 128 GBytes DDR3, 1.333 GHz, 8 GBytes/core

MHPCC (12 compute nodes)

  • 2X AMD Opteron 2.3 GHz, 16 cores (192 total)
  • 128 GBytes DDR3, 1.333 GHz, 8 GBytes/core

Compute Node Usage

Submit a job on a single compute node:

% qsub -l select=1:ncpus=16 -A $ACCOUNT -I

Submit a job on multiple compute nodes:

% qsub -l select=4:ncpus=16 -A $ACCOUNT -I
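
The same resource requests can also be placed in a batch script instead of an interactive session. A minimal sketch (the walltime, queue name, script name, and executable below are placeholders; consult the Utility Server PBS Guide for site-specific values):

#!/bin/bash
#PBS -l select=4:ncpus=16       # four compute nodes, 16 cores each
#PBS -l walltime=01:00:00       # placeholder walltime
#PBS -A Project_ID              # replace with your project ID (the value of $ACCOUNT)
#PBS -q standard                # placeholder queue name

cd $WORKDIR
./my_program                    # placeholder executable

Submit it with:

% qsub myjob.pbs
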
Graphics Nodes

AFRL, ARL, ERDC, Navy (22 nodes)

  • 2X AMD Opteron 6134, 2.3 GHz, 16 cores (352 total)
  • 128 GBytes DDR3, 1.333 GHz, 8 GBytes/core
  • NVIDIA Tesla M2050

Open Research Systems (14 nodes)

  • 2X AMD Opteron 2.3 GHz, 16 cores (224 total)
  • 128 GBytes DDR3, 1.333 GHz, 8 GBytes/core
  • NVIDIA Tesla M2050

MHPCC (6 nodes)

  • 2X AMD Opteron 2.3 GHz, 16 cores (96 total)
  • 128 GBytes DDR3, 1.333 GHz, 8 GBytes/core
  • NVIDIA Tesla M2050

Graphics Node Usage

Submit a job on a single graphics node:

% qsub -l select=1:ncpus=16:ngpus=1 -A $ACCOUNT -I

Submit a job on multiple graphics nodes:

% qsub -l select=4:ncpus=16:ngpus=1 -A $ACCOUNT -I
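
Once an interactive graphics-node job starts, you can confirm that the Tesla M2050 is visible with the standard NVIDIA query tool (assuming the driver utilities are on the default path):

% nvidia-smi
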
Large Memory Nodes

AFRL, ARL, ERDC, Navy (22 nodes)

  • 4X AMD Opteron 6134, 2.3 GHz, 32 cores (704 total)
  • 256 GBytes DDR3, 1.333 GHz, 8 GBytes/core

Open Research Systems (14 nodes)

  • 4X AMD Opteron 2.3 GHz, 32 cores (448 total)
  • 256 GBytes DDR3, 1.333 GHz, 8 GBytes/core

MHPCC (6 nodes)

  • 4X AMD Opteron 2.3 GHz, 32 cores (192 total)
  • 256 GBytes DDR3, 1.333 GHz, 8 GBytes/core

Large Memory Node Usage

Submit a job to a single large memory node:

% qsub -l select=1:ncpus=32:mem=250gb -A $ACCOUNT -I

Submit a job to multiple large memory nodes:

% qsub -l select=2:ncpus=32:mem=250gb -A $ACCOUNT -I

Mixed Node Usage

Submit a job requesting one compute node and one large memory node:

% qsub -l select=1:ncpus=16+1:ncpus=32:mem=250gb -A $ACCOUNT -I
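
In a select statement, the plus sign joins independent resource chunks, each with its own node count and attributes. For example, to request two compute nodes together with one large memory node (the same syntax with only the chunk counts changed):

% qsub -l select=2:ncpus=16+1:ncpus=32:mem=250gb -A $ACCOUNT -I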

For More Information

Contact CCAC

For more information on the Utility Server or the HPCMP Enhanced User Environment, contact CCAC at:

Phone: 1-877-222-2039
Email: help@ccac.hpc.mil

Online Documentation

Additional information is available in the Utility Server User Guide and the Utility Server PBS Guide.