Introduction to Singularity Containers on HPC
Containers alleviate installation and portability challenges by packaging all the dependencies of an application within a self-contained image, a.k.a. a container. Linux container platforms such as Singularity and Docker allow you to install and use software within self-contained, portable, and reproducible environments. Linux containers allow users to:
- Use software with complicated dependencies and environment requirements
- Run an application container from the Sylabs Container Library or Docker Hub
- Use a package manager (like apt or yum) to install software without changing anything on the host system
- Run an application that was built for a different distribution of Linux than the host OS
- Run the latest released software built for newer Linux OS versions than the one present on HPC systems
- Archive an analysis for long-term reproducibility and/or publication
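As an illustration of the pull-and-run workflow described above, a minimal session might look like the following. The image name is an example only; any public image on the Sylabs Container Library or Docker Hub works similarly.

```shell
# Pull an image from Docker Hub into a local Singularity image file
# (with Singularity 2.x this produces a .simg file)
singularity pull docker://ubuntu:18.04

# Run a command inside the container without changing anything on the host
singularity exec ubuntu-18.04.simg cat /etc/os-release
```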
What is a Container?
Put simply, a container consists of an entire runtime environment: an application, plus all its dependencies, libraries and other binaries, and configuration files needed to run it, bundled into one package. By containerizing the application platform and its dependencies, differences in OS distributions and underlying infrastructure are abstracted away.
Singularity is a container framework designed to run scientific applications on HPC systems, similar to Docker. It is the first containerization technology supported across DSRC HPC resources, can run Docker containers natively, and serves as a replacement for Docker on HPC systems. (Additional details below.)
Singularity is built with HPC systems in mind, to solve many problems:
- Security: a user in the container is the same user with the same privileges/permissions as the one running the container, so no privilege escalation is possible
- Ease of deployment: no daemon running as root on each node, a container is simply an executable
- Ability to mount local filesystems or do bind mappings to access devices
- Ability to run MPI and GPU HPC workflows with Singularity containers
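A sketch of how the bind-mount, GPU, and MPI capabilities above are typically exercised from the command line. The paths, image names, and application binary here are placeholders, not files provided on the systems.

```shell
# Bind-mount a host directory into the container (host_path:container_path)
singularity exec --bind /p/work1/$USER:/data my_app.simg ls /data

# Enable NVIDIA GPU support inside the container with the --nv flag
singularity exec --nv my_gpu_app.simg nvidia-smi

# MPI workflow: the host-side launcher starts one container instance per rank
mpirun -np 4 singularity exec my_mpi_app.simg /opt/app/solver
```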
Access Singularity by loading the "singularity" module:
$ module load singularity
$ which singularity
/p/work1/singularity/2.6.1/bin/singularity
$ singularity --version
2.6.1-dist
user@topaz04:~> singularity selftest
 + sh -c test -f /p/work1/singularity/2.6.1/etc/singularity/singularity.conf (retval=0) OK
 + test -u /p/work1/singularity/2.6.1/libexec/singularity/bin/action-suid (retval=0) OK
 + test -u /p/work1/singularity/2.6.1/libexec/singularity/bin/mount-suid (retval=0) OK
 + test -u /p/work1/singularity/2.6.1/libexec/singularity/bin/start-suid (retval=0) OK
Which Systems Support Singularity?
- AFRL: Talon, Mustang
- ARL: Centennial, FOB, Scout*
- Navy: Gaffney, Koehr
- ERDC: Onyx, Vulcanite
- MHPCC: Hokulea*, Reef
- All HPC Portal Appliances
- Restricted systems
*Note: Singularity containers are portable across any instruction-set-compatible Linux system, such as x86 Intel/AMD. You must build separate Singularity containers for Power- or Arm-based systems.
Which Version of Singularity is Available?
Singularity Version 2.6.1 is installed and supported across the production DSRC HPC systems.
Who Can use Singularity Containers?
Any user may execute Singularity containers on any supported system. Container images may be placed in, and run from, any location on the system.
Building containers requires that the user/builder have root/admin privileges on a Linux, Windows, or macOS system. Currently there is no build environment on DSRC HPC-backed resources. Alternatives for building containers are:
- Use your own system (laptop/workstation), where you have root/elevated privileges, to install Singularity and build containers, then transfer the container images to DoD HPC systems.
- Request assistance building container images for your applications from the Center's application support team by sending a request to the HPC Help Desk.
- If your software is non-proprietary and non-sensitive, use the free cloud container build service at https://cloud.sylabs.io.
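For users building on their own machine, a minimal Singularity recipe (definition file) might look like the following sketch. The base image, package list, and runscript are illustrative assumptions, not a prescribed configuration.

```
Bootstrap: docker
From: ubuntu:18.04

%post
    # Commands run inside the container at build time (as root)
    apt-get update && apt-get install -y python3

%environment
    export LC_ALL=C

%runscript
    # Executed when the container image is run directly
    exec python3 "$@"
```

On your own system, such a recipe would typically be built with a command like `sudo singularity build my_app.simg my_app.def`, which is why root privileges are required.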
Training is available on the HPC training website:
Training related to containers is available in the Container Development Category.
REQUIRED: You must have an "active" (non-ORS) HPCMP account to access the HPC Training System. The top courses to get started with Singularity are:
- Code Portability and Reproducibility for DoD HPC Systems with Application Containers: Development Tips *
- Software / Interface / Container Requests *
- PET Singularity Container Development Practices *
- Building a Custom DA Tools Container for Deep Learning *
Reference Containers and Examples
Reference containers for basic, MPI, and GPU-based workflows are provided on most systems for user convenience.
Reference containers, instructions, and run scripts are generally located at:
The README, in the examples directory, provides a complete description.
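As a rough sketch only (scheduler directives, queue names, module versions, and image paths vary by system; consult the README and your system's user guide), a batch job running a containerized application might resemble:

```shell
#!/bin/bash
#PBS -l select=1:ncpus=36
#PBS -l walltime=01:00:00
#PBS -q standard

# Load the Singularity module, then run the containerized application
module load singularity
cd $PBS_O_WORKDIR
singularity exec my_app.simg /opt/app/run_analysis
```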