ARL DSRC Introductory Site Guide

1. Introduction

1.1. Purpose of this document

This document introduces users to the U.S. Army Research Laboratory (ARL) DoD Supercomputing Resource Center (DSRC). It provides an overview of available resources, links to relevant documentation, essential policies governing the use of our systems, and other information to help you make efficient and effective use of your allocated hours.

1.2. About the ARL DSRC

The ARL DSRC is one of five DSRCs managed by the DoD High Performance Computing Modernization Program (HPCMP). The DSRCs deliver a range of compute-intensive and data-intensive capabilities to the DoD science and technology, test and evaluation, and acquisition engineering communities. Each DSRC operates and maintains major High Performance Computing (HPC) systems and associated infrastructure, such as data storage, in both unclassified and classified environments. The HPCMP provides user support through a centralized help desk and data analysis/visualization group.

The ARL DSRC is a supercomputing and computational science facility that supports a broad and diverse user base in the DoD research, development, test, and evaluation (RDT&E) communities. The Center is located at Aberdeen Proving Ground, Maryland, and is organizationally aligned under the Combat Capabilities Development Command (DEVCOM), U.S. Army Research Laboratory, Army Research Directorate (ARD). The mission of the DEVCOM ARL DSRC is to provide world-class high performance computing, advanced networking, and computational science tools and expertise in support of the RDT&E communities.

1.3. Whom our services are for

The HPCMP's services are available to Service and Agency researchers in the Research, Development, Test, and Evaluation (RDT&E) and acquisition engineering communities of the DoD, their respective DoD contractors, and university staff working on a DoD research grant.

For more details, see the HPCMP Presentation "Who may run on HPCMP Resources?"

1.4. How to get an account

Anyone meeting the above criteria may request an HPCMP account. An HPC Help Desk video is available to guide you through the process of getting an account. To begin the account application process, visit the Obtaining an Account page and follow the instructions presented there.

1.5. Visiting the ARL DSRC

If you need to travel to the ARL DSRC, there are security procedures that must be completed BEFORE planning your trip. Please see our Visit section and coordinate with your Service/Agency Approval Authority (S/AAA) to ensure your credentials are in place and all visit requirements are met.

2. Policies

2.1. Baseline Configuration (BC) policies

The Baseline Configuration Team sets policies that apply to all HPCMP HPC systems. The BC Policy Compliance Matrix provides an index of all BC policies and compliance status of systems at each DSRC.

2.2. Login node abuse policy

The login nodes provide login access to the systems and support interactive usage. Interactive usage should be limited to items such as program development (including debugging and performance improvement), job preparation, and job submission.

Memory or CPU-intensive programs running on the login nodes can significantly affect all users of the system. Therefore, only small applications requiring less than 15 minutes of runtime and less than 8 GB of memory are allowed on the login nodes. The preferred method to run resource-intensive executions, such as pre- and post-processing of data, is to use a compute node within an interactive batch session (see Batch use policy).

You are encouraged to use the transfer batch queue for file packaging and unpackaging (e.g., tar, gzip) and for frequent or lengthy file transfers to and from $ARCHIVE_HOME.
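As an example, a minimal transfer-queue job for packaging and archiving data might look like the sketch below. It assumes a PBS-style scheduler; the project ID and directory names are hypothetical placeholders, and the exact directives for your system are covered in the system user guide and the ARL DSRC Archive Guide.

    #!/bin/bash
    #PBS -q transfer
    #PBS -l walltime=02:00:00
    #PBS -A MY_PROJECT              # hypothetical project/allocation ID
    # Package results in $WORKDIR and copy the archive file to $ARCHIVE_HOME
    cd $WORKDIR/my_run              # hypothetical run directory
    tar czf results.tar.gz output/
    cp results.tar.gz $ARCHIVE_HOME/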

Any process in violation is subject to termination without notice, and repeated violations may result in your account being disabled.

2.3. File space management policy

2.3.1. Data limits

The ARL DSRC does not impose data limits on the $WORKDIR file system; however, data within $WORKDIR is subject to the System Scrubber.

Each user's $HOME is limited to 200 GB of data. Once a user exceeds the 200-GB limit in $HOME, that user's data is no longer backed up. Users who exceed the limit are notified, as excessive data in $HOME can impact the performance of the HPC systems.
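To check how much of the 200-GB limit you are using, a quick query on the system works:

    du -sh $HOME    # summarize the total size of your home directory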

2.3.2. System Scrubber

The scratch file system, /p/work1, should be used for active temporary data storage and batch processing. A system "scrubber" monitors utilization of the scratch space. Files not accessed within 30 days on the scratch file system are subject to removal but may remain longer if space permits. There are no exceptions to this policy. Customers who wish to keep files long-term should copy them into their home or archive directories to avoid data loss by the scrubber. Customers are responsible for archiving files from the scratch file systems. This file system is considered volatile working storage, and no automated backups are performed.
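To spot files at risk of removal, you can list anything in your workspace that has not been accessed in the last 30 days, for example:

    find $WORKDIR -type f -atime +30    # list files not accessed in 30+ days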

Note: Please do not use /tmp or /var/tmp for temporary storage!

2.4. Maximum session lifetime policy

To provide users with a more secure high performance computing environment, the HPCMP has implemented a limit on the lifetime of all terminal/window sessions. Any idle terminal or window session connections to the ARL DSRC are terminated after 6 hours. Regardless of activity, any terminal or window session connections to the ARL DSRC are terminated after 24 hours. A 15-minute warning message is sent to each such session prior to its termination.

2.5. Batch use policy

The primary resource to schedule jobs on all systems is the node. Users request a certain number of nodes for a certain length of time. Limits on the number of nodes and length of a job vary by system and queue.

Although every attempt is made to keep entire systems available, interruptions will occur, and more frequently on nodes with larger numbers of processors. To protect against system interrupts, users should save the state of their jobs where possible; most ARL DSRC-supported applications can create restart files so runs do not have to start from the beginning. Users running long jobs without saving state run at risk with respect to system interrupts. Use of system-level checkpointing is not recommended.
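For illustration, a minimal batch script under this model requests a number of nodes and a wall-clock limit. This is a sketch only, assuming a PBS-style scheduler; directive syntax, core counts, and the project ID (shown here as the hypothetical MY_PROJECT) vary by system, so consult the system user guide and Scheduler Guides.

    #!/bin/bash
    #PBS -q standard
    #PBS -l select=2:ncpus=128      # hypothetical: 2 nodes, 128 cores each
    #PBS -l walltime=24:00:00
    #PBS -A MY_PROJECT              # hypothetical project/allocation ID
    cd $WORKDIR/my_case             # hypothetical case directory
    mpiexec ./my_app                # hypothetical application binary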

All ARL HPC systems share a common set of queue names, including urgent, debug, HIE, high, frontier, standard, transfer, and background; however, each queue has different properties, as specified in the table below. Each queue is assigned a priority factor within the batch system, and the relative priorities of the queues are shown in the table below. Jobs in queues other than background accrue additional priority based on time in queue. Job scheduling reserves job slots based on these priority factors and increases system utilization by backfilling smaller jobs while resources for higher-priority jobs become available.

Queue Descriptions and Limits on ARL DSRC Systems
(queues are listed in order of decreasing priority)

Priority  Queue Name     Max Wall Clock Time  Max Cores Per Job  Description
Highest   urgent         96 Hours             N/A                Jobs belonging to DoD HPCMP Urgent Projects
          transfer       48 Hours             N/A                Data transfer for user jobs. See the ARL DSRC Archive Guide, section 5.2.
          debug          1 Hour               N/A                Time/resource-limited for user testing and debug purposes
          high           168 Hours            N/A                Jobs belonging to DoD HPCMP High Priority Projects
          frontier       168 Hours            N/A                Jobs belonging to DoD HPCMP Frontier Projects
          cots           96 Hours             N/A                Jobs running commercial licensed applications
          HIE            24 Hours             N/A                Rapid response for interactive work. For more information see the HPC Interactive Environment (HIE) User Guide.
          interactive    12 Hours             N/A                Interactive jobs
          standard       168 Hours            N/A                Standard jobs
          standard-long  200 Hours            N/A                ARL DSRC permission required
Lowest    background     24 Hours             N/A                User jobs that are not charged against the project allocation

In conjunction with the HPCMP Baseline Configuration policy for Common Queue Names across the allocated centers, the ARL DSRC honors batch jobs that include the queue names urgent, high (high-priority), and frontier.

Any project with an allocation may submit jobs to the background queue. Projects that have exhausted their allocations will only be able to submit jobs to the background queue.

2.6. Special request policy

All special requests for allocated HPC resources, including increased priority within queues, increased queue limits on maximum cores and wall time, and dedicated use, should be directed to the HPC Help Desk. Request approval requires documentation of the requirement and associated justification, verification by the ARL DSRC support staff, and approval from the designated authority, as shown in the following table. The ARL DSRC Customer Success Director may permit special requests for HPC resources independent of this model for exceptional circumstances.

Approval Authorities for Special Resource Requests

Resource Request                                                 Approval Authority
Up to 10% of an HPC system/complex for one week or less          ARL DSRC Director or Designee
Up to 20% of an HPC system/complex for one week or less          S/AAA
Up to 30% of an HPC system/complex for two weeks or less         Army/Navy/AF Service Principal on HPC Advisory Panel
Up to 100% of an HPC system/complex for greater than two weeks   HPCMP Program Director or Designee

2.7. Account removal policy

This policy covers the disposition or removal of user data when the user is no longer eligible for a given HPCMP account on any one or more systems in the HPCMP inventory.

At the time a user becomes ineligible for an HPCMP user account, the user's access to that account is disabled.

The user and the Principal Investigator (PI) are responsible for arranging the disposition of the data prior to account deactivation. The user may request special assistance or specific exemptions or extensions, based on such criteria as availability of resources, technical difficulties, or other special needs. If the user does not request any assistance, then the respective center promptly contacts the user, the PI of the project, and the responsible S/AAA to determine the proposed disposition of the user's data. All data disposition actions are performed as specified in the HPCMP's Data Protection Policy. If the center is unable to reach the aforementioned individuals, or if the contacted person(s) does not respond before the account is deactivated, the user's data stored on systems or home directories is moved to archive storage, and one of the following two cases must hold:

  1. User has an account at another HPCMP center. Then, the user, the PI of the project or responsible S/AAA, as appropriate, has one year to arrange to move the data from the archive to the HPCMP Center where s/he has an active account. After this time period has expired, the center may delete the user's data.
  2. User does not have an account at another HPCMP center. Then, the user, the PI of the project, or responsible S/AAA, as appropriate, has one year to arrange to retrieve the data from the HPCMP resources. After this time period has expired, the center may delete the user's data.

Following the disposition of the user's data, the user account is removed from the system.

In special cases, such as, but not limited to, security incidents or HPCMP resource abuse, access to a user account can be immediately disabled and/or user data deleted, as appropriate for the circumstances as judged by the center or HPCMP.

Please note that exceptions to this general data disposition policy can and will be made as necessary, within the ability of the center to fulfill such requests, given reasonable justification as judged by the center. Also, contracts requiring data maintenance beyond the conditions of this policy cannot be accommodated by the center if the center is not a signatory to the contract; such contracts may be considered when exceptions are requested.

If you have any questions concerning this policy, please contact the HPC Help Desk at 1-877-222-2039 or via email at help@helpdesk.hpc.mil.

2.8. Communications policy

The ARL DSRC Help Desk Team communicates with users via e-mail and pertinent system messages about planned and unplanned outages, performance degradation, and network issues. The Team also notifies users of job run errors that may be causing operational issues with the system.

It is vital to the ARL DSRC's communication process, and mutually beneficial to our users, to understand the responsibilities of being a good citizen of the ARL DSRC. We ask users:

  • Please keep your HPC pIE account updated with your current email address so we can ensure vital information about our Center reaches you. Also, ensure the email address on file is a work/office address, not a personal one. Please contact your S/AAA to update your email address. Note that if the email address you give us is behind a firewall, you may need to arrange for your local system administrator to allow email from the ARL DSRC to pass through the firewall boundary to your work site.
  • Please check the Systems page for up-to-date resource availability. The HPC Training page has information on upcoming training opportunities. Updates to our system information may be found on the ARL DSRC Documentation page.

2.9. System availability policy

A system is declared down and made unavailable to users whenever a chronic and/or catastrophic hardware and/or software malfunction or an abnormal computer environment condition exists which could:

  1. Result in corruption of user data.
  2. Result in unpredictable and/or inaccurate runtime results.
  3. Result in a violation of the integrity of the DSRC user environment.
  4. Result in damage to the High Performance Computer System(s).

The integrity of the user environment is considered compromised anytime a user must modify his/her normal operation while logged into the DSRC. Examples of malfunctions are:

  1. User home ($HOME) directory not available.
  2. User Workspace ($WORKDIR) area not available.
  3. Archive system not available. (When the archive system is unavailable, the transfer queue is suspended, but logins and remaining queues stay enabled.)

When a system is declared down, based on a system administrator's and/or computer operator's judgment, users are prevented from using the affected system(s) and all existing batch jobs are prevented from running. Batch jobs held during a "down state" are run only after the system environment returns to a normal state.

Whenever there is a problem on one of the HPC systems that could be remedied by removing a part of the system from production (an activity called draining), it must first be determined how much of the system will be impacted by the draining, so the necessary levels of management and the user community can be briefed.

If the architecture of the HPC system allows a node to be removed from production with minimal impact to the system, the system administrators can decide to remove the node, notifying the operators for awareness.

If the architecture of the HPC system allows significant portions of the system to be removed from production while user production continues on a large part of the system, the system administrator, along with government and contractor management, can decide to remove that part of the system. The system should show that the domain or node is out of the normal queue for scheduling jobs, so the user community can determine its status. The system administrator advises operations, the ARL DSRC Help Desk, and the HPC Help Desk of this action.

In cases where $WORKDIR will be unavailable, or a complete system needs to be drained for maintenance, contractors and government director-level management are notified. In cases involving an entire system, the ARL DSRC Help Desk emails users of the downtime schedule and the schedule for returning the system to production.

2.10. Data import and export policy

2.10.1. Network file transfers

The preferred file transfer method is over the network using the encrypted (Kerberos) file transfer programs scp or sftp. Users can also contact the HPC Help Desk for assistance with the process. Depending on the nature of the transfer, transfer time may be improved by reordering the data retrieval from tape, taking advantage of available bandwidth to/from the Center, or dividing the transfer into smaller parts; the ARL DSRC staff assists users to the extent they are able. A physical media transfer may also be an option. Limitations such as available resources and network problems outside the Center can be expected, so allow sufficient time for transfers.
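For example, a typical Kerberized transfer of results to your local workstation might look like the following; the username, hostname, and path are hypothetical placeholders:

    # copy a tarred result set from the HPC system to the local machine
    scp user@system.arl.hpc.mil:/p/work1/user/results.tar.gz .

    # or browse and transfer interactively
    sftp user@system.arl.hpc.mil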

2.10.2. Reading/Writing media

To request a physical media data transfer, visit our Services section and follow the instructions for submitting a Data Transfer Request Form. Once approved, ARL DSRC staff will assist in transferring your data to or from physical media.

For outbound transfers, data for the request must be tarred or zipped on the HPC system, and the user must provide the physical media. Drives are formatted with a Linux ext filesystem; non-FIPS drives use a LUKS-encrypted filesystem, or individual files are encrypted, unless the data is approved for public release. Physical media can include the following:

  • New (i.e., completely unused) optical media
  • A new (i.e., completely unused) non-FIPS-compliant hard drive (External or SATA)
  • A FIPS-compliant external hard drive (does not have to be completely unused)
    • Used FIPS drives will be reformatted and have the PIN reset/changed by the Data Transfer Administrator.

For incoming transfers, users must provide the physical media with the data tarred or zipped; a sketch of preparing such data follows the list below. Drives with incoming data must be properly marked.

  • Filesystem should be Linux ext formatted; Windows formats will incur significant delays
  • Filesystem or individual files must be encrypted unless data is approved for public release
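As a rough sketch of packaging and encrypting data for a media transfer, assuming gpg is available for file-level encryption (filenames are placeholders):

    # package the data into a single archive file
    tar czf project_data.tar.gz project_dir/

    # encrypt the archive with a symmetric passphrase, unless the data
    # is approved for public release; produces project_data.tar.gz.gpg
    gpg --symmetric --cipher-algo AES256 project_data.tar.gz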

Questions or inquiries should be sent to data-transfer@arl.hpc.mil.

2.11. Account sharing policy

Users are responsible for all passwords, accounts, YubiKeys, RSA SecurID tokens, and associated PINs issued to them. Users must not share their passwords, accounts, YubiKeys, RSA SecurID tokens, or PINs with any other individual for any reason. Doing so is a violation of the contract users are required to sign to obtain access to DoD High Performance Computing Modernization Program (HPCMP) computational resources.

Upon discovery/notification of a violation of the above policy, the following actions are taken:

  1. The account (i.e., username) is disabled. No further logins are permitted.
  2. All account assets are frozen. File and directory permissions are set so no other users can access the account assets.
  3. Any executing jobs are permitted to complete; however, any jobs residing in input queues are deleted.
  4. The S/AAA who authorized the account is notified of the policy violation and the actions taken.

Upon the first occurrence of a violation of the above policy, the S/AAA has the authority to request the account be re-enabled. Upon the occurrence of a second or subsequent violation of the above policy, the account is only re-enabled if the user's supervisory chain of command, S/AAA, and the High Performance Computing Modernization Office (HPCMO) all agree the account should be re-enabled.

The disposition of account assets is determined by the S/AAA. The S/AAA can:

  1. Request account assets be transferred to another account.
  2. Request account assets be returned to the user.
  3. Request account assets be deleted, and the account closed.

If there are associate investigators who need access to ARL DSRC computer resources, we encourage them to apply for an account. Separate account holders may access common project data as authorized by the project leader.

3. Available resources

3.1. HPC systems

The ARL DSRC unclassified HPC systems are accessible through the Defense Research and Engineering Network (DREN) to all active users. Our current HPC systems include:

Jean is a Liqid system. It has 9 login nodes and 4 types of compute nodes for job execution. Jean uses HDR InfiniBand as its high-speed interconnect for MPI messages and IO traffic. Jean uses WEKA to manage its parallel file system.

Ruth is an HPE Cray EX4000. It has 12 login nodes and 6 types of compute nodes for job execution. Ruth uses Cray Slingshot as its 200 Gbps high-speed interconnect for MPI messages and IO traffic. Ruth uses Lustre to manage its parallel file system.

See the Systems page for more information about Jean and Ruth.

For information on restricted systems, see the Restricted Systems page (PKI required).

3.2. Data storage

3.2.1. File systems

Each HPC system has several file systems available for storing user data. Your personal directories on these file systems are commonly referenced via the $HOME, $WORKDIR, and $ARCHIVE_HOME environment variables. Other file systems may be available as well.

File System Environment Variables

Environment Variable  Description
$HOME                 Your home directory on the system
$WORKDIR              Your temporary work directory on a high-capacity, high-speed scratch file system used by running jobs
$ARCHIVE_HOME         Your archival directory on the archive server

For details about the specific file systems on each system, see the system user guides on the ARL DSRC Documentation page.

3.2.2. Archive system

All our HPC systems have access to an online archival system, which provides long-term storage for users' files on a petascale robotic tape library system. A 2-PB disk cache, divided among all ARCHIVE file systems, serves as a front end to the unclassified tape file system and temporarily holds files while they are being transferred to or from tape.

For information on using the archive server, see the ARL DSRC Archive Guide.

3.3. Computing environment

To ensure a consistent computing environment and user experience on all HPCMP HPC systems, all systems follow a standard configuration baseline. For more information on the policies defining the baseline configuration, see the Baseline Configuration Compliance Matrix. All systems run variants of the Linux operating system, but the computing environment varies by vendor and architecture due to vendor-specific enhancements. For enhanced security, ARL restricts access to Internet websites. If you get timeouts or SSL/TLS errors when trying to access a site, please open a ticket with the HPC Help Desk, providing the URL(s) and a brief justification, to request access.

3.3.1. Software

Each HPC system hosts a large variety of compiler environments, math libraries, programming tools, and third-party analysis applications which are available via loadable software modules. A list of software is available on the Software page, or for more up-to-date software information, use the module commands on the HPC systems. Specific details of the computing environment on each HPC system are discussed in the system user guides, available on the ARL DSRC Documentation page.
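For instance, the standard module commands below are the usual way to discover and load software on the HPC systems; the module name is a hypothetical example:

    module avail          # list software available on the system
    module list           # show currently loaded modules
    module load gcc       # load a module (hypothetical module name)
    module unload gcc     # remove it again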

To request additional software or to request access to restricted software, please contact the ARL DSRC Help Desk at dsrchelp@arl.army.mil.

3.3.2. Bring your own code

While all HPCMP HPC systems offer a diversity of open-source, commercial, and government software, there are times when we don't support the application codes and tools needed for specific projects. The following information describes a convenient way to use your own software on our systems.

Our HPC systems provide you with adequate file space to store your codes. Data stored in your home directory ($HOME) is backed up on a periodic basis. If you need more home directory space, you may submit a request to the HPC Help Desk at help@helpdesk.hpc.mil. For more details on home directories, see the Baseline Configuration (BC) policy FY12-01 (Minimum Home Directory Size and Backup Schedule).

If you need to share an application among multiple users, BC policy FY10-07 (Common Location to Maintain Codes) explains how to create a common location on the $PROJECTS_HOME file system to place applications and codes without using home directories or scrubbed scratch space. To request a new "project directory," please provide the following information to the HPC Help Desk:

  • Desired DSRC system where a project directory is being requested.
  • POC Information: Name of the sponsor of the project directory, user name, and contact information.
  • Short Description of Project: Short summary of the project describing the need for a project directory.
  • Desired Directory Name: This is the name of the directory created under $PROJECTS_HOME.
  • Is the code/data in the project directory restricted (e.g., ITAR, etc.)?
  • Desired Directory Owner: The user name to be assigned ownership of the directory.
  • Desired Directory Unix Group: The Unix group name to be assigned to the directory.
    (New Unix group names must be eight characters or less)
  • Additional users to be added to the group.

If the POC for the project directory ceases being an account holder on the system, project directories are handled according to the user data retention policies of the center.

Once the project directory is created, you can install software (custom or open source) in this directory. Then, depending on requirements, you can set file and/or directory permissions to allow any combination of group read, write, and execute privileges. Since this directory is fully owned by the POC, s/he can even make use of different groups within subdirectories to provide finer granularity of permissions.
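As a sketch of the kind of permission setup this enables (the group name and directory are hypothetical):

    # give the project group read/execute access to a tools subdirectory
    chgrp -R myproj $PROJECTS_HOME/mytools    # hypothetical group and directory
    chmod -R g+rX  $PROJECTS_HOME/mytools     # group may read and traverse
    chmod -R o-rwx $PROJECTS_HOME/mytools     # no access for other users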

Users are expected to ensure that any software or data placed on HPCMP systems is protected according to any external restrictions on the data. Users are also responsible for ensuring no unauthorized or malicious software is introduced to the HPCMP environment.

For installations involving restricted software, it is your responsibility to set up group permissions on the directories and protect the data. It is crucially important to note that there are users on the HPCMP systems who are not authorized to access restricted data. You may not run servers or use software that communicates to a remote system without prior authorization.

If you need help porting or installing your code, the HPC Help Desk provides a "Code Assist" team that specializes in helping users with installation and configuration issues for user supplied codes. To get help, simply contact the HPC Help Desk and open a ticket.

Please contact the HPC Help Desk to discuss any special requirements.

3.3.3. Batch schedulers

Our HPC systems use various batch schedulers to manage user jobs and system resources. Basic instructions and examples for using the scheduler on each system can be found in the system user guides. More extensive information can be found in the Scheduler Guides. These documents are available on the ARL DSRC Documentation page.

Schedulers place user jobs into different queues based on the project associated with the user account. Most users only have access to the debug, standard, transfer, HIE, and background queues, but other queues may be available to you depending on your project. For more information about the queues on a system, see the Scheduler Guides.

3.3.4. Advance Reservation Service (ARS)

Another way to schedule jobs is through the ARS. This service allows users to reserve resources for use at specific times and for specific durations. The ARS works in tandem with the batch scheduler to ensure your job runs at the scheduled time and that all required resources (i.e., nodes, licenses, etc.) are available when your job begins. For information on using the ARS, see the ARS User Guide.

3.4. Open OnDemand (OOD)

The OOD platform provides access to HPC resources from anywhere via a web browser. OOD offers a variety of interactive applications and desktops: slim and quick OOD desktops, an SRD desktop via OOD, preconfigured app tiles, web shells, a file manager, and more. It also offers login node desktops for lightweight everyday activities, VS Code, and Jupyter Notebook. For more details, please visit: https://centers.hpc.mil/users/tools.html#openOnDemand.

3.5. Secure Remote Desktop (SRD)

The SRD enables users to launch a gnome desktop on an HPC system via a downloadable Java interface client. This desktop is then piped to the user's local workstation (Linux, Mac, or Windows) for display. Once the desktop is launched, you can run any software application installed on the HPC system. For information on using SRD or to download the client, see the Secure Remote Desktop page on the DAV Center website.

3.6. Network connectivity

The ARL DSRC is a primary node on the Defense Research and Engineering Network (DREN), which provides up to 100-Gb/sec service to DoD HPCMP centers nationwide across a 100-Gb/sec backbone. We connect to the DREN via a 100-Gb/sec circuit linking us to the DREN backbone.

The DSRC's local network consists of a 100-Gb/sec fault-tolerant backbone with 40-Gb/sec connections to the HPC and archive systems.

4. How to access our systems

The HPCMP uses a network authentication protocol called Kerberos to authenticate user access to our HPC systems. Before logging in, you should download and install an HPCMP Kerberos client kit on your local system. For information about downloading and using these kits, visit the Kerberos & Authentication page and click on the tab for your platform. There you will find instructions for downloading and installing the kit, getting a ticket, and logging in.

After installing and configuring a Kerberos client kit, you can access our HPC systems via standard Kerberized commands, such as ssh. File transfers between local and remote systems can be accomplished via the scp or sftp commands. For additional information on using the Kerberos tools, see the Kerberos User Guide or review the tutorial video on Logging into an HPC System. Instructions for logging into each system can be found in the system user guides on the ARL DSRC Documentation page.
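Once a kit is installed, a typical login session follows the sketch below; the Kerberos realm, username, and hostname are hypothetical placeholders, and the authoritative steps are in the Kerberos User Guide:

    kinit user@HPCMP.HPC.MIL       # obtain a Kerberos ticket (hypothetical realm)
    klist                          # verify the ticket is valid
    ssh user@system.arl.hpc.mil    # log into an HPC system (hypothetical hostname)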

Another way to access the HPC systems is through Open OnDemand. For more information, please visit the Open OnDemand page.

For information on accessing restricted systems, see the system user guides on the Restricted Systems page (PKI required).

5. How to get help

For almost any issue, the first place you should turn for help is the HPC Help Desk. You can email the HPC Help Desk at help@helpdesk.hpc.mil. You can also contact the HPC Help Desk via phone, DSN, or even traditional mail. Full contact information for the Help Desk is on the Technical and Customer Support page. The HPC Help Desk can assist with a wide array of technical issues related to your account and your use of our systems. The HPC Help Desk can also assist in connecting you with various special-purpose groups to address your particular need.

5.1. User Productivity Enhancement and Training (PET)

The PET initiative gives users access to computational experts in many HPC technology areas. These HPC application experts help HPC users become more productive on HPCMP supercomputers. The PET initiative also leverages the expertise of academia and industry in new technologies and provides training on HPC-related topics. Help is available in specific computational technology areas covering a wide range of expertise, including algorithm development and implementation, code porting and development, performance analysis, application and I/O optimization, accelerator programming, preprocessing and grid generation, workflows, in-situ visualization, and data analytics.

To learn more about PET, see the User Productivity Enhancement and Training page. To request PET assistance, send an email to PET@hpc.mil.

5.2. User Advocacy Group (UAG)

The UAG provides a forum for users of HPCMP resources to influence policies and practices of the Program; to facilitate the exchange of information between the user community and the HPCMP; to serve as an advocate for HPCMP users; and to advise the High Performance Computing Modernization Program Office (HPCMPO) on policy and operational matters related to the HPCMP.

To learn more about the UAG, see the User Advocacy Group page (PKI required). To contact the UAG, send an email to hpc-uag@hpc.mil.

5.3. Baseline Configuration Team (BCT)

The BCT defines a common set of capabilities and functions so users can work more productively and collaboratively when using the HPC resources at multiple computing centers. To accomplish this, the BCT passes policies which collectively create a configuration baseline for all HPC systems.

To learn more about the BCT and its policies, see the Baseline Configuration page. To contact the BCT, send an email to BCTinput@afrl.hpc.mil.

5.4. Computational Research and Engineering Acquisition Tools and Environments (CREATE)

The CREATE program enhances the productivity of the DoD acquisition engineering workforce by providing high-fidelity design and analysis tools with capabilities greater than today's, reducing the acquisition development and test cycle. CREATE projects provide enhanced engineering design tools for the DoD HPC community.

To learn more about CREATE, visit the CREATE page or contact the CREATE Program Office at create@hpc.mil. You may also access the CREATE Community site (Registration and PKI required).

5.5. Data Analysis and Visualization Center (DAV Center)

The DAV Center serves the needs of DoD HPCMP scientists who must analyze an ever-increasing volume and complexity of data. Its mission is to put visualization and analysis tools and services into the hands of every user.

For more information about the DAV Center, visit the DAV Center website. To request assistance from the DAV Center, send an email to support@daac.hpc.mil.