The ARL DSRC unclassified network will be inaccessible on Saturday, 26 April 2014, from 0800 until 1200 while the network is migrated from DREN II to DREN III. Users will not be able to access any ARL DSRC computational resources during this outage, including the Enterprise Wide License Server (RLS). Previously submitted jobs that do not require the Enterprise Wide License Server will not be affected. Queued jobs that require it will not be able to start, and jobs already running may abort due to loss of communication with the server.
This outage will also make the Advance Reservation Service (ARS) unavailable during this time. Previously made reservations will not be affected, and you will still be able to submit jobs to them; however, new reservations cannot be made during the outage.
We apologize for the inconvenience this network outage will cause our users.
If you have any questions, please contact the Consolidated Customer Assistance Center (CCAC) at 877-CCAC-039 (1-877-222-2039).
The Navy DSRC is pleased to announce the release of the HPC Portal for all Navy DSRC Utility Server, Haise, and Kilrain users. All users of these systems will be able to access and authenticate to the HPC Portal without any additional action. The HPC Portal will be available starting Monday, April 14, 2014. This means users can access the Navy DSRC Utility Server, Haise, and Kilrain with only a web browser and their CAC or Yubikey.
What is the purpose of the HPC Portal?
The HPC Portal aims to enhance access and use of HPC computational and data storage resources through a web-based graphical interface, increasing transparent and user-friendly access to these resources for both new user communities and existing users, and establishing new ways to enable HPC workflows.
What is the URL to access the HPC Portal?
From that URL, users will be able to select from the available HPC Portals, including the Navy DSRC's. Users will be guided through the login process, which relies on the HPCMP OpenID mechanism. No software or Kerberos kit is required; any Navy DSRC user with a CAC or Yubikey will be able to access the portal.
Where can I read more about the HPC Portal, including information on Browser Compatibility?
http://www.mhpcc.hpc.mil/portal and https://portal.mhpcc.hpc.mil/nachos/info.groovy
What does the HPC Portal provide?
By default, the HPC Portal provides all users a browser-based X-terminal and access to specific applications, including: CREATE-AV Kestrel, CREATE-AV Helios, Distributed MATLAB, MATLAB IDE, FieldView, Pointwise, Kestrel UI (KUI), DaVinci, SENTRi, EnSight, ParaView, Tecplot 360, Capstone, and Status Dashboard. Access to some applications, such as CREATE-AV Kestrel, CREATE-AV Helios, and CREATE Capstone, is restricted due to ITAR and HPCMP requirements and requires additional approval.
What if I want to suggest or develop an application for the HPC Portal?
The HPC Portal team has created a Virtual Machine-based Software Development Kit (SDK) along with tutorials and examples. The SDK provides developers with an "HPC Portal in a Box" environment for standing up new applications.
The HPC Portal Team is interested in your thoughts, suggestions, and feedback. Please contact us at:
Portal Wiki: https://portal.navydsrc.hpc.mil/portal/group/home/forum
Via CCAC: https://help.ccac.hpc.mil
Hybrid computing is widely regarded as the future of High Performance Computing, and as such, the Navy DSRC is pleased to announce the availability of Intel Xeon Phi accelerated nodes on the IBM iDataPlex systems HAISE and KILRAIN. Intel Xeon Phi co-processors, based on the Many Integrated Core (MIC) architecture, can each support up to 240 threads of concurrent execution. A detailed Intel Xeon Phi Guide is available on the Navy DSRC website at: http://www.navydsrc.hpc.mil/docs/xeonPhiGuide.html.
Users will be able to access the nodes via the 'phi64' queue on each system. PBS jobs submitted to the 'phi64' queue will continue to be charged based on the usage of a standard compute node (# of nodes * 16 cores * # of hours). If you would like access to the phi64 queue, please send a brief note requesting access to email@example.com.
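As a worked example of that charging formula (a sketch; the function name is illustrative, not an HPCMP tool):

```python
def phi64_charge(nodes, hours):
    """Core-hours charged for a phi64 job: nodes * 16 cores * hours,
    the same rate as a standard compute node."""
    CORES_PER_NODE = 16  # two 8-core Sandy Bridge processors per node
    return nodes * CORES_PER_NODE * hours

# A 4-node job that runs for 2 hours is charged 128 core-hours.
print(phi64_charge(4, 2))  # 128
```

Note that the charge does not increase when the job uses the Xeon Phi co-processors; only the standard compute-node portion is billed.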
Each of these nodes pairs a compute node with an accelerator node. The compute-node portion contains two 8-core 2.6 GHz Intel Sandy Bridge processors, identical to those in the rest of the system, and 64 GB of memory. The Xeon Phi portion contains two Intel Xeon Phi 5110P co-processors, each with 60 cores and 8 GB of on-board memory. Each Xeon Phi core supports up to 4 execution threads, allowing for potentially extensive parallelism.
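Putting those figures together, a short sketch of the hardware parallelism available per accelerated node (numbers taken directly from the description above):

```python
# Parallelism on a Haise/Kilrain Xeon Phi node, per the figures above.
CARDS_PER_NODE = 2      # two Xeon Phi 5110P co-processors per node
CORES_PER_CARD = 60
THREADS_PER_CORE = 4

threads_per_card = CORES_PER_CARD * THREADS_PER_CORE  # threads per Phi
threads_per_node = CARDS_PER_NODE * threads_per_card  # threads per node
print(threads_per_card, threads_per_node)  # 240 480
```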
To properly support compilation of Intel Xeon Phi codes, the default Intel Compiler Suite and Intel Math Kernel Library (MKL) modules on Haise and Kilrain will both be set to version 13.1.
Please note: Intel Xeon Phi nodes on Haise and Kilrain currently only support offload mode and native mode.
In offload mode, code runs on the standard compute node portion of a Phi node and offloads segments of execution to the co-processor portion. For example, an MPI code with offload directives within it could run on multiple accelerated nodes, using the Intel Xeon Phi portion of each node to potentially speed up calculations.
In native mode, code runs directly on the co-processor. Currently, MPI codes running in native mode are limited to a single Phi node. However, Mellanox is expected to release an update to its OFED distribution in the next several months that will support native execution of MPI codes across multiple Intel Xeon Phi nodes.
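A typical native-mode workflow might look like the following sketch; the file names and the `mic0` host name are illustrative conventions, not site-verified commands, and require the Intel compiler module described above to be loaded.

```shell
# Cross-compile for the MIC architecture with the Intel compiler.
icc -mmic -openmp -o myapp.mic myapp.c

# Copy the binary to the first co-processor card and run it there.
scp myapp.mic mic0:/tmp/
ssh mic0 /tmp/myapp.mic
```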
For more information about developing code for the Intel Xeon Phi, including webinars, please see: http://software.intel.com/mic-developer
Users are invited to report problems and direct requests for unclassified assistance to the Consolidated Customer Assistance Center (CCAC) at 1-877-222-2039 or by E-mail to firstname.lastname@example.org.