Optimal Matching of Systems and Code Types

In March 2007, a user submitted a request to the BC team for a capability that would allow users to choose the right center/system/architecture for a given application. The BC team acknowledged the request but was unable to pursue it because of several higher-priority requests from other users. Independently, at the Spring 2009 Baseline Configuration Face-to-Face Meeting in Monterey, California, one of the three customers invited to the meeting expressed interest in a capability that matched systems and code types in some optimal fashion.

The BC team began its work on this project with a series of discussions with the HPCMP Benchmarking Team and the ezHPC team. These discussions revealed that the Benchmarking Team had already begun working on a solution to the two users' requests. The Benchmarking Team was subsequently invited to the Fall 2009 Baseline Configuration Face-to-Face Meeting in Savannah, Georgia, to present its work on matching systems and code types across the enterprise.

The Benchmarking Team's method can be summarized as follows. Benchmarks are run on all allocated machines using seven codes: Adaptive Mesh Refinement (AMR), Air Vehicles Unstructured Solver (AVUS), CTH, General Atomic and Molecular Electronic Structure System (GAMESS), HYbrid Coordinate Ocean Model (HYCOM), Improved Concurrent Electromagnetic Particle In Cell (ICEPIC), and Large-scale Atomic/Molecular Massively Parallel Simulator (LAMMPS). All runs use 256 cores (standard test case) and 1024 cores (large test case). The results are presented in matrix form, with a red-to-green color scale denoting each application's relative performance on a machine-by-machine basis.
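As an illustration of this presentation, the following minimal sketch (not the Benchmarking Team's actual tooling) normalizes a table of runtimes to the fastest machine for each code and renders the result as a red-to-green matrix. The system names and runtime values are invented for the example.

```python
# A minimal sketch of a relative-performance matrix; the systems and
# runtimes below are hypothetical, not measured benchmark results.
import numpy as np
import matplotlib.pyplot as plt

codes = ["AMR", "AVUS", "CTH", "GAMESS", "HYCOM", "ICEPIC", "LAMMPS"]
systems = ["System A", "System B", "System C"]  # placeholder machine names

# Hypothetical wall-clock runtimes in seconds for the 256-core standard case.
runtimes = np.array([
    [410.0, 380.0, 520.0],
    [295.0, 310.0, 270.0],
    [650.0, 590.0, 700.0],
    [180.0, 240.0, 200.0],
    [830.0, 760.0, 905.0],
    [450.0, 430.0, 505.0],
    [120.0, 150.0, 110.0],
])

# Normalize each row to the fastest system for that code, so 1.0 marks
# the best machine and larger values mark proportionally slower ones.
relative = runtimes / runtimes.min(axis=1, keepdims=True)

fig, ax = plt.subplots(figsize=(6, 5))
# RdYlGn_r maps low (good) ratios to green and high (poor) ratios to red.
im = ax.imshow(relative, cmap="RdYlGn_r", vmin=1.0, vmax=relative.max())
ax.set_xticks(range(len(systems)))
ax.set_xticklabels(systems)
ax.set_yticks(range(len(codes)))
ax.set_yticklabels(codes)
fig.colorbar(im, ax=ax, label="runtime relative to fastest system")
ax.set_title("Relative benchmark performance (hypothetical data)")
plt.tight_layout()
plt.show()
```

Normalizing each code's runtimes to its fastest machine keeps the comparison per-application, so a slow code on every machine does not wash out the machine-by-machine contrast the matrix is meant to show.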

The BC team approved the method adopted by the Benchmarking Team. In addition, the BC team asked the Benchmarking Team to expand its results by including brief algorithmic details for each code (e.g., dense linear solver, sparse LU factorization, singular value decomposition, QR factorization) to help users establish a relationship between their own codes and the seven applications used in the benchmarking.
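To suggest what such a summary might look like, the sketch below pairs each benchmark code with a broad, informal characterization of its dominant computation. These one-line descriptions are general knowledge about the applications, not the Benchmarking Team's published algorithmic details, and the keyword-matching helper is purely hypothetical.

```python
# Illustrative only: broad characterizations of each benchmark code's
# dominant computational pattern, not the Benchmarking Team's official
# algorithmic breakdown.
KERNEL_SUMMARY = {
    "AMR":    "adaptive mesh refinement on hierarchical grids",
    "AVUS":   "unstructured-grid CFD flow solver",
    "CTH":    "shock physics hydrocode on structured meshes",
    "GAMESS": "quantum chemistry with dense linear algebra and eigensolvers",
    "HYCOM":  "structured-grid ocean circulation model",
    "ICEPIC": "electromagnetic particle-in-cell simulation",
    "LAMMPS": "molecular dynamics with neighbor lists and short-range forces",
}

def closest_benchmarks(description: str) -> list[str]:
    """Naive keyword overlap between a user's code description and the
    benchmark summaries; a real tool would use richer metadata."""
    words = set(description.lower().split())
    return [code for code, summary in KERNEL_SUMMARY.items()
            if words & set(summary.lower().split())]

print(closest_benchmarks("structured-grid ocean model"))  # ['HYCOM']
```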

For more details on the work done by the Benchmarking Team, see the following link:

HPC Benchmark Runtimes (Making the Most of Your Allocation)*
* The Benchmarking website no longer exists.