Topic: MPI Test Suite
Date Received: July 2, 2008

The MPI Test Suite that is under development should address several additional issues besides raw compliance with the standards. Some of these additional issues are:

  1. Documenting the performance of some of the more important calls (e.g., simple sends and receives), measuring both latency and bandwidth as a function of message size and noting whether very large messages break the library (a minimal timing sketch follows this list).
  2. Documenting the performance for the collective calls as a function of both message size and number of MPI processes.
  3. Documenting memory usage as a function of the number of MPI processes (and optionally of message size) for commonly used calls, including some of the collectives.
  4. Documenting, and if at all possible standardizing, the environment variables used to tune the MPI library. The standardization can be achieved by writing a wrapper script that converts the standard environment variables (when defined) into the machine-specific environment variables.
  5. While technically this is not part of the test suite, it might be helpful to define a standard environment variable that users can test to see whether an MPI implementation is thread safe (i.e., whether MPI calls can be embedded in an OpenMP parallel loop). For most implementations I believe that this variable should still return FALSE, NO, or 0. However, if the documentation says that the implementation is thread safe, it can return TRUE, YES, SAFE, or 1. If it goes on to say that it is thread hot (efficient), it can return HOT or 2.
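
As a rough illustration of item 1, the following minimal sketch (not part of the proposed test suite) times a blocking ping-pong between ranks 0 and 1 over a range of message sizes and reports one-way latency and bandwidth; the message sizes, repetition count, and output format are arbitrary assumptions.

    /* Minimal ping-pong sketch for item 1: latency and bandwidth vs. message size.
     * Illustrative only; message sizes and repetition count are arbitrary choices. */
    #include <mpi.h>
    #include <stdio.h>
    #include <stdlib.h>

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);

        int rank, size;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);
        if (size < 2) {
            if (rank == 0) fprintf(stderr, "Run with at least 2 MPI processes.\n");
            MPI_Abort(MPI_COMM_WORLD, 1);
        }

        const int reps = 100;
        for (long bytes = 1; bytes <= (1L << 22); bytes *= 2) {
            char *buf = malloc(bytes);
            MPI_Barrier(MPI_COMM_WORLD);

            double t0 = MPI_Wtime();
            for (int i = 0; i < reps; i++) {
                if (rank == 0) {
                    MPI_Send(buf, (int)bytes, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
                    MPI_Recv(buf, (int)bytes, MPI_CHAR, 1, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
                } else if (rank == 1) {
                    MPI_Recv(buf, (int)bytes, MPI_CHAR, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
                    MPI_Send(buf, (int)bytes, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
                }
            }
            double t1 = MPI_Wtime();

            if (rank == 0) {
                double one_way_s = (t1 - t0) / (2.0 * reps);   /* half of the average round trip */
                printf("%10ld bytes  %10.2f us  %10.2f MB/s\n",
                       bytes, one_way_s * 1e6, bytes / one_way_s / 1e6);
            }
            free(buf);
        }

        MPI_Finalize();
        return 0;
    }

Halving the averaged round-trip time gives the one-way latency estimate, and dividing the message size by that time gives the corresponding bandwidth estimate.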

BC Team Feedback
Reply Date: October 9, 2008

The BC team has reviewed your recent input and recommends the following:

Performance and Memory Usage Documentation - Your request to document the performance of individual and collective calls, as well as memory usage, could be very useful to the DoD user community and is worthy of consideration. However, the BC team views MPI performance and memory usage issues as outside the BC scope. Meanwhile, the BC team is attempting to identify a group within or outside the HPCMP that may help document MPI-related performance and memory usage.

Standardizing MPI-related Environment Variables - The six participating HPCMP sites presently support multiple MPI stacks. Consequently, we feel that it would be very cumbersome to maintain a standardized set of environment variables across all the existing MPI stacks. An alternative would be to bring this request to the attention of the MPI-3 Forum.
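
Item 4 above proposes a wrapper that translates standardized tuning variables into machine-specific ones. The minimal sketch below illustrates one way that could look; the standard name MPI_STD_EAGER_LIMIT is purely hypothetical, and the implementation-specific variable names are illustrative examples that would need to be checked against each vendor's documentation.

    /* Hypothetical wrapper illustrating item 4: translate a "standard" tuning
     * variable into implementation-specific ones, then launch the real job.
     * MPI_STD_EAGER_LIMIT is an invented name, and the target variable names
     * are examples that must be verified against each MPI stack's documentation. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>

    int main(int argc, char **argv)
    {
        const char *eager = getenv("MPI_STD_EAGER_LIMIT");  /* hypothetical standard name */
        if (eager != NULL) {
            /* Forward the value under the names each supported stack expects;
             * overwrite = 0 so explicit user settings still take precedence. */
            setenv("I_MPI_EAGER_THRESHOLD", eager, 0);         /* e.g., Intel MPI */
            setenv("OMPI_MCA_btl_sm_eager_limit", eager, 0);   /* e.g., Open MPI */
            setenv("MPICH_GNI_MAX_EAGER_MSG_SIZE", eager, 0);  /* e.g., Cray MPICH */
        }

        if (argc < 2) {
            fprintf(stderr, "usage: %s <launcher> [args...]\n", argv[0]);
            return 1;
        }

        /* Replace this process with the site's actual launcher (mpirun, aprun, ...). */
        execvp(argv[1], &argv[1]);
        perror("execvp");
        return 1;
    }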

Level of Thread Support of an MPI Implementation - We are pleased to let you know that the BC Project entitled MPI Test Suite (FY06-16) will report the level of thread support provided by the MPI implementation (MPI_THREAD_SINGLE, MPI_THREAD_FUNNELED, MPI_THREAD_SERIALIZED, or MPI_THREAD_MULTIPLE).
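
For reference, the level of thread support an implementation actually grants can be queried at run time with MPI_Init_thread; the short sketch below requests MPI_THREAD_MULTIPLE and prints whatever level the library provides (the output format is arbitrary).

    /* Query the level of thread support an MPI implementation provides.
     * Requests MPI_THREAD_MULTIPLE and reports what the library actually grants. */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int provided, rank;
        MPI_Init_thread(&argc, &argv, MPI_THREAD_MULTIPLE, &provided);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        if (rank == 0) {
            const char *name =
                provided == MPI_THREAD_SINGLE     ? "MPI_THREAD_SINGLE"     :
                provided == MPI_THREAD_FUNNELED   ? "MPI_THREAD_FUNNELED"   :
                provided == MPI_THREAD_SERIALIZED ? "MPI_THREAD_SERIALIZED" :
                                                    "MPI_THREAD_MULTIPLE";
            printf("Provided thread support level: %s\n", name);
        }

        MPI_Finalize();
        return 0;
    }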

We thank you for your valued input, and hope that you find the BC team response satisfactory.

Note added on February 27, 2020: In the original reply to the user, the BC Team brought an MPI benchmark called SKaMPI to the user's attention and included a link maintained by the SKaMPI benchmark team. Since then, the project appears to have been abandoned and the link discontinued, eliminating the need to reference SKaMPI any further.