MPI Test Suite Result Details for
MPT MPI 2.20 on Koehr (KOEHR.ARL.HPC.MIL)
Run Environment
- HPC Center: ARL
- HPC System: SGI ICE_X (Koehr)
- Run Date: Tue Jan 5 11:05:12 UTC 2021
- MPI: MPT MPI 2.20 (Implements MPI 3.1 Standard)
- Shell: /bin/sh
- Launch Command: /p/app/hpe/mpt-2.20/bin/mpirun
Language | Executable | Path |
---|---|---|
C | mpicc | /p/app/hpe/mpt-2.20/bin/mpicc |
C++ | mpicxx | /p/app/hpe/mpt-2.20/bin/mpicxx |
F77 | mpif77 | /p/app/hpe/mpt-2.20/bin/mpif77 |
F90 | mpif08 | /p/app/hpe/mpt-2.20/bin/mpif08 |
Variable Name | Value |
---|---|
MPI_UNIVERSE | 33 |
MPI_ROOT | /p/app/hpe/mpt-2.20 |
Topology - Score: 100% Passed
The network topology tests examine the operation of specific communication patterns, such as Cartesian and graph topologies.
Passed MPI_Cart_create() test 1 - cartcreates
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 4
Test Description:
This test creates a cartesian mesh and tests for errors.
No errors
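For reference, the pattern this test exercises looks roughly like the following minimal sketch (not the suite's cartcreates source; the 2D grid, periodicity, and reorder flag are illustrative):

```c
/* Minimal sketch: build a 2D periodic Cartesian communicator and
 * query this rank's coordinates. The grid shape is chosen by MPI. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int dims[2] = {0, 0}, periods[2] = {1, 1}, coords[2], size, rank;
    MPI_Comm cart;

    MPI_Init(&argc, &argv);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    MPI_Dims_create(size, 2, dims);      /* fill in a balanced 2D grid */
    MPI_Cart_create(MPI_COMM_WORLD, 2, dims, periods, 1 /* reorder */, &cart);
    MPI_Comm_rank(cart, &rank);
    MPI_Cart_coords(cart, rank, 2, coords);
    printf("rank %d at (%d,%d)\n", rank, coords[0], coords[1]);
    MPI_Comm_free(&cart);
    MPI_Finalize();
    return 0;
}
```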
Passed MPI_Cart_map() test 2 - cartmap1
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 4
Test Description:
This test creates a cartesian map and tests for errors.
No errors
Passed MPI_Cart_shift() test - cartshift1
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 4
Test Description:
This test exercises MPI_Cart_shift().
No errors
Passed MPI_Cart_sub() test - cartsuball
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 4
Test Description:
This test exercises MPI_Cart_sub().
No errors
Passed MPI_Cartdim_get() test - cartzero
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 4
Test Description:
Check that the MPI implementation properly handles zero-dimensional Cartesian communicators - the original standard implies that these should be consistent with higher dimensional topologies and therefore should work with any MPI implementation. MPI 2.1 made this requirement explicit.
No errors
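A minimal sketch of the zero-dimensional case (again, not the suite's source): with ndims = 0 the empty product of dimensions is 1, so a single process receives a size-1 communicator and the rest get MPI_COMM_NULL.

```c
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int dims[1] = {0}, periods[1] = {0}, ndims = -1;
    MPI_Comm cart;

    MPI_Init(&argc, &argv);
    MPI_Cart_create(MPI_COMM_WORLD, 0, dims, periods, 0, &cart);
    if (cart != MPI_COMM_NULL) {
        MPI_Cartdim_get(cart, &ndims);   /* must report 0 dimensions */
        printf("zero-dimensional communicator, ndims = %d\n", ndims);
        MPI_Comm_free(&cart);
    }
    MPI_Finalize();
    return 0;
}
```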
Passed MPI_Topo_test() test - dgraph_unwgt
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 4
Test Description:
Specify a distributed graph of a bidirectional ring of the MPI_COMM_WORLD communicator. Thus each node in the graph has a left and right neighbor.
No errors
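The bidirectional ring can be described with MPI_Dist_graph_create_adjacent(); a minimal unweighted sketch, assuming each rank lists its left and right neighbors as both sources and destinations:

```c
#include <mpi.h>

int main(int argc, char **argv)
{
    int rank, size, nbrs[2], topo_type;
    MPI_Comm ring;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    nbrs[0] = (rank - 1 + size) % size;   /* left neighbor  */
    nbrs[1] = (rank + 1) % size;          /* right neighbor */
    MPI_Dist_graph_create_adjacent(MPI_COMM_WORLD,
                                   2, nbrs, MPI_UNWEIGHTED,
                                   2, nbrs, MPI_UNWEIGHTED,
                                   MPI_INFO_NULL, 0, &ring);
    MPI_Topo_test(ring, &topo_type);      /* expect MPI_DIST_GRAPH */
    MPI_Comm_free(&ring);
    MPI_Finalize();
    return 0;
}
```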
Passed MPI_Dims_create() test - dims1
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 4
Test Description:
This test uses multiple values for the arguments of MPI_Dims_create() and tests whether the product of the ndims (number of dimensions) returned dimensions equals nnodes (number of nodes), thereby determining if the decomposition is correct. The test also checks for compliance with MPI standard section 6.5 regarding decomposition with increasing dimensions. The test considers dimensions 2-4.
No errors
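The property being verified is easy to state in code; a minimal sketch with an illustrative nnodes value (not tied to the run above):

```c
/* Sketch: the product of the dimensions returned by MPI_Dims_create()
 * must equal nnodes. Zeros in dims[] let MPI choose freely. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int nnodes = 24;                     /* illustrative node count */

    MPI_Init(&argc, &argv);
    for (int ndims = 2; ndims <= 4; ndims++) {
        int dims[4] = {0, 0, 0, 0}, prod = 1;
        MPI_Dims_create(nnodes, ndims, dims);
        for (int i = 0; i < ndims; i++)
            prod *= dims[i];
        printf("ndims=%d product=%d (%s)\n", ndims, prod,
               prod == nnodes ? "ok" : "WRONG");
    }
    MPI_Finalize();
    return 0;
}
```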
Passed MPI_Dims_create() test - dims2
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 1
Test Description:
This test is similar to topo/dims1 but only exercises dimensions 2 and 4, including test cases where all dimensions are specified.
No errors
Passed MPI_Dims_create() test - dims3
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 1
Test Description:
This test is similar to topo/dims1 but only considers special cases using dimensions 3 and 4.
No errors
Passed MPI_Dist_graph_create test - distgraph1
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 4
Test Description:
This test exercises MPI_Dist_graph_create() and MPI_Dist_graph_create_adjacent().
No errors
Passed MPI_Graph_create() test 1 - graphcr2
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 4
Test Description:
Create a communicator with a graph that contains null edges and one that contains duplicate edges.
No errors
Passed MPI_Graph_create() test 2 - graphcr
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 4
Test Description:
Create a communicator with a graph that contains no processes.
No errors
Passed MPI_Graph_map() test - graphmap1
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 4
Test Description:
Simple test of MPI_Graph_map().
No errors
Passed Neighborhood routines test - neighb_coll
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 4
Test Description:
A basic test for the 10 (5 patterns x {blocking,non-blocking}) MPI-3 neighborhood collective routines.
No errors
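One of the ten routines, sketched on a periodic 1D Cartesian ring (illustrative; the suite covers all five patterns in both blocking and non-blocking form):

```c
/* Sketch: each rank gathers one int from each of its two ring
 * neighbors with MPI_Neighbor_allgather(). */
#include <mpi.h>

int main(int argc, char **argv)
{
    int size, rank, dims[1], periods[1] = {1};
    int sendval, recvvals[2];
    MPI_Comm ring;

    MPI_Init(&argc, &argv);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    dims[0] = size;
    MPI_Cart_create(MPI_COMM_WORLD, 1, dims, periods, 0, &ring);
    MPI_Comm_rank(ring, &rank);
    sendval = rank;
    /* recvvals[0] <- left neighbor, recvvals[1] <- right neighbor */
    MPI_Neighbor_allgather(&sendval, 1, MPI_INT,
                           recvvals, 1, MPI_INT, ring);
    MPI_Comm_free(&ring);
    MPI_Finalize();
    return 0;
}
```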
Passed MPI_Topo_test dup test - topodup
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 4
Test Description:
Create a cartesian topology, get its characteristics, then dup it and check that the new communicator has the same properties.
No errors
Passed MPI_Topo_test datatype test - topotest
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 4
Test Description:
Check that topo test returns the correct type, including MPI_UNDEFINED.
No errors
Basic Functionality - Score: 98% Passed
This group features tests that emphasize basic MPI functionality such as initializing MPI and retrieving its rank.
Passed Intracomm communicator test - mtestcheck
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 1
Test Description:
This program calls MPI_Reduce with all Intracomm Communicators.
No errors
Passed MPI_Abort() return exit test - abortexit
Build: Passed
Execution: Failed
Exit Status: Intentional_failure_was_successful
MPI Processes: 1
Test Description:
This program calls MPI_Abort and confirms that the exit status in the call is returned to the invoking environment.
MPI_Abort() with return exit code:6 MPT ERROR: Rank 0(g:0) is aborting with error code 6. Process ID: 152110, Host: r2i7n16, Program: /p/home/withheld/BCT_MPI/mpt_2.20/mpitests/util/abortexit MPT Version: HPE MPT 2.20 08/30/19 04:33:45 MPT: --------stack traceback------- MPT: Attaching to program: /proc/152110/exe, process 152110 MPT: (no debugging symbols found)...done. MPT: [New LWP 152130] MPT: [Thread debugging using libthread_db enabled] MPT: Using host libthread_db library "/lib64/libthread_db.so.1". MPT: (no debugging symbols found)...done. MPT: (no debugging symbols found)...done. MPT: (no debugging symbols found)...done. MPT: (no debugging symbols found)...done. MPT: (no debugging symbols found)...done. MPT: (no debugging symbols found)...done. MPT: (no debugging symbols found)...done. MPT: (no debugging symbols found)...done. MPT: (no debugging symbols found)...done. MPT: (no debugging symbols found)...done. MPT: (no debugging symbols found)...done. MPT: (no debugging symbols found)...done. MPT: (no debugging symbols found)...done. MPT: (no debugging symbols found)...done. MPT: (no debugging symbols found)...done. MPT: (no debugging symbols found)...done. MPT: (no debugging symbols found)...done. MPT: (no debugging symbols found)...done. MPT: (no debugging symbols found)...done. MPT: (no debugging symbols found)...done. MPT: (no debugging symbols found)...done. MPT: (no debugging symbols found)...done. MPT: 0x00002aaaaaee41d9 in waitpid () from /lib64/libpthread.so.0 MPT: Missing separate debuginfos, use: debuginfo-install glibc-2.17-307.el7.1.x86_64 libbitmask-2.0-sgi720r52.rhel76.x86_64 libcpuset-1.0-sgi720r102.rhel76.x86_64 libgcc-4.8.5-39.el7.x86_64 libibverbs-22.4-2.el7_8.x86_64 libnl3-3.2.28-4.el7.x86_64 libpsm2-11.2.80-1.x86_64 numactl-libs-2.0.12-5.el7.x86_64 numatools-2.0-sgi720r149.rhel76.x86_64 MPT: (gdb) #0 0x00002aaaaaee41d9 in waitpid () from /lib64/libpthread.so.0 MPT: #1 0x00002aaaab61d806 in mpi_sgi_system ( MPT: #2 MPI_SGI_stacktraceback ( MPT: header=header@entry=0x7fffffffc490 "MPT ERROR: Rank 0(g:0) is aborting with error code 6.\n\tProcess ID: 152110, Host: r2i7n16, Program: /p/home/withheld/BCT_MPI/mpt_2.20/mpitests/util/abortexit\n\tMPT Version: HPE MPT 2.20 08/30/19 04:33:4"...) at sig.c:340 MPT: #3 0x00002aaaab565fc9 in print_traceback (ecode=ecode@entry=6) at abort.c:246 MPT: #4 0x00002aaaab56629a in PMPI_Abort (comm=<optimized out>, errorcode=6) MPT: at abort.c:68 MPT: #5 0x0000000000402703 in main () MPT: (gdb) A debugging session is active. MPT: MPT: Inferior 1 [process 152110] will be detached. MPT: MPT: Quit anyway? (y or n) [answered Y; input not from terminal] MPT: Detaching from program: /proc/152110/exe, process 152110 MPT: [Inferior 1 (process 152110) detached] MPT: -----stack traceback ends----- MPT ERROR: MPI_COMM_WORLD rank 0 has terminated without calling MPI_Finalize() aborting job
Passed Send/Recv test 1 - srtest
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 2
Test Description:
This is a basic test of the send/receive with a barrier using MPI_Send() and MPI_Recv().
No errors
Passed Send/Recv test 2 - self
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 1
Test Description:
This test uses MPI_Sendrecv() sent from and to rank=0.
No errors.
Passed Basic Send/Recv Test - sendrecv
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 2
Test Description:
This test sends the length of a message, followed by the message body.
No errors.
Passed Message patterns test - patterns
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 2
Test Description:
This test sends/receives a number of messages in different patterns to make sure that all messages are received in the order they are sent. Two processes are used in the test.
No errors.
Passed Elapsed walltime test - wtime
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 1
Test Description:
This test measures how accurately MPI can measure 1 second.
sleep(1): start:4.11786e+06, finish:4.11786e+06, duration:1.00006 No errors.
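The measurement amounts to bracketing sleep(1) with MPI_Wtime(); a minimal sketch:

```c
#include <mpi.h>
#include <stdio.h>
#include <unistd.h>

int main(int argc, char **argv)
{
    double start, finish;

    MPI_Init(&argc, &argv);
    start = MPI_Wtime();
    sleep(1);                            /* the interval being timed */
    finish = MPI_Wtime();
    printf("sleep(1) took %.5f s (clock tick = %.3e s)\n",
           finish - start, MPI_Wtick());
    MPI_Finalize();
    return 0;
}
```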
Passed Const test - const
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 2
Test Description:
This test is designed to test the new MPI-3.0 const cast applied to a "const *" buffer pointer.
No errors.
Passed Init argument test - init_args
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 1
Test Description:
In MPI-1.1, it is explicitly stated that an implementation is allowed to require that the arguments argc and argv passed by an application to MPI_Init in C be the same arguments passed into the application as the arguments to main. In MPI-2, implementations are not allowed to impose this requirement. Conforming implementations of MPI allow applications to pass NULL for both the argc and argv arguments of MPI_Init(). This test prints the result of the error status of MPI_Init(). If the test completes without error, it reports 'No errors.'
MPI_INIT accepts Null arguments for MPI_init(). No errors
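The NULL-argument form the test relies on, as a minimal sketch:

```c
/* Sketch: a conforming MPI implementation accepts NULL for both
 * arguments of MPI_Init(). */
#include <mpi.h>
#include <stdio.h>

int main(void)
{
    if (MPI_Init(NULL, NULL) != MPI_SUCCESS)
        printf("MPI_Init(NULL, NULL) failed\n");
    else
        printf("MPI_Init(NULL, NULL) accepted\n");
    MPI_Finalize();
    return 0;
}
```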
Passed MPI Attributes test - attrself
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 1
Test Description:
This is a test of creating and inserting attributes in different orders to ensure that the list management code handles all cases.
No errors
Passed MPI_Finalized() test - finalized
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 1
Test Description:
This tests whether MPI_Finalized() works correctly if MPI_Init() was not called. This behavior is not defined by the MPI standard, therefore this test is not guaranteed.
No errors
Passed MPI_{Is,Query}_thread() test - initstat
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 1
Test Description:
This test examines the MPI_Is_thread_main() and MPI_Query_thread() calls after initialization using MPI_Init_thread().
No errors
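A minimal sketch of the initialization/query sequence (the requested thread level is illustrative):

```c
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int provided, claimed, is_main;

    MPI_Init_thread(&argc, &argv, MPI_THREAD_MULTIPLE, &provided);
    MPI_Query_thread(&claimed);     /* should match 'provided' */
    MPI_Is_thread_main(&is_main);   /* true in the initializing thread */
    printf("provided=%d claimed=%d main=%d\n", provided, claimed, is_main);
    MPI_Finalize();
    return 0;
}
```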
Passed MPI_Get_library_version test - library_version
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 1
Test Description:
MPI-3.0 Test returns MPI library version.
HPE MPT 2.20 08/30/19 04:33:45 No errors
Passed MPI_Wtime() test - timeout
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 2
Test Description:
This program tests the ability of mpiexec to timeout a process after no more than 3 minutes. By default, it will run for 30 secs.
No errors
Passed MPI_Get_version() test - version
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 1
Test Description:
This MPI-3.0 test prints the MPI version. If running a version of MPI < 3.0, it simply prints "No Errors".
No errors
Passed MPI_ANY_{SOURCE,TAG} test - anyall
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 2
Test Description:
This test uses MPI_ANY_SOURCE and MPI_ANY_TAG on an MPI_Irecv().
No errors
Passed MPI_Status large count test - big_count_status
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 1
Test Description:
This test manipulates an MPI status object using MPI_Status_set_elements_x() with a large count value.
No errors
Passed MPI_BOTTOM test - bottom
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 2
Test Description:
This test makes use of MPI_BOTTOM in communication.
No errors
Passed MPI_Bsend() test 1 - bsend1
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 1
Test Description:
This is a simple program that tests MPI_Bsend().
No errors
Passed MPI_Bsend() test 2 - bsend2
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 1
Test Description:
This is a simple program that tests bsend.
No errors
Passed MPI_Bsend() test 3 - bsend3
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 1
Test Description:
This is a simple program that tests bsend.
No errors
Passed MPI_Bsend() test 4 - bsend4
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 1
Test Description:
This is a simple program that tests bsend.
No errors
Passed MPI_Bsend() test 5 - bsend5
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 4
Test Description:
This is a simple program that tests bsend.
No errors
Passed MPI_Bsend() alignment test - bsendalign
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 2
Test Description:
Test bsend with a buffer with alignment between 1 and 7 bytes.
No errors
Passed MPI_Bsend() ordered test - bsendfrag
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 2
Test Description:
Test bsend message handling where different messages are received in different orders.
No errors
Passed MPI_Bsend() detach test - bsendpending
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 2
Test Description:
Test the handling of MPI_Bsend() operations when a detach occurs before the bsend data has been sent.
No errors
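Buffered sends always follow the attach/detach pattern sketched below (minimal, not the suite's bsendpending source); the detach is what blocks until buffered data has drained:

```c
/* Sketch: attach a user buffer, buffered-send one int from rank 0
 * to rank 1, then detach. Run with two processes. */
#include <mpi.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    int rank, msg = 42, bufsize;
    void *buf, *detached;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    bufsize = MPI_BSEND_OVERHEAD + sizeof(int);
    buf = malloc(bufsize);
    MPI_Buffer_attach(buf, bufsize);
    if (rank == 0)
        MPI_Bsend(&msg, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
    else if (rank == 1)
        MPI_Recv(&msg, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
    MPI_Buffer_detach(&detached, &bufsize);  /* waits for pending bsends */
    free(detached);
    MPI_Finalize();
    return 0;
}
```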
Passed MPI_Irecv() cancelled test - cancelrecv
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 2
Test Description:
This test attempts to cancel a receive request.
No errors
Passed Input queuing test - eagerdt
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 2
Test Description:
Test of a large number of MPI datatype messages with no preposted receive, so that an MPI implementation may have to queue up messages on the sending side.
No errors
Passed Generalized request test - greq1
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 1
Test Description:
Simple test of generalized requests. This simple code allows us to check that requests can be created, tested, and waited on in the case where the request is complete before the wait is called.
No errors
Passed MPI_Send() intercomm test - icsend
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 4
Test Description:
Simple test of intercommunicator send and receive.
No errors
Passed MPI_Test() pt2pt test - inactivereq
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 1
Test Description:
This test program checks that the point-to-point completion routines can be applied to an inactive persistent request, as required by the MPI-1 standard (section 3.7.3): it is allowed to call MPI_Test with a null or inactive request argument; in such a case the operation returns with flag = true and an empty status.
No errors
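The required behavior can be demonstrated with a persistent request that is never started; a minimal sketch:

```c
/* Sketch: MPI_Test on an inactive persistent request must return
 * flag = true with an "empty" status. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int buf = 0, flag;
    MPI_Request req;
    MPI_Status status;

    MPI_Init(&argc, &argv);
    /* Created inactive: no MPI_Start() has been issued yet. */
    MPI_Send_init(&buf, 1, MPI_INT, 0, 0, MPI_COMM_SELF, &req);
    MPI_Test(&req, &flag, &status);
    printf("inactive request: flag=%d (expected 1)\n", flag);
    MPI_Request_free(&req);
    MPI_Finalize();
    return 0;
}
```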
Passed MPI_Isend() root test 1 - isendself
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 1
Test Description:
This is a simple test case of sending a non-blocking message to the root process.
No errors
Passed MPI_Isend() root test 2 - isendselfprobe
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 1
Test Description:
This is a simple test case of sending a non-blocking message to the root process.
No errors
Passed MPI_Isend() root test 3 - issendselfcancel
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 1
Test Description:
This test case sends a non-blocking synchronous send to the root process, cancels it, then attempts to read it.
No errors
Passed MPI_Mprobe() test - mprobe1
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 2
Test Description:
This test uses MPI_Mprobe() to get the status of a pending receive, then calls MPI_Mrecv() with that status value.
No errors
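A minimal sketch of the matched probe/receive pattern (message contents are illustrative; run with two processes):

```c
/* Sketch: MPI_Mprobe()/MPI_Mrecv() remove the race inherent in
 * MPI_Probe() + MPI_Recv() when multiple threads receive. */
#include <mpi.h>

int main(int argc, char **argv)
{
    int rank, val = 7, count;
    MPI_Message msg;
    MPI_Status status;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    if (rank == 0) {
        MPI_Send(&val, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        MPI_Mprobe(0, 0, MPI_COMM_WORLD, &msg, &status);
        MPI_Get_count(&status, MPI_INT, &count);  /* size known first */
        MPI_Mrecv(&val, count, MPI_INT, &msg, &status);
    }
    MPI_Finalize();
    return 0;
}
```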
Passed Ping flood test - pingping
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 2
Test Description:
This test sends a large number of messages in a loop in the source process, and receives a large number of messages in a loop in the destination process.
No errors
Passed MPI_Probe() test 2 - probenull
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 1
Test Description:
This program checks that MPI_Iprobe and MPI_Probe correctly handle a source of MPI_PROC_NULL.
No errors
Passed MPI_Probe() test 1 - probe-unexp
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 4
Test Description:
This program verifies that MPI_Probe() is operating properly in the face of unexpected messages arriving after MPI_Probe() has been called. This program may hang if MPI_Probe() does not return when the message finally arrives.
No errors
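The usual probe-then-receive pattern the test stresses, as a minimal two-rank sketch (message size illustrative):

```c
/* Sketch: probe for a message of unknown size, then allocate and
 * receive it. Run with two processes. */
#include <mpi.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    int rank, count;
    MPI_Status status;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    if (rank == 0) {
        int data[100] = {0};
        MPI_Send(data, 100, MPI_INT, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        MPI_Probe(0, 0, MPI_COMM_WORLD, &status);  /* blocks until arrival */
        MPI_Get_count(&status, MPI_INT, &count);
        int *buf = malloc(count * sizeof(int));
        MPI_Recv(buf, count, MPI_INT, 0, 0, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);
        free(buf);
    }
    MPI_Finalize();
    return 0;
}
```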
Failed Many send/cancel test 1 - pscancel
Build: Passed
Execution: Failed
Exit Status: Application_timed_out
MPI Processes: 2
Test Description:
Test of various send cancel calls.
No errors
Passed Many send/cancel test 2 - rcancel
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 2
Test Description:
Test of various receive cancel calls, with multiple requests to cancel.
No errors
Passed MPI_Isend()/MPI_Request test - rqfreeb
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 4
Test Description:
Test MPI_Ibsend and MPI_Request_free.
No errors
Passed MPI_Request_get_status() test - rqstatus
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 2
Test Description:
Test MPI_Request_get_status(). The test also checks that MPI_REQUEST_NULL and MPI_STATUS_IGNORE work as arguments as required beginning with MPI-2.2.
No errors
Passed MPI_Cancel() test 2 - scancel2
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 2
Test Description:
Test of send cancel (failure) calls.
No errors
Passed MPI_Cancel() test 1 - scancel
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 2
Test Description:
Test of various send cancel calls.
No errors
Passed MPI_Request() test 3 - sendall
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 4
Test Description:
This test issues many non-blocking receives followed by many blocking MPI_Send() calls, then issues an MPI_Wait() on all pending receives. When complete, the program prints the elapsed time measured with MPI_Wtime().
No errors
Passed Race condition test - sendflood
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 8
Test Description:
Run this test with 8 processes. This test was submitted as a result of problems seen with the ch3:shm device on a Solaris system. The symptom is that the test hangs; this is due to losing a message, probably due to a race condition in a message-queue update.
No errors
Passed MPI_{Send,Receive} test 1 - sendrecv1
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 4
Test Description:
This is a simple test of Send-Recv.
No errors
Passed MPI_{Send,Receive} test 2 - sendrecv2
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 2
Test Description:
This is a simple test of various Send-Recv calls.
No errors
Passed MPI_{Send,Receive} test 3 - sendrecv3
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 2
Test Description:
Head to head send-recv to test backoff in device when large messages are being transferred.
No errors
Passed Preposted receive test - sendself
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 1
Test Description:
Test of sending to self (root) (with a preposted receive).
No errors
Passed MPI_Waitany() test 1 - waitany-null
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 1
Test Description:
This is a simple test of MPI_Waitany().
No errors
Passed MPI_Waitany() test 2 - waittestnull
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 1
Test Description:
This program checks that the various MPI_Test and MPI_Wait routines allow both null requests and, in the multiple-completion cases, empty lists of requests.
No errors
Passed Simple thread test 1 - initth2
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 2
Test Description:
The test initializes a thread, then calls MPI_Finalize() and prints "No errors".
No errors
Passed Simple thread test 2 - initth
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 1
Test Description:
This is a simple test that calls MPI_Finalize() after initializing a thread, so the only action is to write "No errors".
No errors
Communicator Testing - Score: 100% Passed
This group features tests that emphasize MPI calls that create, manipulate, and delete MPI Communicators.
Passed Comm_split test 2 - cmsplit2
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 12
Test Description:
This test ensures that MPI_Comm_split breaks ties in key values by using the original rank in the input communicator. This typically corresponds to the difference between using a stable sort or using an unstable sort. It checks all sizes from 1..comm_size(world)-1, so this test does not need to be run multiple times at process counts from a higher-level test driver.
No errors
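The tie-break rule can be checked directly: if every rank passes the same color and key, the new ranks must preserve the old order. A minimal sketch (not the cmsplit2 source):

```c
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int wrank, nrank;
    MPI_Comm newcomm;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &wrank);
    /* All ranks tie on the key, so ordering must follow world rank. */
    MPI_Comm_split(MPI_COMM_WORLD, 0 /* color */, 0 /* key */, &newcomm);
    MPI_Comm_rank(newcomm, &nrank);
    if (nrank != wrank)
        printf("tie-break violated: world %d -> new %d\n", wrank, nrank);
    MPI_Comm_free(&newcomm);
    MPI_Finalize();
    return 0;
}
```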
Passed Comm_split test 3 - cmsplit
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 4
Test Description:
Test comm split.
No errors
Passed Comm_split test 4 - cmsplit_type
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 4
Test Description:
This test only checks that the MPI_Comm_split_type routine doesn't fail. It does not check for correct behavior.
No errors
Passed Comm creation test - commcreate1
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 8
Test Description:
Check that Communicators can be created from various subsets of the processes in the communicator.
No errors
Passed Comm_create_group test 2 - comm_create_group4
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 4
Test Description:
This routine creates/frees groups using different schemes.
No errors
Passed Comm_create_group test 3 - comm_create_group8
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 8
Test Description:
This routine creates/frees groups using different schemes.
No errors
Passed Comm_create_group test 4 - comm_group_half2
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 2
Test Description:
This routine creates/frees groups using different schemes.
No errors
Passed Comm_create_group test 5 - comm_group_half4
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 4
Test Description:
This routine creates/frees groups using different schemes.
No errors
Passed Comm_creation_group test 6 - comm_group_half8
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 8
Test Description:
This routine creates/frees groups using different schemes.
No errors
Passed Comm_create_group test 7 - comm_group_rand2
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 2
Test Description:
This routine creates/frees groups using even-odd pairs.
No errors
Passed Comm_create_group test 8 - comm_group_rand4
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 4
Test Description:
This routine creates/frees groups using modulus 4 random numbers.
No errors
Passed Comm_create_group test 1 - comm_group_rand8
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 8
Test Description:
This test creates/frees groups using different schemes.
No errors
Passed Comm_idup test 1 - comm_idup2
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 2
Test Description:
This test exercises MPI_Comm_idup().
No errors
Passed Comm_idup test 2 - comm_idup4
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 4
Test Description:
Test plan: make rank 0 wait in a blocking recv until all other processes have posted their MPI_Comm_idup operations, then post last. This should ensure that idup does not block on the non-zero ranks; otherwise we will get a deadlock.
No errors
Passed Comm_idup test 3 - comm_idup9
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 9
Test Description:
Test plan: make rank 0 wait in a blocking recv until all other processes have posted their MPI_Comm_idup operations, then post last. This should ensure that idup does not block on the non-zero ranks; otherwise we will get a deadlock.
No errors
Passed Comm_idup test 4 - comm_idup_mul
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 2
Test Description:
Test creating multiple communicators with MPI_Comm_idup.
No errors
Passed Comm_idup test 5 - comm_idup_overlap
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 2
Test Description:
Each pair dups the communicator such that the dups are overlapping. If this were done with MPI_Comm_dup, this should deadlock.
No errors
Passed MPI_Info_create() test - comm_info
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 6
Test Description:
Comm_{set,get}_info test
No errors
Passed Comm_{get,set}_name test - commname
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 4
Test Description:
Test MPI_Comm_get_name().
No errors
Passed Comm_{dup,free} test - ctxalloc
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 2
Test Description:
This program tests the allocation (and deallocation) of contexts.
No errors
Passed Context split test - ctxsplit
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 4
Test Description:
This check is intended to fail if there is a leak of context ids. Because this is trying to exhaust the number of context ids, it needs to run for a longer time than many tests. The for loop uses 10000 iterations, which is adequate for MPICH (with only about 1k context ids available).
No errors
Passed Comm_dup test 1 - dup
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 2
Test Description:
This test exercises MPI_Comm_dup().
No errors
Passed Comm_dup test 2 - dupic
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 4
Test Description:
Check that there are separate contexts. We do this by setting up non-blocking receives on both communicators, and then sending to them. If the contexts are different, tests on the unsatisfied communicator should indicate no available message.
No errors
Passed Comm_with_info() test 1 - dup_with_info2
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 2
Test Description:
This test exercises MPI_Comm_dup_with_info().
No errors
Passed Comm_with_info test 2 - dup_with_info4
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 4
Test Description:
This test exercises MPI_Comm_dup_with_info().
No errors
Passed Comm_with_info test 3 - dup_with_info9
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 9
Test Description:
This test exercises MPI_Comm_dup_with_info().
No errors
Passed Intercomm_create test 1 - ic1
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 4
Test Description:
A simple test of the intercomm create routine, with a communication test.
No errors
Passed Intercomm_create test 2 - ic2
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 33
Test Description:
Regression test based on test code from N. Radclif@Cray.
No errors
Passed Comm_create() test - iccreate
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 8
Test Description:
This program tests that MPI_Comm_create applies to intercommunicators. This is an extension added in MPI-2.
No errors
Passed Comm_create group tests - icgroup
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 8
Test Description:
Get the group of an intercommunicator. The following illustrates the use of the routines to run through a selection of communicators and datatypes.
No errors
Passed Intercomm_merge test - icm
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 8
Test Description:
Test intercomm merge, including the choice of the high value.
No errors
Passed Comm_split Test 1 - icsplit
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 8
Test Description:
This tests whether MPI_Comm_split() applies to intercommunicators, which is an extension added in MPI-2.
No errors
Passed Intercomm_probe test - probe-intercomm
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 2
Test Description:
Test MPI_Probe() with an intercomm communicator.
No errors
Passed Threaded group test - comm_create_group_threads
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 4
Test Description:
In this test a number of threads are created, each with a distinct MPI communicator (or comm) group distinguished by its thread-id (used as a tag). Threads on even ranks join an even comm and threads on odd ranks join the odd comm.
No errors
Passed Thread Group creation test - comm_create_threads
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 4
Test Description:
Every thread participates in a distinct MPI_Comm_create group, distinguished by its thread-id (used as the tag). Threads on even ranks join an even comm and threads on odd ranks join the odd comm.
No errors
Passed Easy thread test 1 - comm_dup_deadlock
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 4
Test Description:
This is a simple test of threads in MPI.
No errors
Passed Easy thread test 2 - comm_idup
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 4
Test Description:
This is a simple test of threads in MPI
No Errors
Passed Multiple threads test 1 - ctxdup
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 4
Test Description:
This test creates communicators concurrently in different threads.
No errors
Passed Multiple threads test 2 - ctxidup
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 4
Test Description:
This test creates communicators concurrently in different threads.
No errors
Passed Multiple threads test 3 - dup_leak_test
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 2
Test Description:
This test repeatedly duplicates and frees communicators with multiple threads concurrently to stress the multithreaded aspects of the context ID allocation code. Thanks to IBM for providing the original version of this test.
No errors
Error Processing - Score: 75% Passed
This group features tests of MPI error processing.
Failed Error Handling test - errors
Build: Passed
Execution: Failed
Exit Status: Failed with signal 127
MPI Processes: 1
Test Description:
Reports the default action taken on an error. It also reports if error handling can be changed to "returns" (MPI_ERRORS_RETURN), and if so, whether this functions properly.
MPI errors are fatal by default. MPI errors can be changed to MPI_ERRORS_RETURN. Call MPI_Send() with a bad destination rank. MPT ERROR: Assertion failed at gps.c:187: "MPI_UNDEFINED != grank" MPT ERROR: Rank 0(g:0) is aborting with error code 0. Process ID: 173825, Host: r2i7n16, Program: /p/home/withheld/BCT_MPI/mpt_2.20/mpitests/utk/errors MPT Version: HPE MPT 2.20 08/30/19 04:33:45 MPT: --------stack traceback------- MPT: Attaching to program: /proc/173825/exe, process 173825 MPT: (no debugging symbols found)...done. MPT: [New LWP 173827] MPT: [Thread debugging using libthread_db enabled] MPT: Using host libthread_db library "/lib64/libthread_db.so.1". MPT: (no debugging symbols found)...done. MPT: (no debugging symbols found)...done. MPT: (no debugging symbols found)...done. MPT: (no debugging symbols found)...done. MPT: (no debugging symbols found)...done. MPT: (no debugging symbols found)...done. MPT: (no debugging symbols found)...done. MPT: (no debugging symbols found)...done. MPT: (no debugging symbols found)...done. MPT: (no debugging symbols found)...done. MPT: (no debugging symbols found)...done. MPT: (no debugging symbols found)...done. MPT: (no debugging symbols found)...done. MPT: (no debugging symbols found)...done. MPT: (no debugging symbols found)...done. MPT: (no debugging symbols found)...done. MPT: (no debugging symbols found)...done. MPT: (no debugging symbols found)...done. MPT: (no debugging symbols found)...done. MPT: (no debugging symbols found)...done. MPT: (no debugging symbols found)...done. MPT: (no debugging symbols found)...done. MPT: 0x00002aaaaaee41d9 in waitpid () from /lib64/libpthread.so.0 MPT: Missing separate debuginfos, use: debuginfo-install glibc-2.17-307.el7.1.x86_64 libbitmask-2.0-sgi720r52.rhel76.x86_64 libcpuset-1.0-sgi720r102.rhel76.x86_64 libgcc-4.8.5-39.el7.x86_64 libibverbs-22.4-2.el7_8.x86_64 libnl3-3.2.28-4.el7.x86_64 libpsm2-11.2.80-1.x86_64 numactl-libs-2.0.12-5.el7.x86_64 numatools-2.0-sgi720r149.rhel76.x86_64 MPT: (gdb) #0 0x00002aaaaaee41d9 in waitpid () from /lib64/libpthread.so.0 MPT: #1 0x00002aaaab61d806 in mpi_sgi_system ( MPT: #2 MPI_SGI_stacktraceback ( MPT: header=header@entry=0x7fffffffbb70 "MPT ERROR: Rank 0(g:0) is aborting with error code 0.\n\tProcess ID: 173825, Host: r2i7n16, Program: /p/home/withheld/BCT_MPI/mpt_2.20/mpitests/utk/errors\n\tMPT Version: HPE MPT 2.20 08/30/19 04:33:45\n") at sig.c:340 MPT: #3 0x00002aaaab565fc9 in print_traceback (ecode=ecode@entry=0) at abort.c:246 MPT: #4 0x00002aaaab566476 in MPI_SGI_abort () at abort.c:122 MPT: #5 0x00002aaaab56d08a in MPI_SGI_assert_fail ( MPT: str=str@entry=0x2aaaab69a2e5 "MPI_UNDEFINED != grank", MPT: file=file@entry=0x2aaaab69a2c8 "gps.c", line=line@entry=187) at all.c:217 MPT: #6 0x00002aaaab5bd12b in MPI_SGI_gps_initialize ( MPT: dom=dom@entry=0x2aaaab8d7dc0 <dom_default>, grank=grank@entry=-3) MPT: at gps.c:187 MPT: #7 0x00002aaaab560892 in MPI_SGI_gps (grank=-3, MPT: dom=0x2aaaab8d7dc0 <dom_default>) at gps.h:149 MPT: #8 MPI_SGI_request_send (modes=modes@entry=9, MPT: ubuf=ubuf@entry=0x7fffffffc290, count=1, type=type@entry=3, MPT: des=des@entry=1, tag=tag@entry=-1, comm=1) at req.c:764 MPT: #9 0x00002aaaab61c1cd in PMPI_Send (buf=0x7fffffffc290, MPT: count=<optimized out>, type=3, des=1, tag=-1, comm=1) at send.c:34 MPT: #10 0x0000000000402318 in main () MPT: (gdb) A debugging session is active. MPT: MPT: Inferior 1 [process 173825] will be detached. MPT: MPT: Quit anyway? 
(y or n) [answered Y; input not from terminal] MPT: Detaching from program: /proc/173825/exe, process 173825 MPT: [Inferior 1 (process 173825) detached] MPT: -----stack traceback ends----- MPT ERROR: MPI_COMM_WORLD rank 0 has terminated without calling MPI_Finalize() aborting job
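For reference, the technique the test applies is sketched below (minimal and illustrative, not the utk/errors source); note that in the run above this MPT build asserted and aborted inside MPI_Send() even though MPI_ERRORS_RETURN was set:

```c
/* Sketch: switch MPI_COMM_WORLD to MPI_ERRORS_RETURN and check the
 * return code of a send to an invalid rank. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int buf = 0, err, len;
    char msg[MPI_MAX_ERROR_STRING];

    MPI_Init(&argc, &argv);
    MPI_Comm_set_errhandler(MPI_COMM_WORLD, MPI_ERRORS_RETURN);
    err = MPI_Send(&buf, 1, MPI_INT, -2 /* bad rank */, 0, MPI_COMM_WORLD);
    if (err != MPI_SUCCESS) {
        MPI_Error_string(err, msg, &len);
        printf("send failed as expected: %s\n", msg);
    }
    MPI_Finalize();
    return 0;
}
```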
Passed MPI FILE I/O test - userioerr
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 1
Test Description:
This test exercises MPI I/O and MPI error handling techniques.
No errors
Passed MPI_Add_error_class() test - adderr
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 1
Test Description:
Create NCLASSES new classes, each with 5 codes (160 total).
No errors
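A minimal sketch of the class/code/string machinery for a single user-defined class (the test repeats this NCLASSES times with 5 codes each; the error text here is illustrative):

```c
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int errclass, errcode, len;
    char msg[MPI_MAX_ERROR_STRING];

    MPI_Init(&argc, &argv);
    MPI_Add_error_class(&errclass);                   /* new class  */
    MPI_Add_error_code(errclass, &errcode);           /* new code   */
    MPI_Add_error_string(errcode, "my application error");
    MPI_Error_string(errcode, msg, &len);             /* round-trip */
    printf("code %d -> \"%s\"\n", errcode, msg);
    MPI_Finalize();
    return 0;
}
```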
Failed MPI_Comm_errhandler() test - commcall
Build: Failed
Execution: NA
Exit Status: Build_errors
MPI Processes: 2
Test Description:
Test MPI_Comm_{set,call}_errhandler.
Test Output: None.
Passed MPI_Error_string() test 1 - errstring
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 1
Test Description:
Test that prints out the MPI error strings for error codes 0-53.
msg for 0 is No error msg for 1 is Invalid buffer pointer msg for 2 is Invalid count argument msg for 3 is Invalid datatype argument msg for 4 is Invalid tag argument msg for 5 is Invalid communicator msg for 6 is Invalid rank msg for 7 is Invalid request (handle) msg for 8 is Invalid root msg for 9 is Invalid group msg for 10 is Invalid operation msg for 11 is Invalid topology msg for 12 is Invalid dimension argument msg for 13 is Invalid argument msg for 14 is Unknown error msg for 15 is Message truncated on receive: An application bug caused the sender to send too much data msg for 16 is Unclassified error msg for 17 is Internal MPI (implementation) error msg for 18 is Error code is in status msg for 19 is Pending request msg for 20 is (undefined error code 20) msg for 21 is (undefined error code 21) msg for 22 is (undefined error code 22) msg for 23 is (undefined error code 23) msg for 24 is (undefined error code 24) msg for 25 is (undefined error code 25) msg for 26 is (undefined error code 26) msg for 27 is (undefined error code 27) msg for 28 is File access permission denied msg for 29 is Error related to the amode passed to MPI_FILE_OPEN msg for 30 is Invalid assert argument msg for 31 is Invalid file name msg for 32 is Invalid base argument msg for 33 is An error occurred in a user-supplied data conversion function msg for 34 is Invalid disp argument msg for 35 is Conversion functions could not be registered because a data representation identifier that was already defined was passed to MPI_REGISTER_DATAREP msg for 36 is File exists msg for 37 is File operation could not be completed because the file is currently open by some process msg for 38 is Invalid file handle msg for 39 is Info key length exceeds maximum supported length msg for 40 is Info key value is not defined msg for 41 is Info value length exceeds maximum supported length msg for 42 is MPI info error msg for 43 is I/O error msg for 44 is Info key value length exceeds maximum supported length msg for 45 is Invalid locktype argument msg for 46 is Name error msg for 47 is No additional memory could be allocated msg for 48 is Collective argument not identical on all processes, or collective routines called in a different order by different processes msg for 49 is No additional file space is available msg for 50 is File does not exist msg for 51 is Port error msg for 52 is A file quota was exceeded msg for 53 is Read-only file or file system No errors.
Passed MPI_Error_string() test 2 - errstring2
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 1
Test Description:
Simple test where an MPI error class is created, and an error string is introduced for that class.
No errors
Passed User error handling test 2 - predef_eh2
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 2
Test Description:
Ensure that setting a user-defined error handler on predefined communicators does not cause a problem at finalize time. Regression test for ticket #1591.
No errors
Passed User error handling test 1 - predef_eh
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 1
Test Description:
Ensure that setting a user-defined error handler on predefined communicators does not cause a problem at finalize time. Regression test for ticket #1591.
No errors
UTK Test Suite - Score: 92% Passed
This group features the test suite developed at the University of Tennessee, Knoxville for MPI-2.2 and earlier specifications. Though technically not a functional group, it was retained to allow comparison with the previous benchmark suite.
Passed Alloc_mem test - alloc
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 1
Test Description:
Checks if MPI_Alloc_mem() is supported. If test passes, then the following is reported "MPI_Alloc_mem is supported." else, "MPI_Alloc_mem NOT supported" is reported.
No errors
Passed Communicator attributes test - attributes
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 1
Test Description:
Returns all communicator attributes that are not supported. The test is run as a single process MPI job.
No errors
Passed Extended collectives test - collectives
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 4
Test Description:
Checks if "extended collectives" are supported. If the test fails to compile, then "extended collectives" are not supported. If the test compiles, then a 4-process MPI job is executed. If the job aborts, then "Extended collectives NOT supported" is reported. If the job executes and the correct value is returned, then "Extended collectives ARE supported" is reported.
No errors
Passed Deprecated routines test - deprecated
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 1
Test Description:
Checks all MPI deprecated routines as of MPI-2.2.
MPI_Address(): is functional. MPI_Attr_delete(): is functional. MPI_Attr_get(): is functional. MPI_Attr_put(): is functional. MPI_Errhandler_create(): is functional. MPI_Errhandler_get(): is functional. MPI_Errhandler_set(): is functional. MPI_Keyval_create(): is functional. MPI_Keyval_free(): is functional. MPI_Type_extent(): is functional. MPI_Type_hindexed(): is functional. MPI_Type_hvector(): is functional. MPI_Type_lb(): is functional. MPI_Type_struct(): is functional. MPI_Type_ub(): is functional. No errors
Passed Dynamic process management test - dynamic
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 1
Test Description:
Checks if the dynamic process management routines through MPI-2.2 are defined. If the test passes, then "No errors" is reported.
MPI_Comm_spawn(): verified MPI_Comm_get_parrent(): verified MPI_Open_port(): verified MPI_Comm_accept(): verified MPI_Comm_connect(): verified MPI_Publish_name(): verified MPI_Unpublish_name(): verified MPI_Lookup_name(): verified MPI_Comm_disconnect(): verified MPI_Comm_join(): verified Dynamic process management routines: verified No errors
Failed Error Handling test - errors
Build: Passed
Execution: Failed
Exit Status: Failed with signal 127
MPI Processes: 1
Test Description:
Reports the default action taken on an error. It also reports if error handling can be changed to "returns" (MPI_ERRORS_RETURN), and if so, whether this functions properly.
MPI errors are fatal by default. MPI errors can be changed to MPI_ERRORS_RETURN. Call MPI_Send() with a bad destination rank. MPT ERROR: Assertion failed at gps.c:187: "MPI_UNDEFINED != grank" MPT ERROR: Rank 0(g:0) is aborting with error code 0. Process ID: 173825, Host: r2i7n16, Program: /p/home/withheld/BCT_MPI/mpt_2.20/mpitests/utk/errors MPT Version: HPE MPT 2.20 08/30/19 04:33:45 MPT: --------stack traceback------- MPT: Attaching to program: /proc/173825/exe, process 173825 MPT: (no debugging symbols found)...done. MPT: [New LWP 173827] MPT: [Thread debugging using libthread_db enabled] MPT: Using host libthread_db library "/lib64/libthread_db.so.1". MPT: (no debugging symbols found)...done. MPT: (no debugging symbols found)...done. MPT: (no debugging symbols found)...done. MPT: (no debugging symbols found)...done. MPT: (no debugging symbols found)...done. MPT: (no debugging symbols found)...done. MPT: (no debugging symbols found)...done. MPT: (no debugging symbols found)...done. MPT: (no debugging symbols found)...done. MPT: (no debugging symbols found)...done. MPT: (no debugging symbols found)...done. MPT: (no debugging symbols found)...done. MPT: (no debugging symbols found)...done. MPT: (no debugging symbols found)...done. MPT: (no debugging symbols found)...done. MPT: (no debugging symbols found)...done. MPT: (no debugging symbols found)...done. MPT: (no debugging symbols found)...done. MPT: (no debugging symbols found)...done. MPT: (no debugging symbols found)...done. MPT: (no debugging symbols found)...done. MPT: (no debugging symbols found)...done. MPT: 0x00002aaaaaee41d9 in waitpid () from /lib64/libpthread.so.0 MPT: Missing separate debuginfos, use: debuginfo-install glibc-2.17-307.el7.1.x86_64 libbitmask-2.0-sgi720r52.rhel76.x86_64 libcpuset-1.0-sgi720r102.rhel76.x86_64 libgcc-4.8.5-39.el7.x86_64 libibverbs-22.4-2.el7_8.x86_64 libnl3-3.2.28-4.el7.x86_64 libpsm2-11.2.80-1.x86_64 numactl-libs-2.0.12-5.el7.x86_64 numatools-2.0-sgi720r149.rhel76.x86_64 MPT: (gdb) #0 0x00002aaaaaee41d9 in waitpid () from /lib64/libpthread.so.0 MPT: #1 0x00002aaaab61d806 in mpi_sgi_system ( MPT: #2 MPI_SGI_stacktraceback ( MPT: header=header@entry=0x7fffffffbb70 "MPT ERROR: Rank 0(g:0) is aborting with error code 0.\n\tProcess ID: 173825, Host: r2i7n16, Program: /p/home/withheld/BCT_MPI/mpt_2.20/mpitests/utk/errors\n\tMPT Version: HPE MPT 2.20 08/30/19 04:33:45\n") at sig.c:340 MPT: #3 0x00002aaaab565fc9 in print_traceback (ecode=ecode@entry=0) at abort.c:246 MPT: #4 0x00002aaaab566476 in MPI_SGI_abort () at abort.c:122 MPT: #5 0x00002aaaab56d08a in MPI_SGI_assert_fail ( MPT: str=str@entry=0x2aaaab69a2e5 "MPI_UNDEFINED != grank", MPT: file=file@entry=0x2aaaab69a2c8 "gps.c", line=line@entry=187) at all.c:217 MPT: #6 0x00002aaaab5bd12b in MPI_SGI_gps_initialize ( MPT: dom=dom@entry=0x2aaaab8d7dc0 <dom_default>, grank=grank@entry=-3) MPT: at gps.c:187 MPT: #7 0x00002aaaab560892 in MPI_SGI_gps (grank=-3, MPT: dom=0x2aaaab8d7dc0 <dom_default>) at gps.h:149 MPT: #8 MPI_SGI_request_send (modes=modes@entry=9, MPT: ubuf=ubuf@entry=0x7fffffffc290, count=1, type=type@entry=3, MPT: des=des@entry=1, tag=tag@entry=-1, comm=1) at req.c:764 MPT: #9 0x00002aaaab61c1cd in PMPI_Send (buf=0x7fffffffc290, MPT: count=<optimized out>, type=3, des=1, tag=-1, comm=1) at send.c:34 MPT: #10 0x0000000000402318 in main () MPT: (gdb) A debugging session is active. MPT: MPT: Inferior 1 [process 173825] will be detached. MPT: MPT: Quit anyway? 
(y or n) [answered Y; input not from terminal] MPT: Detaching from program: /proc/173825/exe, process 173825 MPT: [Inferior 1 (process 173825) detached] MPT: -----stack traceback ends----- MPT ERROR: MPI_COMM_WORLD rank 0 has terminated without calling MPI_Finalize() aborting job
Passed Init argument test - init_args
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 1
Test Description:
In MPI-1.1, it is explicitly stated that an implementation is allowed to require that the arguments argc and argv passed by an application to MPI_Init in C be the same arguments passed into the application as the arguments to main. In MPI-2, implementations are not allowed to impose this requirement. Conforming implementations of MPI allow applications to pass NULL for both the argc and argv arguments of MPI_Init(). This test prints the result of the error status of MPI_Init(). If the test completes without error, it reports 'No errors.'
MPI_INIT accepts Null arguments for MPI_init(). No errors
Passed C/Fortran interoperability test - interoperability
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 1
Test Description:
Checks if the C-Fortran (F77) interoperability functions are supported per the MPI-2.2 specification.
No errors
Passed I/O modes test - io_modes
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 1
Test Description:
Checks if all MPI predefined I/O modes are supported. If test passes, "No errors" is reported. Any modes not supported are indicated individually as not being supported.
MPI_MODE_APPEND:128 MPI_MODE_CREATE:1 MPI_MODE_DELETE_ON_CLOSE:16 MPI_MODE_EXCL:64 MPI_MODE_RDONLY:2 MPI_MODE_RDWR:8 MPI_MODE_SEQUENTIAL:256 MPI_MODE_UNIQUE_OPEN:32 MPI_MODE_WRONLY:4 No errors
Passed I/O verification test 1 - io_test
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 4
Test Description:
Reports if MPI I/O is supported. If the MPI-I/O routines terminate normally and provide correct results, MPI-I/O reports "No errors", otherwise error messages are generated.
rank:0/4 MPI-I/O is supported. No errors rank:1/4 MPI-I/O is supported. rank:2/4 MPI-I/O is supported. rank:3/4 MPI-I/O is supported.
Passed I/O verification test 2 - io_verify
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 1
Test Description:
This test is used to verify that the file created by io_test.c holds the correct values. If the test fails, the problem is reported. If all tests pass successfully it is reported that MPI-I/O is supported.
MPI-I/O: MPI_File_open() is verified. MPI-I/O: MPI_File_read() is verified. MPI-I/O: MPI_FILE_close() is verified. No errors
Passed Master/slave test - master
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 1
Test Description:
This test running as a single MPI process spawns four slave processes using MPI_Comm_spawn(). The master process sends and receives a message from each slave. If the test completes, it will report 'No errors.', otherwise specific error messages are listed.
MPI_UNIVERSE_SIZE read 33 MPI_UNIVERSE_SIZE forced to 33 master rank creating 4 slave processes. master error code for slave:0 is 0. master error code for slave:1 is 0. master error code for slave:2 is 0. master error code for slave:3 is 0. master rank:0/1 sent an int:4 to slave rank:0. slave rank:0/4 alive. master rank:0/1 sent an int:4 to slave rank:1. slave rank:1/4 alive. master rank:0/1 sent an int:4 to slave rank:2. slave rank:2/4 alive. master rank:0/1 sent an int:4 to slave rank:3. slave rank:3/4 alive. slave rank:3/4 received an int:4 from rank 0 master rank:0/1 recv an int:0 from slave rank:0 slave rank:0/4 received an int:4 from rank 0 slave rank:0/4 sent its rank to rank 0 slave rank 0 just before disconnecting from master_comm. slave rank: 0 after disconnecting from master_comm. master rank:0/1 recv an int:1 from slave rank:1 master rank:0/1 recv an int:2 from slave rank:2 master rank:0/1 recv an int:3 from slave rank:3 ./master ending with exit status:0 slave rank:1/4 received an int:4 from rank 0 slave rank:1/4 sent its rank to rank 0 slave rank 1 just before disconnecting from master_comm. slave rank: 1 after disconnecting from master_comm. slave rank:2/4 received an int:4 from rank 0 slave rank:2/4 sent its rank to rank 0 slave rank 2 just before disconnecting from master_comm. slave rank: 2 after disconnecting from master_comm. slave rank:3/4 sent its rank to rank 0 slave rank 3 just before disconnecting from master_comm. slave rank: 3 after disconnecting from master_comm. No errors
Failed MPI-2 Routines test 2 - mpi_2_functions_bcast
Build: Passed
Execution: Failed
Exit Status: Application_timed_out
MPI Processes: 2
Test Description:
This test simply checks all MPI-2 routines that replaced some MPI-1 routines. Since these routines were added to avoid ambiguity with MPI-2 functionality, they do not add functionality to the test suite.
Test Output: None.
Passed MPI-2 routines test 1 - mpi_2_functions
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 1
Test Description:
Checks all MPI-2.2 routines that replaced deprecated routines. If the test passes, then "No errors" is reported, otherwise, specific errors are reported.
No errors
Passed One-sided fences test - one_sided_fences
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 2
Test Description:
This test verifies that one-sided communication with active target synchronization with fences functions properly. If all operations succeed, one-sided communication with active target synchronization with fences is reported as supported. If one or more operations fail, the failures are reported and one-sided communication with active target synchronization with fences is reported as NOT supported.
No errors
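Fence synchronization brackets the RMA operations; a minimal two-rank sketch (run with at least two processes; values are illustrative):

```c
/* Sketch: active-target synchronization with fences around MPI_Put. */
#include <mpi.h>

int main(int argc, char **argv)
{
    int rank, local = 0, remote = 0;
    MPI_Win win;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Win_create(&remote, sizeof(int), sizeof(int),
                   MPI_INFO_NULL, MPI_COMM_WORLD, &win);
    MPI_Win_fence(0, win);                 /* open the epoch */
    if (rank == 0) {
        local = 42;
        MPI_Put(&local, 1, MPI_INT, 1 /* target */, 0, 1, MPI_INT, win);
    }
    MPI_Win_fence(0, win);                 /* close: 'remote' visible on rank 1 */
    MPI_Win_free(&win);
    MPI_Finalize();
    return 0;
}
```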
Passed One-sided communication test - one_sided_modes
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 1
Test Description:
Checks MPI-2.2 one-sided communication modes, reporting those that are not defined. If the test compiles, then "No errors" is reported; otherwise, all undefined modes are reported as "not defined."
No errors
Passed One-sided passive test - one_sided_passive
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 2
Test Description:
Verifies that one-sided communication with passive target synchronization functions properly. If all operations succeed, one-sided communication with passive target synchronization is reported as supported. If one or more operations fail, the failures are reported and one-sided communication with passive target synchronization is reported as NOT supported.
No errors
Passed One-sided post test - one_sided_post
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 2
Test Description:
Verifies one-sided-communication with active target synchronization with post/start/complete/wait functions properly. If all operations succeed, one-sided communication with active target synchronization with post/start/complete/wait is reported as supported. If one or more operations fail, the failures are reported and one-sided-communication with active target synchronization with post/start/complete/wait is reported as NOT supported.
No errors
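A minimal sketch of the post/start/complete/wait (generalized active target) handshake between ranks 0 and 1 only (run with at least two processes):

```c
#include <mpi.h>

int main(int argc, char **argv)
{
    int rank, val = 0;
    MPI_Win win;
    MPI_Group world_grp, peer_grp;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_group(MPI_COMM_WORLD, &world_grp);
    MPI_Win_create(&val, sizeof(int), sizeof(int),
                   MPI_INFO_NULL, MPI_COMM_WORLD, &win);
    if (rank == 0) {
        int one = 1, target = 1;
        MPI_Group_incl(world_grp, 1, &target, &peer_grp);
        MPI_Win_start(peer_grp, 0, win);   /* access epoch on target 1 */
        MPI_Put(&one, 1, MPI_INT, 1, 0, 1, MPI_INT, win);
        MPI_Win_complete(win);
        MPI_Group_free(&peer_grp);
    } else if (rank == 1) {
        int origin = 0;
        MPI_Group_incl(world_grp, 1, &origin, &peer_grp);
        MPI_Win_post(peer_grp, 0, win);    /* exposure epoch for origin 0 */
        MPI_Win_wait(win);                 /* 'val' now updated */
        MPI_Group_free(&peer_grp);
    }
    MPI_Group_free(&world_grp);
    MPI_Win_free(&win);
    MPI_Finalize();
    return 0;
}
```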
Passed One-sided routines test - one_sided_routines
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 1
Test Description:
Reports if one-sided communication routines are defined. If this test compiles, one-sided communication is reported as defined, otherwise "not supported".
No errors
Passed Thread support test - thread_safety
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 1
Test Description:
Reports the level of thread support provided. This is either MPI_THREAD_SINGLE, MPI_THREAD_FUNNELED, MPI_THREAD_SERIALIZED, or MPI_THREAD_MULTIPLE.
MPI_THREAD_MULTIPLE requested. MPI_THREAD_MULTIPLE is supported. No errors
Passed Errorcodes test - process_errorcodes
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 0
Test Description:
The MPI-3.0 specification requires that the same constants be available for the C language and FORTRAN. The report includes a record for each errorcode of the form "X MPI_ERRCODE is [not] verified" where X is either 'c' for the C compiler, or 'F' for the FORTRAN 77 compiler. The report summarizes the number of errorcodes for each compiler that were successfully verified.
c "MPI_ERR_ACCESS" (28) is verified. c "MPI_ERR_AMODE" (29) is verified. c "MPI_ERR_ARG" (13) is verified. c "MPI_ERR_ASSERT" (30) is verified. c "MPI_ERR_BAD_FILE" (31) is verified. c "MPI_ERR_BASE" (32) is verified. c "MPI_ERR_BUFFER" (1) is verified. c "MPI_ERR_COMM" (5) is verified. c "MPI_ERR_CONVERSION" (33) is verified. c "MPI_ERR_COUNT" (2) is verified. c "MPI_ERR_DIMS" (12) is verified. c "MPI_ERR_DISP" (34) is verified. c "MPI_ERR_DUP_DATAREP" (35) is verified. c "MPI_ERR_FILE" (38) is verified. c "MPI_ERR_FILE_EXISTS" (36) is verified. c "MPI_ERR_FILE_IN_USE" (37) is verified. c "MPI_ERR_GROUP" (9) is verified. c "MPI_ERR_IN_STATUS" (18) is verified. c "MPI_ERR_INFO" (42) is verified. c "MPI_ERR_INFO_KEY" (39) is verified. c "MPI_ERR_INFO_NOKEY" (40) is verified. c "MPI_ERR_INFO_VALUE" (41) is verified. c "MPI_ERR_INTERN" (17) is verified. c "MPI_ERR_IO" (43) is verified. c "MPI_ERR_KEYVAL" (44) is verified. c "MPI_ERR_LASTCODE" (100) is verified. c "MPI_ERR_LOCKTYPE" (45) is verified. c "MPI_ERR_NAME" (46) is verified. c "MPI_ERR_NO_MEM" (47) is verified. c "MPI_ERR_NO_SPACE" (49) is verified. c "MPI_ERR_NO_SUCH_FILE" (50) is verified. c "MPI_ERR_NOT_SAME" (48) is verified. c "MPI_ERR_OP" (10) is verified. c "MPI_ERR_OTHER" (16) is verified. c "MPI_ERR_PENDING" (19) is verified. c "MPI_ERR_PORT" (51) is verified. c "MPI_ERR_QUOTA" (52) is verified. c "MPI_ERR_RANK" (6) is verified. c "MPI_ERR_READ_ONLY" (53) is verified. c "MPI_ERR_REQUEST" (7) is verified. c "MPI_ERR_RMA_ATTACH" (63) is verified. c "MPI_ERR_RMA_CONFLICT" (54) is verified. c "MPI_ERR_RMA_FLAVOR" (65) is verified. c "MPI_ERR_RMA_RANGE" (62) is verified. c "MPI_ERR_RMA_SHARED" (64) is verified. c "MPI_ERR_RMA_SYNC" (55) is verified. c "MPI_ERR_ROOT" (8) is verified. c "MPI_ERR_SERVICE" (56) is verified. c "MPI_ERR_SIZE" (57) is verified. c "MPI_ERR_SPAWN" (58) is verified. c "MPI_ERR_TAG" (4) is verified. c "MPI_ERR_TOPOLOGY" (11) is verified. c "MPI_ERR_TRUNCATE" (15) is verified. c "MPI_ERR_TYPE" (3) is verified. c "MPI_ERR_UNKNOWN" (14) is verified. c "MPI_ERR_UNSUPPORTED_DATAREP" (59) is verified. c "MPI_ERR_UNSUPPORTED_OPERATION" (60) is verified. c "MPI_ERR_WIN" (61) is verified. c "MPI_SUCCESS" (0) is verified. c "MPI_T_ERR_CANNOT_INIT" (66) is verified. c "MPI_T_ERR_CVAR_SET_NEVER" (76) is verified. c "MPI_T_ERR_CVAR_SET_NOT_NOW" (75) is verified. c "MPI_T_ERR_INVALID_HANDLE" (72) is verified. c "MPI_T_ERR_INVALID_INDEX" (69) is verified. c "MPI_T_ERR_INVALID_ITEM" (70) is verified. c "MPI_T_ERR_INVALID_SESSION" (71) is verified. c "MPI_T_ERR_MEMORY" (68) is verified. c "MPI_T_ERR_NOT_INITIALIZED" (67) is verified. c "MPI_T_ERR_OUT_OF_HANDLES" (73) is verified. c "MPI_T_ERR_OUT_OF_SESSIONS" (74) is verified. c "MPI_T_ERR_PVAR_NO_ATOMIC" (79) is verified. c "MPI_T_ERR_PVAR_NO_STARTSTOP" (78) is verified. c "MPI_T_ERR_PVAR_NO_WRITE" (77) is verified. 
F "MPI_ERR_ACCESS" (28) is verified F "MPI_ERR_AMODE" (29) is verified F "MPI_ERR_ARG" (13) is verified F "MPI_ERR_ASSERT" (30) is verified F "MPI_ERR_BAD_FILE" (31) is verified F "MPI_ERR_BASE" (32) is verified F "MPI_ERR_BUFFER" (1) is verified F "MPI_ERR_COMM" (5) is verified F "MPI_ERR_CONVERSION" (33) is verified F "MPI_ERR_COUNT" (2) is verified F "MPI_ERR_DIMS" (12) is verified F "MPI_ERR_DISP" (34) is verified F "MPI_ERR_DUP_DATAREP" (35) is verified F "MPI_ERR_FILE" (38) is verified F "MPI_ERR_FILE_EXISTS" (36) is verified F "MPI_ERR_FILE_IN_USE" (37) is verified F "MPI_ERR_GROUP" (9) is verified F "MPI_ERR_IN_STATUS" (18) is verified F "MPI_ERR_INFO" (42) is verified F "MPI_ERR_INFO_KEY" (39) is verified F "MPI_ERR_INFO_NOKEY" (40) is verified F "MPI_ERR_INFO_VALUE" (41) is verified F "MPI_ERR_INTERN" (17) is verified F "MPI_ERR_IO" (43) is verified F "MPI_ERR_KEYVAL" (44) is verified F "MPI_ERR_LASTCODE" (100) is verified F "MPI_ERR_LOCKTYPE" (45) is verified F "MPI_ERR_NAME" (46) is verified F "MPI_ERR_NO_MEM" (47) is verified F "MPI_ERR_NO_SPACE" (49) is verified F "MPI_ERR_NO_SUCH_FILE" (50) is verified F "MPI_ERR_NOT_SAME" (48) is verified F "MPI_ERR_OP" (10) is verified F "MPI_ERR_OTHER" (16) is verified F "MPI_ERR_PENDING" (19) is verified F "MPI_ERR_PORT" (51) is verified F "MPI_ERR_QUOTA" (52) is verified F "MPI_ERR_RANK" (6) is verified F "MPI_ERR_READ_ONLY" (53) is verified F "MPI_ERR_REQUEST" (7) is verified F "MPI_ERR_RMA_ATTACH" (63) is verified F "MPI_ERR_RMA_CONFLICT" (54) is verified F "MPI_ERR_RMA_FLAVOR" (65) is verified F "MPI_ERR_RMA_RANGE" (62) is verified F "MPI_ERR_RMA_SHARED" (64) is verified F "MPI_ERR_RMA_SYNC" (55) is verified F "MPI_ERR_ROOT" (8) is verified F "MPI_ERR_SERVICE" (56) is verified F "MPI_ERR_SIZE" (57) is verified F "MPI_ERR_SPAWN" (58) is verified F "MPI_ERR_TAG" (4) is verified F "MPI_ERR_TOPOLOGY" (11) is verified F "MPI_ERR_TRUNCATE" (15) is verified F "MPI_ERR_TYPE" (3) is verified F "MPI_ERR_UNKNOWN" (14) is verified F "MPI_ERR_UNSUPPORTED_DATAREP" is not verified: (compilation). F "MPI_ERR_UNSUPPORTED_OPERATION" is not verified: (compilation). F "MPI_ERR_WIN" (61) is verified F "MPI_SUCCESS" (0) is verified F "MPI_T_ERR_CANNOT_INIT" (66) is verified F "MPI_T_ERR_CVAR_SET_NEVER" (76) is verified F "MPI_T_ERR_CVAR_SET_NOT_NOW" (75) is verified F "MPI_T_ERR_INVALID_HANDLE" (72) is verified F "MPI_T_ERR_INVALID_INDEX" (69) is verified F "MPI_T_ERR_INVALID_ITEM" (70) is verified F "MPI_T_ERR_INVALID_SESSION" (71) is verified F "MPI_T_ERR_MEMORY" (68) is verified F "MPI_T_ERR_NOT_INITIALIZED" (67) is verified F "MPI_T_ERR_OUT_OF_HANDLES" (73) is verified F "MPI_T_ERR_OUT_OF_SESSIONS" (74) is verified F "MPI_T_ERR_PVAR_NO_ATOMIC" (79) is verified F "MPI_T_ERR_PVAR_NO_STARTSTOP" is not verified: (compilation). F "MPI_T_ERR_PVAR_NO_WRITE" (77) is verified C errorcodes successful: 73 out of 73 FORTRAN errorcodes successful:70 out of 73 No errors.
Passed Assignment constants test - process_assignment_constants
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 0
Test Description:
This test was added to the UTK suite as a partial replacement for the "utk/constants" test for Named Constants supported in MPI-1.0 and higher. The test is a Perl script that constructs a small separate main program in either C or Fortran for each constant. The constants for this test are used to assign a value to a const integer type in C and an integer type in Fortran. This test is the de facto test for any constant recognized by the compiler.
NOTE: The constants used in this test are tested against both C and Fortran compilers. Some of the constants are optional and may not be supported by the MPI implementation. Failure to verify these constants does not necessarily constitute failure of the MPI implementation to satisfy the MPI specifications.
c "MPI_ARGV_NULL" is verified by const integer. c "MPI_ARGVS_NULL" is verified by const integer. c "MPI_ANY_SOURCE" is verified by const integer. c "MPI_ANY_TAG" is verified by const integer. c "MPI_BAND" is verified by const integer. c "MPI_BOR" is verified by const integer. c "MPI_BSEND_OVERHEAD" is verified by const integer. c "MPI_BXOR" is verified by const integer. c "MPI_CART" is verified by const integer. c "MPI_COMBINER_CONTIGUOUS" is verified by const integer. c "MPI_COMBINER_DARRAY" is verified by const integer. c "MPI_COMBINER_DUP" is verified by const integer. c "MPI_COMBINER_F90_COMPLEX" is verified by const integer. c "MPI_COMBINER_F90_INTEGER" is verified by const integer. c "MPI_COMBINER_F90_REAL" is verified by const integer. c "MPI_COMBINER_HINDEXED" is verified by const integer. c "MPI_COMBINER_HINDEXED_INTEGER" is verified by const integer. c "MPI_COMBINER_HVECTOR" is verified by const integer. c "MPI_COMBINER_HVECTOR_INTEGER" is verified by const integer. c "MPI_COMBINER_INDEXED" is verified by const integer. c "MPI_COMBINER_INDEXED_BLOCK" is verified by const integer. c "MPI_COMBINER_NAMED" is verified by const integer. c "MPI_COMBINER_RESIZED" is verified by const integer. c "MPI_COMBINER_STRUCT" is verified by const integer. c "MPI_COMBINER_STRUCT_INTEGER" is verified by const integer. c "MPI_COMBINER_SUBARRAY" is verified by const integer. c "MPI_COMBINER_VECTOR" is verified by const integer. c "MPI_COMM_NULL" is verified by const integer. c "MPI_COMM_SELF" is verified by const integer. c "MPI_COMM_WORLD" is verified by const integer. c "MPI_CONGRUENT" is verified by const integer. c "MPI_CONVERSION_FN_NULL" is not verified. c "MPI_DATATYPE_NULL" is verified by const integer. c "MPI_DISPLACEMENT_CURRENT" is verified by const integer. c "MPI_DISTRIBUTE_BLOCK" is verified by const integer. c "MPI_DISTRIBUTE_CYCLIC" is verified by const integer. c "MPI_DISTRIBUTE_DFLT_DARG" is verified by const integer. c "MPI_DISTRIBUTE_NONE" is verified by const integer. c "MPI_ERRCODES_IGNORE" is verified by const integer. c "MPI_ERRHANDLER_NULL" is verified by const integer. c "MPI_ERRORS_ARE_FATAL" is verified by const integer. c "MPI_ERRORS_RETURN" is verified by const integer. c "MPI_F_STATUS_IGNORE" is verified by const integer. c "MPI_F_STATUSES_IGNORE" is verified by const integer. c "MPI_FILE_NULL" is verified by const integer. c "MPI_GRAPH" is verified by const integer. c "MPI_GROUP_NULL" is verified by const integer. c "MPI_IDENT" is verified by const integer. c "MPI_IN_PLACE" is verified by const integer. c "MPI_INFO_NULL" is verified by const integer. c "MPI_KEYVAL_INVALID" is verified by const integer. c "MPI_LAND" is verified by const integer. c "MPI_LOCK_EXCLUSIVE" is verified by const integer. c "MPI_LOCK_SHARED" is verified by const integer. c "MPI_LOR" is verified by const integer. c "MPI_LXOR" is verified by const integer. c "MPI_MAX" is verified by const integer. c "MPI_MAXLOC" is verified by const integer. c "MPI_MIN" is verified by const integer. c "MPI_MINLOC" is verified by const integer. c "MPI_OP_NULL" is verified by const integer. c "MPI_PROC_NULL" is verified by const integer. c "MPI_PROD" is verified by const integer. c "MPI_REPLACE" is verified by const integer. c "MPI_REQUEST_NULL" is verified by const integer. c "MPI_ROOT" is verified by const integer. c "MPI_SEEK_CUR" is verified by const integer. c "MPI_SEEK_END" is verified by const integer. c "MPI_SEEK_SET" is verified by const integer. c "MPI_SIMILAR" is verified by const integer. 
c "MPI_STATUS_IGNORE" is verified by const integer. c "MPI_STATUSES_IGNORE" is verified by const integer. c "MPI_SUCCESS" is verified by const integer. c "MPI_SUM" is verified by const integer. c "MPI_UNDEFINED" is verified by const integer. c "MPI_UNEQUAL" is verified by const integer. F "MPI_ARGV_NULL" is not verified. F "MPI_ARGVS_NULL" is not verified. F "MPI_ANY_SOURCE" is verified by integer assignment. F "MPI_ANY_TAG" is verified by integer assignment. F "MPI_BAND" is verified by integer assignment. F "MPI_BOR" is verified by integer assignment. F "MPI_BSEND_OVERHEAD" is verified by integer assignment. F "MPI_BXOR" is verified by integer assignment. F "MPI_CART" is verified by integer assignment. F "MPI_COMBINER_CONTIGUOUS" is verified by integer assignment. F "MPI_COMBINER_DARRAY" is verified by integer assignment. F "MPI_COMBINER_DUP" is verified by integer assignment. F "MPI_COMBINER_F90_COMPLEX" is verified by integer assignment. F "MPI_COMBINER_F90_INTEGER" is verified by integer assignment. F "MPI_COMBINER_F90_REAL" is verified by integer assignment. F "MPI_COMBINER_HINDEXED" is verified by integer assignment. F "MPI_COMBINER_HINDEXED_INTEGER" is verified by integer assignment. F "MPI_COMBINER_HVECTOR" is verified by integer assignment. F "MPI_COMBINER_HVECTOR_INTEGER" is verified by integer assignment. F "MPI_COMBINER_INDEXED" is verified by integer assignment. F "MPI_COMBINER_INDEXED_BLOCK" is verified by integer assignment. F "MPI_COMBINER_NAMED" is verified by integer assignment. F "MPI_COMBINER_RESIZED" is verified by integer assignment. F "MPI_COMBINER_STRUCT" is verified by integer assignment. F "MPI_COMBINER_STRUCT_INTEGER" is verified by integer assignment. F "MPI_COMBINER_SUBARRAY" is verified by integer assignment. F "MPI_COMBINER_VECTOR" is verified by integer assignment. F "MPI_COMM_NULL" is verified by integer assignment. F "MPI_COMM_SELF" is verified by integer assignment. F "MPI_COMM_WORLD" is verified by integer assignment. F "MPI_CONGRUENT" is verified by integer assignment. F "MPI_CONVERSION_FN_NULL" is not verified. F "MPI_DATATYPE_NULL" is verified by integer assignment. F "MPI_DISPLACEMENT_CURRENT" is verified by integer assignment. F "MPI_DISTRIBUTE_BLOCK" is verified by integer assignment. F "MPI_DISTRIBUTE_CYCLIC" is verified by integer assignment. F "MPI_DISTRIBUTE_DFLT_DARG" is verified by integer assignment. F "MPI_DISTRIBUTE_NONE" is verified by integer assignment. F "MPI_ERRCODES_IGNORE" is not verified. F "MPI_ERRHANDLER_NULL" is verified by integer assignment. F "MPI_ERRORS_ARE_FATAL" is verified by integer assignment. F "MPI_ERRORS_RETURN" is verified by integer assignment. F "MPI_F_STATUS_IGNORE" is verified by integer assignment. F "MPI_F_STATUSES_IGNORE" is verified by integer assignment. F "MPI_FILE_NULL" is verified by integer assignment. F "MPI_GRAPH" is verified by integer assignment. F "MPI_GROUP_NULL" is verified by integer assignment. F "MPI_IDENT" is verified by integer assignment. F "MPI_IN_PLACE" is not verified. F "MPI_INFO_NULL" is verified by integer assignment. F "MPI_KEYVAL_INVALID" is verified by integer assignment. F "MPI_LAND" is verified by integer assignment. F "MPI_LOCK_EXCLUSIVE" is verified by integer assignment. F "MPI_LOCK_SHARED" is verified by integer assignment. F "MPI_LOR" is verified by integer assignment. F "MPI_LXOR" is verified by integer assignment. F "MPI_MAX" is verified by integer assignment. F "MPI_MAXLOC" is verified by integer assignment. F "MPI_MIN" is verified by integer assignment. 
F "MPI_MINLOC" is verified by integer assignment. F "MPI_OP_NULL" is verified by integer assignment. F "MPI_PROC_NULL" is verified by integer assignment. F "MPI_PROD" is verified by integer assignment. F "MPI_REPLACE" is verified by integer assignment. F "MPI_REQUEST_NULL" is verified by integer assignment. F "MPI_ROOT" is verified by integer assignment. F "MPI_SEEK_CUR" is verified by integer assignment. F "MPI_SEEK_END" is verified by integer assignment. F "MPI_SEEK_SET" is verified by integer assignment. F "MPI_SIMILAR" is verified by integer assignment. F "MPI_STATUS_IGNORE" is not verified. F "MPI_STATUSES_IGNORE" is not verified. F "MPI_SUCCESS" is verified by integer assignment. F "MPI_SUM" is verified by integer assignment. F "MPI_UNDEFINED" is verified by integer assignment. F "MPI_UNEQUAL" is verified by integer assignment. Number of successful C constants: 75 of 76 Number of successful FORTRAN constants: 69 of 76 No errors.
Passed Compiletime constants test - process_compiletime_constants
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 0
Test Description:
The MPI-3.0 specifications require that some named constants be known at compile time. The report includes a record for each constant of this class in the form "X MPI_CONSTANT is [not] verified by METHOD" where X is either 'c' for the C compiler or 'F' for the FORTRAN 77 compiler. For a C language compile, the constant is used as a case label in a switch statement. For a FORTRAN language compile, the constant is assigned to a PARAMETER. The report summarizes with the number of constants for each compiler that were successfully verified.
c "MPI_MAX_PROCESSOR_NAME" is verified by switch label. c "MPI_MAX_LIBRARY_VERSION_STRING" is verified by switch label. c "MPI_MAX_ERROR_STRING" is verified by switch label. c "MPI_MAX_DATAREP_STRING" is verified by switch label. c "MPI_MAX_INFO_KEY" is verified by switch label. c "MPI_MAX_INFO_VAL" is verified by switch label. c "MPI_MAX_OBJECT_NAME" is verified by switch label. c "MPI_MAX_PORT_NAME" is verified by switch label. c "MPI_VERSION" is verified by switch label. c "MPI_SUBVERSION" is verified by switch label. c "MPI_MAX_LIBRARY_VERSION_STRING" is verified by switch label. F "MPI_ADDRESS_KIND" is verified by PARAMETER. F "MPI_ASYNC_PROTECTS_NONBLOCKING" is verified by PARAMETER. F "MPI_COUNT_KIND" is verified by PARAMETER. F "MPI_ERROR" is verified by PARAMETER. F "MPI_ERRORS_ARE_FATAL" is verified by PARAMETER. F "MPI_ERRORS_RETURN" is verified by PARAMETER. F "MPI_INTEGER_KIND" is verified by PARAMETER. F "MPI_OFFSET_KIND" is verified by PARAMETER. F "MPI_SOURCE" is verified by PARAMETER. F "MPI_STATUS_SIZE" is verified by PARAMETER. F "MPI_SUBARRAYS_SUPPORTED" is verified by PARAMETER. F "MPI_TAG" is verified by PARAMETER. F "MPI_MAX_PROCESSOR_NAME" is verified by PARAMETER. F "MPI_MAX_LIBRARY_VERSION_STRING" is verified by PARAMETER. F "MPI_MAX_ERROR_STRING" is verified by PARAMETER. F "MPI_MAX_DATAREP_STRING" is verified by PARAMETER. F "MPI_MAX_INFO_KEY" is verified by PARAMETER. F "MPI_MAX_INFO_VAL" is verified by PARAMETER. F "MPI_MAX_OBJECT_NAME" is verified by PARAMETER. F "MPI_MAX_PORT_NAME" is verified by PARAMETER. F "MPI_VERSION" is verified by PARAMETER. F "MPI_SUBVERSION" is verified by PARAMETER. F "MPI_MAX_LIBRARY_VERSION_STRING" is verified by PARAMETER. Number of successful C constants: 11 of 11 Number of successful FORTRAN constants: 23 out of 23 No errors.
Passed Datatypes test - process_datatypes
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 0
Test Description:
This test was added to the UTK suite as a replacement for the "utk/datatypes" test for constants in MPI-1.0 and higher. The test constructs small separate main programs in either C, FORTRAN77, or C++ for each datatype. If a test fails to compile, the datatype is reported as "not verified: (compilation)". If the test executes successfully, the report includes the size of the datatype (in bytes) and includes the words "is verified."
c "MPI_2DOUBLE_PRECISION" Size = 16 is verified. c "MPI_2INT" Size = 8 is verified. c "MPI_2INTEGER" Size = 8 is verified. c "MPI_2REAL" Size = 8 is verified. c "MPI_AINT" Size = 8 is verified. c "MPI_BYTE" Size = 1 is verified. c "MPI_C_BOOL" Size = 1 is verified. c "MPI_C_COMPLEX" Size = 8 is verified. c "MPI_C_DOUBLE_COMPLEX" Size = 16 is verified. c "MPI_C_FLOAT_COMPLEX" Size = 8 is verified. c "MPI_C_LONG_DOUBLE_COMPLEX" Size = 32 is verified. c "MPI_CHAR" Size = 1 is verified. c "MPI_CHARACTER" Size = 1 is verified. c "MPI_COMPLEX" Size = 8 is verified. c "MPI_COMPLEX2" is not verified: (compilation). c "MPI_COMPLEX4" is not verified: (compilation). c "MPI_COMPLEX8" Size = 8 is verified. c "MPI_COMPLEX16" Size = 16 is verified. c "MPI_COMPLEX32" Size = 32 is verified. c "MPI_DOUBLE" Size = 8 is verified. c "MPI_DOUBLE_INT" Size = 12 is verified. c "MPI_DOUBLE_COMPLEX" Size = 16 is verified. c "MPI_DOUBLE_PRECISION" Size = 8 is verified. c "MPI_FLOAT" Size = 4 is verified. c "MPI_FLOAT_INT" Size = 8 is verified. c "MPI_INT" Size = 4 is verified. c "MPI_INT8_T" Size = 1 is verified. c "MPI_INT16_T" Size = 2 is verified. c "MPI_INT32_T" Size = 4 is verified. c "MPI_INT64_T" Size = 8 is verified. c "MPI_INTEGER" Size = 4 is verified. c "MPI_INTEGER1" Size = 1 is verified. c "MPI_INTEGER2" Size = 2 is verified. c "MPI_INTEGER4" Size = 4 is verified. c "MPI_INTEGER8" Size = 8 is verified. c "MPI_INTEGER16" Size = 16 is verified. c "MPI_LB" Size = 0 is verified. c "MPI_LOGICAL" Size = 4 is verified. c "MPI_LONG" Size = 8 is verified. c "MPI_LONG_INT" Size = 12 is verified. c "MPI_LONG_DOUBLE" Size = 16 is verified. c "MPI_LONG_DOUBLE_INT" Size = 20 is verified. c "MPI_LONG_LONG" Size = 8 is verified. c "MPI_LONG_LONG_INT" Size = 8 is verified. c "MPI_OFFSET" Size = 8 is verified. c "MPI_PACKED" Size = 1 is verified. c "MPI_REAL" Size = 4 is verified. c "MPI_REAL2" is not verified: (compilation). c "MPI_REAL4" Size = 4 is verified. c "MPI_REAL8" Size = 8 is verified. c "MPI_REAL16" Size = 16 is verified. c "MPI_SHORT" Size = 2 is verified. c "MPI_SHORT_INT" Size = 6 is verified. c "MPI_SIGNED_CHAR" Size = 1 is verified. c "MPI_UB" Size = 0 is verified. c "MPI_UNSIGNED_CHAR" Size = 1 is verified. c "MPI_UNSIGNED_SHORT" Size = 2 is verified. c "MPI_UNSIGNED" Size = 4 is verified. c "MPI_UNSIGNED_LONG" Size = 8 is verified. c "MPI_WCHAR" Size = 2 is verified. c "MPI_LONG_LONG_INT" Size = 8 is verified. c "MPI_FLOAT_INT" Size = 8 is verified. c "MPI_DOUBLE_INT" Size = 12 is verified. c "MPI_LONG_INT" Size = 12 is verified. c "MPI_LONG_DOUBLE_INT" Size = 20 is verified. c "MPI_2INT" Size = 8 is verified. c "MPI_SHORT_INT" Size = 6 is verified. c "MPI_LONG_DOUBLE_INT" Size = 20 is verified. c "MPI_2REAL" Size = 8 is verified. c "MPI_2DOUBLE_PRECISION" Size = 16 is verified. c "MPI_2INTEGER" Size = 8 is verified. C "MPI_CXX_BOOL" Size = 1 is verified. C "MPI_CXX_FLOAT_COMPLEX" Size = 8 is verified. C "MPI_CXX_DOUBLE_COMPLEX" Size = 16 is verified. C "MPI_CXX_LONG_DOUBLE_COMPLEX" Size = 32 is verified. f "MPI_BYTE" Size =1 is verified. f "MPI_CHARACTER" Size =1 is verified. f "MPI_COMPLEX" Size =8 is verified. f "MPI_DOUBLE_COMPLEX" Size =16 is verified. f "MPI_DOUBLE_PRECISION" Size =8 is verified. f "MPI_INTEGER" Size =4 is verified. f "MPI_INTEGER1" Size =1 is verified. f "MPI_INTEGER2" Size =2 is verified. f "MPI_INTEGER4" Size =4 is verified. f "MPI_LOGICAL" Size =4 is verified. f "MPI_REAL" Size =4 is verified. f "MPI_REAL2" Size =0 is verified. f "MPI_REAL4" Size =4 is verified. 
f "MPI_REAL8" Size =8 is verified. f "MPI_PACKED" Size =1 is verified. f "MPI_2REAL" Size =8 is verified. f "MPI_2DOUBLE_PRECISION" Size =16 is verified. f "MPI_2INTEGER" Size =8 is verified. No errors.
Group Communicator - Score: 100% Passed
This group features tests of MPI communicator group calls.
Passed Win_get_group test - getgroup
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 4
Test Description:
This is a simple test of MPI_Win_get_group().
No errors
Passed MPI_Group_incl() test 1 - groupcreate
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 4
Test Description:
This is a simple test of creating a group array.
No errors
Passed MPI_Group_incl() test 2 - groupnullincl
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 4
Test Description:
This is a test to determine if an empty group can be created.
No errors
Passed MPI_Group_translate_ranks test - grouptest2
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 4
Test Description:
This is a test of MPI_Group_translate_ranks().
No errors
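For reference, MPI_Group_translate_ranks() maps ranks in one group to the corresponding ranks in another, returning MPI_UNDEFINED where a process has no counterpart. A minimal sketch (group contents are illustrative; run with at least 3 processes):

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int rank, members[2] = {0, 2}, query[2] = {0, 1}, out[2];
        MPI_Group world, sub;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_group(MPI_COMM_WORLD, &world);

        /* Subgroup holding world ranks 0 and 2. */
        MPI_Group_incl(world, 2, members, &sub);

        /* Where do sub ranks 0 and 1 live in MPI_COMM_WORLD? -> 0 and 2. */
        MPI_Group_translate_ranks(sub, 2, query, world, out);
        if (rank == 0)
            printf("sub ranks {0,1} are world ranks {%d,%d}\n", out[0], out[1]);

        MPI_Group_free(&sub);
        MPI_Group_free(&world);
        MPI_Finalize();
        return 0;
    }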
Passed MPI_Group_excl() test - grouptest
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 8
Test Description:
This is a test of MPI_Group_excl().
No errors
Passed MPI_Group irregular test - gtranks
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 8
Test Description:
This test compares small groups against larger groups, and uses groups with irregular members (to bypass optimizations in group_translate_ranks for simple groups).
No errors
Passed MPI_Group_Translate_ranks() test - gtranksperf
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 20
Test Description:
Measure and compare the relative performance of MPI_Group_translate_ranks with small and large group2 sizes but a constant number of ranks. This serves as a performance sanity check for the Scalasca use case where we translate to MPI_COMM_WORLD ranks. The performance should only depend on the number of ranks passed, not the size of either group (especially group2). This test is probably only meaningful for large-ish process counts.
No errors
Parallel Input/Output - Score: 100% Passed
This group features tests that involve MPI parallel input/output operations.
Passed I/O modes test - io_modes
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 1
Test Description:
Checks if all MPI predefined I/O modes are supported. If test passes, "No errors" is reported. Any modes not supported are indicated individually as not being supported.
MPI_MODE_APPEND:128 MPI_MODE_CREATE:1 MPI_MODE_DELETE_ON_CLOSE:16 MPI_MODE_EXCL:64 MPI_MODE_RDONLY:2 MPI_MODE_RDWR:8 MPI_MODE_SEQUENTIAL:256 MPI_MODE_UNIQUE_OPEN:32 MPI_MODE_WRONLY:4 No errors
Passed I/O verification test 1 - io_test
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 4
Test Description:
Reports if MPI I/O is supported. If the MPI-I/O routines terminate normally and provide correct results, MPI-I/O reports "No errors"; otherwise error messages are generated.
rank:0/4 MPI-I/O is supported. No errors rank:1/4 MPI-I/O is supported. rank:2/4 MPI-I/O is supported. rank:3/4 MPI-I/O is supported.
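The round trip these I/O tests exercise follows the basic open/write/read/close sequence below (a sketch, not the test source; the file name is illustrative):

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int rank, val, readback = -1;
        MPI_File fh;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        val = rank;

        /* Each rank writes its rank at its own offset, then reads it back. */
        MPI_File_open(MPI_COMM_WORLD, "iotest.dat",
                      MPI_MODE_CREATE | MPI_MODE_RDWR, MPI_INFO_NULL, &fh);
        MPI_File_write_at(fh, (MPI_Offset)rank * sizeof(int), &val, 1,
                          MPI_INT, MPI_STATUS_IGNORE);
        MPI_File_read_at(fh, (MPI_Offset)rank * sizeof(int), &readback, 1,
                         MPI_INT, MPI_STATUS_IGNORE);
        MPI_File_close(&fh);

        printf("rank %d wrote %d, read back %d\n", rank, val, readback);
        MPI_Finalize();
        return 0;
    }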
Passed I/O verification test 2 - io_verify
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 1
Test Description:
This test is used to verify that the file created by io_test.c holds the correct values. If the test fails, the problem is reported. If all tests pass successfully, it is reported that MPI-I/O is supported.
MPI-I/O: MPI_File_open() is verified. MPI-I/O: MPI_File_read() is verified. MPI-I/O: MPI_FILE_close() is verified. No errors
Passed Asynchronous IO test - async_any
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 4
Test Description:
Test asynchronous I/O with multiple completion. Each process writes to separate files and reads them back.
No errors
Passed Asynchronous IO test - async
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 4
Test Description:
Test contig asynchronous I/O. Each process writes to separate files and reads them back. The file name is taken as a command-line argument, and the process rank is appended to it.
No errors
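Nonblocking MPI-I/O calls such as MPI_File_iwrite() return an MPI_Request that must later be completed with MPI_Wait() or a variant, allowing computation to overlap the transfer. A minimal sketch (file name and sizes are illustrative):

    #include <mpi.h>
    #include <stdio.h>

    #define N 1024

    int main(int argc, char **argv)
    {
        int rank, i, buf[N];
        char fname[64];
        MPI_File fh;
        MPI_Request req;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        for (i = 0; i < N; i++) buf[i] = rank + i;

        /* One file per process, as in the test: rank appended to the name. */
        snprintf(fname, sizeof(fname), "async.%d", rank);
        MPI_File_open(MPI_COMM_SELF, fname,
                      MPI_MODE_CREATE | MPI_MODE_RDWR, MPI_INFO_NULL, &fh);

        MPI_File_iwrite(fh, buf, N, MPI_INT, &req);   /* returns immediately */
        /* ... computation could overlap the write here ... */
        MPI_Wait(&req, MPI_STATUS_IGNORE);            /* write is now complete */

        MPI_File_close(&fh);
        MPI_Finalize();
        return 0;
    }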
Passed MPI_File_get_type_extent test - getextent
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 2
Test Description:
Test file_get_extent.
No errors
Passed Non-blocking I/O test - i_noncontig
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 2
Test Description:
Tests noncontiguous reads/writes using non-blocking I/O.
No errors
Passed MPI_File_write_ordered test 1 - rdwrord
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 4
Test Description:
Test reading and writing ordered output.
No errors
Passed MPI_File_write_ordered test 2 - rdwrzero
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 4
Test Description:
Test reading and writing data with zero length. The test then looks for errors in the MPI IO routines and reports any that were found, otherwise "No errors" is reported.
No errors
Passed MPI_Type_create_resized test - resized2
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 1
Test Description:
Test file views with MPI_Type_create_resized.
No errors
Passed MPI_Type_create_resized test - resized
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 1
Test Description:
Test file views with MPI_Type_create_resized.
No errors
Passed MPI_Info_set() test - setinfo
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 4
Test Description:
Test file_set_view. Access style is explicitly described as modifiable. Values include read_once, read_mostly, write_once, write_mostly, random.
No errors
Passed MPI_File_set_view() test - setviewcur
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 4
Test Description:
Test set_view with DISPLACEMENT_CURRENT. This test reads a header then sets the view to every "size" int, using set view and current displacement. The file is first written using a combination of collective and ordered writes.
No errors
Passed MPI FILE I/O test - userioerr
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 1
Test Description:
This test exercises MPI I/O and MPI error handling techniques.
No errors
Datatypes - Score: 98% Passed
This group features tests that involve named MPI and user defined datatypes.
Passed Datatypes test - process_datatypes
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 0
Test Description:
This test was added to the UTK suite as a replacement for the "utk/datatypes" test for constants in MPI-1.0 and higher. The test constructs small separate main programs in either C, FORTRAN77, or C++ for each datatype. If a test fails to compile, the datatype is reported as "not verified: (compilation)". If the test executes successfully, the report includes the size of the datatype (in bytes) and includes the words "is verified."
c "MPI_2DOUBLE_PRECISION" Size = 16 is verified. c "MPI_2INT" Size = 8 is verified. c "MPI_2INTEGER" Size = 8 is verified. c "MPI_2REAL" Size = 8 is verified. c "MPI_AINT" Size = 8 is verified. c "MPI_BYTE" Size = 1 is verified. c "MPI_C_BOOL" Size = 1 is verified. c "MPI_C_COMPLEX" Size = 8 is verified. c "MPI_C_DOUBLE_COMPLEX" Size = 16 is verified. c "MPI_C_FLOAT_COMPLEX" Size = 8 is verified. c "MPI_C_LONG_DOUBLE_COMPLEX" Size = 32 is verified. c "MPI_CHAR" Size = 1 is verified. c "MPI_CHARACTER" Size = 1 is verified. c "MPI_COMPLEX" Size = 8 is verified. c "MPI_COMPLEX2" is not verified: (compilation). c "MPI_COMPLEX4" is not verified: (compilation). c "MPI_COMPLEX8" Size = 8 is verified. c "MPI_COMPLEX16" Size = 16 is verified. c "MPI_COMPLEX32" Size = 32 is verified. c "MPI_DOUBLE" Size = 8 is verified. c "MPI_DOUBLE_INT" Size = 12 is verified. c "MPI_DOUBLE_COMPLEX" Size = 16 is verified. c "MPI_DOUBLE_PRECISION" Size = 8 is verified. c "MPI_FLOAT" Size = 4 is verified. c "MPI_FLOAT_INT" Size = 8 is verified. c "MPI_INT" Size = 4 is verified. c "MPI_INT8_T" Size = 1 is verified. c "MPI_INT16_T" Size = 2 is verified. c "MPI_INT32_T" Size = 4 is verified. c "MPI_INT64_T" Size = 8 is verified. c "MPI_INTEGER" Size = 4 is verified. c "MPI_INTEGER1" Size = 1 is verified. c "MPI_INTEGER2" Size = 2 is verified. c "MPI_INTEGER4" Size = 4 is verified. c "MPI_INTEGER8" Size = 8 is verified. c "MPI_INTEGER16" Size = 16 is verified. c "MPI_LB" Size = 0 is verified. c "MPI_LOGICAL" Size = 4 is verified. c "MPI_LONG" Size = 8 is verified. c "MPI_LONG_INT" Size = 12 is verified. c "MPI_LONG_DOUBLE" Size = 16 is verified. c "MPI_LONG_DOUBLE_INT" Size = 20 is verified. c "MPI_LONG_LONG" Size = 8 is verified. c "MPI_LONG_LONG_INT" Size = 8 is verified. c "MPI_OFFSET" Size = 8 is verified. c "MPI_PACKED" Size = 1 is verified. c "MPI_REAL" Size = 4 is verified. c "MPI_REAL2" is not verified: (compilation). c "MPI_REAL4" Size = 4 is verified. c "MPI_REAL8" Size = 8 is verified. c "MPI_REAL16" Size = 16 is verified. c "MPI_SHORT" Size = 2 is verified. c "MPI_SHORT_INT" Size = 6 is verified. c "MPI_SIGNED_CHAR" Size = 1 is verified. c "MPI_UB" Size = 0 is verified. c "MPI_UNSIGNED_CHAR" Size = 1 is verified. c "MPI_UNSIGNED_SHORT" Size = 2 is verified. c "MPI_UNSIGNED" Size = 4 is verified. c "MPI_UNSIGNED_LONG" Size = 8 is verified. c "MPI_WCHAR" Size = 2 is verified. c "MPI_LONG_LONG_INT" Size = 8 is verified. c "MPI_FLOAT_INT" Size = 8 is verified. c "MPI_DOUBLE_INT" Size = 12 is verified. c "MPI_LONG_INT" Size = 12 is verified. c "MPI_LONG_DOUBLE_INT" Size = 20 is verified. c "MPI_2INT" Size = 8 is verified. c "MPI_SHORT_INT" Size = 6 is verified. c "MPI_LONG_DOUBLE_INT" Size = 20 is verified. c "MPI_2REAL" Size = 8 is verified. c "MPI_2DOUBLE_PRECISION" Size = 16 is verified. c "MPI_2INTEGER" Size = 8 is verified. C "MPI_CXX_BOOL" Size = 1 is verified. C "MPI_CXX_FLOAT_COMPLEX" Size = 8 is verified. C "MPI_CXX_DOUBLE_COMPLEX" Size = 16 is verified. C "MPI_CXX_LONG_DOUBLE_COMPLEX" Size = 32 is verified. f "MPI_BYTE" Size =1 is verified. f "MPI_CHARACTER" Size =1 is verified. f "MPI_COMPLEX" Size =8 is verified. f "MPI_DOUBLE_COMPLEX" Size =16 is verified. f "MPI_DOUBLE_PRECISION" Size =8 is verified. f "MPI_INTEGER" Size =4 is verified. f "MPI_INTEGER1" Size =1 is verified. f "MPI_INTEGER2" Size =2 is verified. f "MPI_INTEGER4" Size =4 is verified. f "MPI_LOGICAL" Size =4 is verified. f "MPI_REAL" Size =4 is verified. f "MPI_REAL2" Size =0 is verified. f "MPI_REAL4" Size =4 is verified. 
f "MPI_REAL8" Size =8 is verified. f "MPI_PACKED" Size =1 is verified. f "MPI_2REAL" Size =8 is verified. f "MPI_2DOUBLE_PRECISION" Size =16 is verified. f "MPI_2INTEGER" Size =8 is verified. No errors.
Passed Blockindexed contiguous test 1 - blockindexed-misc
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 1
Test Description:
This test converts a block indexed datatype to a contiguous datatype.
No errors
Passed Blockindexed contiguous test 2 - blockindexed-zero-count
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 1
Test Description:
This tests the behaviour with a zero-count blockindexed datatype.
No errors
Passed Type_get_envelope test - contents
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 1
Test Description:
This tests the functionality of MPI_Type_get_envelope() and MPI_Type_get_contents().
No errors
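MPI_Type_get_envelope() reports how a datatype was constructed (the combiner plus the argument counts), and MPI_Type_get_contents() then retrieves the construction arguments themselves. A minimal sketch for a vector type (values are illustrative):

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int ni, na, nd, combiner;
        MPI_Datatype vec;

        MPI_Init(&argc, &argv);
        MPI_Type_vector(4, 2, 8, MPI_INT, &vec);  /* count=4, blocklen=2, stride=8 */

        MPI_Type_get_envelope(vec, &ni, &na, &nd, &combiner);
        if (combiner == MPI_COMBINER_VECTOR) {
            int ints[3];            /* ni=3 for a vector: count, blocklen, stride */
            MPI_Aint adds[1];       /* na=0 for a vector; buffer provided anyway */
            MPI_Datatype types[1];  /* nd=1: the base type, MPI_INT */
            MPI_Type_get_contents(vec, ni, na, nd, ints, adds, types);
            printf("vector: count=%d blocklen=%d stride=%d\n",
                   ints[0], ints[1], ints[2]);
        }

        MPI_Type_free(&vec);
        MPI_Finalize();
        return 0;
    }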
Passed Simple datatype test 1 - contigstruct
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 1
Test Description:
This test checks to see if we can create a simple datatype made from many contiguous copies of a single struct. The struct is built with monotone decreasing displacements to avoid any struct-to-contiguous optimizations.
No errors
Passed Simple datatype test 2 - contig-zero-count
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 1
Test Description:
Tests behaviour with a zero count contig.
No errors
Passed C++ datatype test - cxx-types
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 1
Test Description:
This test checks for the existence of four new C++ named predefined datatypes that should be accessible from C and Fortran.
No errors
Passed Type_create_darray test - darray-cyclic
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 12
Test Description:
Cyclic check of a custom struct darray.
No errors
Passed Type_create_darray test - darray-pack_72
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 32
Test Description:
The default behavior of the test is to indicate the cause of any errors.
No errors
Passed Type_create_darray packing test - darray-pack
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 9
Test Description:
Performs a sequence of tests building darrays with single-element blocks, running through all the various positions that the element might come from. Returns the number of errors encountered.
No errors
Passed Type_struct() alignment test - dataalign
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 2
Test Description:
This routine checks the alignment of a custom datatype.
No errors
Passed Get_address test - gaddress
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 1
Test Description:
This routine shows how math can be used on MPI addresses.
No errors
Passed Get_elements test - get-elements
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 1
Test Description:
We use a contig of a struct in order to satisfy two properties: (A) a type that contains more than one element type (the struct portion) (B) a type that has an odd number of ints in its "type contents" (1 in this case). This triggers a specific bug in some versions of MPICH.
No errors
Passed Get_elements Pair test - get-elements-pairtype
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 1
Test Description:
Send a {double, int, double} tuple and receive it as a pair of MPI_DOUBLE_INTs. This should (a) be valid, and (b) result in an element count of 3.
No errors
Passed Get_elements test - getpartelm
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 2
Test Description:
Receive partial datatypes and check that MPI_Get_elements() returns the correct count.
No errors
Passed Datatype structs test - get-struct
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 2
Test Description:
This test was motivated by the failure of an example program for RMA involving simple operations on a struct that included a struct. The observed failure was a SEGV in the MPI_Get.
No errors
Passed Type_create_hindexed_block test 1 - hindexed_block
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 1
Test Description:
Tests behavior with a hindexed_block that can be converted to a contig easily. This is specifically for coverage. Returns the number of errors encountered.
No errors
Passed Type_create_hindexed_block test 2 - hindexed_block_contents
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 1
Test Description:
This test is a simple check of MPI_Type_create_hindexed_block() using MPI_Type_get_envelope() and MPI_Type_get_contents().
No errors
Passed Type_hindexed test - hindexed-zeros
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 1
Test Description:
Tests an hindexed type with all zero-length blocks.
No errors
Passed Type_hvector_blklen test - hvecblklen
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 1
Test Description:
Inspired by the Intel MPI_Type_hvector_blklen test. Added to include a test of a dataloop optimization that failed.
No errors
Passed Type_indexed test - indexed-misc
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 1
Test Description:
Tests behavior with an indexed array that can be compacted but should continue to be stored as an indexed type. Specifically for coverage. Returns the number of errors encountered.
No errors
Passed Large count test - large-count
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 1
Test Description:
This test checks for large count functionality ("MPI_Count") mandated by MPI-3, as well as behavior of corresponding pre-MPI-3 interfaces that have better defined behavior when an "int" quantity would overflow.
No errors
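The MPI_Count variants (MPI_Type_size_x() and friends) report quantities that can exceed the range of int. A minimal sketch (the type is only described, not allocated):

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        MPI_Datatype big;
        MPI_Count csize;

        MPI_Init(&argc, &argv);

        /* 2^30 MPI_INTs: the byte count (4 GiB) overflows a 32-bit int. */
        MPI_Type_contiguous(1 << 30, MPI_INT, &big);
        MPI_Type_commit(&big);

        MPI_Type_size_x(big, &csize);   /* MPI_Count version cannot overflow */
        printf("type size = %lld bytes\n", (long long)csize);

        MPI_Type_free(&big);
        MPI_Finalize();
        return 0;
    }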
Passed Type_contiguous test - large_type
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 1
Test Description:
This test checks that MPI can handle large datatypes.
No errors
Passed Contiguous bounds test - lbub
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 1
Test Description:
The default behavior of the test is to indicate the cause of any errors.
No errors
Passed Pack test - localpack
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 1
Test Description:
This test uses MPI_Pack() on a communication buffer, then calls MPI_Unpack() to confirm that the unpacked data matches the original. This routine performs all work within a single process.
No errors
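The MPI_Pack()/MPI_Unpack() round trip used by this test follows the pattern below (a sketch; buffer sizes and values are illustrative):

    #include <mpi.h>
    #include <stdio.h>
    #include <stdlib.h>

    int main(int argc, char **argv)
    {
        int in[4] = {1, 2, 3, 4}, out[4] = {0}, bufsize, pos = 0;
        char *buf;

        MPI_Init(&argc, &argv);

        /* Ask MPI how much packed space 4 ints need, then round-trip them. */
        MPI_Pack_size(4, MPI_INT, MPI_COMM_WORLD, &bufsize);
        buf = malloc(bufsize);

        MPI_Pack(in, 4, MPI_INT, buf, bufsize, &pos, MPI_COMM_WORLD);
        pos = 0;                      /* rewind the position cursor */
        MPI_Unpack(buf, bufsize, &pos, out, 4, MPI_INT, MPI_COMM_WORLD);

        printf("out = %d %d %d %d (expected 1 2 3 4)\n",
               out[0], out[1], out[2], out[3]);
        free(buf);
        MPI_Finalize();
        return 0;
    }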
Passed LONG_DOUBLE size test - longdouble
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 1
Test Description:
This test ensures that simplistic build logic/configuration did not result in a defined, yet incorrectly sized, MPI predefined datatype for long double and long double Complex. Based on a test suggested by Jim Hoekstra @ Iowa State University. The test also considers other datatypes that are optional in the MPI-3 specification.
No errors
Passed Type_indexed test - lots-of-types
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 1
Test Description:
Author: Rob Ross. Date: November 2, 2005.
This test allocates 1024 indexed datatypes with 1024 distinct blocks each. It is possible that a low-memory machine will run out of memory running this test, which requires approximately 25 MB of memory at this time.
No errors
Passed Datatypes test 1 - pairtype-size-extent
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 1
Test Description:
Check for optional datatypes such as LONG_DOUBLE_INT.
No errors
Passed Datatypes test 2 - sendrecvt2
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 2
Test Description:
This program is derived from one in the MPICH-1 test suite. It tests a wide variety of basic and derived datatypes.
No errors
Passed Datatypes test 3 - sendrecvt4
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 2
Test Description:
This program is derived from one in the MPICH-1 test suite. This test sends and receives EVERYTHING from MPI_BOTTOM, by putting the data into a structure.
No errors
Passed Type_commit test - simple-commit
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 1
Test Description:
Verifies that MPI_Type_commit() succeeds.
No errors
Passed Pack test - simple-pack
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 1
Test Description:
Tests functionality of MPI_Type_get_envelope() and MPI_Type_get_contents() on an MPI_FLOAT. Returns the number of errors encountered.
No errors
Passed Pack_external_size test - simple-pack-external
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 1
Test Description:
Tests functionality of MPI_Type_get_envelope() and MPI_Type_get_contents() on an MPI_FLOAT. Returns the number of errors encountered.
No errors
Passed Type_create_resized test - simple-resized
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 1
Test Description:
Tests behavior with resizing of a simple derived type.
No errors
Passed Type_get_extent test - simple-size-extent
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 1
Test Description:
This test confirms that MPI_Type_get_extent() works properly.
No errors
Passed Pack, Unpack test - slice-pack
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 1
Test Description:
This test confirms that a sliced array packs and unpacks properly.
No errors
Passed Type_hvector test - struct-derived-zeros
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 1
Test Description:
Based on code from Jeff Parker at IBM.
No errors
Passed Type_struct test - struct-empty-el
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 1
Test Description:
This test creates an MPI_Type_struct() datatype, assigns data and sends the structure to a second process. The second process receives the structure and confirms that the information contained in the structure agrees with the original data.
No errors
Passed MPI_Type_struct test 1 - struct-ezhov
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 1
Test Description:
This is a very simple test in which an MPI_Type_struct() datatype is created and transferred to a second processor, where the size of the structure is confirmed.
No errors
Passed MPI_Type_struct test 2 - struct-no-real-types
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 1
Test Description:
Tests behavior with an empty struct type.
No errors
Passed Pack, Unpack test 1 - structpack2
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 1
Test Description:
This test confirms that packed structures unpack properly.
No errors
Passed Pack,Unpack test 2 - struct-pack
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 1
Test Description:
This test confirms that packed structures unpack properly.
No errors
Failed Derived HDF5 test - struct-verydeep
Build: Passed
Execution: Failed
Exit Status: Failed with signal 9
MPI Processes: 1
Test Description:
This test simulates a HDF5 structure type encountered by the HDF5 library. The test is run using 1 processor (submitted by Rob Latham, robl@mcs.anl.gov).
MPT ERROR: The program attempted to construct a derived datatype with depth 16, but the maximum allowed depth is 14. You can increase the allowed depth via the MPI_TYPE_DEPTH environmet variable. MPT ERROR: rank:0, function:MPI_TYPE_VECTOR, Invalid datatype argument MPT ERROR: Rank 0(g:0) is aborting with error code 0. Process ID: 180404, Host: r2i7n16, Program: /p/home/withheld/BCT_MPI/mpt_2.20/mpitests/datatypes/struct-verydeep MPT Version: HPE MPT 2.20 08/30/19 04:33:45 MPT: --------stack traceback------- MPT: Attaching to program: /proc/180404/exe, process 180404 MPT: (no debugging symbols found)...done. MPT: [New LWP 180405] MPT: [Thread debugging using libthread_db enabled] MPT: Using host libthread_db library "/lib64/libthread_db.so.1". MPT: (no debugging symbols found)...done. MPT: (no debugging symbols found)...done. MPT: (no debugging symbols found)...done. MPT: (no debugging symbols found)...done. MPT: (no debugging symbols found)...done. MPT: (no debugging symbols found)...done. MPT: (no debugging symbols found)...done. MPT: (no debugging symbols found)...done. MPT: (no debugging symbols found)...done. MPT: (no debugging symbols found)...done. MPT: (no debugging symbols found)...done. MPT: (no debugging symbols found)...done. MPT: (no debugging symbols found)...done. MPT: (no debugging symbols found)...done. MPT: (no debugging symbols found)...done. MPT: (no debugging symbols found)...done. MPT: (no debugging symbols found)...done. MPT: (no debugging symbols found)...done. MPT: (no debugging symbols found)...done. MPT: (no debugging symbols found)...done. MPT: (no debugging symbols found)...done. MPT: (no debugging symbols found)...done. MPT: 0x00002aaaaaee41d9 in waitpid () from /lib64/libpthread.so.0 MPT: Missing separate debuginfos, use: debuginfo-install glibc-2.17-307.el7.1.x86_64 libbitmask-2.0-sgi720r52.rhel76.x86_64 libcpuset-1.0-sgi720r102.rhel76.x86_64 libgcc-4.8.5-39.el7.x86_64 libibverbs-22.4-2.el7_8.x86_64 libnl3-3.2.28-4.el7.x86_64 libpsm2-11.2.80-1.x86_64 numactl-libs-2.0.12-5.el7.x86_64 numatools-2.0-sgi720r149.rhel76.x86_64 MPT: (gdb) #0 0x00002aaaaaee41d9 in waitpid () from /lib64/libpthread.so.0 MPT: #1 0x00002aaaab61d806 in mpi_sgi_system ( MPT: #2 MPI_SGI_stacktraceback ( MPT: header=header@entry=0x7fffffffb380 "MPT ERROR: Rank 0(g:0) is aborting with error code 0.\n\tProcess ID: 180404, Host: r2i7n16, Program: /p/home/withheld/BCT_MPI/mpt_2.20/mpitests/datatypes/struct-verydeep\n\tMPT Version: HPE MPT 2.20 08/30"...) at sig.c:340 MPT: #3 0x00002aaaab565fc9 in print_traceback (ecode=ecode@entry=0) at abort.c:246 MPT: #4 0x00002aaaab566476 in MPI_SGI_abort () at abort.c:122 MPT: #5 0x00002aaaab5ac560 in errors_are_fatal (comm=<optimized out>, MPT: code=<optimized out>) at errhandler.c:256 MPT: #6 0x00002aaaab5ac65c in MPI_SGI_error (comm=<optimized out>, comm@entry=1, MPT: code=code@entry=3) at errhandler.c:82 MPT: #7 0x00002aaaab628e5d in MPI_SGI_type_check_depth ( MPT: newtype=newtype@entry=0x7fffffffb9cc) at type_depth.c:55 MPT: #8 0x00002aaaab62d67d in PMPI_Type_vector (count=1, blocklen=5, stride=1, MPT: oldtype=<optimized out>, newtype=0x7fffffffb9cc) at type_vector.c:37 MPT: #9 0x0000000000402791 in makeHDF5type0 () MPT: #10 0x00000000004022bd in main () MPT: (gdb) A debugging session is active. MPT: MPT: Inferior 1 [process 180404] will be detached. MPT: MPT: Quit anyway? 
(y or n) [answered Y; input not from terminal] MPT: Detaching from program: /proc/180404/exe, process 180404 MPT: [Inferior 1 (process 180404) detached] MPT: -----stack traceback ends----- MPT ERROR: MPI_COMM_WORLD rank 0 has terminated without calling MPI_Finalize() aborting job
Passed MPI datatype test - struct-zero-count
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 1
Test Description:
Tests behavior with a zero-count struct of builtins.
No errors
Passed Type_create_subarray test 1 - subarray
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 2
Test Description:
This test creates a subarray and confirms its contents.
No errors
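MPI_Type_create_subarray() describes a rectangular region of a larger array in a chosen storage order. The sketch below (illustrative values, not the test source) selects the interior 2x2 block of a 4x4 C-order array and packs it to show which elements the type covers:

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int full[4][4], block[4], i, j, pos = 0;
        int sizes[2] = {4, 4}, subsizes[2] = {2, 2}, starts[2] = {1, 1};
        MPI_Datatype sub;

        MPI_Init(&argc, &argv);
        for (i = 0; i < 4; i++)
            for (j = 0; j < 4; j++)
                full[i][j] = 10 * i + j;

        /* Interior 2x2 block starting at row 1, column 1, row-major order. */
        MPI_Type_create_subarray(2, sizes, subsizes, starts,
                                 MPI_ORDER_C, MPI_INT, &sub);
        MPI_Type_commit(&sub);

        /* Pack the block out of the full array to see what it selects. */
        MPI_Pack(full, 1, sub, block, sizeof(block), &pos, MPI_COMM_WORLD);
        printf("block = %d %d %d %d (expected 11 12 21 22)\n",
               block[0], block[1], block[2], block[3]);

        MPI_Type_free(&sub);
        MPI_Finalize();
        return 0;
    }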
Passed Type_create_subarray test 2 - subarray-pack
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 1
Test Description:
This test confirms that a packed sub-array can be properly unpacked.
No errors
Passed Datatype reference count test - tfree
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 2
Test Description:
Test to check if freed datatypes have reference count semantics. The idea here is to create a simple but non-contiguous datatype, perform an irecv with it, free it, and then create many new datatypes. If the datatype was freed and the space was reused, this test may detect an error.
No errors
Passed Datatype match test - tmatchsize
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 1
Test Description:
Test of type_match_size. Check the most likely cases. Note that it is an error to free the type returned by MPI_Type_match_size. Also note that it is an error to request a size not supported by the compiler, so Type_match_size should generate an error in that case.
No errors
Passed Pack,Unpack test - transpose-pack
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 1
Test Description:
This test confirms that an MPI packed matrix can be unpacked correctly by the MPI infrastructure.
No errors
Passed Type_create_resized() test 1 - tresized2
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 2
Test Description:
Test of MPI datatype resized with non-zero lower bound.
No errors
Passed Type_create_resized() test 2 - tresized
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 2
Test Description:
Test of MPI datatype resized with 0 lower bound.
No errors
Passed Type_commit test - typecommit
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 1
Test Description:
This test builds datatypes using various components and confirms that MPI_Type_commit() succeeded.
No errors
Passed Type_free test - typefree
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 1
Test Description:
This test is used to confirm that memory is properly recovered from freed datatypes. The test may be run with valgrind or similar tools, or it may be run with MPI implementation specific options. For this test it is run only with standard MPI error checking enabled.
No errors
Passed Type_{lb,ub,extent} test - typelb
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 1
Test Description:
This test checks that both the upper and lower boundaries of an hindexed MPI type are correct.
No errors
Passed Datatype inclusive test - typename
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 1
Test Description:
Sample some datatypes. See 8.4, "Naming Objects" in MPI-2. The default name is the same as the datatype name.
No errors
Passed Unpack() test - unpack
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 1
Test Description:
Test sent in by Avery Ching to report a bug in MPICH. Adding it as a regression test. Note: Unpack without a Pack is not technically correct, but should work with MPICH. This may not be supported with other MPI variants.
No errors
Passed Noncontiguous datatypes test - unusual-noncontigs
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 1
Test Description:
This test uses a structure datatype that describes data that is contiguous, but is manipulated as if it were noncontiguous. The test is designed to expose flaws in MPI memory management should they exist.
No errors
Passed Type_vector_blklen test - vecblklen
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 1
Test Description:
This test is inspired by the Intel MPI_Type_vector_blklen test. The test fundamentally tries to deceive MPI into scrambling the data using padded struct types, and MPI_Pack() and MPI_Unpack(). The data is then checked to make sure the original data was not lost in the process. If "No errors" is reported, then the MPI functions that manipulated the data did not corrupt the test data.
No errors
Passed Zero size block test - zeroblks
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 1
Test Description:
This test creates an empty packed indexed type, and then checks that the last 40 entries of the unpacked recv_buffer have the corresponding elements from the send buffer.
No errors
Passed Datatype test - zeroparms
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 1
Test Description:
This test creates a valid datatype, commits and frees the datatype, then repeats the process for a second datatype of the same size.
No errors
Collectives - Score: 83% Passed
This group features tests of utilizing MPI collectives.
Failed Allgather test 1 - allgather2
Build: Passed
Execution: Failed
Exit Status: Application_ended_with_error(s)
MPI Processes: 10
Test Description:
This is a test of MPI_Allgather() using the MPI_IN_PLACE argument.
Found 10 errors
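For reference, the MPI_IN_PLACE form of MPI_Allgather() that this test exercises takes each rank's contribution from its own slot of the receive buffer; the send count and type arguments are ignored. A minimal sketch of the idiom (not the failing test's source):

    #include <mpi.h>
    #include <stdio.h>
    #include <stdlib.h>

    int main(int argc, char **argv)
    {
        int rank, size, i, *buf;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        /* Each rank pre-loads only its own slot; allgather fills the rest. */
        buf = malloc(size * sizeof(int));
        buf[rank] = rank * rank;

        MPI_Allgather(MPI_IN_PLACE, 0, MPI_DATATYPE_NULL,
                      buf, 1, MPI_INT, MPI_COMM_WORLD);

        for (i = 0; i < size; i++)
            if (buf[i] != i * i)
                printf("rank %d: slot %d wrong\n", rank, i);
        free(buf);
        MPI_Finalize();
        return 0;
    }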
Passed Allgather test 2 - allgather3
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 10
Test Description:
This test is similar to coll/allgather2, but it tests a zero byte gather operation.
No errors
Failed Allgather test 3 - allgatherv2
Build: Passed
Execution: Failed
Exit Status: Application_ended_with_error(s)
MPI Processes: 10
Test Description:
Gather data from a vector to contiguous datatype. Use IN_PLACE. This is the trivial version based on the coll/allgather test with constant data sizes.
MPT ERROR: MPI_COMM_WORLD rank 1 has terminated without calling MPI_Finalize() aborting job MPT: Received signal 11
Passed Allgather test 4 - allgatherv3
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 10
Test Description:
Gather data from a vector to contiguous datatype. This is the trivial version based on the allgather test (allgatherv but with constant data sizes).
No errors
Passed Allreduce test 2 - allred2
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 4
Test Description:
Test of MPI_Allreduce() using MPI_IN_PLACE.
No errors
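With MPI_Allreduce(), passing MPI_IN_PLACE as the send buffer makes the receive buffer serve as both input and output on every rank. A minimal sketch:

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int rank, size, val;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        /* val is both input and output when MPI_IN_PLACE is the sendbuf. */
        val = rank;
        MPI_Allreduce(MPI_IN_PLACE, &val, 1, MPI_INT, MPI_SUM, MPI_COMM_WORLD);

        printf("rank %d: sum = %d (expected %d)\n",
               rank, val, size * (size - 1) / 2);
        MPI_Finalize();
        return 0;
    }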
Passed Allreduce test 3 - allred3
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 10
Test Description:
This test implements a simple matrix-matrix multiply. This is an associative but not commutative operation; matSize gives the matrix dimension. The number of matrices is the count argument. The matrix is stored in C order, so that c(i,j) = cin[j+i*matSize].
No errors
Passed Allreduce test 4 - allred4
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 4
Test Description:
This example is similar to coll/allred3, but uses 3x3 matrices with integer-valued entries. This is an associative but not commutative operation. The number of matrices is the count argument. The matrix is stored in C order, such that c(i,j) is cin[j+i*3]. With I the identity matrix and

        (1 0 0          (0 1 0
   A =   0 0 1     B =   1 0 0
         0 1 0)          0 0 1)

the product

   I^k A I^(p-2-k-j) B I^j

is

   (0 1 0
    0 0 1
    1 0 0)

for all values of k, p, and j.
No errors
Passed Allreduce test 5 - allred5
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 5
Test Description:
This test implements a simple matrix-matrix multiply. The operation is associative but not commutative; matSize gives the matrix dimension. The number of matrices is the count argument. The matrix is stored in C order, so that c(i,j) is cin[j+i*matSize].
No errors
Failed Allreduce test 6 - allred6
Build: Passed
Execution: Failed
Exit Status: Failed with signal 9
MPI Processes: 10
Test Description:
This is a comprehensive test of MPI_Allreduce().
MPT ERROR: Rank 0(g:0) received signal SIGSEGV(11).
	Process ID: 161948, Host: r2i7n16, Program: /p/home/withheld/BCT_MPI/mpt_2.20/mpitests/coll/allred6
	MPT Version: HPE MPT 2.20 08/30/19 04:33:45
MPT: --------stack traceback-------
MPT: Attaching to program: /proc/161948/exe, process 161948
MPT: (no debugging symbols found)...done.
MPT: [New LWP 161958]
MPT: [Thread debugging using libthread_db enabled]
MPT: Using host libthread_db library "/lib64/libthread_db.so.1".
MPT: (no debugging symbols found)...done.
MPT: (no debugging symbols found)...done.
MPT: (no debugging symbols found)...done.
MPT: (no debugging symbols found)...done.
MPT: (no debugging symbols found)...done.
MPT: (no debugging symbols found)...done.
MPT: (no debugging symbols found)...done.
MPT: (no debugging symbols found)...done.
MPT: (no debugging symbols found)...done.
MPT: (no debugging symbols found)...done.
MPT: (no debugging symbols found)...done.
MPT: (no debugging symbols found)...done.
MPT: (no debugging symbols found)...done.
MPT: (no debugging symbols found)...done.
MPT: (no debugging symbols found)...done.
MPT: (no debugging symbols found)...done.
MPT: (no debugging symbols found)...done.
MPT: (no debugging symbols found)...done.
MPT: (no debugging symbols found)...done.
MPT: (no debugging symbols found)...done.
MPT: (no debugging symbols found)...done.
MPT: (no debugging symbols found)...done.
MPT: 0x00002aaaaaee41d9 in waitpid () from /lib64/libpthread.so.0
MPT: Missing separate debuginfos, use: debuginfo-install glibc-2.17-307.el7.1.x86_64 libbitmask-2.0-sgi720r52.rhel76.x86_64 libcpuset-1.0-sgi720r102.rhel76.x86_64 libgcc-4.8.5-39.el7.x86_64 libibverbs-22.4-2.el7_8.x86_64 libnl3-3.2.28-4.el7.x86_64 libpsm2-11.2.80-1.x86_64 numactl-libs-2.0.12-5.el7.x86_64 numatools-2.0-sgi720r149.rhel76.x86_64
MPT: (gdb) #0  0x00002aaaaaee41d9 in waitpid () from /lib64/libpthread.so.0
MPT: #1  0x00002aaaab61d806 in mpi_sgi_system (
MPT: #2  MPI_SGI_stacktraceback (
MPT:     header=header@entry=0x7fffffffb3c0 "MPT ERROR: Rank 0(g:0) received signal SIGSEGV(11).\n\tProcess ID: 161948, Host: r2i7n16, Program: /p/home/withheld/BCT_MPI/mpt_2.20/mpitests/coll/allred6\n\tMPT Version: HPE MPT 2.20 08/30/19 04:33:45\n") at sig.c:340
MPT: #3  0x00002aaaab61da02 in first_arriver_handler (signo=signo@entry=11,
MPT:     stack_trace_sem=stack_trace_sem@entry=0x2aaaaf220080) at sig.c:489
MPT: #4  0x00002aaaab61dd9b in slave_sig_handler (signo=11,
MPT:     siginfo=<optimized out>, extra=<optimized out>) at sig.c:565
MPT: #5  <signal handler called>
MPT: #6  0x0000000000000000 in ?? ()
MPT: #7  0x00002aaaab60e5e5 in MPI_SGI_reduce_local (op=<optimized out>,
MPT:     datatype=3, count=1, inoutbuf=0x7fffffffc560, inbuf=<optimized out>)
MPT:     at ../../../../include/reduction.h:117
MPT: #8  MPI_SGI_reduce_basic (_sendbuf=_sendbuf@entry=0x7fffffffc934,
MPT:     recvbuf=recvbuf@entry=0x7fffffffc934, count=count@entry=1,
MPT:     type=type@entry=3, op=op@entry=0, root=root@entry=0, comm=comm@entry=1)
MPT:     at reduce.c:636
MPT: #9  0x00002aaaab57277a in MPI_SGI_allreduce_basic (comm=<optimized out>,
MPT:     op=<optimized out>, type=<optimized out>, count=<optimized out>,
MPT:     recvbuf=<optimized out>, sendbuf=<optimized out>) at allreduce.c:765
MPT: #10 MPI_SGI_allreduce (sendbuf=sendbuf@entry=0x7fffffffc934,
MPT:     recvbuf=recvbuf@entry=0x7fffffffc934, count=count@entry=1,
MPT:     type=type@entry=3, op=0, comm=comm@entry=1, further=further@entry=1)
MPT:     at allreduce.c:583
MPT: #11 0x00002aaaab573d13 in PMPI_Allreduce (sendbuf=0x7fffffffc934,
MPT:     recvbuf=0x7fffffffc934, count=1, type=3, op=<optimized out>, comm=1)
MPT:     at allreduce.c:110
MPT: #12 0x0000000000402273 in main ()
MPT: (gdb) A debugging session is active.
MPT:
MPT: 	Inferior 1 [process 161948] will be detached.
MPT:
MPT: Quit anyway? (y or n) [answered Y; input not from terminal]
MPT: Detaching from program: /proc/161948/exe, process 161948
MPT: [Inferior 1 (process 161948) detached]
MPT: -----stack traceback ends-----
MPT: On host r2i7n16, Program /p/home/withheld/BCT_MPI/mpt_2.20/mpitests/coll/allred6, Rank 0, Process 161948: Dumping core on signal SIGSEGV(11) into directory /p/home/withheld/BCT_MPI/mpt_2.20/mpitests/coll
MPT ERROR: MPI_COMM_WORLD rank 0 has terminated without calling MPI_Finalize()
	aborting job
MPT: Received signal 11
Passed Allreduce test 1 - allred
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 7
Test Description:
This test exercises all possible MPI operation codes using the MPI_Allreduce() routine.
No errors
Passed Allreduce test 7 - allredmany
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 4
Test Description:
This example should be run with 2 processes and tests the ability of the implementation to handle a flood of one-way messages.
No errors
Failed Alltoall test 8 - alltoall1
Build: Passed
Execution: Failed
Exit Status: Application_ended_with_error(s)
MPI Processes: 8
Test Description:
The test illustrates the use of MPI_Alltoall() to run through a selection of communicators and datatypes.
Found 8 errors
Passed Alltoallv test 1 - alltoallv0
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 10
Test Description:
This tests MPI_Alltoallv() by having each processor send data to two neighbors only, using counts of 0 for the other neighbors. This idiom is sometimes used for halo exchange operations. The test uses MPI_INT which is adequate for testing systems that use point-to-point operations.
No errors
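A minimal sketch of this zero-count halo idiom, assuming a ring of neighbors (illustrative code, not the test's source):

    #include <mpi.h>
    #include <stdlib.h>

    int main(int argc, char **argv)
    {
        int rank, size;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        /* calloc() zeroes the arrays, so all non-neighbor counts are 0. */
        int *scount = calloc(size, sizeof(int));
        int *rcount = calloc(size, sizeof(int));
        int *sdispl = calloc(size, sizeof(int));
        int *rdispl = calloc(size, sizeof(int));
        int sbuf[2] = { rank, rank }, rbuf[2] = { -1, -1 };

        int left  = (rank + size - 1) % size;
        int right = (rank + 1) % size;

        /* Send/receive one MPI_INT to/from each neighbor, 0 elsewhere. */
        scount[left] = scount[right] = 1;
        rcount[left] = rcount[right] = 1;
        sdispl[left] = rdispl[left] = 0;
        sdispl[right] = rdispl[right] = 1;

        MPI_Alltoallv(sbuf, scount, sdispl, MPI_INT,
                      rbuf, rcount, rdispl, MPI_INT, MPI_COMM_WORLD);

        free(scount); free(rcount); free(sdispl); free(rdispl);
        MPI_Finalize();
        return 0;
    }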
Failed Alltoallv test 2 - alltoallv
Build: Passed
Execution: Failed
Exit Status: Application_ended_with_error(s)
MPI Processes: 10
Test Description:
This program tests MPI_Alltoallv() by having each processor send different amounts of data to each neighboring processor. The test uses only MPI_INT which is adequate for testing systems that use point-to-point operations.
Found 65 errors
Passed Matrix transpose test 1 - alltoallw1
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 10
Test Description:
This somewhat detailed example test was taken from MPI-The complete reference, Vol 1, p 222-224. Please refer to this reference for more details of the test.
No errors
Failed Matrix transpose test 2 - alltoallw2
Build: Passed
Execution: Failed
Exit Status: Application_ended_with_error(s)
MPI Processes: 10
Test Description:
This program tests MPI_Alltoallw() by having the ith processor send different amounts of data to all processors. This is similar to the coll/alltoallv test, but with displacements in bytes rather than units of the datatype. Currently, the test uses only MPI_INT which is adequate for testing systems that use point-to-point operations.
Found 65 errors
Passed Alltoallw test - alltoallw_zeros
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 8
Test Description:
Based on a test case contributed by Michael Hofmann. This test makes sure that zero counts with non-zero-sized types on the send (recv) side match and don't cause a problem with non-zero counts and zero-sized types on the recv (send) side when using MPI_Alltoallw and MPI_Alltoallv.
No errors
Passed Bcast test 1 - bcast2
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 10
Test Description:
Test of broadcast with various roots and datatypes and sizes that are not powers of two.
No errors
Passed Bcast test 2 - bcast3
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 10
Test Description:
Test of broadcast with various roots and datatypes and sizes that are not powers of two.
No errors
Passed Bcast test 3 - bcasttest
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 10
Test Description:
Various tests of MPI_Bcast() using MPI_INT with data sizes that are in powers of two.
No errors
Passed Bcast test 4 - bcastzerotype
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 10
Test Description:
Tests broadcast behavior with non-zero counts but zero-sized types.
No errors
Passed Reduce/Bcast tests - coll10
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 4
Test Description:
The operation is inoutvec[i] = invec[i] op inoutvec[i] (see MPI-1.3 Message-Passing Interface section 4.9.4). The order of operation is important. Note that the computation is in process rank (in the communicator) order, independent of the root process.
No errors
Passed MScan test - coll11
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 4
Test Description:
The operation is inoutvec[i] = invec[i] op inoutvec[i] (see MPI-1.3 Message-Passing Interface section 4.9.4). The order of operation is important. Note that the computation is in process rank (in the communicator) order, independent of the root process.
No errors
Passed Reduce/Bcast/Allreduce test - coll12
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 4
Test Description:
This test uses various calls to MPI_Reduce(), MPI_Bcast(), and MPI_Allreduce().
No errors
Passed Alltoall test - coll13
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 4
Test Description:
This is a test contributed by hook@nas.nasa.gov. It is another test of MPI_Alltoall().
No errors
Passed Gather test - coll2
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 5
Test Description:
This test uses MPI_Gather() to define a two-dimensional table.
No errors
Passed Gatherv test - coll3
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 5
Test Description:
This test uses MPI_Gatherv() to define a two-dimensional table. This test is similar to coll/coll2.
No errors
Passed Scatter test - coll4
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 4
Test Description:
This test uses MPI_Scatter() to define a two-dimensional table. See also test coll2 and coll3 for similar tests.
No errors
Passed Scatterv test - coll5
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 4
Test Description:
This test uses MPI_Scatterv() to define a two-dimensional table.
No errors
Passed Allgatherv test - coll6
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 5
Test Description:
This test uses MPI_Allgatherv() to define a two-dimensional table.
No errors
Passed Allgatherv test - coll7
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 5
Test Description:
This test is the same as coll/coll6 except that the size of the table is greater than the number of processors.
No errors
Passed Reduce/Bcast test - coll8
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 4
Test Description:
This test repeats pairs of calls to MPI_Reduce() and MPI_Bcast() using different reduction operations while looking for errors.
No errors
Passed Reduce/Bcast test - coll9
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 4
Test Description:
This test uses various calls to MPI_Reduce(), MPI_Bcast(), and MPI_Allreduce().
No errors
Passed Exscan Exclusive Scan test - exscan2
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 5
Test Description:
Simple test of MPI_Exscan().
No errors
Failed Exscan exclusive scan test - exscan
Build: Passed
Execution: Failed
Exit Status: Application_ended_with_error(s)
MPI Processes: 10
Test Description:
The following illustrates the use of the routines to run through a selection of communicators and datatypes. Use subsets of these for tests that do not involve combinations of communicators, datatypes, and counts of datatypes.
Found 1040 errors
Passed Gather test - gather2
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 4
Test Description:
This test gathers data from a vector to a contiguous datatype. The test uses the MPI_IN_PLACE option.
No errors
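The MPI_IN_PLACE idiom this test exercises looks roughly like the following at the root; a hedged sketch, not the test's source (the 64-entry table is an assumed bound on the communicator size):

    #include <mpi.h>

    int main(int argc, char **argv)
    {
        int rank, size;
        int table[64];           /* assume size <= 64 for this sketch */

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        if (rank == 0) {
            table[0] = 0;        /* root's contribution, already in place */
            /* MPI_IN_PLACE: the root takes its data from the recv buffer. */
            MPI_Gather(MPI_IN_PLACE, 1, MPI_INT,
                       table, 1, MPI_INT, 0, MPI_COMM_WORLD);
        } else {
            MPI_Gather(&rank, 1, MPI_INT,
                       NULL, 0, MPI_INT, 0, MPI_COMM_WORLD);
        }
        MPI_Finalize();
        return 0;
    }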
Failed Gather test - gather
Build: Passed
Execution: Failed
Exit Status: Application_timed_out
MPI Processes: 4
Test Description:
This test gathers data from a vector to contiguous datatype. The test does not use MPI_IN_PLACE.
Test Output: None.
Failed Iallreduce test - iallred
Build: Passed
Execution: Failed
Exit Status: Application_ended_with_error(s)
MPI Processes: 2
Test Description:
This test illustrates the use of MPI_Iallreduce() and MPI_Allreduce().
Test Output: None.
Passed Ibarrier test - ibarrier
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 2
Test Description:
This test calls MPI_Ibarrier() followed by a conditional loop containing usleep(1000) and MPI_Test(). This test hung indefinitely under some MPI implementations. Successfully completing this test indicates the error has been corrected.
No errors
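The call pattern described above can be sketched as follows (illustrative only):

    #include <mpi.h>
    #include <unistd.h>

    int main(int argc, char **argv)
    {
        int done = 0;
        MPI_Request req;

        MPI_Init(&argc, &argv);

        /* Enter the barrier without blocking, then poll for completion. */
        MPI_Ibarrier(MPI_COMM_WORLD, &req);
        while (!done) {
            usleep(1000);                 /* stand-in for useful work */
            MPI_Test(&req, &done, MPI_STATUS_IGNORE);
        }

        MPI_Finalize();
        return 0;
    }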
Passed Allgather test - icallgather
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 5
Test Description:
Simple intercomm allgather test.
No errors
Passed Allgatherv test - icallgatherv
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 7
Test Description:
Simple intercomm allgatherv test.
No errors
Passed Allreduce test - icallreduce
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 5
Test Description:
Simple intercomm allreduce test.
No errors
Passed Alltoall test - icalltoall
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 7
Test Description:
Simple intercomm alltoall test.
No errors
Passed Alltoallv test - icalltoallv
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 5
Test Description:
This program tests MPI_Alltoallv by having processor i send different amounts of data to each processor. Because there are separate send and receive types to alltoallv, there need to be tests to rearrange data on the fly. The first test sends i items to processor i from all processors. Currently, the test uses only MPI_INT which is adequate for testing systems using point-to-point operations.
No errors
Passed Alltoallw test - icalltoallw
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 7
Test Description:
This program tests MPI_Alltoallw by having processor i send different amounts of data to each processor. This is just the MPI_Alltoallv test, but with displacements in bytes rather than units of the datatype. Because there are separate send and receive types to alltoallw, there need to be tests to rearrange data on the fly.
The first test sends i items to processor i from all processors. Currently, the test uses only MPI_INT; this is adequate for testing systems that use point-to-point operations.
No errors
Passed Barrier test - icbarrier
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 7
Test Description:
This only checks that the Barrier operation accepts intercommunicators. It does not check the semantics of an intercomm barrier (all processes in the local group can exit when, but not before, all processes in the remote group enter the barrier).
No errors
Passed Bcast test - icbcast
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 10
Test Description:
Simple intercomm broadcast test.
No errors
Passed Gather test - icgather
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 7
Test Description:
Simple intercomm gather test.
No errors
Passed Gatherv test - icgatherv
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 7
Test Description:
Simple intercomm gatherv test.
No errors
Passed Reduce test - icreduce
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 7
Test Description:
Simple intercomm reduce test.
No errors
Passed Scatter test - icscatter
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 7
Test Description:
Simple intercomm scatter test.
No errors
Passed Scatterv test - icscatterv
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 7
Test Description:
Simple intercomm scatterv test.
No errors
Passed Allreduce test - longuser
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 4
Test Description:
User-defined operation on a long value (tests proper handling of possible pipelining in the implementation of reductions with user-defined operations).
No errors
Passed Ibcast,Wait,Ibarrier test 1 - nonblocking2
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 5
Test Description:
This is a basic test of all 17 non-blocking collective operations specified by the MPI-3 standard. It only exercises the intracommunicator functionality, does not use MPI_IN_PLACE, and only transmits/receives simple integer types with relatively small counts. It does check a few fancier issues, such as ensuring that "premature user releases" of MPI_Op and MPI_Datatype objects do not result in a segfault.
No errors
Passed Ibcast,Wait,Ibarrier test 2 - nonblocking3
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 5
Test Description:
This test attempts to execute multiple simultaneous non-blocking collective (NBC) MPI routines at the same time, and manages their completion with a variety of routines (MPI_{Wait,Test}{,_all,_any,_some}). The test also exercises a few point-to-point operations.
No errors
Failed Non-blocking collectives test - nonblocking4
Build: Passed
Execution: Failed
Exit Status: Failed with signal 9
MPI Processes: 4
Test Description:
This is a weak sanity test that all non-blocking collectives specified by MPI-3 are present in the library and accept arguments as expected. This test does not check for progress, matching issues, or sensible output buffer values.
Found 15 errors
MPT ERROR: Assertion failed at nbc.c:749: "MPI_SUCCESS == mpi_errno"
MPT ERROR: MPI_COMM_WORLD rank 0 has terminated without calling MPI_Finalize()
	aborting job
Passed Wait test - nonblocking
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 10
Test Description:
This is a very weak sanity test that all non-blocking collectives specified by MPI-3 are present in the library and take arguments as expected. This test does not check for progress, matching issues, or sensible output buffer values.
No errors
Passed BAND operations test 1 - opband
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 4
Test Description:
Test MPI_BAND operations on optional datatypes. Note that failing this test does not mean that there is something wrong with the MPI implementation.
No errors
Passed BOR operations test 2 - opbor
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 4
Test Description:
Test MPI_BOR operations on optional datatypes. Note that failing this test does not mean that there is something wrong with the MPI implementation.
No errors
Passed BXOR operations test - opbxor
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 4
Test Description:
Test MPI_BXOR operations on optional datatypes. Note that failing this test does not mean that there is something wrong with the MPI implementation.
No errors
Passed Op_{create,commute,free} test - op_commutative
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 2
Test Description:
A simple test of MPI_Op_create(), MPI_Op_commutative(), and MPI_Op_free().
No errors
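A minimal sketch of the create/commutative/free sequence (illustrative; my_sum is an assumed operator name, not the test's):

    #include <mpi.h>

    /* Trivial user function: componentwise integer sum. It is in fact
       commutative, but is registered as non-commutative below purely to
       illustrate the query. */
    static void my_sum(void *in, void *inout, int *len, MPI_Datatype *dt)
    {
        int *a = (int *)in, *b = (int *)inout;
        for (int i = 0; i < *len; i++) b[i] += a[i];
    }

    int main(int argc, char **argv)
    {
        MPI_Op op;
        int commute;

        MPI_Init(&argc, &argv);
        MPI_Op_create(my_sum, 0, &op);      /* registered non-commutative */
        MPI_Op_commutative(op, &commute);   /* query: commute comes back 0 */
        MPI_Op_free(&op);
        MPI_Finalize();
        return 0;
    }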
Passed LAND operations test - opland
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 4
Test Description:
Test MPI_LAND operations on optional datatypes. Note that failing this test does not imply a fault with the MPI implementation.
No errors
Passed LOR operations test - oplor
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 4
Test Description:
Test MPI_LOR operations on optional datatypes. Note that failing this test does not mean that there is something wrong with the MPI implementation.
No errors
Passed LXOR operations test - oplxor
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 5
Test Description:
Test MPI_LXOR operations on optional datatypes. Note that failing this test does not mean that there is something wrong with the MPI implementation.
No errors
Passed MAX operations test - opmax
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 5
Test Description:
Test MPI_MAX operations on optional datatypes. Note that failing this test does not mean that there is something wrong with the MPI implementation.
No errors
Passed MAXLOC operations test - opmaxloc
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 5
Test Description:
Test MPI_MAXLOC operations on optional datatypes. Note that failing this test does not mean that there is something wrong with the MPI implementation.
No errors
Passed MIN operations test - opmin
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 4
Test Description:
Test MPI_MIN operations on optional datatypes. Note that failing this test does not mean that there is something wrong with the MPI implementation.
No errors
Passed MINLOC operations test - opminloc
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 4
Test Description:
Test MPI_MINLOC operations on optional datatypes. Note that failing this test does not mean that there is something wrong with the MPI implementation.
No errors
Passed PROD operations test - opprod
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 6
Test Description:
Test MPI_PROD operations on optional datatypes. Note that failing this test does not mean that there is something wrong with the MPI implementation.
No errors
Passed SUM operations test - opsum
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 4
Test Description:
This test looks at integer and integer-related datatypes not required by the MPI-3.0 standard (e.g. long long). Note that failure to support these datatypes is not an indication of a non-compliant MPI implementation.
No errors
Failed Reduce test 1 - red3
Build: Passed
Execution: Failed
Exit Status: Application_timed_out
MPI Processes: 10
Test Description:
This test implements a simple matrix-matrix multiply. This is an associative but not commutative operation. For a matrix size of matSize, the matrix is stored in C order where c(i,j) is cin[j+i*matSize].
Test Output: None.
Passed Reduce test 2 - red4
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 10
Test Description:
This test implements a simple matrix-matrix multiply. This is an associative but not commutative operation. For a matrix size of matSize, the matrix is stored in C order where c(i,j) is cin[j+i*matSize].
No errors
Passed Reduce_Scatter test 1 - redscat2
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 10
Test Description:
Test of reduce scatter. Checks that the non-commutative operations are not commuted and that all of the operations are performed.
No errors
Failed Reduce_Scatter test 2 - redscat3
Build: Passed
Execution: Failed
Exit Status: Application_ended_with_error(s)
MPI Processes: 8
Test Description:
Test of reduce scatter with large data (needed to trigger the long-data algorithm). Each processor contributes its rank + index to the reduction, then receives the "ith" sum. Can be run with any number of processors, but currently uses 1 processor due to the high demand on memory.
Found 8 errors
Passed Reduce_Scatter test 3 - redscatbkinter
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 10
Test Description:
Test of reduce scatter block with large data on an intercommunicator (needed in MPICH to trigger the long-data algorithm). Each processor contributes its rank + the index to the reduction, then receives the ith sum. Can be called with any number of processors.
No errors
Passed Reduce_Scatter test 4 - redscatblk3
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 10
Test Description:
Test of reduce scatter with large data (needed in MPICH to trigger the long-data algorithm). Each processor contributes its rank + the index to the reduction, then receives the ith sum. Can be called with any number of processors.
No errors
Passed Reduce_scatter_block test 1 - red_scat_block2
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 10
Test Description:
Test of reduce scatter with large data (needed in MPICH to trigger the long-data algorithm). Each processor contributes its rank + the index to the reduction, then receives the ith sum. Can be called with any number of processors.
No errors
Passed Reduce_scatter_block test 2 - red_scat_block
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 8
Test Description:
Each process contributes its rank plus the index to the reduction, then receives the ith sum. Can be called with any number of processors.
No errors
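The rank-plus-index pattern these reduce-scatter tests describe can be sketched as follows (illustrative, not the suite's source):

    #include <mpi.h>
    #include <stdlib.h>

    int main(int argc, char **argv)
    {
        int rank, size;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        /* Each rank contributes rank + i for element i; rank r then
           receives the sum of element r over all processes, i.e.
           sum of all ranks + size*r. */
        int *send = malloc(size * sizeof(int));
        int recv;
        for (int i = 0; i < size; i++) send[i] = rank + i;

        MPI_Reduce_scatter_block(send, &recv, 1, MPI_INT, MPI_SUM,
                                 MPI_COMM_WORLD);

        free(send);
        MPI_Finalize();
        return 0;
    }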
Passed Reduce_scatter test 1 - redscat
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 6
Test Description:
Each processor contributes its rank plus the index to the reduction, then receives the ith sum. Can be called with any number of processors.
No errors
Passed Reduce_scatter test 2 - redscatinter
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 8
Test Description:
Test of reduce scatter with large data on an intercommunicator (needed in MPICH to trigger the long-data algorithm). Each processor contributes its rank + the index to the reduction, then receives the ith sum. Can be called with any number of processors.
No errors
Failed Reduce test - reduce
Build: Passed
Execution: Failed
Exit Status: Application_timed_out
MPI Processes: 10
Test Description:
A simple test of MPI_Reduce() with the rank of the root process shifted through each possible value.
Test Output: None.
Passed Reduce_local test - reduce_local
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 2
Test Description:
A simple test of MPI_Reduce_local(). Uses MPI_SUM as well as user defined operators.
No errors
Failed Scan test - scantst
Build: Passed
Execution: Failed
Exit Status: Application_ended_with_error(s)
MPI Processes: 4
Test Description:
A simple test of MPI_Scan(). The operation is inoutvec[i] = invec[i] op inoutvec[i] (see 4.9.4 of the MPI standard 1.3). The order is important. Note that the computation is in process rank (in the communicator) order, independent of the root.
Found 1 errors
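For reference, the MPI_Scan() semantics under test can be sketched as a simple inclusive prefix sum (illustrative code, not the failing test's source):

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int rank, sum;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        /* Inclusive prefix sum in rank order: rank r gets 0 + 1 + ... + r. */
        MPI_Scan(&rank, &sum, 1, MPI_INT, MPI_SUM, MPI_COMM_WORLD);
        printf("rank %d: prefix sum = %d\n", rank, sum);

        MPI_Finalize();
        return 0;
    }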
Failed Scatter test 1 - scatter2
Build: Passed
Execution: Failed
Exit Status: Application_ended_with_error(s)
MPI Processes: 4
Test Description:
This test sends a vector and receives individual elements, except for the root process that does not receive any data.
Found 1 errors
Passed Scatter test 2 - scatter3
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 4
Test Description:
This test sends contiguous data and receives a vector on some nodes and contiguous data on others. There is some evidence that some MPI implementations do not check recvcount on the root process. This test checks for that case.
No errors
Passed Scatter test 3 - scattern
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 4
Test Description:
This test sends a vector and receives individual elements.
No errors
Passed Scatterv test - scatterv
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 4
Test Description:
This is an example of using scatterv to send a matrix from one process to all others, with the matrix stored in Fortran order. Note the use of an explicit upper bound (UB) to enable the sources to overlap. This test uses scatterv to make sure that it uses the datatype size and extent correctly. It requires a number of processors that can be split with MPI_Dims_create.
No errors
Passed Reduce test - uoplong
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 16
Test Description:
Test user-defined operations with a large number of elements. Added because a talk at EuroMPI'12 claimed that these failed with more than 64k elements.
No errors
Passed Extended collectives test - collectives
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 4
Test Description:
Checks if "extended collectives" are supported. If the test fails to compile, then "extended collectives" are not supported. If the test compiles, then a 4-process MPI job is executed. If the job aborts, then "Extended collectives NOT supported" is reported. If the job executes and the correct value is returned, then "Extended collectives ARE supported" is reported.
No errors
Passed Alltoall thread test - alltoall
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 4
Test Description:
The listener thread waits for messages from any source (including the calling thread) that carry the tag REQ_TAG. Each thread enters an infinite loop that stops only when every node in MPI_COMM_WORLD sends a message containing -1.
No errors
MPI_Info Objects - Score: 100% Passed
The info tests emphasize the MPI Info object functionality.
Passed MPI_Info_delete() test - infodel
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 1
Test Description:
This test exercises MPI_Info_delete().
No errors
Passed MPI_Info_dup() test - infodup
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 1
Test Description:
This test exercises MPI_Info_dup().
No errors
Passed MPI_Info_get() test 1 - infoenv
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 1
Test Description:
This is a simple test of MPI_Info_get().
No errors
Passed MPI_Info_get() test 2 - infomany2
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 1
Test Description:
Test of info that makes use of the extended handles, including inserts and deletes.
No errors
Passed MPI_Info_get() test 3 - infomany
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 1
Test Description:
Test of info that makes use of the extended handles.
No errors
Passed MPI_Info_get() test 4 - infoorder
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 1
Test Description:
This is a simple test that illustrates how named keys are ordered.
No errors
Passed MPI_Info_get() test 5 - infotest
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 1
Test Description:
Simple info test.
No errors
Passed MPI_Info_{get,set} test - infovallen
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 1
Test Description:
Simple info test.
No errors
Dynamic Process Management - Score: 96% Passed
This group features tests that add processes to a running communicator, join separately started applications, and handle faults/failures.
Passed Dynamic process management test - dynamic
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 1
Test Description:
Checks if the dynamic process management routines through MPI-2.2 are defined. If the test passes, then "No errors" is reported.
MPI_Comm_spawn(): verified
MPI_Comm_get_parrent(): verified
MPI_Open_port(): verified
MPI_Comm_accept(): verified
MPI_Comm_connect(): verified
MPI_Publish_name(): verified
MPI_Unpublish_name(): verified
MPI_Lookup_name(): verified
MPI_Comm_disconnect(): verified
MPI_Comm_join(): verified
Dynamic process management routines: verified
No errors
Passed MPI_Comm_disconnect() test - disconnect2
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 3
Test Description:
A simple test of Comm_disconnect.
No errors
Passed MPI_Comm_disconnect() test - disconnect3
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 3
Test Description:
A simple test of Comm_disconnect.
No errors
Passed MPI_Comm_disconnect() test 1 - disconnect
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 3
Test Description:
A simple test of Comm_disconnect.
No errors
Passed MPI_Comm_disconnect() test 2 - disconnect_reconnect2
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 3
Test Description:
In this program, the return codes from the MPI routines are checked. Since the error handlers for the communicators are not set to MPI_ERRORS_RETURN, any error should cause an abort rather than a return. The test on the return value is an extra safety check; note that a return value of other than MPI_SUCCESS in these routines indicates an error in the error handling by the MPI implementation.
No errors
Passed MPI_Comm_disconnect() test 3 - disconnect_reconnect3
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 3
Test Description:
This test tests the disconnect code for processes that span process groups. This test spawns a group of processes and then merges them into a single communicator. Then the single communicator is split into two communicators, one containing the even ranks and the other the odd ranks. Then the two new communicators do MPI_Comm_accept/connect/disconnect calls in a loop. The even group does the accepting while the odd group does the connecting.
No errors
Passed MPI_Comm_disconnect() test 4 - disconnect_reconnect
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 3
Test Description:
A simple test of Comm_connect/accept/disconnect.
No errors
Passed MPI_Comm_join() test - join
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 2
Test Description:
A simple test of Comm_join.
No errors
Passed MPI_Comm_connect() test 1 - multiple_ports2
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 4
Test Description:
This test checks to make sure that two MPI_Comm_connects to two different MPI ports match their corresponding MPI_Comm_accepts.
No errors
Passed MPI_Comm_connect() test 2 - multiple_ports
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 3
Test Description:
This test checks to make sure that two MPI_Comm_connects to two different MPI ports match their corresponding MPI_Comm_accepts.
No errors
Failed MPI_Publish_name() test - namepub
Build: Passed
Execution: Failed
Exit Status: Application_ended_with_error(s)
MPI Processes: 2
Test Description:
This test confirms the functionality of MPI_Open_port() and MPI_Publish_name().
Error in Publish_name: "Port error"
Error in Lookup name: "Name error"
Error in Unpublish name: "Port error"
Found 3 errors
Passed PGROUP creation test - pgroup_connect_test
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 4
Test Description:
James Dinan dinan@mcs.anl.gov, May 2011.
In this test, processes create an intracommunicator, and creation is collective only on the members of the new communicator, not on the parent communicator. This is accomplished by building up and merging intercommunicators using Connect/Accept to merge with a master/controller process.
No errors
Passed Creation group intercomm test - pgroup_intercomm_test
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 4
Test Description:
James Dinan dinan@mcs.anl.gov, May 2011.
In this test, processes create an intracommunicator, and creation is collective only on the members of the new communicator, not on the parent communicator. This is accomplished by building up and merging intercommunicators starting with MPI_COMM_SELF for each process involved.
No errors
Passed MPI_Comm_accept() test - selfconacc
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 2
Test Description:
This test exercises MPI_Open_port(), MPI_Comm_accept(), and MPI_Comm_disconnect().
No errors
Passed MPI spawn processing test 1 - spaconacc2
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 1
Test Description:
In this program, the return codes from the MPI routines are checked. Since the error handlers for the communicators are not set to MPI_ERRORS_RETURN, any error should cause an abort rather than a return. The test on the return value is an extra safety check; note that a return value of other than MPI_SUCCESS in these routines indicates an error in the error handling by the MPI implementation.
No errors
Passed MPI spawn processing test 2 - spaconacc
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 1
Test Description:
In this program, the return codes from the MPI routines are checked. Since the error handlers for the communicators are not set to MPI_ERRORS_RETURN, any error should cause an abort rather than a return. The test on the return value is an extra safety check; note that a return value of other than MPI_SUCCESS in these routines indicates an error in the error handling by the MPI implementation.
No errors
Passed MPI_Intercomm_create() test - spaiccreate
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 2
Test Description:
Use Spawn to create an intercomm, then create a new intercomm that includes processes not in the initial spawn intercomm. This test ensures that spawned processes are able to communicate with processes that were not in the communicator from which they were spawned.
No errors
Passed MPI_Comm_spawn() test 1 - spawn1
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 1
Test Description:
A simple test of Comm_spawn.
No errors
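A minimal self-spawning sketch of the Comm_spawn pattern ("./spawn_demo" is an assumed executable name; illustrative, not the test's source):

    #include <mpi.h>

    int main(int argc, char **argv)
    {
        MPI_Comm parent, intercomm;

        MPI_Init(&argc, &argv);
        MPI_Comm_get_parent(&parent);

        if (parent == MPI_COMM_NULL) {
            /* Parent: spawn 2 copies of this same executable. */
            MPI_Comm_spawn("./spawn_demo", MPI_ARGV_NULL, 2, MPI_INFO_NULL,
                           0, MPI_COMM_SELF, &intercomm, MPI_ERRCODES_IGNORE);
            MPI_Comm_disconnect(&intercomm);
        } else {
            /* Child: the parent is reachable via the intercommunicator. */
            MPI_Comm_disconnect(&parent);
        }
        MPI_Finalize();
        return 0;
    }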
Passed MPI_Comm_spawn() test 2 - spawn2
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 1
Test Description:
A simple test of Comm_spawn, called twice.
No errors
Passed MPI_Comm_spawn() test 3 - spawnargv
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 1
Test Description:
A simple test of Comm_spawn, with complex arguments.
No errors
Passed MPI_Comm_spawn() test 4 - spawninfo1
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 1
Test Description:
A simple test of Comm_spawn with info.
No errors
Passed MPI_Comm_spawn() test 5 - spawnintra
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 2
Test Description:
A simple test of Comm_spawn, followed by intercomm merge.
No errors
Passed MPI_Comm_spawn() test 6 - spawnmanyarg
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 1
Test Description:
A simple test of Comm_spawn, with many arguments.
No errors
Passed MPI_Comm_spawn_multiple() test 1 - spawnminfo1
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 1
Test Description:
A simple test of Comm_spawn_multiple with info.
No errors
Passed MPI_Comm_spawn_multiple() test 2 - spawnmult2
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 2
Test Description:
This tests MPI_Comm_spawn_multiple() by using the same executable and no command-line options. The attribute MPI_APPNUM is used to determine which executable is running.
No errors
Passed MPI spawn test with pthreads - taskmaster
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 2
Test Description:
Create a thread for each task. Each thread will spawn a child process to perform its task.
No errors
Passed Multispawn test - multispawn
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 1
Test Description:
This (is a placeholder for a) test that creates 4 threads, each of which does a concurrent spawn of 4 more processes, for a total of 17 MPI processes. The resulting intercomms are tested for consistency (to ensure that the spawns didn't get confused among the threads). As an option, it will time the Spawn calls. If the spawn calls block the calling thread, this may show up in the timing of the calls.
No errors
Passed Taskmaster test - th_taskmaster
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 1
Test Description:
This is a simple test that creates threads to verify compatibility between MPI and the pthread library.
No errors
Threads - Score: 100% Passed
This group features tests that utilize thread compliant MPI implementations. This includes the threaded environment provided by MPI-3.0, as well as POSIX compliant threaded libraries such as PThreads.
Passed Thread/RMA interaction test - multirma
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 2
Test Description:
This is a simple test of threads in MPI.
No errors
Passed Threaded group test - comm_create_group_threads
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 4
Test Description:
In this test, a number of threads are created, each with a distinct MPI communicator (or comm) group distinguished by its thread-id (used as a tag). Threads on even ranks join an even comm and threads on odd ranks join the odd comm.
No errors
Passed Thread Group creation test - comm_create_threads
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 4
Test Description:
Every thread participates in a distinct MPI_Comm_create group, distinguished by its thread-id (used as the tag). Threads on even ranks join an even comm and threads on odd ranks join the odd comm.
No errors
Passed Easy thread test 1 - comm_dup_deadlock
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 4
Test Description:
This is a simple test of threads in MPI.
No errors
Passed Easy thread test 2 - comm_idup
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 4
Test Description:
This is a simple test of threads in MPI.
No Errors
Passed Multiple threads test 1 - ctxdup
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 4
Test Description:
This test creates communicators concurrently in different threads.
No errors
Passed Multiple threads test 2 - ctxidup
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 4
Test Description:
This test creates communicators concurrently in different threads.
No errors
Passed Multiple threads test 3 - dup_leak_test
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 2
Test Description:
This test repeatedly duplicates and frees communicators with multiple threads concurrently to stress the multithreaded aspects of the context ID allocation code. Thanks to IBM for providing the original version of this test.
No errors
NA MPIT multithreaded test - mpit_threading
Build: Passed
Execution: Failed
Exit Status: MPI_THREAD_MULTIPLE required
MPI Processes: 1
Test Description:
This test is adapted from test/mpi/mpi_t/mpit_vars.c. But this is a multithreading version in which multiple threads will call MPI_T routines.
With verbose set, thread 0 will print out MPI_T control variables, performance variables and their categories.
MPI does not support MPI_THREAD_MULTIPLE.
Passed Simple thread test 1 - initth2
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 2
Test Description:
The test initializes a thread, then calls MPI_Finalize() and prints "No errors".
No errors
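Thread-level initialization as exercised here can be sketched with MPI_Init_thread() (illustrative only):

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int provided;

        /* Request full multithreading; the implementation reports the
           level it actually granted in 'provided'. */
        MPI_Init_thread(&argc, &argv, MPI_THREAD_MULTIPLE, &provided);
        if (provided < MPI_THREAD_MULTIPLE)
            printf("MPI_THREAD_MULTIPLE not available (got %d)\n", provided);

        MPI_Finalize();
        printf("No errors\n");
        return 0;
    }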
Passed Simple thread test 2 - initth
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 1
Test Description:
This is a simple test that Finalize exits cleanly; the only action is to write "No errors".
No errors
Passed Alltoall thread test - alltoall
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 4
Test Description:
The listener thread waits for messages from any source (including the calling thread) that carry the tag REQ_TAG. Each thread enters an infinite loop that stops only when every node in MPI_COMM_WORLD sends a message containing -1.
No errors
Passed Threaded request test - greq_test
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 1
Test Description:
Threaded generalized request tests.
No errors
Passed Threaded wait/test test - greq_wait
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 1
Test Description:
Threaded wait/test request tests.
No errors
Passed Threaded ibsend test - ibsend
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 2
Test Description:
This program performs a short test of MPI_BSEND in a multithreaded environment. It starts a single receiver thread that expects NUMSENDS messages, and NUMSENDS sender threads that use MPI_Bsend to send a message of size MSGSIZE to the right neighbour, or to rank 0 if (my_rank==comm_size-1), i.e. target_rank = (my_rank+1)%size.
After all messages have been received, the receiver thread prints a message, the threads are joined into the main thread and the application terminates.
No Errors
Passed Threaded multi-target test 1 - multisend2
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 5
Test Description:
Run concurrent sends to different target processes. Stresses an implementation that permits concurrent sends to different targets. By turning on verbose output, some simple performance data will be output.
No errors
Passed Threaded multi-target test 2 - multisend3
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 5
Test Description:
Run concurrent sends to different target processes. Stresses an implementation that permits concurrent sends to different targets. By turning on verbose output, some simple performance data will be output. Use non-blocking sends, and have a single thread complete all I/O.
No errors
Passed Threaded multi-target test 3 - multisend4
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 5
Test Description:
Run concurrent sends to different target processes. Stresses an implementation that permits concurrent sends to different targets. By turning on verbose output, some simple performance data will be output. Use non-blocking sends, and have a single thread complete all I/O.
No errors
Passed Threaded multi-target test 3 - multisend
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 2
Test Description:
Run concurrent sends to different target processes. Stresses an implementation that permits concurrent sends to different targets. By turning on verbose output, some simple performance data will be output. Use non-blocking sends, and have a single thread complete all I/O.
No errors
Passed Threaded multi-target test 4 - sendselfth
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 1
Test Description:
Send to self in a threaded program.
No errors
Passed Multi-threaded send/receive test - threaded_sr
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 2
Test Description:
The buffer size needs to be large enough to cause the rndv protocol to be used. If the MPI provider doesn't use a rndv protocol then the size doesn't matter.
No errors
Passed Multi-threaded blocking test - threads
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 2
Test Description:
This tests blocking and non-blocking capability within MPI.
No errors
Passed Multispawn test - multispawn
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 1
Test Description:
This (is a placeholder for a) test that creates 4 threads, each of which does a concurrent spawn of 4 more processes, for a total of 17 MPI processes. The resulting intercomms are tested for consistency (to ensure that the spawns didn't get confused among the threads). As an option, it will time the Spawn calls. If the spawn calls block the calling thread, this may show up in the timing of the calls.
No errors
Passed Taskmaster test - th_taskmaster
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 1
Test Description:
This is a simple test that creates threads to verify compatibility between MPI and the pthread library.
No errors
MPI-Toolkit Interface - Score: 100% Passed
This group features tests that involve the MPI Tool interface available in MPI-3.0 and higher.
Passed Toolkit varlist test - varlist
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 1
Test Description:
This program, copyrighted (c) 2014, Lawrence Livermore National Security, LLC., accesses the performance and control variables as defined under MPI-3.0 and newer.
MPI_T Variable List
MPI Thread support: MPI_THREAD_MULTIPLE
MPI_T Thread support: MPI_THREAD_SINGLE
===============================
Control Variables
===============================
Found 1 control variables
Found 1 control variables with verbosity <= D/A-9
Variable                 VRB   Type Bind Scope Value
---------------------------------------------------------------------
profiled_recv_request_id U/D-2 INT  n/a  LOCAL 0
---------------------------------------------------------------------
===============================
Performance Variables
===============================
Found 6 performance variables
Found 6 performance variables with verbosity <= D/A-9
Variable                              VRB   Class   Type   Bind R/O CNT ATM
--------------------------------------------------------------------------------
posted_recvq_length                   U/D-2 LEVEL   UINT   n/a  YES YES NO
unexpected_recvq_length               U/D-2 LEVEL   UINT   n/a  YES YES NO
posted_recvq_match_attempts           U/D-2 COUNTER ULLONG n/a  YES YES NO
unexpected_recvq_match_attempts       U/D-2 COUNTER ULLONG n/a  YES YES NO
unexpected_recvq_buffer_size          U/D-2 LEVEL   ULLONG n/a  YES YES NO
profiled_recv_request_is_transferring U/D-2 LEVEL   UINT   n/a  YES YES NO
--------------------------------------------------------------------------------
No errors.
Passed MPI_T calls test 1 - cvarwrite
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 1
Test Description:
This test writes to control variables exposed by the MPI_T functionality of MPI-3.0.
No errors
Passed MPI_T calls test 2 - mpi_t_str
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 1
Test Description:
A test that MPI_T string handling is working as expected.
No errors
Passed MPI_T calls test 3 - mpit_vars
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 1
Test Description:
Prints out all MPI_T control variables, performance variables and their categories in the MPI implementation.
No errors
NA MPIT multithreaded test - mpit_threading
Build: Passed
Execution: Failed
Exit Status: MPI_THREAD_MULTIPLE required
MPI Processes: 1
Test Description:
This test is adapted from test/mpi/mpi_t/mpit_vars.c. But this is a multithreading version in which multiple threads will call MPI_T routines.
With verbose set, thread 0 will print out MPI_T control variables, performance variables and their categories.
MPI does not support MPI_THREAD_MULTIPLE.
MPI-3.0 - Score: 97% Passed
This group features tests that exercise MPI-3.0 and higher functionality. Note that the test suite was designed to be compiled and executed under all versions of MPI. If the current version of MPI is less than MPI-3.0, the executed code will report "MPI-3.0 or higher required" and will exit.
Failed Iallreduce test - iallred
Build: Passed
Execution: Failed
Exit Status: Application_ended_with_error(s)
MPI Processes: 2
Test Description:
This test illustrates the use of MPI_Iallreduce() and MPI_Allreduce().
Test Output: None.
Passed Ibarrier test - ibarrier
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 2
Test Description:
This test calls MPI_Ibarrier() followed by a conditional loop containing usleep(1000) and MPI_Test(). This test hung indefinitely under some MPI implementations. Successfully completing this test indicates the error has been corrected.
No errors
Passed Ibcast,Wait,Ibarrier test 1 - nonblocking2
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 5
Test Description:
This is a basic test of all 17 non-blocking collective operations specified by the MPI-3 standard. It only exercises the intracommunicator functionality, does not use MPI_IN_PLACE, and only transmits/receives simple integer types with relatively small counts. It does check a few fancier issues, such as ensuring that "premature user releases" of MPI_Op and MPI_Datatype objects do not result in a segfault.
No errors
Passed Ibcast,Wait,Ibarrier test 2 - nonblocking3
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 5
Test Description:
This test attempts to execute multiple simultaneous non-blocking collective (NBC) MPI routines at the same time, and manages their completion with a variety of routines (MPI_{Wait,Test}{,_all,_any,_some}). The test also exercises a few point-to-point operations.
No errors
Failed Non-blocking collectives test - nonblocking4
Build: Passed
Execution: Failed
Exit Status: Failed with signal 9
MPI Processes: 4
Test Description:
This is a weak sanity test that all non-blocking collectives specified by MPI-3 are present in the library and accept arguments as expected. This test does not check for progress, matching issues, or sensible output buffer values.
Found 15 errors
MPT ERROR: Assertion failed at nbc.c:749: "MPI_SUCCESS == mpi_errno"
MPT ERROR: MPI_COMM_WORLD rank 0 has terminated without calling MPI_Finalize()
	aborting job
Passed Wait test - nonblocking
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 10
Test Description:
This is a very weak sanity test that all non-blocking collectives specified by MPI-3 are present in the library and take arguments as expected. This test does not check for progress, matching issues, or sensible output buffer values.
No errors
Passed Toolkit varlist test - varlist
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 1
Test Description:
This program, copyrighted (c) 2014, Lawrence Livermore National Security, LLC., accesses the performance and control variables as defined under MPI-3.0 and newer.
MPI_T Variable List
MPI Thread support: MPI_THREAD_MULTIPLE
MPI_T Thread support: MPI_THREAD_SINGLE
===============================
Control Variables
===============================
Found 1 control variables
Found 1 control variables with verbosity <= D/A-9
Variable                 VRB   Type Bind Scope Value
---------------------------------------------------------------------
profiled_recv_request_id U/D-2 INT  n/a  LOCAL 0
---------------------------------------------------------------------
===============================
Performance Variables
===============================
Found 6 performance variables
Found 6 performance variables with verbosity <= D/A-9
Variable                              VRB   Class   Type   Bind R/O CNT ATM
--------------------------------------------------------------------------------
posted_recvq_length                   U/D-2 LEVEL   UINT   n/a  YES YES NO
unexpected_recvq_length               U/D-2 LEVEL   UINT   n/a  YES YES NO
posted_recvq_match_attempts           U/D-2 COUNTER ULLONG n/a  YES YES NO
unexpected_recvq_match_attempts       U/D-2 COUNTER ULLONG n/a  YES YES NO
unexpected_recvq_buffer_size          U/D-2 LEVEL   ULLONG n/a  YES YES NO
profiled_recv_request_is_transferring U/D-2 LEVEL   UINT   n/a  YES YES NO
--------------------------------------------------------------------------------
No errors.
Passed Matched Probe test - mprobe
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 2
Test Description:
Written by Dr. Michael L. Stokes, Michael.Stokes@UAH.edu.
This routine is designed to test the MPI-3.0 matched probe support. The support provided in MPI-2.2 was not thread safe allowing other threads to usurp messages probed in other threads.
The rank=0 process generates a random array of floats that is sent to mpi rank 1. Rank 1 sends a message back to rank 0 with the length of the received array. Rank 1 spawns 2 or more threads that each attempt to read the message sent by rank 0. In general, all of the threads have equal access to the data, but the first one to probe the data will eventually end up processing it, and all the others will relent. The threads use MPI_Improbe(), so if there is nothing to read, the thread will rest for 0.1 secs before reprobing. If nothing is probed within a fixed number of cycles, the thread exits and sets its exit status to 1. If a thread is able to read the message, it returns an exit status of 0.
mpi_rank:1 thread 0 MPI_rank:1
mpi_rank:1 thread 0 used 1 read cycle.
mpi_rank:1 thread 0 local memory request (bytes):400 of local allocation:800
mpi_rank:1 thread 1 MPI_rank:1
mpi_rank:1 thread 0 recv'd 100 MPI_FLOATs from rank:0.
mpi_rank:1 thread 0 sending rank:0 the number of MPI_FLOATs received:100
mpi_rank:0 main() received message from rank:1 that the received message length was 400 bytes long.
mpi_rank:1 thread 2 MPI_rank:1
mpi_rank:1 thread 3 MPI_rank:1
mpi_rank:1 main() thread 0 exit status:0
mpi_rank:1 thread 1 giving up reading data.
mpi_rank:1 main() thread 1 exit status:1
mpi_rank:1 thread 2 giving up reading data.
mpi_rank:1 thread 3 giving up reading data.
mpi_rank:1 main() thread 2 exit status:1
mpi_rank:1 main() thread 3 exit status:1
No errors.
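The matched-probe pattern under test can be sketched as follows (a single-threaded illustration, not the test's source; run with at least 2 ranks):

    #include <mpi.h>

    int main(int argc, char **argv)
    {
        int rank, data = 42, out;
        MPI_Message msg;
        MPI_Status status;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        if (rank == 0) {
            MPI_Send(&data, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
        } else if (rank == 1) {
            /* Matched probe: the message is removed from the matching
               queue and can only be received through the returned message
               handle, so another thread cannot usurp it between the probe
               and the receive. */
            MPI_Mprobe(0, 0, MPI_COMM_WORLD, &msg, &status);
            MPI_Mrecv(&out, 1, MPI_INT, &msg, &status);
        }
        MPI_Finalize();
        return 0;
    }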
Passed RMA compliance test - badrma
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 2
Test Description:
The test uses various combinations of either zero size datatypes or zero size counts. All tests should pass to be compliant with the MPI-3.0 specification.
No errors
Passed Compare_and_swap test - compare_and_swap
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 4
Test Description:
This function compares one element of type datatype in the compare buffer compare_addr with the buffer at offset target_disp in the target window specified by target_rank and window. It replaces the value at the target with the value in the origin buffer if both buffers are identical. The original value at the target is returned in the result buffer.
No errors
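A minimal sketch of one such compare-and-swap, assuming a one-int window on each rank and illustrative values:

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int rank, *base, origin, compare = 0, result;
        MPI_Win win;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Win_allocate(sizeof(int), sizeof(int), MPI_INFO_NULL,
                         MPI_COMM_WORLD, &base, &win);
        *base = 0;                      /* initial window value */
        MPI_Barrier(MPI_COMM_WORLD);    /* all ranks initialized */
        origin = rank + 1;
        MPI_Win_lock(MPI_LOCK_SHARED, 0, 0, win);
        /* Install origin at rank 0 only if the target still holds compare;
         * the prior target value always comes back in result. */
        MPI_Compare_and_swap(&origin, &compare, &result, MPI_INT, 0, 0, win);
        MPI_Win_unlock(0, win);
        printf("rank %d saw previous value %d\n", rank, result);
        MPI_Win_free(&win);
        MPI_Finalize();
        return 0;
    }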
Passed RMA Shared Memory test - fence_shm
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 2
Test Description:
This simple test uses MPI_Win_allocate_shared() with MPI_Win_fence() and MPI_Put() calls with assertions.
No errors
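A hedged sketch of that combination, assuming all ranks run on one node (a requirement of MPI_Win_allocate_shared()) and an illustrative one-int window per rank:

    #include <mpi.h>

    int main(int argc, char **argv)
    {
        int rank, size, *base;
        MPI_Win win;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);
        /* One int per rank; the memory is load/store-accessible on the node. */
        MPI_Win_allocate_shared(sizeof(int), sizeof(int), MPI_INFO_NULL,
                                MPI_COMM_WORLD, &base, &win);
        *base = 0;
        MPI_Win_fence(MPI_MODE_NOPRECEDE, win);   /* open an access epoch  */
        if (rank == 0 && size > 1) {
            int one = 1;
            MPI_Put(&one, 1, MPI_INT, 1, 0, 1, MPI_INT, win);
        }
        MPI_Win_fence(MPI_MODE_NOSUCCEED, win);   /* close the epoch */
        MPI_Win_free(&win);
        MPI_Finalize();
        return 0;
    }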
Passed Fetch_and_op test - fetch_and_op
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 4
Test Description:
This is a simple test that executes the MPI_Fetch_and_op() calls on RMA windows.
No errors
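As a sketch, an atomic fetch-and-add against a counter on rank 0; the counter placement and operand are illustrative:

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int rank, *base, one = 1, prev;
        MPI_Win win;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Win_allocate(sizeof(int), sizeof(int), MPI_INFO_NULL,
                         MPI_COMM_WORLD, &base, &win);
        *base = 0;
        MPI_Barrier(MPI_COMM_WORLD);
        /* Every rank atomically adds 1 to rank 0's counter and receives
         * the value that preceded its own update. */
        MPI_Win_lock(MPI_LOCK_SHARED, 0, 0, win);
        MPI_Fetch_and_op(&one, &prev, MPI_INT, 0, 0, MPI_SUM, win);
        MPI_Win_unlock(0, win);
        printf("rank %d fetched %d\n", rank, prev);
        MPI_Win_free(&win);
        MPI_Finalize();
        return 0;
    }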
Passed Win_flush() test - flush
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 4
Test Description:
Window Flush. This simple test flushes a shared window.
No errors
Passed Get_accumulate test 1 - get_acc_local
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 1
Test Description:
Get Accumulated Test. This is a simple test of MPI_Get_accumulate().
No errors
Passed Get_accumulate test 2 - get_accumulate
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 4
Test Description:
Get Accumulate Test. This is a simple test of MPI_Get_accumulate().
No errors
Passed Linked_list construction test 1 - linked_list_bench_lock_all
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 4
Test Description:
Construct a distributed shared linked list using MPI-3 dynamic windows. Initially process 0 creates the head of the list, attaches it to the window, and broadcasts the pointer to all processes. Each process p then appends N new elements to the list when the tail reaches process p-1.
No errors
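All of the linked-list tests in this group rely on the same dynamic-window mechanics; the sketch below shows just that machinery, where the elem_t element type and the broadcast of rank 0's head address are illustrative stand-ins for the test's own code:

    #include <mpi.h>

    typedef struct { int value; MPI_Aint next; } elem_t;  /* hypothetical */

    int main(int argc, char **argv)
    {
        elem_t *head;
        MPI_Aint disp;
        MPI_Win win;
        MPI_Init(&argc, &argv);
        MPI_Win_create_dynamic(MPI_INFO_NULL, MPI_COMM_WORLD, &win);
        /* Each process allocates an element and attaches it to the window;
         * dynamic windows address attached memory by absolute address. */
        MPI_Alloc_mem(sizeof(elem_t), MPI_INFO_NULL, &head);
        head->value = 0;
        head->next = 0;
        MPI_Win_attach(win, head, sizeof(elem_t));
        MPI_Get_address(head, &disp);
        /* Publish rank 0's head address so others can target it with RMA. */
        MPI_Bcast(&disp, 1, MPI_AINT, 0, MPI_COMM_WORLD);
        /* ... RMA appends against disp would go here ... */
        MPI_Win_detach(win, head);
        MPI_Win_free(&win);
        MPI_Free_mem(head);
        MPI_Finalize();
        return 0;
    }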
Passed Linked_list construction test 2 - linked_list_bench_lock_excl
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 4
Test Description:
MPI-3 distributed linked list construction example. Construct a distributed shared linked list using proposed MPI-3 dynamic windows. Initially process 0 creates the head of the list, attaches it to the window, and broadcasts the pointer to all processes. Each process "p" then appends N new elements to the list when the tail reaches process "p-1". The test uses the MPI_LOCK_EXCLUSIVE argument with MPI_Win_lock().
No errors
Passed Linked-list construction test 3 - linked_list_bench_lock_shr
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 4
Test Description:
This test constructs a distributed shared linked list using MPI-3 dynamic windows. Initially process 0 creates the head of the list, attaches it to the window, and broadcasts the pointer to all processes. Each process "p" then appends N new elements to the list when the tail reaches process "p-1". This test is similar to "rma/linked_list_bench_lock_excl" but uses an MPI_LOCK_SHARED parameter to MPI_Win_lock().
No errors
Passed Linked_list construction test 4 - linked_list
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 4
Test Description:
This test constructs a distributed shared linked list using MPI-3 dynamic windows. Initially process 0 creates the head of the list, attaches it to an RMA window, and broadcasts the pointer to all processes. All processes then concurrently append N new elements to the list. When a process attempts to attach its element to the tail of list it may discover that its tail pointer is stale and it must chase ahead to the new tail before the element can be attached.
No errors
Passed Linked list construction test 5 - linked_list_fop
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 4
Test Description:
This test constructs a distributed shared linked list using MPI-3 dynamic windows. Initially process 0 creates the head of the list, attaches it to the window, and broadcasts the pointer to all processes. All processes then concurrently append N new elements to the list. When a process attempts to attach its element to the tail of list it may discover that its tail pointer is stale and it must chase ahead to the new tail before the element can be attached.
No errors
Passed Linked list construction test 6 - linked_list_lockall
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 4
Test Description:
Construct a distributed shared linked list using MPI-3 dynamic RMA windows. Initially process 0 creates the head of the list, attaches it to the window, and broadcasts the pointer to all processes. All processes then concurrently append N new elements to the list. When a process attempts to attach its element to the tail of list it may discover that its tail pointer is stale and it must chase ahead to the new tail before the element can be attached. This version of the test suite uses MPI_Win_lock_all() instead of MPI_Win_lock(MPI_LOCK_EXCLUSIVE, ...).
No errors
Passed Request-based ops test - req_example
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 4
Test Description:
Example 11.21 from the MPI 3.0 spec. The following example shows how request-based operations can be used to overlap communication with computation. Each process fetches, processes, and writes the result for NSTEPS chunks of data. Instead of a single buffer, M local buffers are used to allow up to M communication operations to overlap with computation.
No errors
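A reduced sketch of the overlap idea, using one buffer and one chunk where the spec's example cycles through M buffers over NSTEPS chunks; the chunk size and target choice are illustrative:

    #include <mpi.h>

    #define CHUNK 1024

    int main(int argc, char **argv)
    {
        int rank, size, i;
        double *base, buf[CHUNK];
        MPI_Win win;
        MPI_Request req;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);
        MPI_Win_allocate(CHUNK * sizeof(double), sizeof(double),
                         MPI_INFO_NULL, MPI_COMM_WORLD, &base, &win);
        for (i = 0; i < CHUNK; i++) base[i] = rank;
        MPI_Barrier(MPI_COMM_WORLD);
        MPI_Win_lock_all(0, win);
        /* Start the fetch, compute on other data, complete it later. */
        MPI_Rget(buf, CHUNK, MPI_DOUBLE, (rank + 1) % size, 0, CHUNK,
                 MPI_DOUBLE, win, &req);
        /* ... overlapped computation goes here ... */
        MPI_Wait(&req, MPI_STATUS_IGNORE);  /* buf is valid from here on */
        MPI_Win_unlock_all(win);
        MPI_Win_free(&win);
        MPI_Finalize();
        return 0;
    }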
Passed MPI RMA read-and-ops test - reqops
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 4
Test Description:
This test exercises atomic, one-sided read-and-operation calls.
No errors
Passed MPI_PROC_NULL test - rmanull
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 2
Test Description:
Test the MPI_PROC_NULL as a valid target.
No errors
Passed RMA zero-byte transfers test - rmazero
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 2
Test Description:
Test loops are used to run through a series of communicators that are subsets of MPI_COMM_WORLD.
No errors
Passed One-Sided accumulate test 4 - strided_getacc_indexed
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 4
Test Description:
Author: James Dinan dinan@mcs.anl.gov
Date: December, 201
This code performs N strided put operations followed by get operations into a 2d patch of a shared array. The array has dimensions [X, Y] and the subarray has dimensions [SUB_X, SUB_Y] and begins at index [0, 0]. The input and output buffers are specified using an MPI indexed type.
No errors
Passed Win_create_dynamic test - win_dynamic_acc
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 4
Test Description:
This test exercises dynamic RMA windows using the MPI_Win_create_dynamic() and MPI_Accumulate() operations.
No errors
Passed Win_get_attr test - win_flavors
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 4
Test Description:
This test determines which "flavor" of RMA is created.
No errors
Passed Win_info test - win_info
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 4
Test Description:
This test creates an RMA info object, sets key/value pairs on the object, then duplicates the info object and confirms that the key/value pairs are in the same order as the original.
No errors
Passed MPI_Win_allocate_shared test - win_large_shm
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 4
Test Description:
Test MPI_WIN_ALLOCATE and MPI_WIN_ALLOCATE_SHARED when allocating SHM memory with a size of 1GB per process.
No errors
Passed Win_allocate_shared test - win_zero
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 4
Test Description:
Test MPI_WIN_ALLOCATE_SHARED when the size of the total shared memory region is 0.
No errors
Passed MCS_Mutex_trylock test - mutex_bench
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 4
Test Description:
This test exercises the MCS_Mutex_lock calls.
No errors
Passed MPI_Get_library_version test - library_version
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 1
Test Description:
This MPI-3.0 test returns the MPI library version.
HPE MPT 2.20 08/30/19 04:33:45 No errors
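A minimal sketch of the call; on this installation it would print the version string shown above:

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        char version[MPI_MAX_LIBRARY_VERSION_STRING];
        int len;
        MPI_Init(&argc, &argv);
        MPI_Get_library_version(version, &len);
        printf("%s\n", version);  /* e.g. "HPE MPT 2.20 08/30/19 04:33:45" */
        MPI_Finalize();
        return 0;
    }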
Passed Comm_split test 4 - cmsplit_type
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 4
Test Description:
This test only checks that the MPI_Comm_split_type routine doesn't fail. It does not check for correct behavior.
No errors
Passed Comm_create_group test 2 - comm_create_group4
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 4
Test Description:
This routine creates/frees groups using different schemes.
No errors
Passed Comm_create_group test 3 - comm_create_group8
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 8
Test Description:
This routine creates/frees groups using different schemes.
No errors
Passed Comm_create_group test 4 - comm_group_half2
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 2
Test Description:
This routine creates/frees groups using different schemes.
No errors
Passed Comm_create_group test 5 - comm_group_half4
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 4
Test Description:
This routine creates/frees groups using different schemes.
No errors
Passed Comm_create_group test 6 - comm_group_half8
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 8
Test Description:
This routine creates/frees groups using different schemes.
No errors
Passed Comm_create_group test 7 - comm_group_rand2
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 2
Test Description:
This routine creates/frees groups using even-odd pairs.
No errors
Passed Comm_create_group test 8 - comm_group_rand4
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 4
Test Description:
This routine creates/frees groups using modulus 4 random numbers.
No errors
Passed Comm_create_group test 1 - comm_group_rand8
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 8
Test Description:
This test creates/frees groups using different schemes.
No errors
Passed Comm_idup test 1 - comm_idup2
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 2
Test Description:
This test exercises MPI_Comm_idup().
No errors
Passed Comm_idup test 2 - comm_idup4
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 4
Test Description:
Test plan: make rank 0 wait in a blocking recv until all other processes have posted their MPI_Comm_idup ops, then post last. This should ensure that idup doesn't block on the non-zero ranks; otherwise we'll get a deadlock.
No errors
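The nonblocking duplication under test reduces to the following sketch:

    #include <mpi.h>

    int main(int argc, char **argv)
    {
        MPI_Comm newcomm;
        MPI_Request req;
        MPI_Init(&argc, &argv);
        /* Begin the duplication without blocking ... */
        MPI_Comm_idup(MPI_COMM_WORLD, &newcomm, &req);
        /* ... unrelated work may proceed here on MPI_COMM_WORLD ... */
        MPI_Wait(&req, MPI_STATUS_IGNORE);  /* newcomm is usable only now */
        MPI_Comm_free(&newcomm);
        MPI_Finalize();
        return 0;
    }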
Passed Comm_idup test 3 - comm_idup9
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 9
Test Description:
Test plan: make rank 0 wait in a blocking recv until all other processes have posted their MPI_Comm_idup ops, then post last. This should ensure that idup doesn't block on the non-zero ranks; otherwise we'll get a deadlock.
No errors
Passed Comm_idup test 4 - comm_idup_mul
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 2
Test Description:
Test creating multiple communicators with MPI_Comm_idup.
No errors
Passed Comm_idup test 5 - comm_idup_overlap
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 2
Test Description:
Each pair dups the communicator such that the dups are overlapping. If this were done with MPI_Comm_dup, this should deadlock.
No errors
Passed MPI_Info_create() test - comm_info
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 6
Test Description:
Comm_{set,get}_info test
No errors
Passed Comm_with_info() test 1 - dup_with_info2
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 2
Test Description:
This test exercises MPI_Comm_dup_with_info().
No errors
Passed Comm_with_info test 2 - dup_with_info4
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 4
Test Description:
This test exercises MPI_Comm_dup_with_info().
No errors
Passed Comm_with_info test 3 - dup_with_info9
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 9
Test Description:
This test exercises MPI_Comm_dup_with_info().
No errors
Passed C++ datatype test - cxx-types
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 1
Test Description:
This test checks for the existence of four new C++ named predefined datatypes that should be accessible from C and Fortran.
No errors
Passed Datatype structs test - get-struct
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 2
Test Description:
This test was motivated by the failure of an example program for RMA involving simple operations on a struct that included a struct. The observed failure was a SEGV in the MPI_Get.
No errors
Passed Type_create_hindexed_block test 1 - hindexed_block
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 1
Test Description:
Tests behavior with a hindexed_block that can be converted to a contig easily. This is specifically for coverage. Returns the number of errors encountered.
No errors
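A sketch of the contiguous-convertible case the test targets: three blocks of 2 ints whose byte displacements place them end to end, which is equivalent to 6 contiguous ints:

    #include <mpi.h>

    int main(int argc, char **argv)
    {
        MPI_Datatype t;
        /* Blocks of 2 ints placed back to back. */
        MPI_Aint disps[3] = {0, 2 * sizeof(int), 4 * sizeof(int)};
        MPI_Init(&argc, &argv);
        MPI_Type_create_hindexed_block(3, 2, disps, MPI_INT, &t);
        MPI_Type_commit(&t);
        /* ... t may now be used anywhere a datatype is accepted ... */
        MPI_Type_free(&t);
        MPI_Finalize();
        return 0;
    }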
Passed Type_create_hindexed_block test 2 - hindexed_block_contents
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 1
Test Description:
This test is a simple check of MPI_Type_create_hindexed_block() using MPI_Type_get_envelope() and MPI_Type_get_contents().
No errors
Passed Large count test - large-count
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 1
Test Description:
This test checks for large count functionality ("MPI_Count") mandated by MPI-3, as well as behavior of corresponding pre-MPI-3 interfaces that have better defined behavior when an "int" quantity would overflow.
No errors
Passed Type_contiguous test - large_type
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 1
Test Description:
This test checks that MPI can handle large datatypes.
No errors
Passed MPI_Dist_graph_create test - distgraph1
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 4
Test Description:
This test exercises MPI_Dist_graph_create() and MPI_Dist_graph_create_adjacent().
No errors
Passed MPI_Info_get() test 1 - infoenv
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 1
Test Description:
This is a simple test of MPI_Info_get().
No errors
Passed MPI_Status large count test - big_count_status
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 1
Test Description:
This test manipulates an MPI status object using MPI_Status_set_elements_x() with a large count value.
No errors
Passed MPI_Mprobe() test - mprobe1
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 2
Test Description:
This test uses MPI_Mprobe() to get the status of a pending receive, then calls MPI_Mrecv() with that status value.
No errors
Passed MPI_T calls test 1 - cvarwrite
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 1
Test Description:
This test writes to control variables exposed by the MPI_T functionality of MPI-3.0.
No errors
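A sketch of the enumeration step that precedes any write; which control variables exist, and what values they accept, is implementation-specific (this MPT build exposes the single cvar listed earlier), so the write itself is only indicated in a comment:

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int provided, num, i;
        MPI_T_init_thread(MPI_THREAD_SINGLE, &provided);
        MPI_Init(&argc, &argv);
        MPI_T_cvar_get_num(&num);
        for (i = 0; i < num; i++) {
            char name[256], desc[256];
            int nlen = sizeof(name), dlen = sizeof(desc);
            int verbosity, bind, scope;
            MPI_Datatype dt;
            MPI_T_enum et;
            MPI_T_cvar_get_info(i, name, &nlen, &verbosity, &dt, &et,
                                desc, &dlen, &bind, &scope);
            printf("cvar %d: %s\n", i, name);
            /* A write would allocate a handle with MPI_T_cvar_handle_alloc()
             * and call MPI_T_cvar_write() with a value of the right type. */
        }
        MPI_Finalize();
        MPI_T_finalize();
        return 0;
    }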
Passed MPI_T calls test 2 - mpi_t_str
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 1
Test Description:
A test that MPI_T string handling is working as expected.
No errors
Passed MPI_T calls test 3 - mpit_vars
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 1
Test Description:
This test prints out all MPI_T control variables, performance variables, and their categories in the MPI implementation.
No errors
Passed Thread/RMA interaction test - multirma
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 2
Test Description:
This is a simple test of threads in MPI.
No errors
Passed Threaded group test - comm_create_group_threads
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 4
Test Description:
In this test, a number of threads are created, each with a distinct MPI communicator (comm) group distinguished by its thread id (used as a tag). Threads on even ranks join an even comm and threads on odd ranks join the odd comm.
No errors
Passed Easy thread test 2 - comm_idup
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 4
Test Description:
This is a simple test of threads in MPI.
No Errors
Passed Multiple threads test 1 - ctxdup
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 4
Test Description:
This test creates communicators concurrently in different threads.
No errors
Passed Multiple threads test 2 - ctxidup
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 4
Test Description:
This test creates communicators concurrently in different threads.
No errors
NA MPIT multithreaded test - mpit_threading
Build: Passed
Execution: Failed
Exit Status: MPI_THREAD_MULTIPLE required
MPI Processes: 1
Test Description:
This test is adapted from test/mpi/mpi_t/mpit_vars.c, but is a multithreaded version in which multiple threads call MPI_T routines.
With verbose set, thread 0 will print out MPI_T control variables, performance variables, and their categories.
MPI does not support MPI_THREAD_MULTIPLE.
MPI-2.2 - Score: 92% Passed
This group features tests that exercise MPI functionality from MPI-2.2 and earlier.
Passed Reduce_local test - reduce_local
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 2
Test Description:
A simple test of MPI_Reduce_local(). Uses MPI_SUM as well as user defined operators.
No errors
Passed Alloc_mem test - alloc
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 1
Test Description:
Checks if MPI_Alloc_mem() is supported. If the test passes, "MPI_Alloc_mem is supported." is reported; otherwise, "MPI_Alloc_mem NOT supported" is reported.
No errors
Passed Communicator attributes test - attributes
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 1
Test Description:
Returns all communicator attributes that are not supported. The test is run as a single process MPI job.
No errors
Passed Extended collectives test - collectives
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 4
Test Description:
Checks if "extended collectives" are supported. If the test fails to compile, then "extended collectives" are not supported. If the test compiles, then a 4-process MPI job is executed. If the job aborts, then "Extended collectives NOT supported" is reported. If the job executes and the correct value is returned, then "Extended collectives ARE supported" is reported.
No errors
Passed Deprecated routines test - deprecated
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 1
Test Description:
Checks all MPI deprecated routines as of MPI-2.2.
MPI_Address(): is functional. MPI_Attr_delete(): is functional. MPI_Attr_get(): is functional. MPI_Attr_put(): is functional. MPI_Errhandler_create(): is functional. MPI_Errhandler_get(): is functional. MPI_Errhandler_set(): is functional. MPI_Keyval_create(): is functional. MPI_Keyval_free(): is functional. MPI_Type_extent(): is functional. MPI_Type_hindexed(): is functional. MPI_Type_hvector(): is functional. MPI_Type_lb(): is functional. MPI_Type_struct(): is functional. MPI_Type_ub(): is functional. No errors
Passed Dynamic process management test - dynamic
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 1
Test Description:
Checks if the dynamic process management routines through MPI-2.2 are defined. If the test passes, then "No errors" is reported.
MPI_Comm_spawn(): verified MPI_Comm_get_parrent(): verified MPI_Open_port(): verified MPI_Comm_accept(): verified MPI_Comm_connect(): verified MPI_Publish_name(): verified MPI_Unpublish_name(): verified MPI_Lookup_name(): verified MPI_Comm_disconnect(): verified MPI_Comm_join(): verified Dynamic process management routines: verified No errors
Failed Error Handling test - errors
Build: Passed
Execution: Failed
Exit Status: Failed with signal 127
MPI Processes: 1
Test Description:
Reports the default action taken on an error. It also reports if error handling can be changed to "returns", and if so, if this functions properly.
MPI errors are fatal by default. MPI errors can be changed to MPI_ERRORS_RETURN. Call MPI_Send() with a bad destination rank. MPT ERROR: Assertion failed at gps.c:187: "MPI_UNDEFINED != grank" MPT ERROR: Rank 0(g:0) is aborting with error code 0. Process ID: 173825, Host: r2i7n16, Program: /p/home/withheld/BCT_MPI/mpt_2.20/mpitests/utk/errors MPT Version: HPE MPT 2.20 08/30/19 04:33:45 MPT: --------stack traceback------- MPT: Attaching to program: /proc/173825/exe, process 173825 MPT: (no debugging symbols found)...done. MPT: [New LWP 173827] MPT: [Thread debugging using libthread_db enabled] MPT: Using host libthread_db library "/lib64/libthread_db.so.1". MPT: (no debugging symbols found)...done. MPT: (no debugging symbols found)...done. MPT: (no debugging symbols found)...done. MPT: (no debugging symbols found)...done. MPT: (no debugging symbols found)...done. MPT: (no debugging symbols found)...done. MPT: (no debugging symbols found)...done. MPT: (no debugging symbols found)...done. MPT: (no debugging symbols found)...done. MPT: (no debugging symbols found)...done. MPT: (no debugging symbols found)...done. MPT: (no debugging symbols found)...done. MPT: (no debugging symbols found)...done. MPT: (no debugging symbols found)...done. MPT: (no debugging symbols found)...done. MPT: (no debugging symbols found)...done. MPT: (no debugging symbols found)...done. MPT: (no debugging symbols found)...done. MPT: (no debugging symbols found)...done. MPT: (no debugging symbols found)...done. MPT: (no debugging symbols found)...done. MPT: (no debugging symbols found)...done. MPT: 0x00002aaaaaee41d9 in waitpid () from /lib64/libpthread.so.0 MPT: Missing separate debuginfos, use: debuginfo-install glibc-2.17-307.el7.1.x86_64 libbitmask-2.0-sgi720r52.rhel76.x86_64 libcpuset-1.0-sgi720r102.rhel76.x86_64 libgcc-4.8.5-39.el7.x86_64 libibverbs-22.4-2.el7_8.x86_64 libnl3-3.2.28-4.el7.x86_64 libpsm2-11.2.80-1.x86_64 numactl-libs-2.0.12-5.el7.x86_64 numatools-2.0-sgi720r149.rhel76.x86_64 MPT: (gdb) #0 0x00002aaaaaee41d9 in waitpid () from /lib64/libpthread.so.0 MPT: #1 0x00002aaaab61d806 in mpi_sgi_system ( MPT: #2 MPI_SGI_stacktraceback ( MPT: header=header@entry=0x7fffffffbb70 "MPT ERROR: Rank 0(g:0) is aborting with error code 0.\n\tProcess ID: 173825, Host: r2i7n16, Program: /p/home/withheld/BCT_MPI/mpt_2.20/mpitests/utk/errors\n\tMPT Version: HPE MPT 2.20 08/30/19 04:33:45\n") at sig.c:340 MPT: #3 0x00002aaaab565fc9 in print_traceback (ecode=ecode@entry=0) at abort.c:246 MPT: #4 0x00002aaaab566476 in MPI_SGI_abort () at abort.c:122 MPT: #5 0x00002aaaab56d08a in MPI_SGI_assert_fail ( MPT: str=str@entry=0x2aaaab69a2e5 "MPI_UNDEFINED != grank", MPT: file=file@entry=0x2aaaab69a2c8 "gps.c", line=line@entry=187) at all.c:217 MPT: #6 0x00002aaaab5bd12b in MPI_SGI_gps_initialize ( MPT: dom=dom@entry=0x2aaaab8d7dc0 <dom_default>, grank=grank@entry=-3) MPT: at gps.c:187 MPT: #7 0x00002aaaab560892 in MPI_SGI_gps (grank=-3, MPT: dom=0x2aaaab8d7dc0 <dom_default>) at gps.h:149 MPT: #8 MPI_SGI_request_send (modes=modes@entry=9, MPT: ubuf=ubuf@entry=0x7fffffffc290, count=1, type=type@entry=3, MPT: des=des@entry=1, tag=tag@entry=-1, comm=1) at req.c:764 MPT: #9 0x00002aaaab61c1cd in PMPI_Send (buf=0x7fffffffc290, MPT: count=<optimized out>, type=3, des=1, tag=-1, comm=1) at send.c:34 MPT: #10 0x0000000000402318 in main () MPT: (gdb) A debugging session is active. MPT: MPT: Inferior 1 [process 173825] will be detached. MPT: MPT: Quit anyway? 
(y or n) [answered Y; input not from terminal] MPT: Detaching from program: /proc/173825/exe, process 173825 MPT: [Inferior 1 (process 173825) detached] MPT: -----stack traceback ends----- MPT ERROR: MPI_COMM_WORLD rank 0 has terminated without calling MPI_Finalize() aborting job
Passed Init argument test - init_args
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 1
Test Description:
In MPI-1.1, it is explicitly stated that an implementation is allowed to require that the arguments argc and argv passed by an application to MPI_Init in C be the same arguments passed into the application as the arguments to main. In MPI-2, implementations are not allowed to impose this requirement. Conforming implementations of MPI allow applications to pass NULL for both the argc and argv arguments of MPI_Init(). This test prints the result of the error status of MPI_Init(). If the test completes without error, it reports 'No errors.'
MPI_INIT accepts Null arguments for MPI_init(). No errors
Passed C/Fortran interoperability test - interoperability
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 1
Test Description:
Checks if the C-Fortran (F77) interoperability functions are supported using MPI-2.2 specification.
No errors
Passed I/O modes test - io_modes
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 1
Test Description:
Checks if all MPI predefined I/O modes are supported. If test passes, "No errors" is reported. Any modes not supported are indicated individually as not being supported.
MPI_MODE_APPEND:128 MPI_MODE_CREATE:1 MPI_MODE_DELETE_ON_CLOSE:16 MPI_MODE_EXCL:64 MPI_MODE_RDONLY:2 MPI_MODE_RDWR:8 MPI_MODE_SEQUENTIAL:256 MPI_MODE_UNIQUE_OPEN:32 MPI_MODE_WRONLY:4 No errors
Passed I/O verification test 1 - io_test
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 4
Test Description:
Reports if MPI I/O is supported. If the MPI-I/O routines terminate normally and provide correct results, MPI-I/O reports "No errors", otherwise error messages are generated.
rank:0/4 MPI-I/O is supported. No errors rank:1/4 MPI-I/O is supported. rank:2/4 MPI-I/O is supported. rank:3/4 MPI-I/O is supported.
Passed I/O verification test 2 - io_verify
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 1
Test Description:
This test is used to verify that the file created by io_test.c holds the correct values. If the test fails, the problem is reported. If all tests pass successfully, it is reported that MPI-I/O is supported.
MPI-I/O: MPI_File_open() is verified. MPI-I/O: MPI_File_read() is verified. MPI-I/O: MPI_FILE_close() is verified. No errors
Passed Master/slave test - master
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 1
Test Description:
This test, running as a single MPI process, spawns four slave processes using MPI_Comm_spawn(). The master process sends and receives a message from each slave. If the test completes, it will report 'No errors.', otherwise specific error messages are listed.
MPI_UNIVERSE_SIZE read 33 MPI_UNIVERSE_SIZE forced to 33 master rank creating 4 slave processes. master error code for slave:0 is 0. master error code for slave:1 is 0. master error code for slave:2 is 0. master error code for slave:3 is 0. master rank:0/1 sent an int:4 to slave rank:0. slave rank:0/4 alive. master rank:0/1 sent an int:4 to slave rank:1. slave rank:1/4 alive. master rank:0/1 sent an int:4 to slave rank:2. slave rank:2/4 alive. master rank:0/1 sent an int:4 to slave rank:3. slave rank:3/4 alive. slave rank:3/4 received an int:4 from rank 0 master rank:0/1 recv an int:0 from slave rank:0 slave rank:0/4 received an int:4 from rank 0 slave rank:0/4 sent its rank to rank 0 slave rank 0 just before disconnecting from master_comm. slave rank: 0 after disconnecting from master_comm. master rank:0/1 recv an int:1 from slave rank:1 master rank:0/1 recv an int:2 from slave rank:2 master rank:0/1 recv an int:3 from slave rank:3 ./master ending with exit status:0 slave rank:1/4 received an int:4 from rank 0 slave rank:1/4 sent its rank to rank 0 slave rank 1 just before disconnecting from master_comm. slave rank: 1 after disconnecting from master_comm. slave rank:2/4 received an int:4 from rank 0 slave rank:2/4 sent its rank to rank 0 slave rank 2 just before disconnecting from master_comm. slave rank: 2 after disconnecting from master_comm. slave rank:3/4 sent its rank to rank 0 slave rank 3 just before disconnecting from master_comm. slave rank: 3 after disconnecting from master_comm. No errors
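The spawn side of this master/slave pattern reduces to the sketch below; "./slave" is a hypothetical executable name and the single send is illustrative:

    #include <mpi.h>

    int main(int argc, char **argv)
    {
        MPI_Comm children;
        int errcodes[4], msg = 4;
        MPI_Init(&argc, &argv);
        /* Spawn four copies of a worker binary ("./slave" is hypothetical). */
        MPI_Comm_spawn("./slave", MPI_ARGV_NULL, 4, MPI_INFO_NULL, 0,
                       MPI_COMM_SELF, &children, errcodes);
        /* The slaves form the remote group of the intercommunicator;
         * talk to them over it, then disconnect. */
        MPI_Send(&msg, 1, MPI_INT, 0, 0, children);
        MPI_Comm_disconnect(&children);
        MPI_Finalize();
        return 0;
    }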
Failed MPI-2 Routines test 2 - mpi_2_functions_bcast
Build: Passed
Execution: Failed
Exit Status: Application_timed_out
MPI Processes: 2
Test Description:
This test simply checks all MPI-2 routines that replaced some MPI-1 routines. Since these routines were added to avoid ambiguity with MPI-2 functionality, they do not add functionality to the test suite.
Test Output: None.
Passed MPI-2 routines test 1 - mpi_2_functions
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 1
Test Description:
Checks all MPI-2.2 routines that replaced deprecated routines. If the test passes, then "No errors" is reported, otherwise, specific errors are reported.
No errors
Passed One-sided fences test - one_sided_fences
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 2
Test Description:
This test verifies that one-sided communication with active target synchronization with fences functions properly. If all operations succeed, one-sided communication with active target synchronization with fences is reported as supported. If one or more operations fail, the failures are reported and one-sided communication with active target synchronization with fences is reported as NOT supported.
No errors
Passed One-sided communication test - one_sided_modes
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 1
Test Description:
Checks MPI-2.2 one-sided communication modes reporting those that are not defined. If the test compiles, then "No errors" is reported, else, all undefined modes are reported as "not defined."
No errors
Passed One-sided passive test - one_sided_passive
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 2
Test Description:
Verifies that one-sided communication with passive target synchronization functions properly. If all operations succeed, one-sided communication with passive target synchronization is reported as supported. If one or more operations fail, the failures are reported and one-sided communication with passive target synchronization is reported as NOT supported.
No errors
Passed One-sided post test - one_sided_post
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 2
Test Description:
Verifies one-sided-communication with active target synchronization with post/start/complete/wait functions properly. If all operations succeed, one-sided communication with active target synchronization with post/start/complete/wait is reported as supported. If one or more operations fail, the failures are reported and one-sided-communication with active target synchronization with post/start/complete/wait is reported as NOT supported.
No errors
Passed One-sided routines test - one_sided_routines
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 1
Test Description:
Reports if one-sided communication routines are defined. If this test compiles, one-sided communication is reported as defined, otherwise "not supported".
No errors
Passed Thread support test - thread_safety
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 1
Test Description:
Reports the level of thread support provided. This is either MPI_THREAD_SINGLE, MPI_THREAD_FUNNELED, MPI_THREAD_SERIALIZED, or MPI_THREAD_MULTIPLE.
MPI_THREAD_MULTIPLE requested. MPI_THREAD_MULTIPLE is supported. No errors
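The level reported above is obtained with MPI_Init_thread(), as in this minimal sketch:

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int provided;
        /* Request the highest level; MPI returns what it actually grants. */
        MPI_Init_thread(&argc, &argv, MPI_THREAD_MULTIPLE, &provided);
        if (provided == MPI_THREAD_MULTIPLE)
            printf("MPI_THREAD_MULTIPLE is supported.\n");
        else
            printf("thread level granted: %d\n", provided);
        MPI_Finalize();
        return 0;
    }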
Passed Comm_create() test - iccreate
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 8
Test Description:
This program tests that MPI_Comm_create applies to intercommunicators. This is an extension added in MPI-2.
No errors
Passed Comm_split Test 1 - icsplit
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 8
Test Description:
This tests whether MPI_Comm_split() applies to intercommunicators, which is an extension of MPI-2.
No errors
Passed MPI_Topo_test() test - dgraph_unwgt
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 4
Test Description:
Specify a distributed graph of a bidirectional ring of the MPI_COMM_WORLD communicator. Thus each node in the graph has a left and right neighbor.
No errors
RMA - Score: 97% Passed
This group features tests that involve Remote Memory Access, sometimes called one-sided communication. Remote Memory Access is similar in functionality to shared memory access.
Passed Alloc_mem test - alloc
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 1
Test Description:
Checks if MPI_Alloc_mem() is supported. If the test passes, "MPI_Alloc_mem is supported." is reported; otherwise, "MPI_Alloc_mem NOT supported" is reported.
No errors
Passed One-sided fences test - one_sided_fences
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 2
Test Description:
This test verifies that one-sided communication with active target synchronization with fences functions properly. If all operations succeed, one-sided communication with active target synchronization with fences is reported as supported. If one or more operations fail, the failures are reported and one-sided communication with active target synchronization with fences is reported as NOT supported.
No errors
Passed One-sided communication test - one_sided_modes
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 1
Test Description:
Checks MPI-2.2 one-sided communication modes reporting those that are not defined. If the test compiles, then "No errors" is reported, else, all undefined modes are reported as "not defined."
No errors
Passed One-sided passive test - one_sided_passive
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 2
Test Description:
Verifies that one-sided communication with passive target synchronization functions properly. If all operations succeed, one-sided communication with passive target synchronization is reported as supported. If one or more operations fail, the failures are reported and one-sided communication with passive target synchronization is reported as NOT supported.
No errors
Passed One-sided post test - one_sided_post
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 2
Test Description:
Verifies one-sided-communication with active target synchronization with post/start/complete/wait functions properly. If all operations succeed, one-sided communication with active target synchronization with post/start/complete/wait is reported as supported. If one or more operations fail, the failures are reported and one-sided-communication with active target synchronization with post/start/complete/wait is reported as NOT supported.
No errors
Passed One-sided routines test - one_sided_routines
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 1
Test Description:
Reports if one-sided communication routines are defined. If this test compiles, one-sided communication is reported as defined, otherwise "not supported".
No errors
Failed Accumulate with fence test 1 - accfence1
Build: Passed
Execution: Failed
Exit Status: Failed with signal 9
MPI Processes: 4
Test Description:
This is a simple test of Accumulate/Replace with fence.
MPT ERROR: Unrecognized type in MPI_SGI_unpacktype MPT ERROR: Rank 3(g:3) is aborting with error code 1. Process ID: 175911, Host: r2i7n16, Program: /p/home/withheld/BCT_MPI/mpt_2.20/mpitests/rma/accfence1 MPT Version: HPE MPT 2.20 08/30/19 04:33:45 MPT: --------stack traceback------- MPT: Attaching to program: /proc/175911/exe, process 175911 MPT: (no debugging symbols found)...done. MPT: [New LWP 175926] MPT: [Thread debugging using libthread_db enabled] MPT: Using host libthread_db library "/lib64/libthread_db.so.1". MPT: (no debugging symbols found)...done. MPT: (no debugging symbols found)...done. MPT: (no debugging symbols found)...done. MPT: (no debugging symbols found)...done. MPT: (no debugging symbols found)...done. MPT: (no debugging symbols found)...done. MPT: (no debugging symbols found)...done. MPT: (no debugging symbols found)...done. MPT: (no debugging symbols found)...done. MPT: (no debugging symbols found)...done. MPT: (no debugging symbols found)...done. MPT: (no debugging symbols found)...done. MPT: (no debugging symbols found)...done. MPT: (no debugging symbols found)...done. MPT: (no debugging symbols found)...done. MPT: (no debugging symbols found)...done. MPT: (no debugging symbols found)...done. MPT: (no debugging symbols found)...done. MPT: (no debugging symbols found)...done. MPT: (no debugging symbols found)...done. MPT: (no debugging symbols found)...done. MPT: (no debugging symbols found)...done. MPT: 0x00002aaaaaee41d9 in waitpid () from /lib64/libpthread.so.0 MPT: Missing separate debuginfos, use: debuginfo-install glibc-2.17-307.el7.1.x86_64 libbitmask-2.0-sgi720r52.rhel76.x86_64 libcpuset-1.0-sgi720r102.rhel76.x86_64 libgcc-4.8.5-39.el7.x86_64 libibverbs-22.4-2.el7_8.x86_64 libnl3-3.2.28-4.el7.x86_64 libpsm2-11.2.80-1.x86_64 numactl-libs-2.0.12-5.el7.x86_64 numatools-2.0-sgi720r149.rhel76.x86_64 MPT: (gdb) #0 0x00002aaaaaee41d9 in waitpid () from /lib64/libpthread.so.0 MPT: #1 0x00002aaaab61d806 in mpi_sgi_system ( MPT: #2 MPI_SGI_stacktraceback ( MPT: header=header@entry=0x7fffffffaa00 "MPT ERROR: Rank 3(g:3) is aborting with error code 1.\n\tProcess ID: 175911, Host: r2i7n16, Program: /p/home/withheld/BCT_MPI/mpt_2.20/mpitests/rma/accfence1\n\tMPT Version: HPE MPT 2.20 08/30/19 04:33:45"...) at sig.c:340 MPT: #3 0x00002aaaab565fc9 in print_traceback (ecode=ecode@entry=1) at abort.c:246 MPT: #4 0x00002aaaab56629a in PMPI_Abort (comm=comm@entry=1, MPT: errorcode=errorcode@entry=1) at abort.c:68 MPT: #5 0x00002aaaab637f75 in MPI_SGI_unpacktype ( MPT: packbuf=packbuf@entry=0x7fffffffafa0 "\a", buflen=24, MPT: bufpos=bufpos@entry=0x7fffffffb048, comm=4) at unpacktype.c:264 MPT: #6 0x00002aaaab619158 in MPI_SGI_rma_progress () at rma_progress.c:141 MPT: #7 0x00002aaaab55b57c in progress_rma () at progress.c:205 MPT: #8 MPI_SGI_progress (dom=0x2aaaab8d7dc0 <dom_default>) at progress.c:315 MPT: #9 0x00002aaaab562823 in MPI_SGI_request_wait ( MPT: request=request@entry=0x7fffffffb18c, MPT: status=status@entry=0x612ab0 <mpi_sgi_status_ignore>, MPT: set=set@entry=0x7fffffffb184, gen_rc=gen_rc@entry=0x7fffffffb188) MPT: at req.c:1662 MPT: #10 0x00002aaaab57faa3 in MPI_SGI_barrier_basic (comm=4) at barrier.c:62 MPT: #11 0x00002aaaab57fc6d in MPI_SGI_barrier (comm=<optimized out>) MPT: at barrier.c:210 MPT: #12 0x00002aaaab63d2c5 in PMPI_Win_fence (assert=<optimized out>, win=1) MPT: at win_fence.c:46 MPT: #13 0x00000000004022ad in main () MPT: (gdb) A debugging session is active. MPT: MPT: Inferior 1 [process 175911] will be detached. MPT: MPT: Quit anyway? 
(y or n) [answered Y; input not from terminal] MPT: Detaching from program: /proc/175911/exe, process 175911 MPT: [Inferior 1 (process 175911) detached] MPT: -----stack traceback ends----- MPT ERROR: MPI_COMM_WORLD rank 3 has terminated without calling MPI_Finalize() aborting job
Passed Accumulate with fence test 2 - accfence2_am
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 4
Test Description:
Accumulate Fence. Test MPI_Accumulate with fence. This test is the same as accfence2 except that it uses alloc_mem() to allocate memory.
No errors
Passed Accumulate() with fence test 3 - accfence2
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 4
Test Description:
Accumulate Fence. Test MPI_Accumulate with fence. The test illustrates the use of the routines to run through a selection of communicators and datatypes. Use subsets of these for tests that do not involve combinations of communicators, datatypes, and counts of datatypes.
No errors
Passed Accumulate with Lock test - acc-loc
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 4
Test Description:
Accumulate Lock. This test uses MAXLOC and MINLOC with MPI_Accumulate.
No errors
Failed RMA post/start/complete/wait test - accpscw1
Build: Passed
Execution: Failed
Exit Status: Application_ended_with_error(s)
MPI Processes: 4
Test Description:
Accumulate Post-Start-Complete-Wait. This test uses accumulate/replace with post/start/complete/wait.
MPT ERROR: Unrecognized type in MPI_SGI_unpacktype MPT ERROR: Rank 3(g:3) is aborting with error code 1. Process ID: 181073, Host: r2i7n16, Program: /p/home/withheld/BCT_MPI/mpt_2.20/mpitests/rma/accpscw1 MPT Version: HPE MPT 2.20 08/30/19 04:33:45 MPT: --------stack traceback------- MPT: Attaching to program: /proc/181073/exe, process 181073 MPT: (no debugging symbols found)...done. MPT: [New LWP 181076] MPT: [Thread debugging using libthread_db enabled] MPT: Using host libthread_db library "/lib64/libthread_db.so.1". MPT: (no debugging symbols found)...done. MPT: (no debugging symbols found)...done. MPT: (no debugging symbols found)...done. MPT: (no debugging symbols found)...done. MPT: (no debugging symbols found)...done. MPT: (no debugging symbols found)...done. MPT: (no debugging symbols found)...done. MPT: (no debugging symbols found)...done. MPT: (no debugging symbols found)...done. MPT: (no debugging symbols found)...done. MPT: (no debugging symbols found)...done. MPT: (no debugging symbols found)...done. MPT: (no debugging symbols found)...done. MPT: (no debugging symbols found)...done. MPT: (no debugging symbols found)...done. MPT: (no debugging symbols found)...done. MPT: (no debugging symbols found)...done. MPT: (no debugging symbols found)...done. MPT: (no debugging symbols found)...done. MPT: (no debugging symbols found)...done. MPT: (no debugging symbols found)...done. MPT: (no debugging symbols found)...done. MPT: 0x00002aaaaaee41d9 in waitpid () from /lib64/libpthread.so.0 MPT: Missing separate debuginfos, use: debuginfo-install glibc-2.17-307.el7.1.x86_64 libbitmask-2.0-sgi720r52.rhel76.x86_64 libcpuset-1.0-sgi720r102.rhel76.x86_64 libgcc-4.8.5-39.el7.x86_64 libibverbs-22.4-2.el7_8.x86_64 libnl3-3.2.28-4.el7.x86_64 libpsm2-11.2.80-1.x86_64 numactl-libs-2.0.12-5.el7.x86_64 numatools-2.0-sgi720r149.rhel76.x86_64 MPT: (gdb) #0 0x00002aaaaaee41d9 in waitpid () from /lib64/libpthread.so.0 MPT: #1 0x00002aaaab61d806 in mpi_sgi_system ( MPT: #2 MPI_SGI_stacktraceback ( MPT: header=header@entry=0x7fffffffaa50 "MPT ERROR: Rank 3(g:3) is aborting with error code 1.\n\tProcess ID: 181073, Host: r2i7n16, Program: /p/home/withheld/BCT_MPI/mpt_2.20/mpitests/rma/accpscw1\n\tMPT Version: HPE MPT 2.20 08/30/19 04:33:45\n") at sig.c:340 MPT: #3 0x00002aaaab565fc9 in print_traceback (ecode=ecode@entry=1) at abort.c:246 MPT: #4 0x00002aaaab56629a in PMPI_Abort (comm=comm@entry=1, MPT: errorcode=errorcode@entry=1) at abort.c:68 MPT: #5 0x00002aaaab637f75 in MPI_SGI_unpacktype ( MPT: packbuf=packbuf@entry=0x7fffffffaff0 "\a", buflen=24, MPT: bufpos=bufpos@entry=0x7fffffffb098, comm=5) at unpacktype.c:264 MPT: #6 0x00002aaaab619158 in MPI_SGI_rma_progress () at rma_progress.c:141 MPT: #7 0x00002aaaab55b57c in progress_rma () at progress.c:205 MPT: #8 MPI_SGI_progress (dom=dom@entry=0x2aaaab8d7dc0 <dom_default>) MPT: at progress.c:315 MPT: #9 0x00002aaaab640b13 in MPI_SGI_win_test (winptr=0x4010dc0, MPT: flag=flag@entry=0x0) at win_test.c:70 MPT: #10 0x00002aaaab6414bf in PMPI_Win_wait (win=1) at win_wait.c:28 MPT: #11 0x00000000004024b5 in main () MPT: (gdb) A debugging session is active. MPT: MPT: Inferior 1 [process 181073] will be detached. MPT: MPT: Quit anyway? (y or n) [answered Y; input not from terminal] MPT: Detaching from program: /proc/181073/exe, process 181073 MPT: [Inferior 1 (process 181073) detached] MPT: -----stack traceback ends----- MPT ERROR: MPI_COMM_WORLD rank 3 has terminated without calling MPI_Finalize() aborting job
Passed ADLB mimic test - adlb_mimic1
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 3
Test Description:
This test uses one server process (S), one target process (T) and a bunch of origin processes (O). 'O' PUTs (LOCK/PUT/UNLOCK) data to a distinct part of the window, and sends a message to 'S' once the UNLOCK has completed. The server forwards this message to 'T'. 'T' GETS the data from this buffer after it receives the message from 'S', to see if it contains the correct contents.
No errors
Passed Alloc_mem test - allocmem
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 2
Test Description:
Allocate Memory. Simple test where MPI_Alloc_mem() and MPI_Free_mem() work together.
No errors
Passed Attributes order test - attrorderwin
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 1
Test Description:
Test creating and inserting attributes in different orders to ensure the list management code handles all cases.
No errors
Passed RMA compliance test - badrma
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 2
Test Description:
The test uses various combinations of either zero size datatypes or zero size counts. All tests should pass to be compliant with the MPI-3.0 specification.
No errors
Passed RMA attributes test - baseattrwin
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 1
Test Description:
This test creates a window, then extracts its attributes through a series of MPI calls.
No errors
Passed Compare_and_swap test - compare_and_swap
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 4
Test Description:
This function compares one element of type datatype in the compare buffer compare_addr with the buffer at offset target_disp in the target window specified by target_rank and window. It replaces the value at the target with the value in the origin buffer if both buffers are identical. The original value at the target is returned in the result buffer.
No errors
Passed Contended Put test 2 - contention_put
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 4
Test Description:
Contended RMA put test by James Dinan dinan@mcs.anl.gov. Each process issues COUNT put operations to non-overlapping locations on every other process.
No errors
Passed Contended Put test 1 - contention_putget
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 4
Test Description:
Contended RMA put test by James Dinan dinan@mcs.anl.gov. Each process issues COUNT put and get operations to non-overlapping locations on every other process.
No errors
Passed Contiguous Get test - contig_displ
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 1
Test Description:
This program calls MPI_Get with an indexed datatype. The datatype comprises a single integer at an initial displacement of 1 integer. That is, the first integer in the array is to be skipped. This program found a bug in IBM's MPI in which MPI_Get ignored the displacement and got the first integer instead of the second. Run with one (1) process.
No errors
Passed Put() with fences test - epochtest
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 4
Test Description:
Put with Fences used to separate epochs. This test looks at the behavior of MPI_Win_fence and epochs. Each MPI_Win_fence may both begin and end both the exposure and access epochs. Thus, it is not necessary to use MPI_Win_fence in pairs.
The tests have the following form:
Process A          Process B
fence              fence
put,put
fence              fence
                   put,put
fence              fence
put,put            put,put
fence              fence
No errors
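The unpaired-fence pattern in the diagram reduces to the following sketch with a one-int window and an illustrative payload:

    #include <mpi.h>

    int main(int argc, char **argv)
    {
        int rank, size, val = 42, *base;
        MPI_Win win;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);
        MPI_Win_allocate(sizeof(int), sizeof(int), MPI_INFO_NULL,
                         MPI_COMM_WORLD, &base, &win);
        MPI_Win_fence(0, win);   /* opens the first epoch */
        if (rank == 0 && size > 1)
            MPI_Put(&val, 1, MPI_INT, 1, 0, 1, MPI_INT, win);
        MPI_Win_fence(0, win);   /* closes it and opens the next */
        if (rank == 1)
            MPI_Put(&val, 1, MPI_INT, 0, 0, 1, MPI_INT, win);
        MPI_Win_fence(0, win);   /* one fence per boundary, not per pair */
        MPI_Win_free(&win);
        MPI_Finalize();
        return 0;
    }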
Passed RMA Shared Memory test - fence_shm
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 2
Test Description:
This simple test uses MPI_Win_allocate_shared() with MPI_Win_fence() and MPI_Put() calls with assertions.
No errors
Passed Fetch_and_add test 2 - fetchandadd_am
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 7
Test Description:
MPI fetch and add test. Fetch and add example from Using MPI-2 (the non-scalable version, Fig. 6.12). This test is the same as rma/fetchandadd but uses alloc_mem.
No errors
Passed Fetch_and_add test 1 - fetchandadd
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 7
Test Description:
Fetch and add example from Using MPI-2 (the non-scalable version, Fig. 6.12).
No errors
Passed Fetch_and_add test 4 - fetchandadd_tree_am
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 7
Test Description:
This is the tree-based scalable version of the fetch-and-add example from Using MPI-2, pg 206-207. The code in the book (Fig 6.16) has bugs that are fixed in this test.
No errors
Passed Fetch_and_add test 3 - fetchandadd_tree
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 7
Test Description:
This is the tree-based scalable version of the fetch-and-add example from the book Using MPI-2, p. 206-207. This test is functionally attempting to perform an atomic read-modify-write sequence using MPI-2 one-sided operations.
No errors
Passed Fetch_and_op test - fetch_and_op
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 4
Test Description:
This is a simple test that executes the MPI_Fetch_and_op() calls on RMA windows.
No errors
Passed Keyvalue create/delete test - fkeyvalwin
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 1
Test Description:
Free keyval window. Test freeing keyvals while still attached to an RMA window, then make sure that the keyval delete code is still executed.
No errors
Passed Win_flush() test - flush
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 4
Test Description:
Window Flush. This simple test flushes a shared window.
No errors
Passed Get_accumulate test 1 - get_acc_local
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 1
Test Description:
Get Accumulated Test. This is a simple test of MPI_Get_accumulate().
No errors
Passed Get_accumulate test 2 - get_accumulate
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 4
Test Description:
Get Accumulate Test. This is a simple test of MPI_Get_accumulate().
No errors
Passed Get with fence test - getfence1
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 4
Test Description:
Get with Fence. This is a simple test using MPI_Get() with fence.
No errors
Passed Win_get_group test - getgroup
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 4
Test Description:
This is a simple test of MPI_Win_get_group().
No errors
Passed Parallel pi calculation test - ircpi
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 2
Test Description:
This test calculates pi by integrating the function 4/(1+x*x). It was converted from an interactive program to a batch program to facilitate its use in the test suite.
Enter the number of intervals: (0 quits) Number if intervals used: 10 pi is approximately 3.1424259850010978, Error is 0.0008333314113047 Enter the number of intervals: (0 quits) Number if intervals used: 100 pi is approximately 3.1416009869231241, Error is 0.0000083333333309 Enter the number of intervals: (0 quits) Number if intervals used: 1000 pi is approximately 3.1415927369231271, Error is 0.0000000833333340 Enter the number of intervals: (0 quits) Number if intervals used: 10000 pi is approximately 3.1415926544231247, Error is 0.0000000008333316 Enter the number of intervals: (0 quits) Number if intervals used: 100000 pi is approximately 3.1415926535981344, Error is 0.0000000000083413 Enter the number of intervals: (0 quits) Number if intervals used: 1000000 pi is approximately 3.1415926535898899, Error is 0.0000000000000968 Enter the number of intervals: (0 quits) Number if intervals used: 10000000 pi is approximately 3.1415926535898064, Error is 0.0000000000000133 Enter the number of intervals: (0 quits) Number if intervals used: 0 No errors.
Passed Linked_list construction test 1 - linked_list_bench_lock_all
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 4
Test Description:
Construct a distributed shared linked list using MPI-3 dynamic windows. Initially process 0 creates the head of the list, attaches it to the window, and broadcasts the pointer to all processes. Each process p then appends N new elements to the list when the tail reaches process p-1.
No errors
Passed Linked_list construction test 2 - linked_list_bench_lock_excl
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 4
Test Description:
MPI-3 distributed linked list construction example. Construct a distributed shared linked list using proposed MPI-3 dynamic windows. Initially process 0 creates the head of the list, attaches it to the window, and broadcasts the pointer to all processes. Each process "p" then appends N new elements to the list when the tail reaches process "p-1". The test uses the MPI_LOCK_EXCLUSIVE argument with MPI_Win_lock().
No errors
Passed Linked-list construction test 3 - linked_list_bench_lock_shr
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 4
Test Description:
This test constructs a distributed shared linked list using MPI-3 dynamic windows. Initially process 0 creates the head of the list, attaches it to the window, and broadcasts the pointer to all processes. Each process "p" then appends N new elements to the list when the tail reaches process "p-1". This test is similar to "rma/linked_list_bench_lock_excl" but uses an MPI_LOCK_SHARED parameter to MPI_Win_lock().
No errors
Passed Linked_list construction test 4 - linked_list
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 4
Test Description:
This test constructs a distributed shared linked list using MPI-3 dynamic windows. Initially process 0 creates the head of the list, attaches it to an RMA window, and broadcasts the pointer to all processes. All processes then concurrently append N new elements to the list. When a process attempts to attach its element to the tail of list it may discover that its tail pointer is stale and it must chase ahead to the new tail before the element can be attached.
No errors
Passed Linked list construction test 5 - linked_list_fop
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 4
Test Description:
This test constructs a distributed shared linked list using MPI-3 dynamic windows. Initially process 0 creates the head of the list, attaches it to the window, and broadcasts the pointer to all processes. All processes then concurrently append N new elements to the list. When a process attempts to attach its element to the tail of list it may discover that its tail pointer is stale and it must chase ahead to the new tail before the element can be attached.
No errors
Passed Linked list construction test 6 - linked_list_lockall
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 4
Test Description:
Construct a distributed shared linked list using MPI-3 dynamic RMA windows. Initially process 0 creates the head of the list, attaches it to the window, and broadcasts the pointer to all processes. All processes then concurrently append N new elements to the list. When a process attempts to attach its element to the tail of list it may discover that its tail pointer is stale and it must chase ahead to the new tail before the element can be attached. This version of the test suite uses MPI_Win_lock_all() instead of MPI_Win_lock(MPI_LOCK_EXCLUSIVE, ...).
No errors
Passed RMA contention test 1 - lockcontention2
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 8
Test Description:
Tests for lock contention, including special cases within the MPI implementation; coverage analysis showed that the original lockcontention test was not covering all cases, and this test revealed a bug in the code. In all of these tests, each process writes (or accesses) the values rank + i*size_of_world NELM times. This test strives to avoid operations not strictly permitted by MPI RMA; for example, it does not target the same locations with multiple put/get calls in the same access epoch.
No errors
Failed RMA contention test 2 - lockcontention3
Build: Passed
Execution: Failed
Exit Status: Failed with signal 9
MPI Processes: 8
Test Description:
Additional tests for lock contention, designed to exercise some of the optimizations within MPICH; all are valid MPI programs. The test structure includes:
lock local (must happen at this time since the application can use load/store after the lock)
send message to partner
receive message
send ack
receive ack
Provide a delay so that the partner will see the conflict
partner executes:
lock // Note: this may block rma operations (see below)
unlock
send back to partner
unlock
receive from partner
check for correct data
The delay may be implemented as a ring of message communication; this is likely to automatically scale the time to what is needed.
MPT ERROR: Rank 1(g:1) received signal SIGSEGV(11). Process ID: 174219, Host: r2i7n16, Program: /p/home/withheld/BCT_MPI/mpt_2.20/mpitests/rma/lockcontention3 MPT Version: HPE MPT 2.20 08/30/19 04:33:45 MPT: --------stack traceback------- MPT: Attaching to program: /proc/174219/exe, process 174219 MPT: (no debugging symbols found)...done. MPT: [New LWP 174227] MPT: [Thread debugging using libthread_db enabled] MPT: Using host libthread_db library "/lib64/libthread_db.so.1". MPT: (no debugging symbols found)...done. MPT: (no debugging symbols found)...done. MPT: (no debugging symbols found)...done. MPT: (no debugging symbols found)...done. MPT: (no debugging symbols found)...done. MPT: (no debugging symbols found)...done. MPT: (no debugging symbols found)...done. MPT: (no debugging symbols found)...done. MPT: (no debugging symbols found)...done. MPT: (no debugging symbols found)...done. MPT: (no debugging symbols found)...done. MPT: (no debugging symbols found)...done. MPT: (no debugging symbols found)...done. MPT: (no debugging symbols found)...done. MPT: (no debugging symbols found)...done. MPT: (no debugging symbols found)...done. MPT: (no debugging symbols found)...done. MPT: (no debugging symbols found)...done. MPT: (no debugging symbols found)...done. MPT: (no debugging symbols found)...done. MPT: (no debugging symbols found)...done. MPT: (no debugging symbols found)...done. MPT: 0x00002aaaaaee41d9 in waitpid () from /lib64/libpthread.so.0 MPT: Missing separate debuginfos, use: debuginfo-install glibc-2.17-307.el7.1.x86_64 libbitmask-2.0-sgi720r52.rhel76.x86_64 libcpuset-1.0-sgi720r102.rhel76.x86_64 libgcc-4.8.5-39.el7.x86_64 libibverbs-22.4-2.el7_8.x86_64 libnl3-3.2.28-4.el7.x86_64 libpsm2-11.2.80-1.x86_64 numactl-libs-2.0.12-5.el7.x86_64 numatools-2.0-sgi720r149.rhel76.x86_64 MPT: (gdb) #0 0x00002aaaaaee41d9 in waitpid () from /lib64/libpthread.so.0 MPT: #1 0x00002aaaab61d806 in mpi_sgi_system ( MPT: #2 MPI_SGI_stacktraceback ( MPT: header=header@entry=0x7fffffff9dc0 "MPT ERROR: Rank 1(g:1) received signal SIGSEGV(11).\n\tProcess ID: 174219, Host: r2i7n16, Program: /p/home/withheld/BCT_MPI/mpt_2.20/mpitests/rma/lockcontention3\n\tMPT Version: HPE MPT 2.20 08/30/19 04:3"...) 
at sig.c:340 MPT: #3 0x00002aaaab61da02 in first_arriver_handler (signo=signo@entry=11, MPT: stack_trace_sem=stack_trace_sem@entry=0x2aaaaea20080) at sig.c:489 MPT: #4 0x00002aaaab61dd9b in slave_sig_handler (signo=11, MPT: siginfo=<optimized out>, extra=<optimized out>) at sig.c:565 MPT: #5 <signal handler called> MPT: #6 0x00002aaaabf546ef in __memcpy_ssse3_back () from /lib64/libc.so.6 MPT: #7 0x00002aaaab5645a4 in mpt_memcpy (len=<optimized out>, s=<optimized out>, MPT: d=<optimized out>) at ../../../../include/mpiimpl.h:515 MPT: #8 pack_memcpy (len=<optimized out>, _src=<optimized out>, MPT: _dst=<optimized out>) at ../../../../include/mpiimpl.h:539 MPT: #9 do_rdma (len=<optimized out>, value=0x0, loc_addr=<optimized out>, MPT: rem_addr=<optimized out>, modes=0, gps=0x618c30) at shared.c:1081 MPT: #10 MPI_SGI_shared_do_rdma (request=0x2fc0c80) at shared.c:1110 MPT: #11 0x00002aaaab55a0fd in MPI_SGI_packet_state_rdma ( MPT: request=request@entry=0x2fc0c80) at packet_state.c:576 MPT: #12 0x00002aaaab5572a3 in MPI_SGI_packet_send_rdma ( MPT: request=request@entry=0x2fc0c80) at packet_send.c:152 MPT: #13 0x00002aaaab560eda in MPI_SGI_request_rdma (dom=<optimized out>, MPT: modes=modes@entry=0, loc_addr=loc_addr@entry=0x2f992a4, MPT: value=<optimized out>, value@entry=0x0, rad=rad@entry=0x7fffffffb050, MPT: len=len@entry=508) at req.c:1023 MPT: #14 0x00002aaaab608c42 in rdma_read (len=508, locp=0x2f992a4, MPT: rad=0x7fffffffb050, dom=<optimized out>) at rdma.c:140 MPT: #15 MPI_SGI_rdma_get (area=<optimized out>, rank=rank@entry=0, MPT: remp=remp@entry=0xe5c, locp=locp@entry=0x2f992a4, len=len@entry=508, MPT: isamo=isamo@entry=0) at rdma.c:435 MPT: #16 0x00002aaaab5ba780 in rdma_get (flags=0, winptr=0x2f9ccd0, MPT: target_datatype=3, target_count=<optimized out>, MPT: target_disp=<optimized out>, target_rank=0, MPT: origin_datatype=<optimized out>, origin_count=<optimized out>, MPT: origin_addr=<optimized out>) at get.c:128 MPT: #17 MPI_SGI_get (flags=0, win=<optimized out>, MPT: target_datatype=<optimized out>, target_count=<optimized out>, MPT: target_disp=<optimized out>, target_rank=0, MPT: origin_datatype=<optimized out>, origin_count=<optimized out>, MPT: origin_addr=<optimized out>) at get.c:206 MPT: #18 PMPI_Get (origin_addr=<optimized out>, origin_count=<optimized out>, MPT: origin_datatype=<optimized out>, target_rank=0, MPT: target_disp=<optimized out>, target_count=<optimized out>, MPT: target_datatype=3, win=1) at get.c:253 MPT: #19 0x0000000000402921 in main () MPT: (gdb) A debugging session is active. MPT: MPT: Inferior 1 [process 174219] will be detached. MPT: MPT: Quit anyway? (y or n) [answered Y; input not from terminal] MPT: Detaching from program: /proc/174219/exe, process 174219 MPT: [Inferior 1 (process 174219) detached] MPT: -----stack traceback ends----- MPT: On host r2i7n16, Program /p/home/withheld/BCT_MPI/mpt_2.20/mpitests/rma/lockcontention3, Rank 1, Process 174219: Dumping core on signal SIGSEGV(11) into directory /p/home/withheld/BCT_MPI/mpt_2.20/mpitests/rma MPT ERROR: MPI_COMM_WORLD rank 1 has terminated without calling MPI_Finalize() aborting job MPT: Received signal 11
Passed RMA contention test 3 - lockcontention
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 3
Test Description:
This is a modified version of rma/test4, submitted by Liwei Peng, Microsoft. It tests passive target RMA on 3 processes and exercises the lock-single_op-unlock optimization.
No errors
Passed Locks with no RMA ops test - locknull
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 2
Test Description:
This test creates a window, clears the memory in it using memset(), locks and unlocks it, then terminates.
No errors
Passed Lock-single_op-unlock test - lockopts
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 2
Test Description:
Test passive target RMA on 2 processes with the original datatype derived from the target datatype.
No errors
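A minimal sketch of the lock-single_op-unlock pattern this test targets (assumed two ranks, illustrative only): a single Put inside an exclusive-lock epoch, which implementations may collapse into one network operation.

```c
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, val = 42, *base;
    MPI_Win win;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* assumes at least 2 ranks */

    MPI_Win_allocate(sizeof(int), sizeof(int), MPI_INFO_NULL,
                     MPI_COMM_WORLD, &base, &win);
    base[0] = 0;
    MPI_Barrier(MPI_COMM_WORLD);

    if (rank == 1) {
        MPI_Win_lock(MPI_LOCK_EXCLUSIVE, 0, 0, win);
        MPI_Put(&val, 1, MPI_INT, 0, 0, 1, MPI_INT, win);
        MPI_Win_unlock(0, win);   /* Put is complete at the target here */
    }
    MPI_Barrier(MPI_COMM_WORLD);
    if (rank == 0)                /* assumes the unified memory model */
        printf("window now holds %d\n", base[0]);

    MPI_Win_free(&win);
    MPI_Finalize();
    return 0;
}
```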
Passed RMA many ops test 1 - manyrma2
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 2
Test Description:
This test is a simplification of the one in "perf/manyrma" that tests for correct handling of the case where many RMA operations occur between synchronization events. This is one of the ways that RMA may be used, and is used in the reference implementation of the graph500 benchmark.
No errors
Passed RMA many ops test 2 - manyrma3
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 2
Test Description:
Many RMA operations. This simple test creates an RMA window, locks it, and performs many accumulate operations on it.
No errors
Passed Mixed synchronization test - mixedsync
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 4
Test Description:
Perform several communication operations, mixing synchronization types. Use multiple communication operations to avoid the single-operation optimization that may be present.
No errors
Passed RMA fence test 1 - nullpscw
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 7
Test Description:
This simple test creates a window then performs a post/start/complete/wait operation.
No errors
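For orientation, a minimal post/start/complete/wait sketch (assumed rank roles, not the test's source, which uses empty groups):

```c
#include <mpi.h>

int main(int argc, char **argv)
{
    int rank, val = 7, *base, peer_rank;
    MPI_Group world, peer;
    MPI_Win win;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_group(MPI_COMM_WORLD, &world);

    MPI_Win_allocate(sizeof(int), sizeof(int), MPI_INFO_NULL,
                     MPI_COMM_WORLD, &base, &win);

    if (rank == 0) {                      /* target: expose, then wait */
        peer_rank = 1;
        MPI_Group_incl(world, 1, &peer_rank, &peer);
        MPI_Win_post(peer, 0, win);
        MPI_Win_wait(win);
        MPI_Group_free(&peer);
    } else if (rank == 1) {               /* origin: access, then complete */
        peer_rank = 0;
        MPI_Group_incl(world, 1, &peer_rank, &peer);
        MPI_Win_start(peer, 0, win);
        MPI_Put(&val, 1, MPI_INT, 0, 0, 1, MPI_INT, win);
        MPI_Win_complete(win);
        MPI_Group_free(&peer);
    }

    MPI_Group_free(&world);
    MPI_Win_free(&win);
    MPI_Finalize();
    return 0;
}
```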
Passed RMA fence test 2 - pscw_ordering
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 4
Test Description:
This test checks an oddball case for generalized active target synchronization where the start occurs before the post. Since start can block until the corresponding post, the group passed to start must be disjoint from the group passed to post for processes to avoid a circular wait. Here, odd/even groups are used to accomplish this and the even group reverses its start/post calls.
No errors
Passed RMA fence test 3 - put_base
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 2
Test Description:
Author: James Dinan dinan@mcs.anl.gov
This code performs N strided put operations into a 2d patch of a shared array. The array has dimensions [X, Y] and the subarray has dimensions [SUB_X, SUB_Y] and begins at index [0, 0]. The input and output buffers are specified using an MPI datatype. This test generates a datatype that is relative to an arbitrary base address in memory and tests the RMA implementation's ability to perform the correct transfer.
No errors
Passed RMA fence test 4 - put_bottom
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 2
Test Description:
One-Sided MPI 2-D Strided Put Test. This code performs N strided put operations into a 2d patch of a shared array. The array has dimensions [X, Y] and the subarray has dimensions [SUB_X, SUB_Y] and begins at index [0, 0]. The input and output buffers are specified using an MPI datatype. This test generates a datatype that is relative to MPI_BOTTOM and tests the RMA implementation's ability to perform the correct transfer.
No errors
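A condensed sketch of the MPI_BOTTOM technique this test exercises (toy 4x4 array and 2x2 patch, assumed two ranks): the datatype carries absolute addresses, so the origin buffer argument is MPI_BOTTOM.

```c
#include <mpi.h>

int main(int argc, char **argv)
{
    int rank, i, j, blocklen[2] = { 2, 2 };
    double buf[4][4], *base;
    MPI_Aint disp[2];
    MPI_Datatype patch;
    MPI_Win win;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* assumes 2 ranks */

    for (i = 0; i < 4; i++)
        for (j = 0; j < 4; j++)
            buf[i][j] = i * 4 + j;

    /* Two rows of a 2x2 patch at [0,0]; displacements are absolute
     * addresses taken with MPI_Get_address. */
    MPI_Get_address(&buf[0][0], &disp[0]);
    MPI_Get_address(&buf[1][0], &disp[1]);
    MPI_Type_create_hindexed(2, blocklen, disp, MPI_DOUBLE, &patch);
    MPI_Type_commit(&patch);

    MPI_Win_allocate(16 * sizeof(double), sizeof(double), MPI_INFO_NULL,
                     MPI_COMM_WORLD, &base, &win);
    MPI_Win_fence(0, win);
    if (rank == 0)
        MPI_Put(MPI_BOTTOM, 1, patch, 1, 0, 4, MPI_DOUBLE, win);
    MPI_Win_fence(0, win);

    MPI_Type_free(&patch);
    MPI_Win_free(&win);
    MPI_Finalize();
    return 0;
}
```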
Passed RMA fence test 5 - putfence1
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 4
Test Description:
This test illustrates the use of MPI routines to run through a selection of communicators and datatypes.
No errors
Passed RMA fence test 6 - putfidx
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 4
Test Description:
One MPI Implementation fails this test with sufficiently large values of blksize - it appears to convert this type to an incorrect contiguous move.
No errors
Passed RMA fence test 7 - putpscw1
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 4
Test Description:
This test illustrates the use of MPI routines to run through a selection of communicators and datatypes.
No errors
Passed Request-based ops test - req_example
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 4
Test Description:
Example 11.21 from the MPI 3.0 spec. The following example shows how request-based operations can be used to overlap communication with computation. Each process fetches, processes, and writes the result for NSTEPS chunks of data. Instead of a single buffer, M local buffers are used to allow up to M communication operations to overlap with computation.
No errors
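A condensed sketch of the request-based pattern (one fetch with assumed two ranks, not the spec's full M-buffer pipeline): MPI_Rget returns a request inside a passive epoch, and MPI_Wait marks local completion.

```c
#include <mpi.h>

int main(int argc, char **argv)
{
    int rank, i;
    double *base, local[8];
    MPI_Win win;
    MPI_Request req;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* assumes at least 2 ranks */
    MPI_Win_allocate(8 * sizeof(double), sizeof(double), MPI_INFO_NULL,
                     MPI_COMM_WORLD, &base, &win);
    for (i = 0; i < 8; i++) base[i] = rank;
    MPI_Barrier(MPI_COMM_WORLD);

    if (rank == 0) {
        MPI_Win_lock(MPI_LOCK_SHARED, 1, 0, win);
        MPI_Rget(local, 8, MPI_DOUBLE, 1, 0, 8, MPI_DOUBLE, win, &req);
        /* ... overlap independent computation here ... */
        MPI_Wait(&req, MPI_STATUS_IGNORE);  /* data now usable locally */
        MPI_Win_unlock(1, win);
    }
    MPI_Barrier(MPI_COMM_WORLD);
    MPI_Win_free(&win);
    MPI_Finalize();
    return 0;
}
```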
Passed MPI RMA read-and-ops test - reqops
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 4
Test Description:
This test exercises atomic, one-sided read-and-operation calls.
No errors
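The simplest read-and-operation call is MPI_Fetch_and_op; a minimal ticket-counter sketch (assumed names, not the test's source):

```c
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, one = 1, prev, *counter;
    MPI_Win win;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    MPI_Win_allocate(sizeof(int), sizeof(int), MPI_INFO_NULL,
                     MPI_COMM_WORLD, &counter, &win);
    *counter = 0;
    MPI_Barrier(MPI_COMM_WORLD);   /* everyone sees the zeroed counter */

    /* Atomically add 1 to rank 0's counter and get the previous value. */
    MPI_Win_lock(MPI_LOCK_SHARED, 0, 0, win);
    MPI_Fetch_and_op(&one, &prev, MPI_INT, 0, 0, MPI_SUM, win);
    MPI_Win_unlock(0, win);
    printf("rank %d drew ticket %d\n", rank, prev);

    MPI_Win_free(&win);
    MPI_Finalize();
    return 0;
}
```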
Passed RMA contiguous calls test - rma-contig
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 2
Test Description:
This test exercises the one-sided contiguous MPI calls.
Starting one-sided contiguous performance test with 2 processes
Synchronization mode: Exclusive lock
Trg. Rank Xfer Size Get (usec) Put (usec) Acc (usec) Get (MiB/s) Put (MiB/s) Acc (MiB/s)
0 8 0.439 0.378 0.538 17.381 20.201 14.174
0 16 0.408 0.411 0.570 37.358 37.146 26.777
0 32 0.409 0.411 0.570 74.580 74.183 53.572
0 64 0.407 0.412 0.561 150.062 148.062 108.709
0 128 0.413 0.410 0.578 295.737 297.535 211.275
0 256 0.414 0.420 0.585 589.279 580.707 417.228
0 512 0.418 0.419 0.604 1168.929 1164.381 808.554
0 1024 0.426 0.433 0.684 2294.262 2256.160 1427.682
0 2048 0.443 0.452 0.789 4404.308 4319.213 2474.520
0 4096 0.490 0.488 0.944 7970.596 8002.046 4136.269
0 8192 0.561 0.565 1.337 13928.982 13823.470 5844.818
0 16384 0.954 0.949 2.216 16381.156 16457.560 7051.808
0 32768 1.404 1.420 3.860 22256.626 22013.706 8096.047
0 65536 2.388 2.385 7.093 26176.186 26201.946 8812.103
0 131072 5.486 4.344 13.350 22783.365 28776.636 9363.245
0 262144 10.682 8.241 26.028 23403.225 30336.730 9604.954
0 524288 40.781 37.767 112.844 12260.539 13239.122 4430.904
0 1048576 137.790 127.689 325.088 7257.439 7831.540 3076.088
0 2097152 309.185 292.110 672.412 6468.611 6846.731 2974.365
1 8 0.476 0.445 0.592 16.026 17.147 12.881
1 16 0.477 0.478 0.625 31.975 31.894 24.403
1 32 0.477 0.482 0.622 63.937 63.270 49.066
1 64 0.480 0.480 0.623 127.206 127.181 98.009
1 128 0.479 0.484 0.623 254.851 252.399 196.044
1 256 0.487 0.483 0.650 501.719 505.544 375.500
1 512 0.483 0.486 0.998 1010.398 1005.635 489.458
1 1024 0.490 0.498 0.754 1993.740 1962.788 1295.616
1 2048 0.511 0.510 0.862 3822.134 3828.234 2265.965
1 4096 0.548 0.554 1.008 7127.110 7044.641 3875.383
1 8192 0.627 0.642 1.397 12452.267 12171.641 5590.879
1 16384 1.009 1.029 2.281 15480.273 15187.242 6849.122
1 32768 1.482 1.490 3.928 21090.541 20973.153 7955.662
1 65536 2.465 2.483 7.165 25354.232 25167.010 8722.898
1 131072 7.846 4.427 13.464 15931.951 28232.978 9283.988
1 262144 14.208 8.340 26.181 17595.886 29976.020 9549.010
1 524288 46.542 36.556 110.457 10742.954 13677.567 4526.646
1 1048576 139.846 129.926 323.969 7150.700 7696.685 3086.715
1 2097152 312.147 289.654 667.231 6407.229 6904.778 2997.465
Starting one-sided contiguous performance test with 2 processes
Synchronization mode: Exclusive lock, MPI_MODE_NOCHECK
Trg. Rank Xfer Size Get (usec) Put (usec) Acc (usec) Get (MiB/s) Put (MiB/s) Acc (MiB/s)
0 8 0.405 0.377 0.508 18.820 20.241 15.033
0 16 0.407 0.411 0.555 37.461 37.148 27.518
0 32 0.407 0.408 0.560 74.949 74.840 54.510
0 64 0.411 0.412 0.558 148.560 148.043 109.294
0 128 0.412 0.413 0.564 296.607 295.837 216.466
0 256 0.414 0.420 0.584 589.336 580.876 417.897
0 512 0.422 0.419 0.849 1155.842 1165.551 575.025
0 1024 0.425 0.436 0.709 2296.803 2241.146 1378.332
0 2048 0.449 0.445 0.789 4347.888 4385.984 2474.779
0 4096 0.488 0.488 0.949 8005.887 8000.490 4117.167
0 8192 0.568 0.579 1.329 13762.094 13504.049 5878.119
0 16384 0.957 0.956 2.215 16327.186 16349.556 7054.370
0 32768 1.410 1.411 3.856 22156.926 22150.483 8103.369
0 65536 2.403 2.387 7.108 26011.877 26186.340 8792.651
0 131072 6.067 4.362 13.364 20601.777 28655.140 9353.188
0 262144 10.640 8.240 26.069 23496.279 30340.720 9589.799
0 524288 39.277 38.480 111.532 12730.128 12993.714 4483.028
0 1048576 135.924 125.638 328.801 7357.036 7959.407 3041.357
0 2097152 309.890 293.576 674.199 6453.901 6812.535 2966.485
1 8 0.473 0.449 0.569 16.134 16.990 13.416
1 16 0.475 0.482 0.626 32.136 31.662 24.385
1 32 0.472 0.477 0.624 64.653 64.031 48.881
1 64 0.474 0.476 0.625 128.835 128.323 97.593
1 128 0.479 0.485 0.629 254.622 251.822 194.146
1 256 0.482 0.485 0.647 506.976 503.365 377.437
1 512 0.483 0.490 0.895 1010.574 997.354 545.296
1 1024 0.489 0.490 0.797 1997.039 1991.011 1225.649
1 2048 0.508 0.517 0.851 3841.259 3774.178 2294.086
1 4096 0.556 0.551 1.002 7029.692 7092.817 3897.409
1 8192 0.638 0.644 1.393 12236.347 12128.247 5610.305
1 16384 1.011 1.025 2.267 15460.059 15246.198 6892.957
1 32768 1.501 1.497 3.923 20814.305 20869.846 7965.262
1 65536 2.446 2.456 7.165 25554.947 25448.548 8723.420
1 131072 6.531 4.434 13.462 19138.075 28191.135 9285.319
1 262144 12.567 8.316 26.355 19893.531 30062.181 9485.869
1 524288 47.789 38.447 113.351 10462.653 13004.816 4411.068
1 1048576 133.916 127.357 326.064 7467.366 7851.926 3066.882
1 2097152 312.028 289.118 670.938 6409.677 6917.589 2980.903
Starting one-sided contiguous performance test with 2 processes
Synchronization mode: Shared lock
Trg. Rank Xfer Size Get (usec) Put (usec) Acc (usec) Get (MiB/s) Put (MiB/s) Acc (MiB/s)
0 8 0.507 0.476 0.894 15.039 16.040 8.535
0 16 0.509 0.512 0.940 29.960 29.807 16.236
0 32 0.510 0.509 0.940 59.869 59.972 32.455
0 64 0.511 0.509 0.948 119.435 119.820 64.390
0 128 0.510 0.513 0.946 239.506 237.892 129.092
0 256 0.515 0.519 0.963 473.685 470.575 253.435
0 512 0.519 0.524 1.221 940.945 932.723 399.838
0 1024 0.527 0.534 1.088 1854.134 1829.685 897.622
0 2048 0.545 0.550 1.169 3586.358 3550.140 1670.173
0 4096 0.588 0.585 1.321 6640.963 6671.686 2957.306
0 8192 0.673 0.678 1.711 11608.503 11515.846 4565.913
0 16384 1.051 1.052 2.593 14862.754 14849.183 6026.669
0 32768 1.508 1.511 4.233 20718.000 20675.808 7382.133
0 65536 2.472 2.478 7.483 25283.467 25219.730 8352.285
0 131072 6.059 4.466 13.757 20628.915 27990.505 9086.094
0 262144 13.070 8.380 26.588 19128.463 29834.263 9402.874
0 524288 43.504 37.062 113.850 11493.324 13490.776 4391.743
0 1048576 135.747 129.278 329.317 7366.642 7735.278 3036.586
0 2097152 313.011 292.721 672.737 6389.556 6832.439 2972.930
1 8 0.615 0.586 1.053 12.407 13.012 7.243
1 16 0.615 0.621 1.089 24.831 24.567 14.011
1 32 0.612 0.618 1.095 49.862 49.359 27.867
1 64 0.609 0.618 1.099 100.294 98.762 55.517
1 128 0.611 0.619 1.090 199.753 197.212 111.960
1 256 0.621 0.626 1.120 393.016 389.773 218.075
1 512 0.637 0.642 1.365 766.388 760.504 357.643
1 1024 0.658 0.660 1.256 1484.444 1479.651 777.276
1 2048 0.671 0.671 1.342 2910.755 2909.729 1455.471
1 4096 0.696 0.694 1.475 5608.623 5627.020 2648.971
1 8192 0.780 0.770 1.899 10014.398 10140.734 4114.169
1 16384 1.166 1.175 2.774 13402.912 13297.373 5631.699
1 32768 1.617 1.633 4.437 19325.674 19132.163 7042.893
1 65536 2.600 2.610 7.656 24036.967 23941.941 8163.550
1 131072 6.851 4.562 13.924 18245.783 27398.865 8977.572
1 262144 12.882 8.447 26.707 19406.487 29595.275 9360.709
1 524288 44.754 37.063 113.680 11172.234 13490.677 4398.326
1 1048576 135.442 126.589 328.147 7383.213 7899.592 3047.414
1 2097152 310.797 293.467 674.829 6435.076 6815.083 2963.714
Starting one-sided contiguous performance test with 2 processes
Synchronization mode: Shared lock, MPI_MODE_NOCHECK
Trg. Rank Xfer Size Get (usec) Put (usec) Acc (usec) Get (MiB/s) Put (MiB/s) Acc (MiB/s)
0 8 0.507 0.478 0.893 15.036 15.977 8.548
0 16 0.510 0.513 0.939 29.931 29.759 16.249
0 32 0.510 0.512 0.942 59.807 59.581 32.380
0 64 0.513 0.513 0.950 119.039 119.013 64.245
0 128 0.514 0.511 0.941 237.438 239.025 129.686
0 256 0.522 0.516 0.975 468.140 473.274 250.491
0 512 0.525 0.520 1.272 929.445 939.563 383.833
0 1024 0.536 0.530 1.093 1820.379 1844.027 893.228
0 2048 0.554 0.546 1.170 3526.266 3574.994 1668.975
0 4096 0.584 0.593 1.332 6683.244 6588.916 2933.379
0 8192 0.660 0.674 1.721 11830.599 11590.071 4538.294
0 16384 1.041 1.055 2.589 15009.484 14813.983 6035.257
0 32768 1.526 1.525 4.247 20478.556 20493.846 7357.840
0 65536 2.537 2.488 7.520 24633.347 25125.565 8311.622
0 131072 6.443 4.469 13.778 19401.988 27970.465 9072.170
0 262144 12.379 8.379 26.584 20194.969 29836.042 9404.231
0 524288 41.953 36.884 110.689 11918.006 13555.996 4517.155
0 1048576 136.123 128.143 327.869 7346.276 7803.761 3050.002
0 2097152 313.284 294.000 656.832 6383.980 6802.713 3044.919
1 8 0.611 0.584 1.052 12.484 13.072 7.250
1 16 0.611 0.615 1.091 24.964 24.817 13.987
1 32 0.613 0.615 1.094 49.744 49.589 27.903
1 64 0.611 0.618 1.095 99.851 98.740 55.751
1 128 0.615 0.618 1.092 198.545 197.541 111.817
1 256 0.616 0.616 1.113 396.012 396.081 219.402
1 512 0.633 0.630 1.362 771.015 774.726 358.601
1 1024 0.656 0.647 1.248 1489.349 1509.518 782.629
1 2048 0.669 0.672 1.333 2920.319 2907.617 1465.404
1 4096 0.687 0.694 1.478 5683.779 5629.994 2642.727
1 8192 0.770 0.779 1.888 10144.374 10032.953 4138.178
1 16384 1.168 1.175 2.758 13374.357 13299.364 5664.803
1 32768 1.622 1.623 4.403 19265.834 19258.159 7097.977
1 65536 2.603 2.598 7.645 24012.970 24061.088 8175.015
1 131072 7.328 4.564 13.911 17057.745 27389.514 8985.570
1 262144 14.017 8.458 26.665 17835.027 29557.048 9375.568
1 524288 44.689 36.907 111.697 11188.393 13547.660 4476.385
1 1048576 136.260 128.025 323.638 7338.897 7810.963 3089.870
1 2097152 314.230 293.768 659.330 6364.774 6808.101 3033.380
No errors
Passed MPI_PROC_NULL test - rmanull
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 2
Test Description:
Test the MPI_PROC_NULL as a valid target.
No errors
Passed RMA zero-byte transfers test - rmazero
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 2
Test Description:
Test loops are used to run through a series of communicators that are subsets of MPI_COMM_WORLD.
No errors
Passed RMA (rank=0) test - selfrma
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 1
Test Description:
Test that calls many RMA calls to root=0.
No errors
Passed One-Sided accumulate test 1 - strided_acc_indexed
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 2
Test Description:
Author: James Dinan dinan@mcs.anl.gov
Date: December, 2010
This code performs N accumulates into a 2d patch of a shared array. The
array has dimensions [X, Y] and the subarray has dimensions [SUB_X, SUB_Y]
and begins at index [0, 0]. The input and output buffers are specified
using an MPI indexed type.
No errors
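The indexed-type construction the test relies on can be sketched as follows (toy X=Y=8 and SUB_X=SUB_Y=4, assumed two ranks; one block per subarray row, displacements striding by the full row length Y):

```c
#include <mpi.h>

#define X 8
#define Y 8
#define SUB_X 4
#define SUB_Y 4

int main(int argc, char **argv)
{
    int rank, i, blocklens[SUB_X], displs[SUB_X];
    double src[SUB_X * SUB_Y], *base;
    MPI_Datatype patch;
    MPI_Win win;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    for (i = 0; i < SUB_X; i++) { blocklens[i] = SUB_Y; displs[i] = i * Y; }
    MPI_Type_indexed(SUB_X, blocklens, displs, MPI_DOUBLE, &patch);
    MPI_Type_commit(&patch);

    for (i = 0; i < SUB_X * SUB_Y; i++) src[i] = 1.0;
    MPI_Win_allocate(X * Y * sizeof(double), sizeof(double), MPI_INFO_NULL,
                     MPI_COMM_WORLD, &base, &win);
    for (i = 0; i < X * Y; i++) base[i] = 0.0;

    MPI_Win_fence(0, win);
    if (rank == 0)            /* accumulate the patch into rank 1 */
        MPI_Accumulate(src, SUB_X * SUB_Y, MPI_DOUBLE,
                       1, 0, 1, patch, MPI_SUM, win);
    MPI_Win_fence(0, win);

    MPI_Type_free(&patch);
    MPI_Win_free(&win);
    MPI_Finalize();
    return 0;
}
```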
Passed Another one-sided accumulate test 2 - strided_acc_onelock
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 2
Test Description:
This code performs one-sided accumulate into a 2-D patch of a shared array.
No errors
Passed One-Sided accumulate test 3 - strided_acc_subarray
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 2
Test Description:
Author: James Dinan dinan@mcs.anl.gov
Date: December, 2010
This code performs N accumulates into a 2d patch of a shared array. The
array has dimensions [X, Y] and the subarray has dimensions [SUB_X, SUB_Y]
and begins at index [0, 0]. The input and output buffers are specified
using an MPI subarray type.
No errors
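The subarray variant describes the same 2-D patch directly from sizes, subsizes, and starts; a short sketch (toy dimensions, illustrative only):

```c
#include <mpi.h>

int main(int argc, char **argv)
{
    int sizes[2] = { 8, 8 }, subsizes[2] = { 4, 4 }, starts[2] = { 0, 0 };
    MPI_Datatype patch;

    MPI_Init(&argc, &argv);
    MPI_Type_create_subarray(2, sizes, subsizes, starts,
                             MPI_ORDER_C, MPI_DOUBLE, &patch);
    MPI_Type_commit(&patch);
    /* 'patch' can now serve as the target datatype of MPI_Accumulate,
     * exactly as in the indexed-type sketch above. */
    MPI_Type_free(&patch);
    MPI_Finalize();
    return 0;
}
```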
Passed One-Sided accumulate test 4 - strided_getacc_indexed
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 4
Test Description:
Author: James Dinan dinan@mcs.anl.gov
Date: December, 2010
This code performs N strided put operations followed by get operations into
a 2d patch of a shared array. The array has dimensions [X, Y] and the
subarray has dimensions [SUB_X, SUB_Y] and begins at index [0, 0]. The
input and output buffers are specified using an MPI indexed type.
No errors
Passed One-Sided accumulate test 6 - strided_get_indexed
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 2
Test Description:
Author: James Dinan dinan@mcs.anl.gov
Date: December, 2010
This code performs N strided put operations followed by get operations into
a 2d patch of a shared array. The array has dimensions [X, Y] and the subarray
has dimensions [SUB_X, SUB_Y] and begins at index [0, 0]. The input and
output buffers are specified using an MPI indexed type.
No errors
Passed One-sided accumulate test 7 - strided_putget_indexed
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 4
Test Description:
Author: James Dinan dinan@mcs.anl.gov
Date: December, 2010
This code performs N strided put operations followed by get operations into
a 2-D patch of a shared array. The array has dimensions [X, Y] and the
subarray has dimensions [SUB_X, SUB_Y] and begins at index [0, 0]. The input
and output buffers are specified using an MPI indexed datatype.
No errors
Passed Put,Gets,Accumulate test 1 - test1_am
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 2
Test Description:
Tests a series of put, get, and accumulate on 2 processes using fence. This test is the same as rma/test1 but uses alloc_mem.
No errors
Passed Put,Gets,Accumulate test 2 - test1
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 2
Test Description:
Tests a series of puts, gets, and accumulate on 2 processes using fence.
No errors
Passed Put,Gets,Accumulate test 3 - test1_dt
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 2
Test Description:
Tests a series of puts, gets, and accumulate on 2 processes using fence. Same as rma/test1 but uses derived datatypes to receive data.
No errors
Passed Put,Gets,Accumulate test 4 - test2_am
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 2
Test Description:
Tests put and get with post/start/complete/wait on 2 processes. Same as rma/test1 but uses alloc_mem.
No errors
Passed Put,Gets,Accumulate test 5 - test2
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 2
Test Description:
Tests put and get with post/start/complete/wait on 2 processes.
No errors
Passed Put,Gets,Accumulate test 6 - test3_am
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 2
Test Description:
Tests the example in Fig 6.8, pg 142, MPI-2 standard. Process 1 has a blocking MPI_Recv between the Post and Wait. Therefore, this example will not run if the one-sided operations are simply implemented on top of MPI_Isends and Irecvs. They either need to be implemented inside the progress engine or using threads with Isends and Irecvs. In MPICH-2, they are implemented in the progress engine. This test is the same as rma/test3 but uses alloc_mem.
No errors
Passed Put,Gets,Accumulate test 7 - test3
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 2
Test Description:
Tests the example in Fig 6.8, pg 142, MPI-2 standard. Process 1 has a blocking MPI_Recv between the Post and Wait. Therefore, this example will not run if the one-sided operations are simply implemented on top of MPI_Isends and Irecvs. They either need to be implemented inside the progress engine or using threads with Isends and Irecvs. In MPICH-2, they are implemented in the progress engine.
No errors
Passed Put,Gets,Accumulate test 8 - test4_am
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 2
Test Description:
Tests passive target RMA on 2 processes, including the lock-single_op-unlock optimization. Same as rma/test4 but uses alloc_mem.
No errors
Passed Put,Gets,Accumulate test 9 - test4
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 2
Test Description:
Tests passive target RMA on 2 processes using a lock-single_op-unlock optimization.
No errors
Passed Get test 1 - test5_am
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 2
Test Description:
Tests a series of Gets. Run with 2 processors. Same as rma/test5 but uses alloc_mem.
No errors
Passed Get test 2 - test5
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 2
Test Description:
Tests a series of Gets. Runs using exactly two processors.
No errors
Passed Matrix transpose test 1 - transpose1
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 2
Test Description:
Transposes a matrix using put, fence, and derived datatypes. Uses vector and hvector (Example 3.32 from MPI 1.1 Standard). Run using 2 processors.
No errors
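The Example 3.32 datatype trick that these transpose tests share can be sketched as follows (toy 4x4 matrix, assumed two ranks): a vector type selects one column, and an hvector of columns strided by one element yields the transposed layout.

```c
#include <mpi.h>

#define N 4

int main(int argc, char **argv)
{
    int rank, i;
    double a[N][N], *base;
    MPI_Datatype col, xpose;
    MPI_Win win;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    MPI_Type_vector(N, 1, N, MPI_DOUBLE, &col);            /* one column */
    MPI_Type_create_hvector(N, 1, sizeof(double), col, &xpose);
    MPI_Type_commit(&xpose);

    for (i = 0; i < N * N; i++) ((double *)a)[i] = i;
    MPI_Win_allocate(N * N * sizeof(double), sizeof(double), MPI_INFO_NULL,
                     MPI_COMM_WORLD, &base, &win);

    MPI_Win_fence(0, win);
    if (rank == 0)   /* deposit the matrix as its transpose on rank 1 */
        MPI_Put(&a[0][0], N * N, MPI_DOUBLE, 1, 0, 1, xpose, win);
    MPI_Win_fence(0, win);

    MPI_Type_free(&col);
    MPI_Type_free(&xpose);
    MPI_Win_free(&win);
    MPI_Finalize();
    return 0;
}
```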
Passed Matrix transpose test 2 - transpose2
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 2
Test Description:
Transposes a matrix using put, fence, and derived datatypes. Uses vector and struct (Example 3.33 from MPI 1.1 Standard). We could use vector and type_create_resized instead. Run using exactly 2 processors.
No errors
Passed Matrix transpose test 3 - transpose3
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 2
Test Description:
Transposes a matrix using post/start/complete/wait and derived datatypes. Uses vector and hvector (Example 3.32 from MPI 1.1 Standard). Run using 2 processors.
No errors
Passed Matrix transpose test 4 - transpose4
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 2
Test Description:
Transposes a matrix using passive target RMA and derived datatypes. Uses vector and hvector (Example 3.32 from MPI 1.1 Standard). Run using 2 processors.
No errors
Passed Matrix transpose test 5 - transpose5
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 2
Test Description:
This does a transpose-accumulate operation. Uses vector and hvector datatypes (Example 3.32 from MPI 1.1 Standard). Run using 2 processors.
No errors
Passed Matrix transpose test 6 - transpose6
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 1
Test Description:
This does a local transpose-accumulate operation. Uses vector and hvector datatypes (Example 3.32 from MPI 1.1 Standard). Run using exactly 1 processor.
No errors
Passed Matrix transpose test 7 - transpose7
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 2
Test Description:
This test transposes a matrix using a get operation, fence, and derived datatypes. Uses vector and hvector (Example 3.32 from MPI 1.1 Standard). Run using exactly 2 processors.
No errors
Passed Win_errhandler test - wincall
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 2
Test Description:
This test creates and frees MPI error handlers in a loop (1000 iterations) to test the internal MPI RMA memory allocation routines.
No errors
Passed Win_create_errhandler test - window_creation
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 2
Test Description:
This test creates 1000 RMA windows using MPI_Alloc_mem(), then frees the dynamic memory and the RMA windows that were created.
No errors
Passed Win_create_dynamic test - win_dynamic_acc
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 4
Test Description:
This test exercises dynamic RMA windows using the MPI_Win_create_dynamic() and MPI_Accumulate() operations.
No errors
Passed Win_get_attr test - win_flavors
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 4
Test Description:
This test determines which "flavor" of RMA is created.
No errors
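The flavor query is a one-call attribute lookup; a minimal sketch (illustrative only): MPI_Win_get_attr returns a pointer to the flavor constant recorded at window creation.

```c
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int *flavor, flag, *base;
    MPI_Win win;

    MPI_Init(&argc, &argv);
    MPI_Win_allocate(sizeof(int), sizeof(int), MPI_INFO_NULL,
                     MPI_COMM_WORLD, &base, &win);
    MPI_Win_get_attr(win, MPI_WIN_CREATE_FLAVOR, &flavor, &flag);
    if (flag && *flavor == MPI_WIN_FLAVOR_ALLOCATE)
        printf("window memory was allocated by MPI\n");
    MPI_Win_free(&win);
    MPI_Finalize();
    return 0;
}
```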
Passed Win_info test - win_info
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 4
Test Description:
This test creates an RMA info object, sets key/value pairs on the object, then duplicates the info object and confirms that the key/value pairs are in the same order as the original.
No errors
Passed MPI_Win_allocate_shared test - win_large_shm
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 4
Test Description:
Test MPI_WIN_ALLOCATE and MPI_WIN_ALLOCATE_SHARED when allocating SHM memory with size of 1GB per process.
No errors
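A minimal shared-window sketch (small sizes and assumed names, not the test's 1GB-per-process configuration): each rank contributes one int, and MPI_Win_shared_query yields a direct load/store pointer into a neighbor's segment.

```c
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, nprocs, disp_unit, *mine, *peer;
    MPI_Aint size;
    MPI_Comm node;
    MPI_Win win;

    MPI_Init(&argc, &argv);
    /* Restrict to ranks that actually share memory. */
    MPI_Comm_split_type(MPI_COMM_WORLD, MPI_COMM_TYPE_SHARED, 0,
                        MPI_INFO_NULL, &node);
    MPI_Comm_rank(node, &rank);
    MPI_Comm_size(node, &nprocs);

    MPI_Win_allocate_shared(sizeof(int), sizeof(int), MPI_INFO_NULL,
                            node, &mine, &win);
    *mine = rank;

    MPI_Win_shared_query(win, (rank + 1) % nprocs, &size, &disp_unit, &peer);
    MPI_Barrier(node);   /* order the stores (unified memory model assumed) */
    printf("rank %d sees neighbor value %d\n", rank, *peer);

    MPI_Win_free(&win);
    MPI_Comm_free(&node);
    MPI_Finalize();
    return 0;
}
```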
Passed {Get,set}_name test - winname
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 2
Test Description:
This simple test exercises MPI_Win_set_name().
No errors
Passed RMA post/start/complete test - wintest
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 2
Test Description:
Tests put and get with post/start/complete/test on 2 processes. Same as rma/test2, but uses win_test instead of win_wait.
No errors
Passed Win_allocate_shared test - win_zero
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 4
Test Description:
Test MPI_WIN_ALLOCATE_SHARED when size of total shared memory region is 0.
No errors
Passed MCS_Mutex_trylock test - mutex_bench
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 4
Test Description:
This test exercises the MCS_Mutex_lock calls.
No errors
Passed Thread/RMA interaction test - multirma
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 2
Test Description:
This is a simple test of threads in MPI.
No errors
Attributes Tests - Score: 90% Passed
This group features tests that involve attributes objects.
Passed Attribute/Datatype test - attr2type
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 1
Test Description:
This program creates a contiguous datatype from type MPI_INT, attaches an attribute to the type, duplicates it, then deletes both the original and duplicate type.
No errors
Passed Attribute delete/get test - attrdeleteget
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 1
Test Description:
This program illustrates the use of MPI_Comm_create_keyval() that creates a new attribute key.
No errors
Passed At_Exit test 1 - attrend2
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 5
Test Description:
The MPI-2.2 specification makes it clear that attribute delete callbacks are invoked on MPI_COMM_WORLD and MPI_COMM_SELF at the very beginning of MPI_Finalize, in LIFO order with respect to the order in which they were set. This is useful for tools that want to perform the MPI equivalent of an "at_exit" action.
This test uses 20 attributes to ensure that hash-table based MPI implementations do not accidentally pass the test except by being extremely "lucky". There are 20! possible permutations, giving about a 1 in 2.43e18 chance of getting LIFO ordering out of a hash table, assuming a decent hash function is used.
No errors
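A minimal sketch of the "at_exit" idiom these tests verify (one attribute instead of 20; names are illustrative): a delete callback attached to MPI_COMM_SELF runs at the start of MPI_Finalize.

```c
#include <mpi.h>
#include <stdio.h>

static int at_exit_cb(MPI_Comm comm, int keyval, void *attr, void *extra)
{
    printf("running before MPI_Finalize completes\n");
    return MPI_SUCCESS;
}

int main(int argc, char **argv)
{
    int keyval;

    MPI_Init(&argc, &argv);
    MPI_Comm_create_keyval(MPI_COMM_NULL_COPY_FN, at_exit_cb, &keyval, NULL);
    MPI_Comm_set_attr(MPI_COMM_SELF, keyval, NULL);
    MPI_Finalize();      /* at_exit_cb fires here, in LIFO order */
    return 0;
}
```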
Passed At_Exit test 2 - attrend
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 4
Test Description:
This test demonstrates how to attach an "at-exit()" function to MPI_Finalize().
No errors
Passed Attribute Error test - attrerr
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 1
Test Description:
This test exercises attribute routines. It checks for correct behavior of the copy and delete functions on an attribute, particularly the correct behavior when the routine returns a failure.
MPI 1.2 Clarification: Clarification of Error Behavior of Attribute Callback Functions. Any return value other than MPI_SUCCESS is erroneous. The specific value returned to the user is undefined (other than it can't be MPI_SUCCESS). Proposals to specify particular values (e.g., user's value) failed.
No errors
Passed Attribute Callback Functions test - attrerrcomm
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 1
Test Description:
This test exercises attribute routines. It checks for correct behavior of the copy and delete functions on an attribute, particularly the correct behavior when the routine returns failure.
MPI 1.2 Clarification: Clarification of Error Behavior of Attribute Callback Functions. Any return value other than MPI_SUCCESS is erroneous. The specific value returned to the user is undefined (other than it can't be MPI_SUCCESS). Proposals to specify particular values (e.g., user's value) failed. This test is similar in function to attrerr but uses a different set of tests.
No errors
Failed Attribute error test - attrerrtype
Build: Passed
Execution: Failed
Exit Status: Failed with signal 9
MPI Processes: 1
Test Description:
This test checks for correct behavior of the copy and delete functions on an attribute, particularly the correct behavior when the routine returns failure.
Any return value other than MPI_SUCCESS is erroneous. The specific value returned to the user is undefined (other than it can't be MPI_SUCCESS). Proposals to specify particular values (e.g., user's value) have not been successful.
dup did not return MPI_DATATYPE_NULL on error MPT ERROR: Rank 0(g:0) received signal SIGSEGV(11). Process ID: 159432, Host: r2i7n16, Program: /p/home/withheld/BCT_MPI/mpt_2.20/mpitests/attr/attrerrtype MPT Version: HPE MPT 2.20 08/30/19 04:33:45 MPT: --------stack traceback------- MPT: Attaching to program: /proc/159432/exe, process 159432 MPT: (no debugging symbols found)...done. MPT: [New LWP 159433] MPT: [Thread debugging using libthread_db enabled] MPT: Using host libthread_db library "/lib64/libthread_db.so.1". MPT: (no debugging symbols found)...done. MPT: (no debugging symbols found)...done. MPT: (no debugging symbols found)...done. MPT: (no debugging symbols found)...done. MPT: (no debugging symbols found)...done. MPT: (no debugging symbols found)...done. MPT: (no debugging symbols found)...done. MPT: (no debugging symbols found)...done. MPT: (no debugging symbols found)...done. MPT: (no debugging symbols found)...done. MPT: (no debugging symbols found)...done. MPT: (no debugging symbols found)...done. MPT: (no debugging symbols found)...done. MPT: (no debugging symbols found)...done. MPT: (no debugging symbols found)...done. MPT: (no debugging symbols found)...done. MPT: (no debugging symbols found)...done. MPT: (no debugging symbols found)...done. MPT: (no debugging symbols found)...done. MPT: (no debugging symbols found)...done. MPT: (no debugging symbols found)...done. MPT: (no debugging symbols found)...done. MPT: 0x00002aaaaaee41d9 in waitpid () from /lib64/libpthread.so.0 MPT: Missing separate debuginfos, use: debuginfo-install glibc-2.17-307.el7.1.x86_64 libbitmask-2.0-sgi720r52.rhel76.x86_64 libcpuset-1.0-sgi720r102.rhel76.x86_64 libgcc-4.8.5-39.el7.x86_64 libibverbs-22.4-2.el7_8.x86_64 libnl3-3.2.28-4.el7.x86_64 libpsm2-11.2.80-1.x86_64 numactl-libs-2.0.12-5.el7.x86_64 numatools-2.0-sgi720r149.rhel76.x86_64 MPT: (gdb) #0 0x00002aaaaaee41d9 in waitpid () from /lib64/libpthread.so.0 MPT: #1 0x00002aaaab61d806 in mpi_sgi_system ( MPT: #2 MPI_SGI_stacktraceback ( MPT: header=header@entry=0x7fffffffb740 "MPT ERROR: Rank 0(g:0) received signal SIGSEGV(11).\n\tProcess ID: 159432, Host: r2i7n16, Program: /p/home/withheld/BCT_MPI/mpt_2.20/mpitests/attr/attrerrtype\n\tMPT Version: HPE MPT 2.20 08/30/19 04:33:4"...) at sig.c:340 MPT: #3 0x00002aaaab61da02 in first_arriver_handler (signo=signo@entry=11, MPT: stack_trace_sem=stack_trace_sem@entry=0x2aaaace20080) at sig.c:489 MPT: #4 0x00002aaaab61dd9b in slave_sig_handler (signo=11, MPT: siginfo=<optimized out>, extra=<optimized out>) at sig.c:565 MPT: #5 <signal handler called> MPT: #6 MPI_SGI_type_free (type=959520882) at type.c:126 MPT: #7 0x00002aaaab629480 in PMPI_Type_free (type=0x7fffffffc920) MPT: at type_free.c:30 MPT: #8 0x00000000004023cb in main () MPT: (gdb) A debugging session is active. MPT: MPT: Inferior 1 [process 159432] will be detached. MPT: MPT: Quit anyway? (y or n) [answered Y; input not from terminal] MPT: Detaching from program: /proc/159432/exe, process 159432 MPT: [Inferior 1 (process 159432) detached] MPT: -----stack traceback ends----- MPT: On host r2i7n16, Program /p/home/withheld/BCT_MPI/mpt_2.20/mpitests/attr/attrerrtype, Rank 0, Process 159432: Dumping core on signal SIGSEGV(11) into directory /p/home/withheld/BCT_MPI/mpt_2.20/mpitests/attr MPT ERROR: MPI_COMM_WORLD rank 0 has terminated without calling MPI_Finalize() aborting job MPT: Received signal 11
Passed Intercommunicators test - attric
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 4
Test Description:
This test exercises communicator routines from intercommunicators.
No errors
Passed Attribute order test - attrorder
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 1
Test Description:
This test creates and inserts attributes in different orders to ensure that the list management code handles all cases properly.
No errors
Passed Communicator Attribute Order test - attrordercomm
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 1
Test Description:
This test creates and inserts attributes in different orders to ensure that the list management code handles all cases properly.
No errors
Passed Type Attribute Order test - attrordertype
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 1
Test Description:
This test creates and inserts attributes in different orders to ensure that the list management code handles all cases properly.
No errors
Passed Communicator test - attrt
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 2
Test Description:
This test is similar to attr/attrordertype but uses a different strategy of mixing attribute order, types, and with different types of communicators.
No errors
Passed Base attribute test - baseattr2
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 1
Test Description:
This program tests the integrity of the MPI-3.0 base attributes. The attribute keys tested are:
- MPI_TAG_UB
- MPI_HOST
- MPI_IO
- MPI_WTIME_IS_GLOBAL
- MPI_APPNUM
- MPI_UNIVERSE_SIZE
- MPI_LASTUSEDCODE
No errors
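Querying a base attribute takes one call; a minimal sketch for MPI_TAG_UB (illustrative only; note the value arrives as a pointer to the int, not the int itself):

```c
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int *tag_ub, flag;

    MPI_Init(&argc, &argv);
    MPI_Comm_get_attr(MPI_COMM_WORLD, MPI_TAG_UB, &tag_ub, &flag);
    if (flag)
        printf("MPI_TAG_UB = %d (must be at least 32767)\n", *tag_ub);
    MPI_Finalize();
    return 0;
}
```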
Passed Basic Attributes test - baseattrcomm
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 1
Test Description:
This test accesses many attributes such as MPI_TAG_UB, MPI_HOST, MPI_IO, MPI_WTIME_IS_GLOBAL, and many others and reports any errors.
No errors
Passed Function keyval test - fkeyval
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 1
Test Description:
This test illustrates the use of the copy and delete functions used in the manipulation of keyvals. It also tests to confirm that attributes are copied when communicators are duplicated.
No errors
Passed Keyval communicators test - fkeyvalcomm
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 1
Test Description:
This test frees keyvals while they are still attached to a communicator, then verifies that the keyval delete and copy functions are executed properly.
No errors
Failed Keyval test with types test - fkeyvaltype
Build: Passed
Execution: Failed
Exit Status: Failed with signal 9
MPI Processes: 1
Test Description:
This test illustrates the use of keyvals associated with datatypes.
*** Error in `./fkeyvaltype': double free or corruption (fasttop): 0x00000000035c0230 *** ======= Backtrace: ========= /lib64/libc.so.6(+0x81299)[0x2aaaabe80299] /p/app/hpe/mpt-2.20/lib/libmpi.so(+0x12187b)[0x2aaaab62487b] ======= Memory map: ======== 00400000-00411000 r-xp 00000000 b5:84fa2 288231840869792497 /p/home/withheld/BCT_MPI/mpt_2.20/mpitests/attr/fkeyvaltype 00611000-00612000 r--p 00011000 b5:84fa2 288231840869792497 /p/home/withheld/BCT_MPI/mpt_2.20/mpitests/attr/fkeyvaltype 00612000-00613000 rw-p 00012000 b5:84fa2 288231840869792497 /p/home/withheld/BCT_MPI/mpt_2.20/mpitests/attr/fkeyvaltype 00613000-00705000 rw-p 00000000 00:00 0 [heap] 00705000-04031000 rw-p 00000000 00:00 0 [heap] 2aaaaaaab000-2aaaaaacd000 r-xp 00000000 00:14 110680 /usr/lib64/ld-2.17.so 2aaaaaacd000-2aaaaaacf000 r-xp 00000000 00:00 0 [vdso] 2aaaaaacf000-2aaaaaad1000 rw-p 00000000 00:00 0 2aaaaaad1000-2aaaaaad2000 r--s dabbad00030d0000 00:05 18483 /dev/hfi1_0 2aaaaaad2000-2aaaaaad3000 ---p 00000000 00:03 4026532498 /proc/numatools 2aaaaaad3000-2aaaaaad8000 -w-s dabbad00020d0000 00:05 18483 /dev/hfi1_0 2aaaaaad8000-2aaaaaadd000 -w-s dabbad00010d0000 00:05 18483 /dev/hfi1_0 2aaaaaadd000-2aaaaaade000 r--s ffffaf25b1538000 00:05 18483 /dev/hfi1_0 2aaaaaade000-2aaaaaadf000 rw-s dabbad00060d0000 00:05 18483 /dev/hfi1_0 2aaaaaadf000-2aaaaaae0000 r--s ffffaf2559c85000 00:05 18483 /dev/hfi1_0 2aaaaaae0000-2aaaaaae1000 r--s dabbad00080d0000 00:05 18483 /dev/hfi1_0 2aaaaaae1000-2aaaaaae7000 rw-p 00000000 00:00 0 2aaaaaae7000-2aaaaab07000 rw-s 00000000 00:04 64764422 /dev/zero (deleted) 2aaaaab07000-2aaaaab47000 r--s dabbad00040d0000 00:05 18483 /dev/hfi1_0 2aaaaab47000-2aaaaab48000 ---p 00000000 00:03 4026532498 /proc/numatools 2aaaaab48000-2aaaaab49000 rw-p 00000000 00:05 2054 /dev/zero 2aaaaab49000-2aaaaab4a000 rw-p 00000000 00:05 2054 /dev/zero 2aaaaab4a000-2aaaaab4b000 rw-p 00000000 00:00 0 2aaaaaccc000-2aaaaaccd000 r--p 00021000 00:14 110680 /usr/lib64/ld-2.17.so 2aaaaaccd000-2aaaaacce000 rw-p 00022000 00:14 110680 /usr/lib64/ld-2.17.so 2aaaaacce000-2aaaaaccf000 rw-p 00000000 00:00 0 2aaaaaccf000-2aaaaacd4000 r-xp 00000000 00:14 109146 /usr/lib64/libdplace.so.0.0.0 2aaaaacd4000-2aaaaaed3000 ---p 00005000 00:14 109146 /usr/lib64/libdplace.so.0.0.0 2aaaaaed3000-2aaaaaed4000 r--p 00004000 00:14 109146 /usr/lib64/libdplace.so.0.0.0 2aaaaaed4000-2aaaaaed5000 rw-p 00005000 00:14 109146 /usr/lib64/libdplace.so.0.0.0 2aaaaaed5000-2aaaaaeec000 r-xp 00000000 00:14 107905 /usr/lib64/libpthread-2.17.so 2aaaaaeec000-2aaaab0eb000 ---p 00017000 00:14 107905 /usr/lib64/libpthread-2.17.so 2aaaab0eb000-2aaaab0ec000 r--p 00016000 00:14 107905 /usr/lib64/libpthread-2.17.so 2aaaab0ec000-2aaaab0ed000 rw-p 00017000 00:14 107905 /usr/lib64/libpthread-2.17.so 2aaaab0ed000-2aaaab0f1000 rw-p 00000000 00:00 0 2aaaab0f1000-2aaaab0fd000 r-xp 00000000 00:14 113973 /usr/lib64/libcpuset.so.1.1.0 2aaaab0fd000-2aaaab2fd000 ---p 0000c000 00:14 113973 /usr/lib64/libcpuset.so.1.1.0 2aaaab2fd000-2aaaab2fe000 r--p 0000c000 00:14 113973 /usr/lib64/libcpuset.so.1.1.0 2aaaab2fe000-2aaaab2ff000 rw-p 0000d000 00:14 113973 /usr/lib64/libcpuset.so.1.1.0 2aaaab2ff000-2aaaab302000 r-xp 00000000 00:14 107675 /usr/lib64/libbitmask.so.1.0.1 2aaaab302000-2aaaab501000 ---p 00003000 00:14 107675 /usr/lib64/libbitmask.so.1.0.1 2aaaab501000-2aaaab502000 r--p 00002000 00:14 107675 /usr/lib64/libbitmask.so.1.0.1 2aaaab502000-2aaaab503000 rw-p 00003000 00:14 107675 /usr/lib64/libbitmask.so.1.0.1 2aaaab503000-2aaaab6c9000 r-xp 00000000 dd4:14d1e 288231623755915686 
/p/app/hpe/mpt-2.20/lib/libmpi_mt.so 2aaaab6c9000-2aaaab8c8000 ---p 001c6000 dd4:14d1e 288231623755915686 /p/app/hpe/mpt-2.20/lib/libmpi_mt.so 2aaaab8c8000-2aaaab8ca000 r--p 001c5000 dd4:14d1e 288231623755915686 /p/app/hpe/mpt-2.20/lib/libmpi_mt.so 2aaaab8ca000-2aaaab8cf000 rw-p 001c7000 dd4:14d1e 288231623755915686 /p/app/hpe/mpt-2.20/lib/libmpi_mt.so 2aaaab8cf000-2aaaab8e7000 rw-p 00000000 00:00 0 2aaaab8e7000-2aaaab9e8000 r-xp 00000000 00:14 111240 /usr/lib64/libm-2.17.so 2aaaab9e8000-2aaaabbe7000 ---p 00101000 00:14 111240 /usr/lib64/libm-2.17.so 2aaaabbe7000-2aaaabbe8000 r--p 00100000 00:14 111240 /usr/lib64/libm-2.17.so 2aaaabbe8000-2aaaabbe9000 rw-p 00101000 00:14 111240 /usr/lib64/libm-2.17.so 2aaaabbe9000-2aaaabbfe000 r-xp 00000000 00:14 109208 /usr/lib64/libgcc_s-4.8.5-20150702.so.1 2aaaabbfe000-2aaaabdfd000 ---p 00015000 00:14 109208 /usr/lib64/libgcc_s-4.8.5-20150702.so.1 2aaaabdfd000-2aaaabdfe000 r--p 00014000 00:14 109208 /usr/lib64/libgcc_s-4.8.5-20150702.so.1 2aaaabdfe000-2aaaabdff000 rw-p 00015000 00:14 109208 /usr/lib64/libgcc_s-4.8.5-20150702.so.1 2aaaabdff000-2aaaabfc2000 r-xp 00000000 00:14 105034 /usr/lib64/libc-2.17.so 2aaaabfc2000-2aaaac1c2000 ---p 001c3000 00:14 105034 /usr/lib64/libc-2.17.so 2aaaac1c2000-2aaaac1c6000 r--p 001c3000 00:14 105034 /usr/lib64/libc-2.17.so 2aaaac1c6000-2aaaac1c8000 rw-p 001c7000 00:14 105034 /usr/lib64/libc-2.17.so 2aaaac1c8000-2aaaac1cd000 rw-p 00000000 00:00 0 2aaaac1cd000-2aaaac1cf000 r-xp 00000000 00:14 110647 /usr/lib64/libdl-2.17.so 2aaaac1cf000-2aaaac3cf000 ---p 00002000 00:14 110647 /usr/lib64/libdl-2.17.so 2aaaac3cf000-2aaaac3d0000 r--p 00002000 00:14 110647 /usr/lib64/libdl-2.17.so 2aaaac3d0000-2aaaac3d1000 rw-p 00003000 00:14 110647 /usr/lib64/libdl-2.17.so 2aaaac3d1000-2aaaac3db000 r-xp 00000000 00:14 104094 /usr/lib64/libnuma.so.1.0.0 2aaaac3db000-2aaaac5db000 ---p 0000a000 00:14 104094 /usr/lib64/libnuma.so.1.0.0 2aaaac5db000-2aaaac5dc000 r--p 0000a000 00:14 104094 /usr/lib64/libnuma.so.1.0.0 2aaaac5dc000-2aaaac5dd000 rw-p 0000b000 00:14 104094 /usr/lib64/libnuma.so.1.0.0 2aaaac5dd000-2aaaac5e4000 r-xp 00000000 00:14 104908 /usr/lib64/librt-2.17.so 2aaaac5e4000-2aaaac7e3000 ---p 00007000 00:14 104908 /usr/lib64/librt-2.17.so 2aaaac7e3000-2aaaac7e4000 r--p 00006000 00:14 104908 /usr/lib64/librt-2.17.so 2aaaac7e4000-2aaaac7e5000 rw-p 00007000 00:14 104908 /usr/lib64/librt-2.17.so 2aaaac7e5000-2aaaace26000 rw-s 00000000 00:04 64764418 /dev/zero (deleted) 2aaaace26000-2aaaaceb5000 r-xp 00000000 00:14 111298 /usr/lib64/libpsm2.so.2.1 2aaaaceb5000-2aaaad0b5000 ---p 0008f000 00:14 111298 /usr/lib64/libpsm2.so.2.1 2aaaad0b5000-2aaaad0b6000 r--p 0008f000 00:14 111298 /usr/lib64/libpsm2.so.2.1 2aaaad0b6000-2aaaad0b8000 rw-p 00090000 00:14 111298 /usr/lib64/libpsm2.so.2.1 2aaaad0b8000-2aaaad0ba000 rw-p 00000000 00:00 0 2aaaad0ba000-2aaaad0d2000 r-xp 00000000 00:14 110610 /usr/lib64/libibverbs.so.1.5.22.4 2aaaad0d2000-2aaaad2d1000 ---p 00018000 00:14 110610 /usr/lib64/libibverbs.so.1.5.22.4 2aaaad2d1000-2aaaad2d2000 r--p 00017000 00:14 110610 /usr/lib64/libibverbs.so.1.5.22.4 2aaaad2d2000-2aaaad2d3000 rw-p 00018000 00:14 110610 /usr/lib64/libibverbs.so.1.5.22.4 2aaaad2d3000-2aaaad337000 r-xp 00000000 00:14 110342 /usr/lib64/libnl-route-3.so.200.23.0 2aaaad337000-2aaaad536000 ---p 00064000 00:14 110342 /usr/lib64/libnl-route-3.so.200.23.0 2aaaad536000-2aaaad539000 r--p 00063000 00:14 110342 /usr/lib64/libnl-route-3.so.200.23.0 2aaaad539000-2aaaad53e000 rw-p 00066000 00:14 110342 /usr/lib64/libnl-route-3.so.200.23.0 
[... remainder of the process memory map omitted ...]
MPT ERROR: Rank 0(g:0) received signal SIGABRT/SIGIOT(6).
	Process ID: 159994, Host: r2i7n16, Program: /p/home/withheld/BCT_MPI/mpt_2.20/mpitests/attr/fkeyvaltype
	MPT Version: HPE MPT 2.20 08/30/19 04:33:45
MPT: --------stack traceback-------
MPT: Attaching to program: /proc/159994/exe, process 159994
MPT: [New LWP 159995]
MPT: [Thread debugging using libthread_db enabled]
MPT: Using host libthread_db library "/lib64/libthread_db.so.1".
MPT: (no debugging symbols found)...done. [repeated for each loaded shared library]
MPT: (gdb) #0  0x00002aaaaaee41d9 in waitpid () from /lib64/libpthread.so.0
MPT: #1  0x00002aaaab61d806 in mpi_sgi_system (
MPT: #2  MPI_SGI_stacktraceback (header=header@entry=0x7fffffffaa00 "MPT ERROR: Rank 0(g:0) received signal SIGABRT/SIGIOT(6). ...") at sig.c:340
MPT: #3  0x00002aaaab61da02 in first_arriver_handler (signo=signo@entry=6, stack_trace_sem=stack_trace_sem@entry=0x2aaaace20080) at sig.c:489
MPT: #4  0x00002aaaab61dd9b in slave_sig_handler (signo=6, siginfo=<optimized out>, extra=<optimized out>) at sig.c:565
MPT: #5  <signal handler called>
MPT: #6  0x00002aaaabe35387 in raise () from /lib64/libc.so.6
MPT: #7  0x00002aaaabe36a78 in abort () from /lib64/libc.so.6
MPT: #8  0x00002aaaabe77ed7 in __libc_message () from /lib64/libc.so.6
MPT: #9  0x00002aaaabe80299 in _int_free () from /lib64/libc.so.6
MPT: #10 0x00002aaaab62487b in MPI_SGI_type_free (type=76) at type.c:179
MPT: #11 0x00002aaaab629480 in PMPI_Type_free (type=0x7fffffffc880) at type_free.c:30
MPT: #12 0x0000000000403c99 in MTestFreeDatatype ()
MPT: #13 0x000000000040245b in main ()
MPT: Detaching from program: /proc/159994/exe, process 159994
MPT: [Inferior 1 (process 159994) detached]
MPT: -----stack traceback ends-----
MPT: On host r2i7n16, Program /p/home/withheld/BCT_MPI/mpt_2.20/mpitests/attr/fkeyvaltype, Rank 0, Process 159994: Dumping core on signal SIGABRT/SIGIOT(6) into directory /p/home/withheld/BCT_MPI/mpt_2.20/mpitests/attr
MPT ERROR: MPI_COMM_WORLD rank 0 has terminated without calling MPI_Finalize()
	aborting job
MPT: Received signal 6
Passed Multiple keyval_free test - keyval_double_free
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 1
Test Description:
This tests multiple invocations of keyval_free on the same keyval.
No errors
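The pattern this exercises is small enough to sketch. The following is an illustrative reconstruction, not the suite's source: because the first MPI_Comm_free_keyval resets its argument to MPI_KEYVAL_INVALID, the second invocation has to go through a saved copy of the handle; the point of the test is that the implementation handles the repeated free cleanly rather than corrupting internal state.

```c
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int keyval = MPI_KEYVAL_INVALID;
    int keyval_copy;

    MPI_Init(&argc, &argv);

    MPI_Comm_create_keyval(MPI_COMM_NULL_COPY_FN, MPI_COMM_NULL_DELETE_FN,
                           &keyval, NULL);
    keyval_copy = keyval;               /* save the handle value */

    MPI_Comm_free_keyval(&keyval);      /* resets keyval to MPI_KEYVAL_INVALID */
    MPI_Comm_free_keyval(&keyval_copy); /* second free of the same keyval */

    printf("No errors\n");
    MPI_Finalize();
    return 0;
}
```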
Passed Communicator attributes test - attributes
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 1
Test Description:
Reports any communicator attributes that are not supported. The test is run as a single-process MPI job.
No errors
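As a sketch of what such a probe looks like (illustrative, not the suite's source), each predefined MPI_COMM_WORLD attribute is queried with MPI_Comm_get_attr and reported only when the flag comes back false:

```c
#include <mpi.h>
#include <stdio.h>

/* Query one predefined attribute and report it if unsupported. */
static void check_attr(int keyval, const char *name)
{
    void *val;
    int flag;

    MPI_Comm_get_attr(MPI_COMM_WORLD, keyval, &val, &flag);
    if (!flag)
        printf("Attribute %s is not supported\n", name);
}

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    check_attr(MPI_TAG_UB, "MPI_TAG_UB");
    check_attr(MPI_HOST, "MPI_HOST");
    check_attr(MPI_IO, "MPI_IO");
    check_attr(MPI_WTIME_IS_GLOBAL, "MPI_WTIME_IS_GLOBAL");
    printf("No errors\n");
    MPI_Finalize();
    return 0;
}
```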
Passed RMA attributes test - baseattrwin
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 1
Test Description:
This test creates a window, then extracts its attributes through a series of MPI calls.
No errors
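A minimal sketch of the call sequence (illustrative buffer size, not the suite's source). Note that MPI_Win_get_attr returns a pointer to the attribute value, so MPI_WIN_SIZE and MPI_WIN_DISP_UNIT must be dereferenced:

```c
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    static char buf[1024];
    MPI_Win win;
    void *v;
    int flag;

    MPI_Init(&argc, &argv);

    /* Create a window over a local buffer, then read back its
     * predefined attributes. */
    MPI_Win_create(buf, sizeof(buf), 1, MPI_INFO_NULL, MPI_COMM_WORLD, &win);

    MPI_Win_get_attr(win, MPI_WIN_BASE, &v, &flag);
    if (flag)
        printf("MPI_WIN_BASE      = %p\n", v);

    MPI_Win_get_attr(win, MPI_WIN_SIZE, &v, &flag);
    if (flag)
        printf("MPI_WIN_SIZE      = %ld\n", (long) *(MPI_Aint *) v);

    MPI_Win_get_attr(win, MPI_WIN_DISP_UNIT, &v, &flag);
    if (flag)
        printf("MPI_WIN_DISP_UNIT = %d\n", *(int *) v);

    MPI_Win_free(&win);
    MPI_Finalize();
    return 0;
}
```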
Performance - Score: 79% Passed
This group features tests that involve realtime latency performance analysis of MPI applications. Although performance testing is not an established goal of this test suite, these few tests were included because there has been discussion of including performance testing in future versions of the test suite. Such tests might be useful to aid users in determining which MPI features should be used for their particular application. These tests are exemplary of what future tests could provide.
Passed Network tests - netmpi
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 2
Test Description:
This test calculates bulk transfer rates and latency as a function of message buffer size. The arguments are:
- -reps #iterations
- -time stop_time
- -start initial_msg_size
- -end final_msg_size
- -out outputfile
- -nocache
- -headtohead
- -pert
- -noprint
- -onebuffer largest_buffer_size
This test requires exactly two processes.
0: r2i7n16
1: r2i7n16
Latency: 0.000000345
Sync Time: 0.000000848
Now starting main loop
0: 997 bytes 362904 times --> 11435.00 Mbps in 0.000000665 sec
1: 1000 bytes 187726 times --> 11514.58 Mbps in 0.000000663 sec
2: 1003 bytes 189032 times --> 11518.81 Mbps in 0.000000664 sec
3: 1497 bytes 189097 times --> 15613.92 Mbps in 0.000000731 sec
4: 1500 bytes 227849 times --> 15375.68 Mbps in 0.000000744 sec
5: 1503 bytes 224148 times --> 15673.92 Mbps in 0.000000732 sec
6: 1997 bytes 228267 times --> 17937.63 Mbps in 0.000000849 sec
7: 2000 bytes 220785 times --> 18145.22 Mbps in 0.000000841 sec
8: 2003 bytes 223117 times --> 18263.70 Mbps in 0.000000837 sec
9: 2497 bytes 149765 times --> 21497.81 Mbps in 0.000000886 sec
10: 2500 bytes 169246 times --> 21466.46 Mbps in 0.000000889 sec
11: 2503 bytes 168931 times --> 21625.12 Mbps in 0.000000883 sec
12: 3497 bytes 170111 times --> 26092.55 Mbps in 0.000001023 sec
13: 3500 bytes 174649 times --> 26440.93 Mbps in 0.000001010 sec
14: 3503 bytes 176890 times --> 26556.46 Mbps in 0.000001006 sec
15: 4497 bytes 106656 times --> 29464.72 Mbps in 0.000001164 sec
16: 4500 bytes 119261 times --> 29494.71 Mbps in 0.000001164 sec
17: 4503 bytes 119366 times --> 29502.46 Mbps in 0.000001164 sec
18: 6497 bytes 119381 times --> 33376.23 Mbps in 0.000001485 sec
19: 6500 bytes 116541 times --> 33216.56 Mbps in 0.000001493 sec
20: 6503 bytes 115954 times --> 33306.99 Mbps in 0.000001490 sec
21: 8497 bytes 64623 times --> 34719.29 Mbps in 0.000001867 sec
22: 8500 bytes 70877 times --> 34693.53 Mbps in 0.000001869 sec
23: 8503 bytes 70822 times --> 34889.96 Mbps in 0.000001859 sec
24: 12497 bytes 71220 times --> 37258.46 Mbps in 0.000002559 sec
25: 12500 bytes 66432 times --> 37384.22 Mbps in 0.000002551 sec
26: 12503 bytes 66648 times --> 37360.02 Mbps in 0.000002553 sec
27: 16497 bytes 35271 times --> 40171.00 Mbps in 0.000003133 sec
28: 16500 bytes 41102 times --> 40212.28 Mbps in 0.000003131 sec
29: 16503 bytes 41144 times --> 39600.19 Mbps in 0.000003179 sec
No errors.
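The measurement loop behind numbers like these is easy to sketch. Below is a minimal ping-pong timing kernel with illustrative sizes, lacking netmpi's cache control, adaptive repetition counts, and option handling:

```c
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    int rank, reps = 1000, len = 1000;   /* illustrative values */
    char *buf;
    double t0, dt;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    buf = calloc(len, 1);

    MPI_Barrier(MPI_COMM_WORLD);
    t0 = MPI_Wtime();
    for (int i = 0; i < reps; i++) {
        if (rank == 0) {
            /* rank 0 sends, then waits for the echo */
            MPI_Send(buf, len, MPI_BYTE, 1, 0, MPI_COMM_WORLD);
            MPI_Recv(buf, len, MPI_BYTE, 1, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        } else if (rank == 1) {
            /* rank 1 echoes every message back */
            MPI_Recv(buf, len, MPI_BYTE, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            MPI_Send(buf, len, MPI_BYTE, 0, 0, MPI_COMM_WORLD);
        }
    }
    dt = (MPI_Wtime() - t0) / (2.0 * reps);   /* one-way time per message */

    if (rank == 0)
        printf("%d bytes: %.9f sec, %.2f Mbps\n",
               len, dt, 8.0 * len / (dt * 1.0e6));

    free(buf);
    MPI_Finalize();
    return 0;
}
```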
Passed Variable message length test - adapt
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 3
Test Description:
This test measures the latency involved in sending/receiving messages of varying size.
0: r2i7n16
1: r2i7n16
2: r2i7n16
To determine 2 <-> 0 latency, using 262144 reps.
To determine 0 <-> 1 latency, using 262144 reps.
To determine 2 <-- 0 --> 1 latency, using 131072 reps
Latency20_ : 0.000000347
Latency_01 : 0.000000359
Latency201 : 0.000000765
Now starting main loop
0: 72 bytes 149980 times --> 1240.38 Mbps in 0.000000443 sec
0: 72 bytes 144795 times --> 1141.19 Mbps in 0.000000481 sec
0: 72 bytes 67990 times --> 0.000000623 0.000000774 0.000000845 0.000000882 0.000000904 0.000000919 0.000000924 0.000000928 0.000000930 0.000000926 0.000000926 0.000000926 0.000000931 0.000000925 0.000000933 0.000000933 0.000000934
1: 75 bytes 112902 times --> 1279.81 Mbps in 0.000000447 sec
1: 75 bytes 103873 times --> 1187.08 Mbps in 0.000000482 sec
1: 75 bytes 53535 times --> 0.000000629 0.000000779 0.000000855 0.000000888 0.000000918 0.000000926 0.000000925 0.000000926 0.000000926 0.000000923 0.000000927 0.000000928 0.000000930 0.000000933 0.000000932 0.000000936
2: 78 bytes 116304 times --> 1310.53 Mbps in 0.000000454 sec
2: 78 bytes 107877 times --> 1227.06 Mbps in 0.000000485 sec
2: 78 bytes 55568 times --> 0.000000630 0.000000779 0.000000851 0.000000890 0.000000914 0.000000929 0.000000925 0.000000935 0.000000924 0.000000934 0.000000928 0.000000933 0.000000927 0.000000934 0.000000929 0.000000933
No errors.
Passed MPI_Group_Translate_ranks() test - gtranksperf
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 20
Test Description:
Measure and compare the relative performance of MPI_Group_translate_ranks with small and large group2 sizes but a constant number of ranks. This serves as a performance sanity check for the Scalasca use case where we translate to MPI_COMM_WORLD ranks. The performance should only depend on the number of ranks passed, not the size of either group (especially group2). This test is probably only meaningful for large-ish process counts.
No errors
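A sketch of the operation being timed follows; the group construction (even world ranks) is illustrative, not the suite's source:

```c
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    MPI_Group world_group, even_group;
    int wsize, wrank, n, i;
    int *even, *ranks, *wranks;
    double t0;

    MPI_Init(&argc, &argv);
    MPI_Comm_size(MPI_COMM_WORLD, &wsize);
    MPI_Comm_rank(MPI_COMM_WORLD, &wrank);
    MPI_Comm_group(MPI_COMM_WORLD, &world_group);

    /* Build a subgroup containing the even world ranks. */
    n = (wsize + 1) / 2;
    even = malloc(n * sizeof(int));
    ranks = malloc(n * sizeof(int));
    wranks = malloc(n * sizeof(int));
    for (i = 0; i < n; i++) {
        even[i] = 2 * i;   /* members of the subgroup */
        ranks[i] = i;      /* ranks to translate */
    }
    MPI_Group_incl(world_group, n, even, &even_group);

    /* Translate every subgroup rank back to its MPI_COMM_WORLD rank
     * (the Scalasca use case) and time the call. */
    t0 = MPI_Wtime();
    MPI_Group_translate_ranks(even_group, n, ranks, world_group, wranks);
    if (wrank == 0)
        printf("translated %d ranks in %.3e sec\n", n, MPI_Wtime() - t0);

    MPI_Group_free(&even_group);
    MPI_Group_free(&world_group);
    free(even); free(ranks); free(wranks);
    MPI_Finalize();
    return 0;
}
```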
Passed MPI-Tracing package test - allredtrace
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 32
Test Description:
This code is intended to test the trace overhead when using an MPI tracing package. The test is currently run in verbose mode with the number of processes set to 32 to run on the greatest number of HPC systems.
For delay count 1024, time is 1.066719e-03
No errors.
Passed Group creation test - commcreatep
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 32
Test Description:
This is a performance test indexed by group number. The cost should be linear or at worst ts*log(ts), where ts <= number of communicators.
No errors
Failed MPI_{pack,unpack}() test 1 - dtpack
Build: Passed
Execution: Failed
Exit Status: Application_ended_with_error(s)
MPI Processes: 1
Test Description:
This code may be used to test the performance of some of the noncontiguous datatype operations, including vector and indexed pack and unpack operations. To simplify the use of this code for tuning an MPI implementation, it uses no communication, just the MPI_Pack and MPI_Unpack routines. In addition, the individual tests are in separate routines, making it easier to compare the compiler-generated code for the user (manual) pack/unpack with the code used by the MPI implementation. Further, to be fair to the MPI implementation, the routines are passed the source and destination buffers; this ensures that the compiler can't optimize for statically allocated buffers.
VecPackDouble: MPI Pack code is too slow: MPI 0.000288355 User 2.524e-05
VecUnPackDouble: MPI Unpack code is too slow: MPI 0.000293755 User 3.11631e-05
VecPack2Double: MPI Pack code is too slow: MPI 0.00034109 User 4.45764e-05
Found 3 performance problems
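The VecPackDouble comparison reduces to the following kind of kernel (a sketch with illustrative sizes, not the test's source): pack one strided vector of doubles with MPI_Pack, then perform the same copy by hand, and compare the two times.

```c
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

#define N      1000   /* elements to pack (illustrative) */
#define STRIDE 4      /* gap between elements, in doubles */

int main(int argc, char **argv)
{
    double *src = malloc(N * STRIDE * sizeof(double));
    double *dst = malloc(N * sizeof(double));
    MPI_Datatype vec;
    int pos, i;
    double t0, t_mpi, t_user;

    MPI_Init(&argc, &argv);

    for (i = 0; i < N * STRIDE; i++)
        src[i] = (double) i;

    /* one strided vector of doubles, as in VecPackDouble */
    MPI_Type_vector(N, 1, STRIDE, MPI_DOUBLE, &vec);
    MPI_Type_commit(&vec);

    /* time MPI_Pack of one vector element... */
    t0 = MPI_Wtime();
    pos = 0;
    MPI_Pack(src, 1, vec, dst, (int) (N * sizeof(double)), &pos,
             MPI_COMM_WORLD);
    t_mpi = MPI_Wtime() - t0;

    /* ...against the equivalent hand-written copy loop */
    t0 = MPI_Wtime();
    for (i = 0; i < N; i++)
        dst[i] = src[i * STRIDE];
    t_user = MPI_Wtime() - t0;

    printf("MPI %.6e  User %.6e\n", t_mpi, t_user);

    MPI_Type_free(&vec);
    free(src); free(dst);
    MPI_Finalize();
    return 0;
}
```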
Failed MPI_{pack,unpack}() test 2 - indexperf
Build: Passed
Execution: Failed
Exit Status: Application_ended_with_error(s)
MPI Processes: 1
Test Description:
Tests that basic optimizations are performed on indexed datatypes. If PACK_IS_NATIVE is defined, MPI_Pack stores exactly the same bytes as the user would pack manually; in that case, there is a consistency check.
MPI_Pack (block index) time = 1.905113e-05, manual pack time = 1.030043e-06
MPI_Pack time should be less than 2 times the manual time
For most informative results, be sure to compile this test with optimization
MPI_Pack (struct of block index) time = 1.907209e-05, manual pack time = 1.030043e-06
MPI_Pack time should be less than 2 times the manual time
For most informative results, be sure to compile this test with optimization
Found 2 errors
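The indexed datatypes involved look like this sketch (sizes illustrative, not the test's): equal block lengths at a constant stride, a layout an implementation can detect and handle with a vector-like fast path.

```c
#include <mpi.h>

int main(int argc, char **argv)
{
    enum { NBLK = 1000, BLKLEN = 4 };   /* illustrative */
    int blklens[NBLK], displs[NBLK], i;
    MPI_Datatype itype;

    MPI_Init(&argc, &argv);

    for (i = 0; i < NBLK; i++) {
        blklens[i] = BLKLEN;             /* every block the same length... */
        displs[i]  = i * 2 * BLKLEN;     /* ...at a constant stride */
    }
    MPI_Type_indexed(NBLK, blklens, displs, MPI_DOUBLE, &itype);
    MPI_Type_commit(&itype);

    /* packing with 'itype' should then cost close to a manual loop */

    MPI_Type_free(&itype);
    MPI_Finalize();
    return 0;
}
```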
Failed MPI_{pack,unpack}() test 3 - nestvec2
Build: Passed
Execution: Failed
Exit Status: Application_ended_with_error(s)
MPI Processes: 1
Test Description:
Tests that the performance of a struct that contains a vector type exploits the vector type correctly. If PACK_IS_NATIVE is defined, MPI_Pack stores exactly the same bytes as the user would pack manually; in that case, there is a consistency check.
MPI_Pack time using struct with vector = 1.199111e-03, manual pack time = 1.550959e-04
MPI_Pack time should be less than 4 times the manual time
For most informative results, be sure to compile this test with optimization
MPI_Pack using vector = 1.199741e-03, manual pack time = 1.550959e-04
MPI_Pack time should be less than 4 times the manual time
For most informative results, be sure to compile this test with optimization
Found 2 errors
Passed MPI_{pack,unpack}() test 4 - nestvec
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 1
Test Description:
Tests that basic optimizations are performed on vector of vector datatypes. As the "leaf" element is a large block (when properly optimized), the performance of an MPI datatype should be nearly as good as (if not better than) manual packing (the threshold used in this test is *very* forgiving). This test may be run with one process.
If PACK_IS_NATIVE is defined, MPI_Pack stores exactly the same bytes as the user would pack manually; in that case, there is a consistency check.
No Errors
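Such a nested type can be built as in this sketch (counts and strides illustrative, not the test's): an inner strided vector whose blocks are large and contiguous, replicated by an outer hvector.

```c
#include <mpi.h>

int main(int argc, char **argv)
{
    MPI_Datatype inner, outer;

    MPI_Init(&argc, &argv);

    /* inner: 4 blocks of 256 doubles, stride 512 doubles */
    MPI_Type_vector(4, 256, 512, MPI_DOUBLE, &inner);

    /* outer: 8 copies of the inner vector, spaced 4096 doubles apart */
    MPI_Type_create_hvector(8, 1, 4096 * sizeof(double), inner, &outer);
    MPI_Type_commit(&outer);

    /* 'outer' can now be used with MPI_Pack or communication calls;
     * when optimized, the large contiguous leaf blocks should pack at
     * close to memcpy speed. */

    MPI_Type_free(&inner);
    MPI_Type_free(&outer);
    MPI_Finalize();
    return 0;
}
```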
Passed Synchronization test - non_zero_root
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 4
Test Description:
This test compares the time required for a synchronization step between rank 0 and rank 1. If the difference is greater than 10 percent, it is considered an error.
No errors
Passed Send/Receive test - sendrecv1
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 2
Test Description:
This program provides a simple test of send-receive performance between two (or more) processes. This test is sometimes called a head-to-head or ping-ping test, as both processes send at the same time.
No errors
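A minimal ping-ping step can be written with MPI_Sendrecv so the simultaneous sends cannot deadlock; a sketch with illustrative sizes, assuming exactly two processes:

```c
#include <mpi.h>
#include <stdio.h>
#include <string.h>

int main(int argc, char **argv)
{
    enum { LEN = 1024, REPS = 1000 };   /* illustrative */
    char sbuf[LEN], rbuf[LEN];
    int rank, other, i;
    double t0;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    other = 1 - rank;                   /* assumes exactly two processes */
    memset(sbuf, 0, LEN);

    MPI_Barrier(MPI_COMM_WORLD);
    t0 = MPI_Wtime();
    for (i = 0; i < REPS; i++)
        MPI_Sendrecv(sbuf, LEN, MPI_BYTE, other, 0,   /* both ranks send... */
                     rbuf, LEN, MPI_BYTE, other, 0,   /* ...and receive at once */
                     MPI_COMM_WORLD, MPI_STATUS_IGNORE);
    if (rank == 0)
        printf("%.3e sec per exchange\n", (MPI_Wtime() - t0) / REPS);

    MPI_Finalize();
    return 0;
}
```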
Passed Timer test - timer
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 1
Test Description:
Check that the timer produces monotone nondecreasing times and that the tick is reasonable.
No errors
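Both properties are straightforward to check; a minimal sketch (iteration count and tick bound are illustrative):

```c
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    double prev, now, tick;
    int i, errs = 0;

    MPI_Init(&argc, &argv);

    /* MPI_Wtime() must never run backwards */
    prev = MPI_Wtime();
    for (i = 0; i < 100000; i++) {
        now = MPI_Wtime();
        if (now < prev)
            errs++;
        prev = now;
    }

    /* MPI_Wtick() should report a sane clock resolution */
    tick = MPI_Wtick();
    if (tick <= 0.0 || tick > 1.0)
        errs++;

    if (errs)
        printf("Found %d errors\n", errs);
    else
        printf("No errors\n");

    MPI_Finalize();
    return 0;
}
```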
Passed Transposition test - transp-datatype
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 2
Test Description:
This test transposes a (100x100) two-dimensional array using various options and times the resulting operations.
Transpose time with datatypes is more than twice time without datatypes
0.000101 0.000007 0.000008
Found 1 errors
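The datatype variant of the transpose can be sketched as follows (N fixed at 100 as in the test, everything else illustrative, not the suite's source): a strided vector describes one column of a row-major matrix, and resizing its extent to one double lets the receiver place consecutive "columns" one element apart, producing the transpose with no copy loop.

```c
#include <mpi.h>

#define N 100

int main(int argc, char **argv)
{
    static double a[N][N], b[N][N];
    MPI_Datatype col, xpose;
    int rank, i, j;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* one column of a row-major NxN matrix... */
    MPI_Type_vector(N, 1, N, MPI_DOUBLE, &col);
    /* ...with its extent shrunk to one double, so N consecutive
     * "columns" start one element apart */
    MPI_Type_create_resized(col, 0, sizeof(double), &xpose);
    MPI_Type_commit(&xpose);

    if (rank == 0) {
        for (i = 0; i < N; i++)
            for (j = 0; j < N; j++)
                a[i][j] = i * N + j;
        MPI_Send(&a[0][0], N * N, MPI_DOUBLE, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        /* b receives the transpose of a directly */
        MPI_Recv(&b[0][0], N, xpose, 0, 0, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);
    }

    MPI_Type_free(&col);
    MPI_Type_free(&xpose);
    MPI_Finalize();
    return 0;
}
```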
Passed Datatype creation test - twovec
Build: Passed
Execution: Passed
Exit Status: Execution_completed_no_errors
MPI Processes: 1
Test Description:
Make sure datatype creation is independent of data size. Note, however, that there is no guarantee or expectation that the time would be constant. In particular, some optimizations might take more time than others.
The real goal of this is to ensure that the time to create a datatype doesn't increase strongly with the number of elements within the datatype, particularly for these datatypes that are quite simple patterns.
No errors
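A sketch of such a check (counts and stride illustrative): create and commit the same simple vector pattern at two very different element counts and confirm the times are of the same order.

```c
#include <mpi.h>
#include <stdio.h>

/* Time creation + commit of a simple strided vector of 'count' doubles. */
static double time_create(int count)
{
    MPI_Datatype t;
    double t0, dt;

    t0 = MPI_Wtime();
    MPI_Type_vector(count, 1, 2, MPI_DOUBLE, &t);
    MPI_Type_commit(&t);
    dt = MPI_Wtime() - t0;

    MPI_Type_free(&t);
    return dt;
}

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    printf("small: %.3e sec, large: %.3e sec\n",
           time_create(1000), time_create(1000000));
    MPI_Finalize();
    return 0;
}
```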