MPI Test Suite Result Details for

CRAY-MPICH MPI 8.1.25 on Warhawk (WARHAWK.AFRL.HPC.MIL)

Run Environment

  • HPC Center: AFRL
  • HPC System: HPE Cray-EX (Warhawk)
  • Run Date: Mon Apr 15 11:04:26 EDT 2024
  • MPI: CRAY-MPICH MPI 8.1.25 (Implements MPI 3.1 Standard)
  • Shell: /bin/tcsh
  • Launch Command: /opt/cray/pe/pals/1.2.11/bin/aprun
Compilers Used
Language  Executable  Path
C         cc          /opt/cray/pe/craype/2.7.20/bin/cc
C++       CC          /opt/cray/pe/craype/2.7.20/bin/CC
F77       ftn         /opt/cray/pe/craype/2.7.20/bin/ftn
F90       ftn         /opt/cray/pe/craype/2.7.20/bin/ftn

The following modules were loaded when the MPI Test Suite was run:

  • craype-x86-rome
  • craype-network-ofi
  • perftools-base/23.03.0
  • bct-env/0.1
  • bc_mod/1.3.4
  • /p/app/startup/shell.module
  • /p/app/startup/alias.module
  • libfabric/1.12.1.2.2.0
  • /p/app/startup/login.module
  • /p/app/startup/set_ACCOUNT.module
  • /p/app/startup/login2.module
  • cce/15.0.1
  • craype/2.7.20
  • cray-dsmml/0.2.2
  • cray-libsci/23.02.1.1
  • PrgEnv-cray/8.3.3
  • cray-pals/1.2.11
  • pals-lib/1.0
  • cray-mpich/8.1.25
Scheduler Environment Variables
Variable Name Value
PBS_ACCOUNT withheld
PBS_ENVIRONMENT PBS_BATCH
PBS_JOBDIR /p/home/withheld
PBS_JOBNAME MPICH_8.1.25
PBS_MOMPORT 15003
PBS_NODEFILE /var/spool/pbs/aux/1457546.warhawk-pbs
PBS_NODENUM withheld
PBS_O_HOME withheld
PBS_O_HOST warhawk07.hsn.warhawk.afrl.hpc.mil
PBS_O_LOGNAME withheld
PBS_O_PATH /opt/cray/pe/pals/1.2.11/bin:/opt/cray/pe/mpich/8.1.25/ofi/cray/10.0/bin:/opt/cray/pe/mpich/8.1.25/bin:/opt/cray/pe/craype/2.7.20/bin:/opt/cray/pe/cce/15.0.1/binutils/x86_64/x86_64-pc-linux-gnu/bin:/opt/cray/pe/cce/15.0.1/binutils/cross/x86_64-aarch64/aarch64-linux-gnu/../bin:/opt/cray/pe/cce/15.0.1/utils/x86_64/bin:/opt/cray/pe/cce/15.0.1/bin:/p/app/local/opt/cray/libfabric/1.12.1.2.2.0/bin:/usr/local/ossh/bin:/p/app/Modules/4.7.1/bin:/opt/cray/pe/perftools/23.03.0/bin:/opt/cray/pe/papi/7.0.0.1/bin:/opt/clmgr/sbin:/opt/clmgr/bin:/opt/sgi/sbin:/opt/sgi/bin:/usr/bin:/bin:/usr/sbin:/sbin:/usr/local/openssh-8.6p1a.SLES12/bin:/usr/local/bin:/opt/c3/bin:/usr/lib/mit/bin:/usr/lib/mit/sbin:/opt/pbs/bin:/app/bin:/app/COTS/bin:/usr/local/krb5/bin:/app/BCT/bin:/p/app/local/bin:.:/opt/cray/pe/bin
PBS_O_QUEUE standard
PBS_O_SHELL /bin/tcsh
PBS_O_SYSTEM Linux
PBS_O_WORKDIR withheld
PBS_QUEUE standard
PBS_TASKNUM 1
MPI Environment Variables
Variable Name Value
MPI_DISPLAY_SETTINGS false

Topology - Score: 100% Passed

The network topology tests examine the operation of specific communication patterns, such as Cartesian and graph topologies.

Passed MPI_Cart_create basic - cartcreates

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 4

Test Description:

This test creates a cartesian mesh and tests for errors.

No errors
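
For illustration, a minimal sketch of the pattern this test exercises might look like the following (this is not the suite's actual source, and it assumes the program is launched with exactly 4 ranks so the 2x2 grid uses every process):

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int dims[2] = {2, 2}, periods[2] = {1, 1}, coords[2], rank;
    MPI_Comm cart;

    MPI_Init(&argc, &argv);
    /* Create a 2x2 periodic Cartesian communicator from MPI_COMM_WORLD. */
    MPI_Cart_create(MPI_COMM_WORLD, 2, dims, periods, 0, &cart);
    MPI_Comm_rank(cart, &rank);
    /* Translate the rank in the new communicator back to grid coordinates. */
    MPI_Cart_coords(cart, rank, 2, coords);
    printf("rank %d has coordinates (%d,%d)\n", rank, coords[0], coords[1]);
    MPI_Comm_free(&cart);
    MPI_Finalize();
    return 0;
}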

Passed MPI_Cart_map basic - cartmap1

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 4

Test Description:

This test creates a cartesian map and tests for errors.

No errors

Passed MPI_Cart_shift basic - cartshift1

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 4

Test Description:

This test exercises MPI_Cart_shift().

No errors
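
A minimal sketch of the MPI_Cart_shift() usage this test exercises is shown below (not the suite's source; it builds a 1-D periodic ring sized to however many ranks are launched):

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int dims[1] = {0}, periods[1] = {1}, nprocs, rank, left, right, recvval;
    MPI_Comm ring;

    MPI_Init(&argc, &argv);
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);
    dims[0] = nprocs;
    MPI_Cart_create(MPI_COMM_WORLD, 1, dims, periods, 0, &ring);
    MPI_Comm_rank(ring, &rank);
    /* Shift by +1 along dimension 0: 'left' is the source, 'right' the destination. */
    MPI_Cart_shift(ring, 0, 1, &left, &right);
    MPI_Sendrecv(&rank, 1, MPI_INT, right, 0,
                 &recvval, 1, MPI_INT, left, 0, ring, MPI_STATUS_IGNORE);
    printf("rank %d received %d from its left neighbor\n", rank, recvval);
    MPI_Comm_free(&ring);
    MPI_Finalize();
    return 0;
}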

Passed MPI_Cart_sub basic - cartsuball

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 4

Test Description:

This test exercises MPI_Cart_sub().

No errors
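
A hedged sketch of MPI_Cart_sub(), splitting a 2x2 grid into row sub-communicators, follows (not the test's source; assumes exactly 4 ranks):

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int dims[2] = {2, 2}, periods[2] = {0, 0}, remain[2] = {0, 1};
    int rank, rowrank;
    MPI_Comm cart, row;

    MPI_Init(&argc, &argv);
    MPI_Cart_create(MPI_COMM_WORLD, 2, dims, periods, 0, &cart);
    MPI_Comm_rank(cart, &rank);
    /* Keep only the second dimension: each row of the grid becomes its own communicator. */
    MPI_Cart_sub(cart, remain, &row);
    MPI_Comm_rank(row, &rowrank);
    printf("grid rank %d is rank %d within its row\n", rank, rowrank);
    MPI_Comm_free(&row);
    MPI_Comm_free(&cart);
    MPI_Finalize();
    return 0;
}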

Passed MPI_Cartdim_get zero-dim - cartzero

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 4

Test Description:

Check that the MPI implementation properly handles zero-dimensional Cartesian communicators - the original standard implies that these should be consistent with higher dimensional topologies and therefore should work with any MPI implementation. MPI 2.1 made this requirement explicit.

No errors

Passed MPI_Dims_create nodes - dims1

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 4

Test Description:

This test uses multiple variations for the arguments of MPI_Dims_create() and checks whether the product of the returned dimensions over ndims (number of dimensions) equals nnodes (number of nodes), thereby determining if the decomposition is correct. The test also checks for compliance with MPI standard section 6.5 regarding decomposition with increasing dimensions. The test considers dimensions 2-4.

No errors
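
As an illustration of the decomposition behaviour described above, a minimal sketch (not the test's source; the node count 12 is just an example) could be:

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int nnodes = 12, dims[3] = {0, 0, 0};

    MPI_Init(&argc, &argv);
    /* Ask MPI to factor 12 nodes into a balanced 3-D decomposition;
       zero entries are free to be chosen, and dims[0]*dims[1]*dims[2] == nnodes. */
    MPI_Dims_create(nnodes, 3, dims);
    printf("12 nodes -> %d x %d x %d\n", dims[0], dims[1], dims[2]);
    MPI_Finalize();
    return 0;
}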

Passed MPI_Dims_create special 2d/4d - dims2

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 1

Test Description:

This test is similar to topo/dims1 but only exercises dimensions 2 and 4, including test cases in which all dimensions are specified.

No errors

Passed MPI_Dims_create special 3d/4d - dims3

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 1

Test Description:

This test is similar to topo/dims1 but only considers special cases using dimensions 3 and 4.

No errors

Passed MPI_Dist_graph_create - distgraph1

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 4

Test Description:

This test exercises MPI_Dist_graph_create() and MPI_Dist_graph_create_adjacent().

using graph layout 'deterministic complete graph'
testing MPI_Dist_graph_create_adjacent
testing MPI_Dist_graph_create w/ outgoing only
testing MPI_Dist_graph_create w/ incoming only
testing MPI_Dist_graph_create w/ rank 0 specifies only
testing MPI_Dist_graph_create w/ rank 0 specifies only -- NULLs
using graph layout 'every other edge deleted'
testing MPI_Dist_graph_create_adjacent
testing MPI_Dist_graph_create w/ outgoing only
testing MPI_Dist_graph_create w/ incoming only
testing MPI_Dist_graph_create w/ rank 0 specifies only
testing MPI_Dist_graph_create w/ rank 0 specifies only -- NULLs
using graph layout 'only self-edges'
testing MPI_Dist_graph_create_adjacent
testing MPI_Dist_graph_create w/ outgoing only
testing MPI_Dist_graph_create w/ incoming only
testing MPI_Dist_graph_create w/ rank 0 specifies only
testing MPI_Dist_graph_create w/ rank 0 specifies only -- NULLs
using graph layout 'no edges'
testing MPI_Dist_graph_create_adjacent
testing MPI_Dist_graph_create w/ outgoing only
testing MPI_Dist_graph_create w/ incoming only
testing MPI_Dist_graph_create w/ rank 0 specifies only
testing MPI_Dist_graph_create w/ rank 0 specifies only -- NULLs
using graph layout 'a random incomplete graph'
testing MPI_Dist_graph_create_adjacent
testing MPI_Dist_graph_create w/ outgoing only
testing MPI_Dist_graph_create w/ incoming only
testing MPI_Dist_graph_create w/ rank 0 specifies only
testing MPI_Dist_graph_create w/ rank 0 specifies only -- NULLs
using graph layout 'a random incomplete graph'
testing MPI_Dist_graph_create_adjacent
testing MPI_Dist_graph_create w/ outgoing only
testing MPI_Dist_graph_create w/ incoming only
testing MPI_Dist_graph_create w/ rank 0 specifies only
testing MPI_Dist_graph_create w/ rank 0 specifies only -- NULLs
using graph layout 'a random incomplete graph'
testing MPI_Dist_graph_create_adjacent
testing MPI_Dist_graph_create w/ outgoing only
testing MPI_Dist_graph_create w/ incoming only
testing MPI_Dist_graph_create w/ rank 0 specifies only
testing MPI_Dist_graph_create w/ rank 0 specifies only -- NULLs
using graph layout 'a random incomplete graph'
testing MPI_Dist_graph_create_adjacent
testing MPI_Dist_graph_create w/ outgoing only
testing MPI_Dist_graph_create w/ incoming only
testing MPI_Dist_graph_create w/ rank 0 specifies only
testing MPI_Dist_graph_create w/ rank 0 specifies only -- NULLs
using graph layout 'a random incomplete graph'
testing MPI_Dist_graph_create_adjacent
testing MPI_Dist_graph_create w/ outgoing only
testing MPI_Dist_graph_create w/ incoming only
testing MPI_Dist_graph_create w/ rank 0 specifies only
testing MPI_Dist_graph_create w/ rank 0 specifies only -- NULLs
using graph layout 'a random incomplete graph'
testing MPI_Dist_graph_create_adjacent
testing MPI_Dist_graph_create w/ outgoing only
testing MPI_Dist_graph_create w/ incoming only
testing MPI_Dist_graph_create w/ rank 0 specifies only
testing MPI_Dist_graph_create w/ rank 0 specifies only -- NULLs
testing MPI_Dist_graph_create w/ no graph
testing MPI_Dist_graph_create w/ no graph
testing MPI_Dist_graph_create w/ no graph -- NULLs
testing MPI_Dist_graph_create w/ no graph -- NULLs+MPI_UNWEIGHTED
testing MPI_Dist_graph_create_adjacent w/ no graph
testing MPI_Dist_graph_create_adjacent w/ no graph -- MPI_WEIGHTS_EMPTY
testing MPI_Dist_graph_create_adjacent w/ no graph -- NULLs
testing MPI_Dist_graph_create_adjacent w/ no graph -- NULLs+MPI_UNWEIGHTED
No errors
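
A minimal sketch of the adjacent-specification variant exercised above is shown below (not the suite's source; it builds the same kind of bidirectional ring used by several of these layouts and works for any number of ranks):

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, nprocs, srcs[2], dsts[2];
    MPI_Comm dgraph;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);
    /* Each rank lists its left and right neighbors as both sources and destinations. */
    srcs[0] = dsts[0] = (rank - 1 + nprocs) % nprocs;
    srcs[1] = dsts[1] = (rank + 1) % nprocs;
    MPI_Dist_graph_create_adjacent(MPI_COMM_WORLD, 2, srcs, MPI_UNWEIGHTED,
                                   2, dsts, MPI_UNWEIGHTED,
                                   MPI_INFO_NULL, 0, &dgraph);
    printf("rank %d: neighbors %d and %d\n", rank, srcs[0], srcs[1]);
    MPI_Comm_free(&dgraph);
    MPI_Finalize();
    return 0;
}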

Passed MPI_Graph_create null/dup - graphcr2

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 4

Test Description:

Create a communicator with a graph that contains null edges and one that contains duplicate edges.

No errors

Passed MPI_Graph_create zero procs - graphcr

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 4

Test Description:

Create a communicator with a graph that contains no processes.

No errors

Passed MPI_Graph_map basic - graphmap1

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 4

Test Description:

Simple test of MPI_Graph_map().

No errors

Passed MPI_Topo_test datatypes - topotest

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 4

Test Description:

Check that topo test returns the correct type, including MPI_UNDEFINED.

No errors

Passed MPI_Topo_test dgraph - dgraph_unwgt

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 4

Test Description:

Specify a distributed graph of a bidirectional ring of the MPI_COMM_WORLD communicator. Thus each node in the graph has a left and right neighbor.

No errors

Passed MPI_Topo_test dup - topodup

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 4

Test Description:

Create a cartesian topology, get its characteristics, then dup it and check that the new communicator has the same properties.

No errors

Passed Neighborhood collectives - neighb_coll

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 4

Test Description:

A basic test for the 10 (5 patterns x {blocking,non-blocking}) MPI-3 neighborhood collective routines.

No errors
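
One of the blocking neighborhood patterns covered by this test can be sketched as follows (not the test's source; it uses MPI_Neighbor_allgather() on a 1-D periodic Cartesian communicator, so every rank has exactly two neighbors):

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int dims[1] = {0}, periods[1] = {1}, nprocs, rank;
    int sendval, recvvals[2];
    MPI_Comm ring;

    MPI_Init(&argc, &argv);
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);
    dims[0] = nprocs;
    MPI_Cart_create(MPI_COMM_WORLD, 1, dims, periods, 0, &ring);
    MPI_Comm_rank(ring, &rank);
    sendval = rank;
    /* Each rank contributes one int and receives one int from each of its two
       ring neighbors; the topology attached to 'ring' defines who they are. */
    MPI_Neighbor_allgather(&sendval, 1, MPI_INT, recvvals, 1, MPI_INT, ring);
    printf("rank %d gathered %d and %d\n", rank, recvvals[0], recvvals[1]);
    MPI_Comm_free(&ring);
    MPI_Finalize();
    return 0;
}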

Basic Functionality - Score: 94% Passed

This group features tests that emphasize basic MPI functionality such as initializing MPI and retrieving its rank.

Passed Basic send/recv - srtest

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 2

Test Description:

This is a basic test of the send/receive with a barrier using MPI_Send() and MPI_Recv().

No errors

Passed Const cast - const

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 2

Test Description:

This test is designed to test the new MPI-3.0 const cast applied to a "const *" buffer pointer.

No errors.

Passed Elapsed walltime - wtime

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 1

Test Description:

This test measures how accurately MPI can measure 1 second.

sleep(1): start:236.512, finish:237.512, duration:1.00012
No errors.
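
The timing pattern reported above can be sketched as follows (a minimal example, not the test's source; sleep() comes from POSIX unistd.h):

#include <mpi.h>
#include <stdio.h>
#include <unistd.h>

int main(int argc, char **argv)
{
    double start, finish;

    MPI_Init(&argc, &argv);
    start = MPI_Wtime();
    sleep(1);                      /* sleep for one second ... */
    finish = MPI_Wtime();          /* ... and see how long MPI thinks it took */
    printf("measured %.5f seconds (tick = %g)\n", finish - start, MPI_Wtick());
    MPI_Finalize();
    return 0;
}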

Passed Generalized request basic - greq1

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 1

Test Description:

Simple test of generalized requests. This simple code allows us to check that requests can be created, tested, and waited on in the case where the request is complete before the wait is called.

No errors
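
A minimal sketch of the generalized-request pattern described above (a request that is completed before it is waited on) might look like this; it is not the test's source, and the callbacks carry no state:

#include <mpi.h>
#include <stdio.h>

/* Callbacks required by MPI_Grequest_start; this trivial request carries no data. */
static int query_fn(void *extra_state, MPI_Status *status)
{
    MPI_Status_set_elements(status, MPI_BYTE, 0);
    MPI_Status_set_cancelled(status, 0);
    status->MPI_SOURCE = MPI_UNDEFINED;
    status->MPI_TAG = MPI_UNDEFINED;
    return MPI_SUCCESS;
}
static int free_fn(void *extra_state) { return MPI_SUCCESS; }
static int cancel_fn(void *extra_state, int complete) { return MPI_SUCCESS; }

int main(int argc, char **argv)
{
    MPI_Request req;

    MPI_Init(&argc, &argv);
    MPI_Grequest_start(query_fn, free_fn, cancel_fn, NULL, &req);
    /* Mark the request complete before waiting on it, as the description notes. */
    MPI_Grequest_complete(req);
    MPI_Wait(&req, MPI_STATUS_IGNORE);
    printf("generalized request completed\n");
    MPI_Finalize();
    return 0;
}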

Passed Init arguments - init_args

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 1

Test Description:

In MPI-1.1, it is explicitly stated that an implementation is allowed to require that the arguments argc and argv passed by an application to MPI_Init in C be the same arguments passed into the application as the arguments to main. In MPI-2, implementations are not allowed to impose this requirement. Conforming implementations of MPI allow applications to pass NULL for both the argc and argv arguments of MPI_Init(). This test prints the error status returned by MPI_Init(). If the test completes without error, it reports 'No errors.'

MPI_INIT accepts Null arguments for MPI_init().
No errors

Passed Input queuing - eagerdt

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 2

Test Description:

Test of a large number of MPI datatype messages with no preposted receive so that an MPI implementation may have to queue up messages on the sending side. Uses MPI_Type_create_indexed_block() to create the send datatype and receives data as ints.

No errors

Passed Intracomm communicator - mtestcheck

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 1

Test Description:

This program calls MPI_Reduce with all Intracomm Communicators.

No errors

Passed Isend and Request_free - rqfreeb

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 4

Test Description:

Test multiple non-blocking send routines with MPI_Request_free(). Creates non-blocking messages with MPI_Isend(), MPI_Ibsend(), MPI_Issend(), and MPI_Irsend() then frees each request.

About create and free Isend request
About create and free Ibsend request
About create and free Issend request
About create and free Irsend request
No errors
About  free Irecv request
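
The Isend-then-free pattern at the heart of this test can be sketched as follows (a two-rank sketch, not the test's source, which also covers Ibsend/Issend/Irsend):

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, value = 42;
    MPI_Request req;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    if (rank == 0) {
        MPI_Isend(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD, &req);
        /* Freeing an active send request is allowed; completion is then
           observed only indirectly, here through the barrier below. */
        MPI_Request_free(&req);
    } else if (rank == 1) {
        MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        printf("received %d\n", value);
    }
    MPI_Barrier(MPI_COMM_WORLD);
    MPI_Finalize();
    return 0;
}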

Passed Large send/recv - sendrecv

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 2

Test Description:

This test sends the length of a message, followed by the message body.

No errors.

Passed MPI Attributes test - attrself

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 1

Test Description:

This is a test of creating and inserting attributes in different orders to ensure that the list management code handles all cases.

No errors

Passed MPI_ANY_{SOURCE,TAG} - anyall

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 2

Test Description:

This test uses MPI_ANY_SOURCE and MPI_ANY_TAG in repeated MPI_Irecv() calls. One implementation delivered incorrect data when using both ANY_SOURCE and ANY_TAG.

No errors

Passed MPI_Abort() return exit - abortexit

Build: Passed

Execution: Failed

Exit Status: Intentional_failure_was_successful

MPI Processes: 1

Test Description:

This program calls MPI_Abort and confirms that the exit status in the call is returned to the invoking environment.

MPI_Abort() with return exit code:6
MPICH ERROR [Rank 0] [job id b0409dd2-ee21-49c3-b47a-bea013bf9ace] [Mon Apr 15 11:02:31 2024] [x1001c5s5b0n0] - Abort(6) (rank 0 in comm 0): application called MPI_Abort(MPI_COMM_WORLD, 6) - process 0
x1001c5s5b0n0.hsn.warhawk.afrl.hpc.mil: rank 0 exited with code 6

Passed MPI_BOTTOM basic - bottom

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 2

Test Description:

Simple test using MPI_BOTTOM for MPI_Send() and MPI_Recv().

No errors
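
The MPI_BOTTOM technique this test exercises can be sketched as below (not the test's source; it uses an absolute-address datatype built with MPI_Get_address() and MPI_Type_create_struct(), and assumes at least 2 ranks):

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, value = 0, blocklen = 1;
    MPI_Aint disp;
    MPI_Datatype abstype, basetype = MPI_INT;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    /* Build a datatype whose displacement is the absolute address of 'value',
       so the buffer argument can be MPI_BOTTOM. */
    MPI_Get_address(&value, &disp);
    MPI_Type_create_struct(1, &blocklen, &disp, &basetype, &abstype);
    MPI_Type_commit(&abstype);
    if (rank == 0) {
        value = 7;
        MPI_Send(MPI_BOTTOM, 1, abstype, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        MPI_Recv(MPI_BOTTOM, 1, abstype, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        printf("received %d via MPI_BOTTOM\n", value);
    }
    MPI_Type_free(&abstype);
    MPI_Finalize();
    return 0;
}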

Passed MPI_Bsend alignment - bsend1

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 1

Test Description:

Simple test for MPI_Bsend() that sends and receives multiple messages with message sizes chosen to expose alignment problems.

No errors
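
A minimal two-rank sketch of the buffered-send pattern (attach, Bsend, detach) follows; it is not the test's source, which runs on a single process and sends to itself:

#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    int rank, value = 1, bufsize;
    void *buf;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    /* A buffered send needs user-attached buffer space: the message size
       plus MPI_BSEND_OVERHEAD per pending message. */
    bufsize = sizeof(int) + MPI_BSEND_OVERHEAD;
    buf = malloc(bufsize);
    MPI_Buffer_attach(buf, bufsize);
    if (rank == 0)
        MPI_Bsend(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
    else if (rank == 1)
        MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
    /* Detach blocks until any buffered messages have been delivered. */
    MPI_Buffer_detach(&buf, &bufsize);
    free(buf);
    MPI_Finalize();
    return 0;
}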

Passed MPI_Bsend buffer alignment - bsendalign

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 2

Test Description:

Test bsend with a buffer with alignment between 1 and 7 bytes.

No errors

Passed MPI_Bsend detach - bsendpending

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 2

Test Description:

Test the handling of MPI_Bsend() operations when a detach occurs between MPI_Bsend() and MPI_Recv(). Uses busy wait to ensure detach occurs between MPI routines and tests with a selection of communicators.

No errors

Passed MPI_Bsend ordered - bsendfrag

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 2

Test Description:

Test bsend message handling where different messages are received in different orders.

No errors

Passed MPI_Bsend repeat - bsend2

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 1

Test Description:

Simple test for MPI_Bsend() that repeatedly sends and receives messages.

No errors

Passed MPI_Bsend with init and start - bsend3

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 1

Test Description:

Simple test for MPI_Bsend() that uses MPI_Bsend_init() to create a persistent communication request and then repeatedly sends and receives messages. Includes tests using MPI_Start() and MPI_Startall().

No errors

Passed MPI_Bsend() intercomm - bsend5

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 4

Test Description:

Simple test for MPI_Bsend() that creates an intercommunicator with two evenly sized groups and then repeatedly sends and receives messages between groups.

No errors

Passed MPI_Cancel completed sends - scancel2

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 2

Test Description:

Calls MPI_Isend(), forces it to complete with a barrier, calls MPI_Cancel(), then checks cancel status. Such a cancel operation should silently fail. This test returns a failure status if the cancel succeeds.

Starting scancel test
(0) About to create isend and cancel
Starting scancel test
Completed wait on isend
(1) About to create isend and cancel
Completed wait on isend
(2) About to create isend and cancel
Completed wait on isend
(3) About to create isend and cancel
Completed wait on isend
No errors
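
The cancel-after-completion behaviour checked here can be sketched as follows (not the test's source; assumes at least 2 ranks, and the barrier only approximates "forcing" completion of the small eager send):

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, value = 3, flag;
    MPI_Request req;
    MPI_Status status;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    if (rank == 0)
        MPI_Isend(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD, &req);
    else if (rank == 1)
        MPI_Irecv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, &req);
    MPI_Barrier(MPI_COMM_WORLD);    /* give the transfer a chance to complete */
    if (rank == 0) {
        MPI_Cancel(&req);           /* cancelling a completed send must silently fail */
        MPI_Wait(&req, &status);
        MPI_Test_cancelled(&status, &flag);
        printf("cancelled = %d (0 expected)\n", flag);
    } else if (rank == 1) {
        MPI_Wait(&req, MPI_STATUS_IGNORE);
    }
    MPI_Finalize();
    return 0;
}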

Failed MPI_Cancel sends - scancel

Build: Passed

Execution: Failed

Exit Status: Failed with signal 15

MPI Processes: 2

Test Description:

Test of various send cancel calls. Sends messages with MPI_Isend(), MPI_Ibsend(), MPI_Irsend(), and MPI_Issend() and then immediately cancels them. Then verifies that each message was cancelled and was not received by the destination process.

Starting scancel test
(0) About to create isend and cancel
Starting scancel test
Completed wait on isend
Failed to cancel an Isend request
About to create and cancel ibsend
Failed to cancel an Ibsend request
Assertion failed in file ../src/include/mpir_request.h at line 340: ((req))->ref_count >= 0
/opt/cray/pe/lib64/libmpi_cray.so.12(MPL_backtrace_show+0x26) [0x15378c8cbc2b]
/opt/cray/pe/lib64/libmpi_cray.so.12(+0x1fef8d4) [0x15378c3028d4]
/opt/cray/pe/lib64/libmpi_cray.so.12(+0xf264f8) [0x15378b2394f8]
/opt/cray/pe/lib64/libmpi_cray.so.12(+0x1f7ca48) [0x15378c28fa48]
/opt/cray/pe/lib64/libmpi_cray.so.12(MPI_Buffer_detach+0xf7) [0x15378aa49ef7]
/var/run/palsd/746d2159-5ad4-4068-9170-f94621ac93c5/files/scancel() [0x2045f2]
/lib64/libc.so.6(__libc_start_main+0xef) [0x15378969424d]
/var/run/palsd/746d2159-5ad4-4068-9170-f94621ac93c5/files/scancel() [0x2040ca]
MPICH ERROR [Rank 0] [job id 746d2159-5ad4-4068-9170-f94621ac93c5] [Mon Apr 15 11:03:24 2024] [x1001c5s5b0n0] - Abort(1): Internal error
x1001c5s5b0n0.hsn.warhawk.afrl.hpc.mil: rank 0 exited with code 1
x1001c5s7b0n0.hsn.warhawk.afrl.hpc.mil: rank 1 died from signal 15

Passed MPI_Finalized() test - finalized

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 1

Test Description:

This tests whether MPI_Finalized() works correctly if MPI_Init() was not called. This behaviour is not defined by the MPI standard, so this test is not guaranteed to pass.

No errors

Passed MPI_Get_library_version test - library_version

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 1

Test Description:

MPI-3.0 Test returns MPI library version.

MPI VERSION    : CRAY MPICH version 8.1.25.17 (ANL base 3.4a2)
MPI BUILD INFO : Sun Feb 26 14:33 2023 (git hash aecd99f)
No errors

Passed MPI_Get_version() test - version

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 1

Test Description:

This MPI-3.0 test prints the MPI version. If running a version of MPI < 3.0, it simply prints "No Errors".

No errors

Passed MPI_Ibsend repeat - bsend4

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 1

Test Description:

Simple test for MPI_Ibsend() that repeatedly sends and receives messages.

No errors

Passed MPI_Isend root - isendself

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 1

Test Description:

This is a simple test case of sending a non-blocking message to the root process. Includes test with a null pointer. This test uses a single process.

No errors

Passed MPI_Isend root cancel - issendselfcancel

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 1

Test Description:

This test case has the root send a non-blocking synchronous message to itself, cancel it, and then attempt to read it.

No errors

Passed MPI_Isend root probe - isendselfprobe

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 1

Test Description:

This is a simple test case of the root sending a message to itself and probing this message.

No errors

Passed MPI_Mprobe() series - mprobe1

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 2

Test Description:

This tests MPI_Mprobe() using a series of tests. Includes tests with send and Mprobe+Mrecv, send and Mprobe+Imrecv, send and Improbe+Mrecv, send and Improbe+Irecv, Mprobe+Mrecv with MPI_PROC_NULL, Mprobe+Imrecv with MPI_PROC_NULL, Improbe+Mrecv with MPI_PROC_NULL, Improbe+Imrecv, and test to verify MPI_Message_c2f() and MPI_Message_f2c() are present.

No errors
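
A minimal sketch of the matched-probe pattern covered by this test (send, Mprobe, then Mrecv) is shown below; it is not the test's source and assumes at least 2 ranks:

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, value = 5, count;
    MPI_Message msg;
    MPI_Status status;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    if (rank == 0) {
        MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        /* Mprobe removes the matched message from the queue and hands back a
           message handle, so the later Mrecv cannot race with other receives. */
        MPI_Mprobe(0, 0, MPI_COMM_WORLD, &msg, &status);
        MPI_Get_count(&status, MPI_INT, &count);
        MPI_Mrecv(&value, count, MPI_INT, &msg, MPI_STATUS_IGNORE);
        printf("received %d int(s)\n", count);
    }
    MPI_Finalize();
    return 0;
}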

Passed MPI_Probe() null source - probenull

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 1

Test Description:

This program checks that MPI_Iprobe() and MPI_Probe() correctly handle a source of MPI_PROC_NULL.

No errors

Passed MPI_Probe() unexpected - probe-unexp

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 4

Test Description:

This program verifies that MPI_Probe() is operating properly in the face of unexpected messages arriving after MPI_Probe() has been called. This program may hang if MPI_Probe() does not return when the message finally arrives. Tested with a variety of message sizes and number of messages.

testing messages of size 1
Message count 0
testing messages of size 1
Message count 0
Message count 1
testing messages of size 1
Message count 0
testing messages of size 1
Message count 0
Message count 1
Message count 1
Message count 2
Message count 3
Message count 4
testing messages of size 2
Message count 0
Message count 1
Message count 2
Message count 3
Message count 4
testing messages of size 4
Message count 0
Message count 1
Message count 2
Message count 3
Message count 4
testing messages of size 8
Message count 0
Message count 1
Message count 2
Message count 3
Message count 4
testing messages of size 16
Message count 0
Message count 1
Message count 2
Message count 3
Message count 4
testing messages of size 2
Message count 0
Message count 1
Message count 2
Message count 3
Message count 4
testing messages of size 4
Message count 0
Message count 1
Message count 2
Message count 3
Message count 4
testing messages of size 8
Message count 0
Message count 1
Message count 2
Message count 3
Message count 4
testing messages of size 2
Message count 0
Message count 1
Message count 2
Message count 3
Message count 4
testing messages of size 4
Message count 0
Message count 1
Message count 2
Message count 3
Message count 4
testing messages of size 8
Message count 0
Message count 1
Message count 2
Message count 3
Message count 4
testing messages of size 16
Message count 0
Message count 1
Message count 2
Message count 3
Message count 4
testing messages of size 32
Message count 0
Message count 1
Message count 2
Message count 3
Message count 4
testing messages of size 64
Message count 0
Message count 1
Message count 2
Message count 3
Message count 4
Message count 1
Message count 2
Message count 3
Message count 4
testing messages of size 2
Message count 0
Message count 1
Message count 2
Message count 3
Message count 4
testing messages of size 4
Message count 0
Message count 1
Message count 2
Message count 3
Message count 4
testing messages of size 8
Message count 0
Message count 1
Message count 2
Message count 3
Message count 4
testing messages of size 16
Message count 0
Message count 1
Message count 2
Message count 3
Message count 4
testing messages of size 32
Message count 0
Message count 2
Message count 3
Message count 4
testing messages of size 32
Message count 0
Message count 1
Message count 2
Message count 3
Message count 4
testing messages of size 64
Message count 0
Message count 1
Message count 2
Message count 3
Message count 4
testing messages of size 128
Message count 0
Message count 1
Message count 2
Message count 3
Message count 4
testing messages of size 256
Message count 0
Message count 1
Message count 2
Message count 3
Message count 4
testing messages of size 16
Message count 0
Message count 1
Message count 2
Message count 3
Message count 4
testing messages of size 32
Message count 0
Message count 1
Message count 2
Message count 3
Message count 4
testing messages of size 64
Message count 0
Message count 1
Message count 2
Message count 3
Message count 4
testing messages of size 128
Message count 0
Message count 1
testing messages of size 128
Message count 0
Message count 1
Message count 2
Message count 3
Message count 4
testing messages of size 256
Message count 0
Message count 1
Message count 2
Message count 3
Message count 4
testing messages of size 64
Message count 0
Message count 1
Message count 2
Message count 3
Message count 4
testing messages of size 128
Message count 0
Message count 1
Message count 2
Message count 3
Message count 4
testing messages of size 256
Message count 0
Message count 2
Message count 3
Message count 4
testing messages of size 256
Message count 0
Message count 1
Message count 1
Message count 2
Message count 3
Message count 4
Message count 2
Message count 3
Message count 4
testing messages of size 512
Message count 0
Message count 1
Message count 2
Message count 3
Message count 4
testing messages of size 1024
Message count 0
Message count 1
Message count 2
Message count 3
Message count 4
testing messages of size 512
Message count 0
Message count 1
Message count 2
Message count 3
Message count 4
testing messages of size 1024
Message count 0
Message count 2
Message count 3
Message count 4
testing messages of size 512
Message count 0
Message count 1
Message count 2
Message count 3
Message count 4
testing messages of size 1024
Message count 0
Message count 1
Message count 2
Message count 3
Message count 4
testing messages of size 2048
Message count 0
Message count 1
Message count 2
Message count 3
Message count 4
testing messages of size 512
Message count 0
Message count 1
Message count 2
Message count 3
Message count 4
testing messages of size 1024
Message count 0
Message count 1
Message count 2
Message count 3
Message count 4
testing messages of size 2048
Message count 0
Message count 1
Message count 1
Message count 2
Message count 3
Message count 4
testing messages of size 2048
Message count 0
Message count 1
Message count 2
Message count 3
Message count 4
testing messages of size 4096
Message count 0
Message count 1
Message count 2
Message count 3
Message count 4
testing messages of size 8192
Message count 0
Message count 1
Message count 2
Message count 3
Message count 4
testing messages of size 2048
Message count 0
Message count 1
Message count 2
Message count 3
Message count 4
testing messages of size 4096
Message count 0
Message count 1
testing messages of size 4096
Message count 0
Message count 1
Message count 2
Message count 3
Message count 4
testing messages of size 8192
Message count 0
Message count 1
Message count 2
Message count 3
Message count 4
Message count 2
Message count 3
Message count 4
testing messages of size 4096
Message count 0
Message count 1
Message count 2
Message count 3
Message count 4
testing messages of size 8192
Message count 0
Message count 1
Message count 2
Message count 3
Message count 4
testing messages of size 16384
Message count 0
Message count 1
Message count 2
Message count 3
Message count 2
Message count 3
Message count 4
testing messages of size 8192
Message count 0
Message count 1
Message count 2
testing messages of size 16384
Message count 0
Message count 1
Message count 2
Message count 3
Message count 4
testing messages of size 32768
Message count 0
Message count 1
Message count 1
Message count 2
Message count 3
Message count 4
testing messages of size 16384
Message count 0
Message count 1
Message count 4
testing messages of size 32768
Message count 0
Message count 3
Message count 4
testing messages of size 16384
Message count 0
Message count 1
Message count 2
Message count 3
Message count 4
Message count 2
Message count 3
Message count 4
testing messages of size 32768
Message count 0
testing messages of size 32768
Message count 0
Message count 1
Message count 1
Message count 2
Message count 2
Message count 3
Message count 4
testing messages of size 65536
Message count 0
Message count 1
Message count 2
Message count 3
Message count 4
Message count 2
Message count 3
Message count 4
testing messages of size 65536
Message count 0
Message count 1
Message count 2
Message count 3
Message count 4
testing messages of size 65536
Message count 0
Message count 1
Message count 1
Message count 2
Message count 3
testing messages of size 65536
Message count 0
Message count 1
Message count 2
Message count 3
Message count 4
testing messages of size 131072
Message count 0
Message count 2
Message count 3
Message count 4
Message count 4
testing messages of size 131072
Message count 0
Message count 3
Message count 4
testing messages of size 131072
Message count 0
Message count 1
Message count 2
testing messages of size 131072
Message count 0
Message count 1
Message count 1
Message count 2
Message count 2
Message count 3
Message count 4
Message count 1
Message count 2
Message count 3
Message count 3
Message count 3
Message count 4
Message count 4
Message count 4
testing messages of size 262144
Message count 0
testing messages of size 262144
Message count 0
testing messages of size 262144
Message count 0
testing messages of size 262144
Message count 0
Message count 1
Message count 1
Message count 2
Message count 1
Message count 1
Message count 2
Message count 2
Message count 3
Message count 2
Message count 3
Message count 3
Message count 3
Message count 4
Message count 4
Message count 4
Message count 4
testing messages of size 524288
Message count 0
testing messages of size 524288
Message count 0
testing messages of size 524288
Message count 0
testing messages of size 524288
Message count 0
Message count 1
Message count 1
Message count 1
Message count 2
Message count 1
Message count 2
Message count 2
Message count 3
Message count 3
Message count 4
Message count 4
testing messages of size 1048576
Message count 0
testing messages of size 1048576
Message count 0
Message count 1
Message count 2
Message count 3
Message count 3
Message count 4
Message count 4
testing messages of size 1048576
Message count 0
testing messages of size 1048576
Message count 0
Message count 1
Message count 1
Message count 1
Message count 2
Message count 2
Message count 2
Message count 2
Message count 3
Message count 3
Message count 3
Message count 3
Message count 4
Message count 4
Message count 4
Message count 4
testing messages of size 2097152
Message count 0
testing messages of size 2097152
Message count 0
testing messages of size 2097152
Message count 0
testing messages of size 2097152
Message count 0
Message count 1
Message count 1
Message count 1
Message count 1
Message count 2
Message count 2
Message count 2
Message count 2
Message count 3
Message count 3
Message count 3
Message count 3
Message count 4
Message count 4
Message count 4
Message count 4
testing messages of size 4194304
Message count 0
testing messages of size 4194304
Message count 0
testing messages of size 4194304
Message count 0
testing messages of size 4194304
Message count 0
Message count 1
Message count 1
Message count 1
Message count 1
Message count 2
Message count 2
Message count 2
Message count 2
Message count 3
Message count 3
Message count 3
Message count 3
Message count 4
Message count 4
Message count 4
Message count 4
No errors
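
The probe-then-receive pattern this test stresses can be sketched as follows (not the test's source; a two-rank sketch with a single fixed-size message, whereas the test cycles through many sizes and counts):

#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    int rank, count;
    MPI_Status status;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    if (rank == 0) {
        int data[16] = {0};
        MPI_Send(data, 16, MPI_INT, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        /* Probe blocks until a matching message is available, even if it arrived
           before the probe was posted ("unexpected"); the status tells us how
           large a buffer to allocate before receiving. */
        MPI_Probe(0, 0, MPI_COMM_WORLD, &status);
        MPI_Get_count(&status, MPI_INT, &count);
        int *buf = malloc(count * sizeof(int));
        MPI_Recv(buf, count, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        printf("probed and received %d ints\n", count);
        free(buf);
    }
    MPI_Finalize();
    return 0;
}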

Passed MPI_Request many irecv - sendall

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 4

Test Description:

This test issues many non-blocking receives followed by many blocking MPI_Send() calls, then issues an MPI_Wait() on all pending receives using multiple processes and increasing array sizes. This test may fail due to bugs in the handling of request completions or in queue operations.

length = 1 ints
length = 2 ints
length = 4 ints
length = 8 ints
length = 16 ints
length = 32 ints
length = 64 ints
length = 128 ints
length = 256 ints
length = 512 ints
length = 1024 ints
length = 2048 ints
length = 4096 ints
length = 8192 ints
length = 16384 ints
No errors

Passed MPI_Request_get_status - rqstatus

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 2

Test Description:

Test MPI_Request_get_status(). Sends a message with MPI_Ssend() and creates a receive request with MPI_Irecv(). Verifies that MPI_Request_get_status() does not return correct values prior to MPI_Wait() and returns correct values afterwards. The test also checks that MPI_REQUEST_NULL and MPI_STATUS_IGNORE work as arguments as required beginning with MPI-2.2.

No errors
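
A minimal sketch of the non-destructive status query this test checks is shown below (not the test's source; assumes at least 2 ranks):

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, value = 9, flag;
    MPI_Request req;
    MPI_Status status;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    if (rank == 0) {
        MPI_Ssend(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        MPI_Irecv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, &req);
        /* Unlike MPI_Test, MPI_Request_get_status never deallocates the request,
           so it can be queried and the request waited on afterwards. */
        MPI_Request_get_status(req, &flag, &status);
        printf("complete yet? %d\n", flag);
        MPI_Wait(&req, &status);
        printf("received %d from rank %d\n", value, status.MPI_SOURCE);
    }
    MPI_Finalize();
    return 0;
}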

Passed MPI_Send intercomm - icsend

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 4

Test Description:

Simple test of intercommunicator send and receive using a selection of intercommunicators.

No errors

Passed MPI_Status large count - big_count_status

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 1

Test Description:

This test manipulates an MPI status object using MPI_Status_set_elements_x() with various large count values and verifies that MPI_Get_elements_x() and MPI_Test_cancelled() produce the correct values.

No errors

Passed MPI_Test pt2pt - inactivereq

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 1

Test Description:

This test program checks that the point-to-point completion routines can be applied to an inactive persistent request, as required by the MPI-1 standard. See section 3.7.3. It is allowed to call MPI_Test with a null or inactive request argument. In such a case the operation returns with flag = true and empty status. Tests both persistent send and persistent receive requests.

No errors
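
The inactive-request case described above can be sketched in a few lines (not the test's source; a single-rank sketch using a persistent receive that is never started):

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int value = 0, flag;
    MPI_Request req;

    MPI_Init(&argc, &argv);
    /* Create a persistent receive request but do not start it; a completion
       call on an inactive request must return immediately with flag = true. */
    MPI_Recv_init(&value, 1, MPI_INT, MPI_ANY_SOURCE, 0, MPI_COMM_WORLD, &req);
    MPI_Test(&req, &flag, MPI_STATUS_IGNORE);
    printf("MPI_Test on inactive request returned flag = %d\n", flag);
    MPI_Request_free(&req);
    MPI_Finalize();
    return 0;
}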

Passed MPI_Waitany basic - waitany-null

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 1

Test Description:

This is a simple test of MPI_Waitany().

No errors

Passed MPI_Waitany comprehensive - waittestnull

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 1

Test Description:

This program checks that the various MPI_Test and MPI_Wait routines allow both null requests and in the multiple completion cases, empty lists of requests.

No errors

Passed MPI_Wtime() test - timeout

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 2

Test Description:

This program tests the ability of mpiexec to time out a process after no more than 3 minutes. By default, it will run for 30 secs.

No errors

Passed MPI_{Is,Query}_thread() test - initstat

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 1

Test Description:

This test examines the MPI_Is_thread() and MPI_Query_thread() calls after MPI is initialized using MPI_Init_thread().

No errors
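
For illustration, a minimal sketch of querying the threading level after MPI_Init_thread() follows (not the test's source; it pairs MPI_Query_thread() with MPI_Is_thread_main(), the standard routine such tests typically use):

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int provided, level, is_main;

    /* Request a threading level; the implementation reports what it actually provides. */
    MPI_Init_thread(&argc, &argv, MPI_THREAD_MULTIPLE, &provided);
    MPI_Query_thread(&level);
    MPI_Is_thread_main(&is_main);
    printf("provided = %d, query = %d, main thread = %d\n", provided, level, is_main);
    MPI_Finalize();
    return 0;
}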

Passed MPI_{Send,Receive} basic - sendrecv1

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 4

Test Description:

This is a simple test using MPI_Send() and MPI_Recv(), MPI_Sendrecv(), and MPI_Sendrecv_replace() to send messages between two processes using a selection of communicators and datatypes and increasing array sizes.

No errors
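
One of the combined send/receive patterns this test covers can be sketched as a ring shift using MPI_Sendrecv_replace() (not the test's source; works for any number of ranks):

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, nprocs, value;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);
    value = rank;
    /* Each rank forwards its value to the next rank and receives from the previous
       one in a single combined call, avoiding send/recv ordering deadlocks. */
    MPI_Sendrecv_replace(&value, 1, MPI_INT, (rank + 1) % nprocs, 0,
                         (rank - 1 + nprocs) % nprocs, 0,
                         MPI_COMM_WORLD, MPI_STATUS_IGNORE);
    printf("rank %d now holds %d\n", rank, value);
    MPI_Finalize();
    return 0;
}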

Failed MPI_{Send,Receive} large backoff - sendrecv3

Build: Passed

Execution: Failed

Exit Status: Application_ended_with_error(s)

MPI Processes: 2

Test Description:

Head-to-head MPI_Send() and MPI_Recv() to test backoff in the device when large messages are being transferred. Includes a test that has one process sleep prior to calling send and recv.

Isends for 100 messages of size 100 took too long (1.000263 seconds)
100 Isends for size = 100 took 1.000263 seconds
100 Isends for size = 100 took 0.000270 seconds
10 Isends for size = 1000 took 0.000004 seconds
10 Isends for size = 1000 took 0.000015 seconds
10 Isends for size = 10000 took 0.000036 seconds
10 Isends for size = 10000 took 0.000062 seconds
4 Isends for size = 100000 took 0.000002 seconds
Found 1 errors
4 Isends for size = 100000 took 0.000005 seconds
x1001c5s5b0n0.hsn.warhawk.afrl.hpc.mil: rank 0 exited with code 1

Passed MPI_{Send,Receive} vector - sendrecv2

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 2

Test Description:

This is a simple test of MPI_Send() and MPI_Recv() using MPI_Type_vector() to create datatypes with an increasing number of blocks.

No errors

Passed Many send/cancel order - rcancel

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 2

Test Description:

Test of various receive cancel calls. Creates multiple receive requests then cancels three requests in a more interesting order to ensure the queue operation works properly. The other request receives the message.

Completed wait on irecv[2]
Completed wait on irecv[3]
Completed wait on irecv[0]
No errors

Passed Message patterns - patterns

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 2

Test Description:

This test sends/receives a number of messages in different patterns to make sure that all messages are received in the order they are sent. Two processes are used in the test.

No errors.

Failed Persistent send/cancel - pscancel

Build: Passed

Execution: Failed

Exit Status: Failed with signal 15

MPI Processes: 2

Test Description:

Test cancelling persistent send calls. Tests various persistent send calls including MPI_Send_init(), MPI_Bsend_init(), MPI_Rsend_init(), and MPI_Ssend_init() followed by calls to MPI_Cancel().

Failed to cancel a persistent send request
Failed to cancel a persistent bsend request
Assertion failed in file ../src/include/mpir_request.h at line 340: ((req))->ref_count >= 0
/opt/cray/pe/lib64/libmpi_cray.so.12(MPL_backtrace_show+0x26) [0x14ba23891c2b]
/opt/cray/pe/lib64/libmpi_cray.so.12(+0x1fef8d4) [0x14ba232c88d4]
/opt/cray/pe/lib64/libmpi_cray.so.12(+0xf264f8) [0x14ba221ff4f8]
/opt/cray/pe/lib64/libmpi_cray.so.12(+0x1f7ca48) [0x14ba23255a48]
/opt/cray/pe/lib64/libmpi_cray.so.12(MPI_Buffer_detach+0xf7) [0x14ba21a0fef7]
/var/run/palsd/fdfac103-c53b-444f-8529-a1f09e97faff/files/pscancel() [0x2045db]
/lib64/libc.so.6(__libc_start_main+0xef) [0x14ba2065a24d]
/var/run/palsd/fdfac103-c53b-444f-8529-a1f09e97faff/files/pscancel() [0x2040ea]
MPICH ERROR [Rank 0] [job id fdfac103-c53b-444f-8529-a1f09e97faff] [Mon Apr 15 11:03:22 2024] [x1001c5s5b0n0] - Abort(1): Internal error
x1001c5s5b0n0.hsn.warhawk.afrl.hpc.mil: rank 0 exited with code 1
x1001c5s7b0n0.hsn.warhawk.afrl.hpc.mil: rank 1 died from signal 15

Passed Ping flood - pingping

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 2

Test Description:

This test sends a large number of messages in a loop in the source process, and receives a large number of messages in a loop in the destination process using a selection of communicators, datatypes, and array sizes.

Sending count = 1 of sendtype MPI_INT of total size 4 bytes
Sending count = 1 of sendtype MPI_DOUBLE of total size 8 bytes
Sending count = 1 of sendtype MPI_FLOAT_INT of total size 8 bytes
Sending count = 1 of sendtype dup of MPI_INT of total size 4 bytes
Sending count = 1 of sendtype int-vector of total size 4 bytes
Sending count = 1 of sendtype int-indexed(4-int) of total size 16 bytes
Sending count = 1 of sendtype int-indexed(2 blocks) of total size 4 bytes
Sending count = 1 of sendtype MPI_INT of total size 4 bytes
Sending count = 1 of sendtype MPI_SHORT of total size 2 bytes
Sending count = 1 of sendtype MPI_LONG of total size 8 bytes
Sending count = 1 of sendtype MPI_CHAR of total size 1 bytes
Sending count = 1 of sendtype MPI_UINT64_T of total size 8 bytes
Sending count = 1 of sendtype MPI_FLOAT of total size 4 bytes
Sending count = 1 of sendtype MPI_INT of total size 4 bytes
Sending count = 2 of sendtype MPI_INT of total size 8 bytes
Sending count = 2 of sendtype MPI_DOUBLE of total size 16 bytes
Sending count = 2 of sendtype MPI_FLOAT_INT of total size 16 bytes
Sending count = 2 of sendtype dup of MPI_INT of total size 8 bytes
Sending count = 2 of sendtype int-vector of total size 16 bytes
Sending count = 2 of sendtype int-indexed(4-int) of total size 64 bytes
Sending count = 2 of sendtype int-indexed(2 blocks) of total size 16 bytes
Sending count = 2 of sendtype MPI_INT of total size 8 bytes
Sending count = 2 of sendtype MPI_SHORT of total size 4 bytes
Sending count = 2 of sendtype MPI_LONG of total size 16 bytes
Sending count = 2 of sendtype MPI_CHAR of total size 2 bytes
Sending count = 2 of sendtype MPI_UINT64_T of total size 16 bytes
Sending count = 2 of sendtype MPI_FLOAT of total size 8 bytes
Sending count = 2 of sendtype MPI_INT of total size 8 bytes
Sending count = 4 of sendtype MPI_INT of total size 16 bytes
Sending count = 4 of sendtype MPI_DOUBLE of total size 32 bytes
Sending count = 4 of sendtype MPI_FLOAT_INT of total size 32 bytes
Sending count = 4 of sendtype dup of MPI_INT of total size 16 bytes
Sending count = 4 of sendtype int-vector of total size 64 bytes
Sending count = 4 of sendtype int-indexed(4-int) of total size 256 bytes
Sending count = 4 of sendtype int-indexed(2 blocks) of total size 64 bytes
Sending count = 4 of sendtype MPI_INT of total size 16 bytes
Sending count = 4 of sendtype MPI_SHORT of total size 8 bytes
Sending count = 4 of sendtype MPI_LONG of total size 32 bytes
Sending count = 4 of sendtype MPI_CHAR of total size 4 bytes
Sending count = 4 of sendtype MPI_UINT64_T of total size 32 bytes
Sending count = 4 of sendtype MPI_FLOAT of total size 16 bytes
Sending count = 4 of sendtype MPI_INT of total size 16 bytes
Sending count = 8 of sendtype MPI_INT of total size 32 bytes
Sending count = 8 of sendtype MPI_DOUBLE of total size 64 bytes
Sending count = 8 of sendtype MPI_FLOAT_INT of total size 64 bytes
Sending count = 8 of sendtype dup of MPI_INT of total size 32 bytes
Sending count = 8 of sendtype int-vector of total size 256 bytes
Sending count = 8 of sendtype int-indexed(4-int) of total size 1024 bytes
Sending count = 8 of sendtype int-indexed(2 blocks) of total size 256 bytes
Sending count = 8 of sendtype MPI_INT of total size 32 bytes
Sending count = 8 of sendtype MPI_SHORT of total size 16 bytes
Sending count = 8 of sendtype MPI_LONG of total size 64 bytes
Sending count = 8 of sendtype MPI_CHAR of total size 8 bytes
Sending count = 8 of sendtype MPI_UINT64_T of total size 64 bytes
Sending count = 8 of sendtype MPI_FLOAT of total size 32 bytes
Sending count = 8 of sendtype MPI_INT of total size 32 bytes
Sending count = 16 of sendtype MPI_INT of total size 64 bytes
Sending count = 16 of sendtype MPI_DOUBLE of total size 128 bytes
Sending count = 16 of sendtype MPI_FLOAT_INT of total size 128 bytes
Sending count = 16 of sendtype dup of MPI_INT of total size 64 bytes
Sending count = 16 of sendtype int-vector of total size 1024 bytes
Sending count = 16 of sendtype int-indexed(4-int) of total size 4096 bytes
Sending count = 16 of sendtype int-indexed(2 blocks) of total size 1024 bytes
Sending count = 16 of sendtype MPI_INT of total size 64 bytes
Sending count = 16 of sendtype MPI_SHORT of total size 32 bytes
Sending count = 16 of sendtype MPI_LONG of total size 128 bytes
Sending count = 16 of sendtype MPI_CHAR of total size 16 bytes
Sending count = 16 of sendtype MPI_UINT64_T of total size 128 bytes
Sending count = 16 of sendtype MPI_FLOAT of total size 64 bytes
Sending count = 16 of sendtype MPI_INT of total size 64 bytes
Sending count = 32 of sendtype MPI_INT of total size 128 bytes
Sending count = 32 of sendtype MPI_DOUBLE of total size 256 bytes
Sending count = 32 of sendtype MPI_FLOAT_INT of total size 256 bytes
Sending count = 32 of sendtype dup of MPI_INT of total size 128 bytes
Sending count = 32 of sendtype int-vector of total size 4096 bytes
Sending count = 32 of sendtype int-indexed(4-int) of total size 16384 bytes
Sending count = 32 of sendtype int-indexed(2 blocks) of total size 4096 bytes
Sending count = 32 of sendtype MPI_INT of total size 128 bytes
Sending count = 32 of sendtype MPI_SHORT of total size 64 bytes
Sending count = 32 of sendtype MPI_LONG of total size 256 bytes
Sending count = 32 of sendtype MPI_CHAR of total size 32 bytes
Sending count = 32 of sendtype MPI_UINT64_T of total size 256 bytes
Sending count = 32 of sendtype MPI_FLOAT of total size 128 bytes
Sending count = 32 of sendtype MPI_INT of total size 128 bytes
Sending count = 64 of sendtype MPI_INT of total size 256 bytes
Sending count = 64 of sendtype MPI_DOUBLE of total size 512 bytes
Sending count = 64 of sendtype MPI_FLOAT_INT of total size 512 bytes
Sending count = 64 of sendtype dup of MPI_INT of total size 256 bytes
Sending count = 64 of sendtype int-vector of total size 16384 bytes
Sending count = 64 of sendtype int-indexed(4-int) of total size 65536 bytes
Sending count = 64 of sendtype int-indexed(2 blocks) of total size 16384 bytes
Sending count = 64 of sendtype MPI_INT of total size 256 bytes
Sending count = 64 of sendtype MPI_SHORT of total size 128 bytes
Sending count = 64 of sendtype MPI_LONG of total size 512 bytes
Sending count = 64 of sendtype MPI_CHAR of total size 64 bytes
Sending count = 64 of sendtype MPI_UINT64_T of total size 512 bytes
Sending count = 64 of sendtype MPI_FLOAT of total size 256 bytes
Sending count = 64 of sendtype MPI_INT of total size 256 bytes
Sending count = 128 of sendtype MPI_INT of total size 512 bytes
Sending count = 128 of sendtype MPI_DOUBLE of total size 1024 bytes
Sending count = 128 of sendtype MPI_FLOAT_INT of total size 1024 bytes
Sending count = 128 of sendtype dup of MPI_INT of total size 512 bytes
Sending count = 128 of sendtype int-vector of total size 65536 bytes
Sending count = 128 of sendtype int-indexed(4-int) of total size 262144 bytes
Sending count = 128 of sendtype int-indexed(2 blocks) of total size 65536 bytes
Sending count = 128 of sendtype MPI_INT of total size 512 bytes
Sending count = 128 of sendtype MPI_SHORT of total size 256 bytes
Sending count = 128 of sendtype MPI_LONG of total size 1024 bytes
Sending count = 128 of sendtype MPI_CHAR of total size 128 bytes
Sending count = 128 of sendtype MPI_UINT64_T of total size 1024 bytes
Sending count = 128 of sendtype MPI_FLOAT of total size 512 bytes
Sending count = 128 of sendtype MPI_INT of total size 512 bytes
Sending count = 256 of sendtype MPI_INT of total size 1024 bytes
Sending count = 256 of sendtype MPI_DOUBLE of total size 2048 bytes
Sending count = 256 of sendtype MPI_FLOAT_INT of total size 2048 bytes
Sending count = 256 of sendtype dup of MPI_INT of total size 1024 bytes
Sending count = 256 of sendtype int-vector of total size 262144 bytes
Sending count = 256 of sendtype int-indexed(4-int) of total size 1048576 bytes
Sending count = 256 of sendtype int-indexed(2 blocks) of total size 262144 bytes
Sending count = 256 of sendtype MPI_INT of total size 1024 bytes
Sending count = 256 of sendtype MPI_SHORT of total size 512 bytes
Sending count = 256 of sendtype MPI_LONG of total size 2048 bytes
Sending count = 256 of sendtype MPI_CHAR of total size 256 bytes
Sending count = 256 of sendtype MPI_UINT64_T of total size 2048 bytes
Sending count = 256 of sendtype MPI_FLOAT of total size 1024 bytes
Sending count = 256 of sendtype MPI_INT of total size 1024 bytes
Sending count = 512 of sendtype MPI_INT of total size 2048 bytes
Sending count = 512 of sendtype MPI_DOUBLE of total size 4096 bytes
Sending count = 512 of sendtype MPI_FLOAT_INT of total size 4096 bytes
Sending count = 512 of sendtype dup of MPI_INT of total size 2048 bytes
Sending count = 512 of sendtype int-vector of total size 1048576 bytes
Sending count = 512 of sendtype int-indexed(4-int) of total size 4194304 bytes
Sending count = 512 of sendtype int-indexed(2 blocks) of total size 1048576 bytes
Sending count = 512 of sendtype MPI_INT of total size 2048 bytes
Sending count = 512 of sendtype MPI_SHORT of total size 1024 bytes
Sending count = 512 of sendtype MPI_LONG of total size 4096 bytes
Sending count = 512 of sendtype MPI_CHAR of total size 512 bytes
Sending count = 512 of sendtype MPI_UINT64_T of total size 4096 bytes
Sending count = 512 of sendtype MPI_FLOAT of total size 2048 bytes
Sending count = 512 of sendtype MPI_INT of total size 2048 bytes
Sending count = 1024 of sendtype MPI_INT of total size 4096 bytes
Sending count = 1024 of sendtype MPI_DOUBLE of total size 8192 bytes
Sending count = 1024 of sendtype MPI_FLOAT_INT of total size 8192 bytes
Sending count = 1024 of sendtype dup of MPI_INT of total size 4096 bytes
Sending count = 1024 of sendtype int-vector of total size 4194304 bytes
Sending count = 1024 of sendtype int-indexed(4-int) of total size 16777216 bytes
Sending count = 1024 of sendtype int-indexed(2 blocks) of total size 4194304 bytes
Sending count = 1024 of sendtype MPI_INT of total size 4096 bytes
Sending count = 1024 of sendtype MPI_SHORT of total size 2048 bytes
Sending count = 1024 of sendtype MPI_LONG of total size 8192 bytes
Sending count = 1024 of sendtype MPI_CHAR of total size 1024 bytes
Sending count = 1024 of sendtype MPI_UINT64_T of total size 8192 bytes
Sending count = 1024 of sendtype MPI_FLOAT of total size 4096 bytes
Sending count = 1024 of sendtype MPI_INT of total size 4096 bytes
Sending count = 2048 of sendtype MPI_INT of total size 8192 bytes
Sending count = 2048 of sendtype MPI_DOUBLE of total size 16384 bytes
Sending count = 2048 of sendtype MPI_FLOAT_INT of total size 16384 bytes
Sending count = 2048 of sendtype dup of MPI_INT of total size 8192 bytes
Sending count = 2048 of sendtype int-vector of total size 16777216 bytes
Sending count = 2048 of sendtype int-indexed(4-int) of total size 67108864 bytes
Sending count = 2048 of sendtype int-indexed(2 blocks) of total size 16777216 bytes
Sending count = 2048 of sendtype MPI_INT of total size 8192 bytes
Sending count = 2048 of sendtype MPI_SHORT of total size 4096 bytes
Sending count = 2048 of sendtype MPI_LONG of total size 16384 bytes
Sending count = 2048 of sendtype MPI_CHAR of total size 2048 bytes
Sending count = 2048 of sendtype MPI_UINT64_T of total size 16384 bytes
Sending count = 2048 of sendtype MPI_FLOAT of total size 8192 bytes
Sending count = 2048 of sendtype MPI_INT of total size 8192 bytes
Sending count = 1 of sendtype MPI_INT of total size 4 bytes
Sending count = 1 of sendtype MPI_DOUBLE of total size 8 bytes
Sending count = 1 of sendtype MPI_FLOAT_INT of total size 8 bytes
Sending count = 1 of sendtype dup of MPI_INT of total size 4 bytes
Sending count = 1 of sendtype int-vector of total size 4 bytes
Sending count = 1 of sendtype int-indexed(4-int) of total size 16 bytes
Sending count = 1 of sendtype int-indexed(2 blocks) of total size 4 bytes
Sending count = 1 of sendtype MPI_INT of total size 4 bytes
Sending count = 1 of sendtype MPI_SHORT of total size 2 bytes
Sending count = 1 of sendtype MPI_LONG of total size 8 bytes
Sending count = 1 of sendtype MPI_CHAR of total size 1 bytes
Sending count = 1 of sendtype MPI_UINT64_T of total size 8 bytes
Sending count = 1 of sendtype MPI_FLOAT of total size 4 bytes
Sending count = 1 of sendtype MPI_INT of total size 4 bytes
Sending count = 2 of sendtype MPI_INT of total size 8 bytes
Sending count = 2 of sendtype MPI_DOUBLE of total size 16 bytes
Sending count = 2 of sendtype MPI_FLOAT_INT of total size 16 bytes
Sending count = 2 of sendtype dup of MPI_INT of total size 8 bytes
Sending count = 2 of sendtype int-vector of total size 16 bytes
Sending count = 2 of sendtype int-indexed(4-int) of total size 64 bytes
Sending count = 2 of sendtype int-indexed(2 blocks) of total size 16 bytes
Sending count = 2 of sendtype MPI_INT of total size 8 bytes
Sending count = 2 of sendtype MPI_SHORT of total size 4 bytes
Sending count = 2 of sendtype MPI_LONG of total size 16 bytes
Sending count = 2 of sendtype MPI_CHAR of total size 2 bytes
Sending count = 2 of sendtype MPI_UINT64_T of total size 16 bytes
Sending count = 2 of sendtype MPI_FLOAT of total size 8 bytes
Sending count = 2 of sendtype MPI_INT of total size 8 bytes
Sending count = 4 of sendtype MPI_INT of total size 16 bytes
Sending count = 4 of sendtype MPI_DOUBLE of total size 32 bytes
Sending count = 4 of sendtype MPI_FLOAT_INT of total size 32 bytes
Sending count = 4 of sendtype dup of MPI_INT of total size 16 bytes
Sending count = 4 of sendtype int-vector of total size 64 bytes
Sending count = 4 of sendtype int-indexed(4-int) of total size 256 bytes
Sending count = 4 of sendtype int-indexed(2 blocks) of total size 64 bytes
Sending count = 4 of sendtype MPI_INT of total size 16 bytes
Sending count = 4 of sendtype MPI_SHORT of total size 8 bytes
Sending count = 4 of sendtype MPI_LONG of total size 32 bytes
Sending count = 4 of sendtype MPI_CHAR of total size 4 bytes
Sending count = 4 of sendtype MPI_UINT64_T of total size 32 bytes
Sending count = 4 of sendtype MPI_FLOAT of total size 16 bytes
Sending count = 4 of sendtype MPI_INT of total size 16 bytes
Sending count = 8 of sendtype MPI_INT of total size 32 bytes
Sending count = 8 of sendtype MPI_DOUBLE of total size 64 bytes
Sending count = 8 of sendtype MPI_FLOAT_INT of total size 64 bytes
Sending count = 8 of sendtype dup of MPI_INT of total size 32 bytes
Sending count = 8 of sendtype int-vector of total size 256 bytes
Sending count = 8 of sendtype int-indexed(4-int) of total size 1024 bytes
Sending count = 8 of sendtype int-indexed(2 blocks) of total size 256 bytes
Sending count = 8 of sendtype MPI_INT of total size 32 bytes
Sending count = 8 of sendtype MPI_SHORT of total size 16 bytes
Sending count = 8 of sendtype MPI_LONG of total size 64 bytes
Sending count = 8 of sendtype MPI_CHAR of total size 8 bytes
Sending count = 8 of sendtype MPI_UINT64_T of total size 64 bytes
Sending count = 8 of sendtype MPI_FLOAT of total size 32 bytes
Sending count = 8 of sendtype MPI_INT of total size 32 bytes
Sending count = 16 of sendtype MPI_INT of total size 64 bytes
Sending count = 16 of sendtype MPI_DOUBLE of total size 128 bytes
Sending count = 16 of sendtype MPI_FLOAT_INT of total size 128 bytes
Sending count = 16 of sendtype dup of MPI_INT of total size 64 bytes
Sending count = 16 of sendtype int-vector of total size 1024 bytes
Sending count = 16 of sendtype int-indexed(4-int) of total size 4096 bytes
Sending count = 16 of sendtype int-indexed(2 blocks) of total size 1024 bytes
Sending count = 16 of sendtype MPI_INT of total size 64 bytes
Sending count = 16 of sendtype MPI_SHORT of total size 32 bytes
Sending count = 16 of sendtype MPI_LONG of total size 128 bytes
Sending count = 16 of sendtype MPI_CHAR of total size 16 bytes
Sending count = 16 of sendtype MPI_UINT64_T of total size 128 bytes
Sending count = 16 of sendtype MPI_FLOAT of total size 64 bytes
Sending count = 16 of sendtype MPI_INT of total size 64 bytes
Sending count = 32 of sendtype MPI_INT of total size 128 bytes
Sending count = 32 of sendtype MPI_DOUBLE of total size 256 bytes
Sending count = 32 of sendtype MPI_FLOAT_INT of total size 256 bytes
Sending count = 32 of sendtype dup of MPI_INT of total size 128 bytes
Sending count = 32 of sendtype int-vector of total size 4096 bytes
Sending count = 32 of sendtype int-indexed(4-int) of total size 16384 bytes
Sending count = 32 of sendtype int-indexed(2 blocks) of total size 4096 bytes
Sending count = 32 of sendtype MPI_INT of total size 128 bytes
Sending count = 32 of sendtype MPI_SHORT of total size 64 bytes
Sending count = 32 of sendtype MPI_LONG of total size 256 bytes
Sending count = 32 of sendtype MPI_CHAR of total size 32 bytes
Sending count = 32 of sendtype MPI_UINT64_T of total size 256 bytes
Sending count = 32 of sendtype MPI_FLOAT of total size 128 bytes
Sending count = 32 of sendtype MPI_INT of total size 128 bytes
Sending count = 64 of sendtype MPI_INT of total size 256 bytes
Sending count = 64 of sendtype MPI_DOUBLE of total size 512 bytes
Sending count = 64 of sendtype MPI_FLOAT_INT of total size 512 bytes
Sending count = 64 of sendtype dup of MPI_INT of total size 256 bytes
Sending count = 64 of sendtype int-vector of total size 16384 bytes
Sending count = 64 of sendtype int-indexed(4-int) of total size 65536 bytes
Sending count = 64 of sendtype int-indexed(2 blocks) of total size 16384 bytes
Sending count = 64 of sendtype MPI_INT of total size 256 bytes
Sending count = 64 of sendtype MPI_SHORT of total size 128 bytes
Sending count = 64 of sendtype MPI_LONG of total size 512 bytes
Sending count = 64 of sendtype MPI_CHAR of total size 64 bytes
Sending count = 64 of sendtype MPI_UINT64_T of total size 512 bytes
Sending count = 64 of sendtype MPI_FLOAT of total size 256 bytes
Sending count = 64 of sendtype MPI_INT of total size 256 bytes
Sending count = 128 of sendtype MPI_INT of total size 512 bytes
Sending count = 128 of sendtype MPI_DOUBLE of total size 1024 bytes
Sending count = 128 of sendtype MPI_FLOAT_INT of total size 1024 bytes
Sending count = 128 of sendtype dup of MPI_INT of total size 512 bytes
Sending count = 128 of sendtype int-vector of total size 65536 bytes
Sending count = 128 of sendtype int-indexed(4-int) of total size 262144 bytes
Sending count = 128 of sendtype int-indexed(2 blocks) of total size 65536 bytes
Sending count = 128 of sendtype MPI_INT of total size 512 bytes
Sending count = 128 of sendtype MPI_SHORT of total size 256 bytes
Sending count = 128 of sendtype MPI_LONG of total size 1024 bytes
Sending count = 128 of sendtype MPI_CHAR of total size 128 bytes
Sending count = 128 of sendtype MPI_UINT64_T of total size 1024 bytes
Sending count = 128 of sendtype MPI_FLOAT of total size 512 bytes
Sending count = 128 of sendtype MPI_INT of total size 512 bytes
Sending count = 256 of sendtype MPI_INT of total size 1024 bytes
Sending count = 256 of sendtype MPI_DOUBLE of total size 2048 bytes
Sending count = 256 of sendtype MPI_FLOAT_INT of total size 2048 bytes
Sending count = 256 of sendtype dup of MPI_INT of total size 1024 bytes
Sending count = 256 of sendtype int-vector of total size 262144 bytes
Sending count = 256 of sendtype int-indexed(4-int) of total size 1048576 bytes
Sending count = 256 of sendtype int-indexed(2 blocks) of total size 262144 bytes
Sending count = 256 of sendtype MPI_INT of total size 1024 bytes
Sending count = 256 of sendtype MPI_SHORT of total size 512 bytes
Sending count = 256 of sendtype MPI_LONG of total size 2048 bytes
Sending count = 256 of sendtype MPI_CHAR of total size 256 bytes
Sending count = 256 of sendtype MPI_UINT64_T of total size 2048 bytes
Sending count = 256 of sendtype MPI_FLOAT of total size 1024 bytes
Sending count = 256 of sendtype MPI_INT of total size 1024 bytes
Sending count = 512 of sendtype MPI_INT of total size 2048 bytes
Sending count = 512 of sendtype MPI_DOUBLE of total size 4096 bytes
Sending count = 512 of sendtype MPI_FLOAT_INT of total size 4096 bytes
Sending count = 512 of sendtype dup of MPI_INT of total size 2048 bytes
Sending count = 512 of sendtype int-vector of total size 1048576 bytes
Sending count = 512 of sendtype int-indexed(4-int) of total size 4194304 bytes
Sending count = 512 of sendtype int-indexed(2 blocks) of total size 1048576 bytes
Sending count = 512 of sendtype MPI_INT of total size 2048 bytes
Sending count = 512 of sendtype MPI_SHORT of total size 1024 bytes
Sending count = 512 of sendtype MPI_LONG of total size 4096 bytes
Sending count = 512 of sendtype MPI_CHAR of total size 512 bytes
Sending count = 512 of sendtype MPI_UINT64_T of total size 4096 bytes
Sending count = 512 of sendtype MPI_FLOAT of total size 2048 bytes
Sending count = 512 of sendtype MPI_INT of total size 2048 bytes
Sending count = 1024 of sendtype MPI_INT of total size 4096 bytes
Sending count = 1024 of sendtype MPI_DOUBLE of total size 8192 bytes
Sending count = 1024 of sendtype MPI_FLOAT_INT of total size 8192 bytes
Sending count = 1024 of sendtype dup of MPI_INT of total size 4096 bytes
Sending count = 1024 of sendtype int-vector of total size 4194304 bytes
Sending count = 1024 of sendtype int-indexed(4-int) of total size 16777216 bytes
Sending count = 1024 of sendtype int-indexed(2 blocks) of total size 4194304 bytes
Sending count = 1024 of sendtype MPI_INT of total size 4096 bytes
Sending count = 1024 of sendtype MPI_SHORT of total size 2048 bytes
Sending count = 1024 of sendtype MPI_LONG of total size 8192 bytes
Sending count = 1024 of sendtype MPI_CHAR of total size 1024 bytes
Sending count = 1024 of sendtype MPI_UINT64_T of total size 8192 bytes
Sending count = 1024 of sendtype MPI_FLOAT of total size 4096 bytes
Sending count = 1024 of sendtype MPI_INT of total size 4096 bytes
Sending count = 2048 of sendtype MPI_INT of total size 8192 bytes
Sending count = 2048 of sendtype MPI_DOUBLE of total size 16384 bytes
Sending count = 2048 of sendtype MPI_FLOAT_INT of total size 16384 bytes
Sending count = 2048 of sendtype dup of MPI_INT of total size 8192 bytes
Sending count = 2048 of sendtype int-vector of total size 16777216 bytes
Sending count = 2048 of sendtype int-indexed(4-int) of total size 67108864 bytes
Sending count = 2048 of sendtype int-indexed(2 blocks) of total size 16777216 bytes
Sending count = 2048 of sendtype MPI_INT of total size 8192 bytes
Sending count = 2048 of sendtype MPI_SHORT of total size 4096 bytes
Sending count = 2048 of sendtype MPI_LONG of total size 16384 bytes
Sending count = 2048 of sendtype MPI_CHAR of total size 2048 bytes
Sending count = 2048 of sendtype MPI_UINT64_T of total size 16384 bytes
Sending count = 2048 of sendtype MPI_FLOAT of total size 8192 bytes
Sending count = 2048 of sendtype MPI_INT of total size 8192 bytes
Sending count = 1 of sendtype MPI_INT of total size 4 bytes
Sending count = 1 of sendtype MPI_DOUBLE of total size 8 bytes
Sending count = 1 of sendtype MPI_FLOAT_INT of total size 8 bytes
Sending count = 1 of sendtype dup of MPI_INT of total size 4 bytes
Sending count = 1 of sendtype int-vector of total size 4 bytes
Sending count = 1 of sendtype int-indexed(4-int) of total size 16 bytes
Sending count = 1 of sendtype int-indexed(2 blocks) of total size 4 bytes
Sending count = 1 of sendtype MPI_INT of total size 4 bytes
Sending count = 1 of sendtype MPI_SHORT of total size 2 bytes
Sending count = 1 of sendtype MPI_LONG of total size 8 bytes
Sending count = 1 of sendtype MPI_CHAR of total size 1 bytes
Sending count = 1 of sendtype MPI_UINT64_T of total size 8 bytes
Sending count = 1 of sendtype MPI_FLOAT of total size 4 bytes
Sending count = 1 of sendtype MPI_INT of total size 4 bytes
Sending count = 2 of sendtype MPI_INT of total size 8 bytes
Sending count = 2 of sendtype MPI_DOUBLE of total size 16 bytes
Sending count = 2 of sendtype MPI_FLOAT_INT of total size 16 bytes
Sending count = 2 of sendtype dup of MPI_INT of total size 8 bytes
Sending count = 2 of sendtype int-vector of total size 16 bytes
Sending count = 2 of sendtype int-indexed(4-int) of total size 64 bytes
Sending count = 2 of sendtype int-indexed(2 blocks) of total size 16 bytes
Sending count = 2 of sendtype MPI_INT of total size 8 bytes
Sending count = 2 of sendtype MPI_SHORT of total size 4 bytes
Sending count = 2 of sendtype MPI_LONG of total size 16 bytes
Sending count = 2 of sendtype MPI_CHAR of total size 2 bytes
Sending count = 2 of sendtype MPI_UINT64_T of total size 16 bytes
Sending count = 2 of sendtype MPI_FLOAT of total size 8 bytes
Sending count = 2 of sendtype MPI_INT of total size 8 bytes
Sending count = 4 of sendtype MPI_INT of total size 16 bytes
Sending count = 4 of sendtype MPI_DOUBLE of total size 32 bytes
Sending count = 4 of sendtype MPI_FLOAT_INT of total size 32 bytes
Sending count = 4 of sendtype dup of MPI_INT of total size 16 bytes
Sending count = 4 of sendtype int-vector of total size 64 bytes
Sending count = 4 of sendtype int-indexed(4-int) of total size 256 bytes
Sending count = 4 of sendtype int-indexed(2 blocks) of total size 64 bytes
Sending count = 4 of sendtype MPI_INT of total size 16 bytes
Sending count = 4 of sendtype MPI_SHORT of total size 8 bytes
Sending count = 4 of sendtype MPI_LONG of total size 32 bytes
Sending count = 4 of sendtype MPI_CHAR of total size 4 bytes
Sending count = 4 of sendtype MPI_UINT64_T of total size 32 bytes
Sending count = 4 of sendtype MPI_FLOAT of total size 16 bytes
Sending count = 4 of sendtype MPI_INT of total size 16 bytes
Sending count = 8 of sendtype MPI_INT of total size 32 bytes
Sending count = 8 of sendtype MPI_DOUBLE of total size 64 bytes
Sending count = 8 of sendtype MPI_FLOAT_INT of total size 64 bytes
Sending count = 8 of sendtype dup of MPI_INT of total size 32 bytes
Sending count = 8 of sendtype int-vector of total size 256 bytes
Sending count = 8 of sendtype int-indexed(4-int) of total size 1024 bytes
Sending count = 8 of sendtype int-indexed(2 blocks) of total size 256 bytes
Sending count = 8 of sendtype MPI_INT of total size 32 bytes
Sending count = 8 of sendtype MPI_SHORT of total size 16 bytes
Sending count = 8 of sendtype MPI_LONG of total size 64 bytes
Sending count = 8 of sendtype MPI_CHAR of total size 8 bytes
Sending count = 8 of sendtype MPI_UINT64_T of total size 64 bytes
Sending count = 8 of sendtype MPI_FLOAT of total size 32 bytes
Sending count = 8 of sendtype MPI_INT of total size 32 bytes
Sending count = 16 of sendtype MPI_INT of total size 64 bytes
Sending count = 16 of sendtype MPI_DOUBLE of total size 128 bytes
Sending count = 16 of sendtype MPI_FLOAT_INT of total size 128 bytes
Sending count = 16 of sendtype dup of MPI_INT of total size 64 bytes
Sending count = 16 of sendtype int-vector of total size 1024 bytes
Sending count = 16 of sendtype int-indexed(4-int) of total size 4096 bytes
Sending count = 16 of sendtype int-indexed(2 blocks) of total size 1024 bytes
Sending count = 16 of sendtype MPI_INT of total size 64 bytes
Sending count = 16 of sendtype MPI_SHORT of total size 32 bytes
Sending count = 16 of sendtype MPI_LONG of total size 128 bytes
Sending count = 16 of sendtype MPI_CHAR of total size 16 bytes
Sending count = 16 of sendtype MPI_UINT64_T of total size 128 bytes
Sending count = 16 of sendtype MPI_FLOAT of total size 64 bytes
Sending count = 16 of sendtype MPI_INT of total size 64 bytes
Sending count = 32 of sendtype MPI_INT of total size 128 bytes
Sending count = 32 of sendtype MPI_DOUBLE of total size 256 bytes
Sending count = 32 of sendtype MPI_FLOAT_INT of total size 256 bytes
Sending count = 32 of sendtype dup of MPI_INT of total size 128 bytes
Sending count = 32 of sendtype int-vector of total size 4096 bytes
Sending count = 32 of sendtype int-indexed(4-int) of total size 16384 bytes
Sending count = 32 of sendtype int-indexed(2 blocks) of total size 4096 bytes
Sending count = 32 of sendtype MPI_INT of total size 128 bytes
Sending count = 32 of sendtype MPI_SHORT of total size 64 bytes
Sending count = 32 of sendtype MPI_LONG of total size 256 bytes
Sending count = 32 of sendtype MPI_CHAR of total size 32 bytes
Sending count = 32 of sendtype MPI_UINT64_T of total size 256 bytes
Sending count = 32 of sendtype MPI_FLOAT of total size 128 bytes
Sending count = 32 of sendtype MPI_INT of total size 128 bytes
Sending count = 64 of sendtype MPI_INT of total size 256 bytes
Sending count = 64 of sendtype MPI_DOUBLE of total size 512 bytes
Sending count = 64 of sendtype MPI_FLOAT_INT of total size 512 bytes
Sending count = 64 of sendtype dup of MPI_INT of total size 256 bytes
Sending count = 64 of sendtype int-vector of total size 16384 bytes
Sending count = 64 of sendtype int-indexed(4-int) of total size 65536 bytes
Sending count = 64 of sendtype int-indexed(2 blocks) of total size 16384 bytes
Sending count = 64 of sendtype MPI_INT of total size 256 bytes
Sending count = 64 of sendtype MPI_SHORT of total size 128 bytes
Sending count = 64 of sendtype MPI_LONG of total size 512 bytes
Sending count = 64 of sendtype MPI_CHAR of total size 64 bytes
Sending count = 64 of sendtype MPI_UINT64_T of total size 512 bytes
Sending count = 64 of sendtype MPI_FLOAT of total size 256 bytes
Sending count = 64 of sendtype MPI_INT of total size 256 bytes
Sending count = 128 of sendtype MPI_INT of total size 512 bytes
Sending count = 128 of sendtype MPI_DOUBLE of total size 1024 bytes
Sending count = 128 of sendtype MPI_FLOAT_INT of total size 1024 bytes
Sending count = 128 of sendtype dup of MPI_INT of total size 512 bytes
Sending count = 128 of sendtype int-vector of total size 65536 bytes
Sending count = 128 of sendtype int-indexed(4-int) of total size 262144 bytes
Sending count = 128 of sendtype int-indexed(2 blocks) of total size 65536 bytes
Sending count = 128 of sendtype MPI_INT of total size 512 bytes
Sending count = 128 of sendtype MPI_SHORT of total size 256 bytes
Sending count = 128 of sendtype MPI_LONG of total size 1024 bytes
Sending count = 128 of sendtype MPI_CHAR of total size 128 bytes
Sending count = 128 of sendtype MPI_UINT64_T of total size 1024 bytes
Sending count = 128 of sendtype MPI_FLOAT of total size 512 bytes
Sending count = 128 of sendtype MPI_INT of total size 512 bytes
Sending count = 256 of sendtype MPI_INT of total size 1024 bytes
Sending count = 256 of sendtype MPI_DOUBLE of total size 2048 bytes
Sending count = 256 of sendtype MPI_FLOAT_INT of total size 2048 bytes
Sending count = 256 of sendtype dup of MPI_INT of total size 1024 bytes
Sending count = 256 of sendtype int-vector of total size 262144 bytes
Sending count = 256 of sendtype int-indexed(4-int) of total size 1048576 bytes
Sending count = 256 of sendtype int-indexed(2 blocks) of total size 262144 bytes
Sending count = 256 of sendtype MPI_INT of total size 1024 bytes
Sending count = 256 of sendtype MPI_SHORT of total size 512 bytes
Sending count = 256 of sendtype MPI_LONG of total size 2048 bytes
Sending count = 256 of sendtype MPI_CHAR of total size 256 bytes
Sending count = 256 of sendtype MPI_UINT64_T of total size 2048 bytes
Sending count = 256 of sendtype MPI_FLOAT of total size 1024 bytes
Sending count = 256 of sendtype MPI_INT of total size 1024 bytes
Sending count = 512 of sendtype MPI_INT of total size 2048 bytes
Sending count = 512 of sendtype MPI_DOUBLE of total size 4096 bytes
Sending count = 512 of sendtype MPI_FLOAT_INT of total size 4096 bytes
Sending count = 512 of sendtype dup of MPI_INT of total size 2048 bytes
Sending count = 512 of sendtype int-vector of total size 1048576 bytes
Sending count = 512 of sendtype int-indexed(4-int) of total size 4194304 bytes
Sending count = 512 of sendtype int-indexed(2 blocks) of total size 1048576 bytes
Sending count = 512 of sendtype MPI_INT of total size 2048 bytes
Sending count = 512 of sendtype MPI_SHORT of total size 1024 bytes
Sending count = 512 of sendtype MPI_LONG of total size 4096 bytes
Sending count = 512 of sendtype MPI_CHAR of total size 512 bytes
Sending count = 512 of sendtype MPI_UINT64_T of total size 4096 bytes
Sending count = 512 of sendtype MPI_FLOAT of total size 2048 bytes
Sending count = 512 of sendtype MPI_INT of total size 2048 bytes
Sending count = 1024 of sendtype MPI_INT of total size 4096 bytes
Sending count = 1024 of sendtype MPI_DOUBLE of total size 8192 bytes
Sending count = 1024 of sendtype MPI_FLOAT_INT of total size 8192 bytes
Sending count = 1024 of sendtype dup of MPI_INT of total size 4096 bytes
Sending count = 1024 of sendtype int-vector of total size 4194304 bytes
Sending count = 1024 of sendtype int-indexed(4-int) of total size 16777216 bytes
Sending count = 1024 of sendtype int-indexed(2 blocks) of total size 4194304 bytes
Sending count = 1024 of sendtype MPI_INT of total size 4096 bytes
Sending count = 1024 of sendtype MPI_SHORT of total size 2048 bytes
Sending count = 1024 of sendtype MPI_LONG of total size 8192 bytes
Sending count = 1024 of sendtype MPI_CHAR of total size 1024 bytes
Sending count = 1024 of sendtype MPI_UINT64_T of total size 8192 bytes
Sending count = 1024 of sendtype MPI_FLOAT of total size 4096 bytes
Sending count = 1024 of sendtype MPI_INT of total size 4096 bytes
Sending count = 2048 of sendtype MPI_INT of total size 8192 bytes
Sending count = 2048 of sendtype MPI_DOUBLE of total size 16384 bytes
Sending count = 2048 of sendtype MPI_FLOAT_INT of total size 16384 bytes
Sending count = 2048 of sendtype dup of MPI_INT of total size 8192 bytes
Sending count = 2048 of sendtype int-vector of total size 16777216 bytes
Sending count = 2048 of sendtype int-indexed(4-int) of total size 67108864 bytes
Sending count = 2048 of sendtype int-indexed(2 blocks) of total size 16777216 bytes
Sending count = 2048 of sendtype MPI_INT of total size 8192 bytes
Sending count = 2048 of sendtype MPI_SHORT of total size 4096 bytes
Sending count = 2048 of sendtype MPI_LONG of total size 16384 bytes
Sending count = 2048 of sendtype MPI_CHAR of total size 2048 bytes
Sending count = 2048 of sendtype MPI_UINT64_T of total size 16384 bytes
Sending count = 2048 of sendtype MPI_FLOAT of total size 8192 bytes
Sending count = 2048 of sendtype MPI_INT of total size 8192 bytes
No errors

Passed Preposted receive - sendself

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 1

Test Description:

Test root sending to self with a preposted receive for a selection of datatypes and increasing array sizes. Includes tests for MPI_Send(), MPI_Ssend(), and MPI_Rsend().
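
For illustration, a minimal C sketch of the preposted-receive pattern this test exercises (not the suite's actual source; the count, tag, and buffer contents are arbitrary choices here):

    /* Hypothetical sketch: prepost a receive from self, then send to self. */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int rank, recvbuf[4] = {0}, sendbuf[4] = {1, 2, 3, 4};
        MPI_Request req;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        /* The receive is posted first, so even a synchronous send to self completes. */
        MPI_Irecv(recvbuf, 4, MPI_INT, rank, 0, MPI_COMM_WORLD, &req);
        MPI_Send(sendbuf, 4, MPI_INT, rank, 0, MPI_COMM_WORLD);
        MPI_Wait(&req, MPI_STATUS_IGNORE);

        if (rank == 0) printf("No errors\n");
        MPI_Finalize();
        return 0;
    }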

No errors

Passed Race condition - sendflood

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 8

Test Description:

Repeatedly sends messages to the root from all other processes. Run this test with 8 processes. This test was submitted as a result of problems seen with the ch3:shm device on a Solaris system. The symptom is that the test hangs; this is caused by a lost message, probably the result of a race condition in a message-queue update.
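
As a rough illustration of the traffic pattern (not the actual test source; the message count and tag are arbitrary), every non-root rank repeatedly sends small messages that the root drains with MPI_ANY_SOURCE:

    /* Hypothetical sketch: all non-root ranks flood rank 0 with small messages. */
    #include <mpi.h>

    #define NMSGS 100   /* arbitrary flood length */

    int main(int argc, char **argv)
    {
        int rank, size, i, v = 0;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        if (rank == 0) {
            /* Drain every message from every sender, in whatever order they arrive. */
            for (i = 0; i < NMSGS * (size - 1); i++)
                MPI_Recv(&v, 1, MPI_INT, MPI_ANY_SOURCE, 0, MPI_COMM_WORLD,
                         MPI_STATUS_IGNORE);
        } else {
            for (i = 0; i < NMSGS; i++)
                MPI_Send(&i, 1, MPI_INT, 0, 0, MPI_COMM_WORLD);
        }
        MPI_Finalize();
        return 0;
    }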

No errors

Passed Sendrecv from/to - self

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 1

Test Description:

This test uses MPI_Sendrecv() with rank 0 as both the source and the destination. Includes a test of MPI_Sendrecv_replace().
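
A minimal sketch of the self-send pattern, assuming an arbitrary payload and tag (not the suite's source):

    /* Hypothetical sketch: MPI_Sendrecv with self as both destination and source. */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int rank, sendbuf = 42, recvbuf = 0;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        /* One call performs both the send and the matching receive. */
        MPI_Sendrecv(&sendbuf, 1, MPI_INT, rank, 0,
                     &recvbuf, 1, MPI_INT, rank, 0,
                     MPI_COMM_WORLD, MPI_STATUS_IGNORE);

        if (rank == 0 && recvbuf == sendbuf) printf("No errors\n");
        MPI_Finalize();
        return 0;
    }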

No errors.

Passed Simple thread finalize - initth

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 1

Test Description:

This is a simple test that checks that MPI_Finalize() exits cleanly; the only action is to report that no errors occurred.

No errors

Passed Simple thread initialize - initth2

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 2

Test Description:

The test initializes a thread, then calls MPI_Finalize() and prints "No errors".
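
A minimal sketch of the same initialization/finalization sequence; the requested thread level is an arbitrary choice here:

    /* Hypothetical sketch: threaded initialization followed by a clean finalize. */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int provided, rank;
        MPI_Init_thread(&argc, &argv, MPI_THREAD_MULTIPLE, &provided);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        /* 'provided' may be lower than requested; the test only requires that
           initialization and finalization complete without error. */
        if (rank == 0) printf("No errors\n");
        MPI_Finalize();
        return 0;
    }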

No errors

Communicator Testing - Score: 100% Passed

This group features tests that emphasize MPI calls that create, manipulate, and delete MPI Communicators.

Passed Comm creation comprehensive - commcreate1

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 8

Test Description:

Check that Communicators can be created from various subsets of the processes in the communicator. Uses MPI_Comm_group(), MPI_Group_range_incl(), and MPI_Comm_dup() to create new communicators.
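
For illustration, a hypothetical sketch of building one such subgroup with MPI_Group_range_incl() and turning it into a communicator; the even-rank range chosen here is arbitrary and is not the suite's source:

    /* Hypothetical sketch: include every other rank with MPI_Group_range_incl,
       then create a communicator from the resulting group. */
    #include <mpi.h>

    int main(int argc, char **argv)
    {
        int size, ranges[1][3];
        MPI_Group world_group, even_group;
        MPI_Comm even_comm;

        MPI_Init(&argc, &argv);
        MPI_Comm_size(MPI_COMM_WORLD, &size);
        MPI_Comm_group(MPI_COMM_WORLD, &world_group);

        ranges[0][0] = 0;          /* first rank in the range  */
        ranges[0][1] = size - 1;   /* last rank in the range   */
        ranges[0][2] = 2;          /* stride: every other rank */
        MPI_Group_range_incl(world_group, 1, ranges, &even_group);

        /* Collective over MPI_COMM_WORLD; ranks outside the group get MPI_COMM_NULL. */
        MPI_Comm_create(MPI_COMM_WORLD, even_group, &even_comm);

        if (even_comm != MPI_COMM_NULL) MPI_Comm_free(&even_comm);
        MPI_Group_free(&even_group);
        MPI_Group_free(&world_group);
        MPI_Finalize();
        return 0;
    }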

Creating groups
Creating groups
Creating groups
Creating groups
Creating groups
Creating groups
Creating groups
Creating groups
Testing comm MPI_COMM_WORLD from ghigh
Testing comm MPI_COMM_WORLD from ghigh
Testing comm MPI_COMM_WORLD from ghigh
Testing comm MPI_COMM_WORLD from ghigh
Testing comm MPI_COMM_WORLD from ghigh
Testing comm MPI_COMM_WORLD from ghigh
Testing comm MPI_COMM_WORLD from ghigh
Testing comm MPI_COMM_WORLD from godd
Testing comm MPI_COMM_WORLD from geven
Done testing comm MPI_COMM_WORLD from geven
Testing comm Dup of world from ghigh
Testing comm Dup of world from godd
Testing comm Dup of world from geven
Done testing comm Dup of world from geven
Testing comm MPI_COMM_WORLD from godd+geven
Done testing comm MPI_COMM_WORLD from godd+geven
Testing comm Dup of world from godd+geven
Done testing comm Dup of world from godd+geven
Testing comm MPI_COMM_WORLD from ghigh
Done testing comm MPI_COMM_WORLD from ghigh
Testing comm MPI_COMM_WORLD from godd
Testing comm MPI_COMM_WORLD from godd
Done testing comm MPI_COMM_WORLD from godd
Testing comm MPI_COMM_WORLD from geven
Testing comm Dup of world from ghigh
Testing comm Dup of world from godd
Done testing comm Dup of world from godd
Testing comm Dup of world from geven
Testing comm MPI_COMM_WORLD from godd+geven
Done testing comm MPI_COMM_WORLD from godd+geven
Testing comm Dup of world from godd+geven
Done testing comm Dup of world from godd+geven
Testing comm MPI_COMM_WORLD from MPI_GROUP_EMPTY
Testing comm Dup of world from MPI_GROUP_EMPTY
Testing comm MPI_COMM_WORLD from godd
Testing comm MPI_COMM_WORLD from geven
Done testing comm MPI_COMM_WORLD from geven
Testing comm Dup of world from ghigh
Testing comm Dup of world from godd
Testing comm Dup of world from geven
Done testing comm Dup of world from geven
Testing comm MPI_COMM_WORLD from godd+geven
Done testing comm MPI_COMM_WORLD from godd+geven
Testing comm Dup of world from godd+geven
Testing comm MPI_COMM_WORLD from godd
Testing comm MPI_COMM_WORLD from geven
Done testing comm MPI_COMM_WORLD from geven
Testing comm Dup of world from ghigh
Testing comm Dup of world from godd
Testing comm Dup of world from geven
Done testing comm Dup of world from geven
Testing comm MPI_COMM_WORLD from godd+geven
Done testing comm MPI_COMM_WORLD from godd+geven
Testing comm Dup of world from godd+geven
Done testing comm Dup of world from godd+geven
Testing comm MPI_COMM_WORLD from MPI_GROUP_EMPTY
Testing comm Dup of world from MPI_GROUP_EMPTY
Done testing comm MPI_COMM_WORLD from ghigh
Testing comm MPI_COMM_WORLD from godd
Done testing comm MPI_COMM_WORLD from godd
Testing comm MPI_COMM_WORLD from geven
Testing comm Dup of world from ghigh
Done testing comm Dup of world from ghigh
Testing comm Dup of world from godd
Done testing comm Dup of world from godd
Testing comm Dup of world from geven
Testing comm MPI_COMM_WORLD from godd+geven
Done testing comm MPI_COMM_WORLD from godd+geven
Testing comm Dup of world from godd+geven
Done testing comm Dup of world from godd+geven
Testing comm MPI_COMM_WORLD from MPI_GROUP_EMPTY
Testing comm Dup of world from MPI_GROUP_EMPTY
Testing comm MPI_COMM_WORLD from godd
Done testing comm MPI_COMM_WORLD from godd
Testing comm MPI_COMM_WORLD from geven
Testing comm Dup of world from ghigh
Testing comm Dup of world from godd
Done testing comm Dup of world from godd
Testing comm Dup of world from geven
Testing comm MPI_COMM_WORLD from godd+geven
Done testing comm MPI_COMM_WORLD from godd+geven
Testing comm Dup of world from godd+geven
Done testing comm Dup of world from godd+geven
Testing comm MPI_COMM_WORLD from MPI_GROUP_EMPTY
Testing comm Dup of world from MPI_GROUP_EMPTY
Testing comm MPI_COMM_WORLD from geven
Done testing comm MPI_COMM_WORLD from geven
Testing comm Dup of world from ghigh
Done testing comm Dup of world from ghigh
Testing comm Dup of world from godd
Testing comm Dup of world from geven
Done testing comm Dup of world from geven
Testing comm MPI_COMM_WORLD from godd+geven
Done testing comm MPI_COMM_WORLD from godd+geven
Testing comm Dup of world from godd+geven
Done testing comm Dup of world from godd+geven
Testing comm MPI_COMM_WORLD from MPI_GROUP_EMPTY
Testing comm Dup of world from MPI_GROUP_EMPTY
Testing comm MPI_COMM_WORLD from MPI_GROUP_EMPTY
Testing comm Dup of world from MPI_GROUP_EMPTY
No errors
Done testing comm MPI_COMM_WORLD from ghigh
Testing comm MPI_COMM_WORLD from godd
Done testing comm MPI_COMM_WORLD from godd
Testing comm MPI_COMM_WORLD from geven
Testing comm Dup of world from ghigh
Done testing comm Dup of world from ghigh
Testing comm Dup of world from godd
Done testing comm Dup of world from godd
Testing comm Dup of world from geven
Testing comm MPI_COMM_WORLD from godd+geven
Done testing comm MPI_COMM_WORLD from godd+geven
Testing comm Dup of world from godd+geven
Done testing comm Dup of world from godd+geven
Testing comm MPI_COMM_WORLD from MPI_GROUP_EMPTY
Testing comm Dup of world from MPI_GROUP_EMPTY
Done testing comm Dup of world from godd+geven
Testing comm MPI_COMM_WORLD from MPI_GROUP_EMPTY
Testing comm Dup of world from MPI_GROUP_EMPTY

Passed Comm_create group tests - icgroup

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 8

Test Description:

Simple test that gets the group of an intercommunicator and checks it with MPI_Group_rank() and MPI_Group_size(), over a selection of intercommunicators.

No errors

Passed Comm_create intercommunicators - iccreate

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 8

Test Description:

This program tests MPI_Comm_create() using a selection of intercommunicators. Creates a new communicator from an intercommunicator, duplicates the communicator, and verifies that it works. Includes test with one side of intercommunicator being set with MPI_GROUP_EMPTY.

Creating a new intercomm 0-0
Creating a new intercomm 0-0
Creating a new intercomm (manual dup)
Creating a new intercomm 0-0
Creating a new intercomm (manual dup)
Creating a new intercomm 0-0
Testing communication on intercomm 'Single rank in each group', remote_size=1
Creating a new intercomm 0-0
Creating a new intercomm (manual dup)
Creating a new intercomm 0-0
Creating a new intercomm (manual dup)
Creating a new intercomm 0-0
Creating a new intercomm (manual dup)
Testing communication on intercomm 'Single rank in each group', remote_size=1
Creating a new intercomm 0-0
Creating a new intercomm (manual dup)
isends posted, about to recv
isends posted, about to recv
my recvs completed, about to waitall
Creating a new intercomm (manual dup)
my recvs completed, about to waitall
Creating a new intercomm (manual dup)
Creating a new intercomm (manual dup (done))
Result of comm/intercomm compare is 1
Testing communication on intercomm 'Dup of original', remote_size=4
Creating a new intercomm (manual dup (done))
Result of comm/intercomm compare is 1
Testing communication on intercomm 'Dup of original', remote_size=4
Creating a new intercomm (manual dup (done))
Result of comm/intercomm compare is 1
Testing communication on intercomm 'Dup of original', remote_size=4
Creating a new intercomm (manual dup (done))
Result of comm/intercomm compare is 1
Testing communication on intercomm 'Dup of original', remote_size=4
Creating a new intercomm (manual dup (done))
Result of comm/intercomm compare is 1
Testing communication on intercomm 'Dup of original', remote_size=4
Creating a new intercomm (manual dup (done))
Result of comm/intercomm compare is 1
Testing communication on intercomm 'Dup of original', remote_size=4
Creating a new intercomm (manual dup (done))
Result of comm/intercomm compare is 1
Testing communication on intercomm 'Dup of original', remote_size=4
Creating a new intercomm (manual dup (done))
Result of comm/intercomm compare is 1
Testing communication on intercomm 'Dup of original', remote_size=4
isends posted, about to recv
my recvs completed, about to waitall
isends posted, about to recv
my recvs completed, about to waitall
isends posted, about to recv
my recvs completed, about to waitall
isends posted, about to recv
my recvs completed, about to waitall
isends posted, about to recv
isends posted, about to recv
my recvs completed, about to waitall
isends posted, about to recv
my recvs completed, about to waitall
my recvs completed, about to waitall
isends posted, about to recv
my recvs completed, about to waitall
Creating a new intercomm 0-0
Creating a new intercomm 0-0
Creating a new intercomm 0-0
Creating a new intercomm 0-0
Testing communication on intercomm 'Single rank in each group', remote_size=1
isends posted, about to recv
Creating a new intercomm 0-0
Creating a new intercomm 0-0
Creating a new intercomm 0-0
Creating a new intercomm 0-0
Testing communication on intercomm 'Single rank in each group', remote_size=1
isends posted, about to recv
Creating a new intercomm (manual dup)
Creating a new intercomm (manual dup)
Creating a new intercomm (manual dup)
Creating a new intercomm (manual dup)
Creating a new intercomm (manual dup)
Creating a new intercomm (manual dup)
my recvs completed, about to waitall
Creating a new intercomm (manual dup)
Creating a new intercomm (manual dup (done))
Result of comm/intercomm compare is 1
Testing communication on intercomm 'Dup of original', remote_size=1
isends posted, about to recv
my recvs completed, about to waitall
my recvs completed, about to waitall
Creating a new intercomm (manual dup)
Creating a new intercomm (manual dup (done))
Result of comm/intercomm compare is 1
Testing communication on intercomm 'Dup of original', remote_size=1
isends posted, about to recv
my recvs completed, about to waitall
Creating a new intercomm (manual dup (done))
Result of comm/intercomm compare is 1
Testing communication on intercomm 'Dup of original', remote_size=7
isends posted, about to recv
Creating a new intercomm (manual dup (done))
Result of comm/intercomm compare is 1
Testing communication on intercomm 'Dup of original', remote_size=1
isends posted, about to recv
my recvs completed, about to waitall
Creating a new intercomm (manual dup (done))
Result of comm/intercomm compare is 1
Testing communication on intercomm 'Dup of original', remote_size=1
isends posted, about to recv
my recvs completed, about to waitall
Creating a new intercomm (manual dup (done))
Result of comm/intercomm compare is 1
Testing communication on intercomm 'Dup of original', remote_size=1
isends posted, about to recv
my recvs completed, about to waitall
Creating a new intercomm (manual dup (done))
Result of comm/intercomm compare is 1
Testing communication on intercomm 'Dup of original', remote_size=1
isends posted, about to recv
my recvs completed, about to waitall
Creating a new intercomm 0-0
Creating a new intercomm (manual dup (done))
Result of comm/intercomm compare is 1
Testing communication on intercomm 'Dup of original', remote_size=1
isends posted, about to recv
my recvs completed, about to waitall
Creating a new intercomm 0-0
my recvs completed, about to waitall
Creating a new intercomm 0-0
Creating a new intercomm 0-0
Creating a new intercomm 0-0
Creating a new intercomm 0-0
Creating a new intercomm (manual dup)
Creating a new intercomm 0-0
Creating a new intercomm (manual dup)
Creating a new intercomm (manual dup (done))
Result of comm/intercomm compare is 1
Testing communication on intercomm 'Dup of original', remote_size=2
isends posted, about to recv
my recvs completed, about to waitall
Creating a new intercomm 0-0
Creating a new intercomm (manual dup)
Creating a new intercomm (manual dup (done))
Result of comm/intercomm compare is 1
Testing communication on intercomm 'Dup of original', remote_size=2
isends posted, about to recv
my recvs completed, about to waitall
Testing communication on intercomm 'Single rank in each group', remote_size=1
isends posted, about to recv
Creating a new intercomm (manual dup (done))
Result of comm/intercomm compare is 1
Testing communication on intercomm 'Dup of original', remote_size=2
isends posted, about to recv
my recvs completed, about to waitall
Creating a new intercomm (manual dup)
Creating a new intercomm (manual dup)
Creating a new intercomm (manual dup (done))
Result of comm/intercomm compare is 1
Testing communication on intercomm 'Dup of original', remote_size=2
isends posted, about to recv
my recvs completed, about to waitall
my recvs completed, about to waitall
Creating a new intercomm (manual dup)
Creating a new intercomm 0-0
Testing communication on intercomm 'Single rank in each group', remote_size=1
isends posted, about to recv
my recvs completed, about to waitall
Creating a new intercomm (manual dup)
Creating a new intercomm (manual dup (done))
Result of comm/intercomm compare is 1
Testing communication on intercomm 'Dup of original', remote_size=2
isends posted, about to recv
my recvs completed, about to waitall
Creating a new intercomm 0-0
Creating a new intercomm (manual dup)
Creating a new intercomm (manual dup (done))
Result of comm/intercomm compare is 1
Testing communication on intercomm 'Dup of original', remote_size=2
isends posted, about to recv
my recvs completed, about to waitall
Creating a new intercomm 0-0
Creating a new intercomm (manual dup (done))
Result of comm/intercomm compare is 1
Testing communication on intercomm 'Dup of original', remote_size=6
isends posted, about to recv
my recvs completed, about to waitall
Creating a new intercomm 0-0
Creating a new intercomm (manual dup (done))
Result of comm/intercomm compare is 1
Testing communication on intercomm 'Dup of original', remote_size=6
isends posted, about to recv
my recvs completed, about to waitall
Testing communication on intercomm 'Single rank in each group', remote_size=1
isends posted, about to recv
my recvs completed, about to waitall
Creating a new intercomm (manual dup)
Creating a new intercomm (manual dup (done))
Result of comm/intercomm compare is 1
Testing communication on intercomm 'Dup of original', remote_size=4
isends posted, about to recv
my recvs completed, about to waitall
Creating a new intercomm 0-0
Creating a new intercomm (manual dup)
Creating a new intercomm (manual dup (done))
Result of comm/intercomm compare is 1
Testing communication on intercomm 'Dup of original', remote_size=4
isends posted, about to recv
my recvs completed, about to waitall
Creating a new intercomm 0-0
Creating a new intercomm (manual dup)
Creating a new intercomm (manual dup (done))
Result of comm/intercomm compare is 1
Testing communication on intercomm 'Dup of original', remote_size=4
isends posted, about to recv
my recvs completed, about to waitall
Creating a new intercomm 0-0
Creating a new intercomm (manual dup)
Creating a new intercomm (manual dup (done))
Result of comm/intercomm compare is 1
Testing communication on intercomm 'Dup of original', remote_size=4
isends posted, about to recv
my recvs completed, about to waitall
Creating a new intercomm 0-0
Creating a new intercomm (manual dup)
Creating a new intercomm 0-0
Testing communication on intercomm 'Single rank in each group', remote_size=1
isends posted, about to recv
my recvs completed, about to waitall
Creating a new intercomm (manual dup)
Creating a new intercomm (manual dup (done))
Result of comm/intercomm compare is 1
Testing communication on intercomm 'Dup of original', remote_size=4
isends posted, about to recv
my recvs completed, about to waitall
Creating a new intercomm 0-0
Creating a new intercomm (manual dup)
Creating a new intercomm (manual dup (done))
Result of comm/intercomm compare is 1
Testing communication on intercomm 'Dup of original', remote_size=4
isends posted, about to recv
my recvs completed, about to waitall
Creating a new intercomm 0-0
Creating a new intercomm (manual dup)
Creating a new intercomm (manual dup (done))
Result of comm/intercomm compare is 1
Testing communication on intercomm 'Dup of original', remote_size=4
isends posted, about to recv
my recvs completed, about to waitall
Creating a new intercomm 0-0
Creating a new intercomm (manual dup (done))
Result of comm/intercomm compare is 1
Testing communication on intercomm 'Dup of original', remote_size=4
isends posted, about to recv
my recvs completed, about to waitall
Testing communication on intercomm 'Single rank in each group', remote_size=1
isends posted, about to recv
my recvs completed, about to waitall
Creating a new intercomm (manual dup)
Creating a new intercomm (manual dup (done))
Result of comm/intercomm compare is 1
Testing communication on intercomm 'Dup of original', remote_size=4
isends posted, about to recv
my recvs completed, about to waitall
Creating a new intercomm 0-0
Creating a new intercomm (manual dup)
Creating a new intercomm (manual dup (done))
Result of comm/intercomm compare is 1
Testing communication on intercomm 'Dup of original', remote_size=4
isends posted, about to recv
my recvs completed, about to waitall
Creating a new intercomm 0-0
Creating a new intercomm (manual dup)
Creating a new intercomm (manual dup (done))
Result of comm/intercomm compare is 1
Testing communication on intercomm 'Dup of original', remote_size=4
isends posted, about to recv
my recvs completed, about to waitall
Creating a new intercomm 0-0
Creating a new intercomm (manual dup)
Creating a new intercomm (manual dup (done))
Result of comm/intercomm compare is 1
Testing communication on intercomm 'Dup of original', remote_size=4
isends posted, about to recv
my recvs completed, about to waitall
Creating a new intercomm 0-0
Creating a new intercomm 0-0
Creating a new intercomm 0-0
Testing communication on intercomm 'Single rank in each group', remote_size=1
isends posted, about to recv
my recvs completed, about to waitall
Creating a new intercomm (manual dup)
Creating a new intercomm (manual dup (done))
Result of comm/intercomm compare is 1
Testing communication on intercomm 'Dup of original', remote_size=4
isends posted, about to recv
my recvs completed, about to waitall
Creating a new intercomm 0-0
Creating a new intercomm (manual dup)
Creating a new intercomm (manual dup (done))
Result of comm/intercomm compare is 1
Testing communication on intercomm 'Dup of original', remote_size=3
isends posted, about to recv
my recvs completed, about to waitall
Creating a new intercomm (manual dup)
Creating a new intercomm (manual dup (done))
Result of comm/intercomm compare is 1
Testing communication on intercomm 'Dup of original', remote_size=4
isends posted, about to recv
my recvs completed, about to waitall
Creating a new intercomm 0-0
Creating a new intercomm (manual dup)
Creating a new intercomm (manual dup (done))
Result of comm/intercomm compare is 1
Testing communication on intercomm 'Dup of original', remote_size=3
isends posted, about to recv
my recvs completed, about to waitall
Creating a new intercomm (manual dup)
Creating a new intercomm (manual dup (done))
Result of comm/intercomm compare is 1
Testing communication on intercomm 'Dup of original', remote_size=4
isends posted, about to recv
my recvs completed, about to waitall
Testing communication on intercomm 'Single rank in each group', remote_size=1
isends posted, about to recv
my recvs completed, about to waitall
Creating a new intercomm (manual dup)
Creating a new intercomm (manual dup (done))
Result of comm/intercomm compare is 1
Testing communication on intercomm 'Dup of original', remote_size=3
isends posted, about to recv
my recvs completed, about to waitall
Creating a new intercomm (manual dup)
Creating a new intercomm (manual dup (done))
Result of comm/intercomm compare is 1
Testing communication on intercomm 'Dup of original', remote_size=4
isends posted, about to recv
my recvs completed, about to waitall
Creating a new intercomm (manual dup)
Creating a new intercomm (manual dup (done))
Result of comm/intercomm compare is 1
Testing communication on intercomm 'Dup of original', remote_size=3
isends posted, about to recv
my recvs completed, about to waitall
Creating a new intercomm 0-0
Creating a new intercomm 0-0
Creating a new intercomm 0-0
Creating a new intercomm 0-0
Creating a new intercomm 0-0
Testing communication on intercomm 'Single rank in each group', remote_size=1
isends posted, about to recv
my recvs completed, about to waitall
Creating a new intercomm (manual dup)
Creating a new intercomm (manual dup (done))
Result of comm/intercomm compare is 1
Testing communication on intercomm 'Dup of original', remote_size=3
isends posted, about to recv
my recvs completed, about to waitall
Testing communication on intercomm 'Single rank in each group', remote_size=1
isends posted, about to recv
my recvs completed, about to waitall
Creating a new intercomm (manual dup)
Creating a new intercomm (manual dup)
Creating a new intercomm (manual dup (done))
Result of comm/intercomm compare is 1
Testing communication on intercomm 'Dup of original', remote_size=3
isends posted, about to recv
my recvs completed, about to waitall
Creating a new intercomm (manual dup)
Creating a new intercomm (manual dup (done))
Result of comm/intercomm compare is 1
Testing communication on intercomm 'Dup of original', remote_size=4
isends posted, about to recv
my recvs completed, about to waitall
Creating a new intercomm 0-0
Creating a new intercomm (manual dup)
Creating a new intercomm (manual dup (done))
Result of comm/intercomm compare is 1
Testing communication on intercomm 'Dup of original', remote_size=3
isends posted, about to recv
my recvs completed, about to waitall
Creating a new intercomm (manual dup)
Creating a new intercomm (manual dup (done))
Result of comm/intercomm compare is 1
Testing communication on intercomm 'Dup of original', remote_size=4
isends posted, about to recv
my recvs completed, about to waitall
Creating a new intercomm (manual dup (done))
Result of comm/intercomm compare is 1
Testing communication on intercomm 'Dup of original', remote_size=4
isends posted, about to recv
my recvs completed, about to waitall
Creating a new intercomm 0-0
Creating a new intercomm 0-0
Creating a new intercomm 0-0
Testing communication on intercomm 'Single rank in each group', remote_size=1
isends posted, about to recv
Creating a new intercomm (manual dup)
Creating a new intercomm (manual dup (done))
Result of comm/intercomm compare is 1
Testing communication on intercomm 'Dup of original', remote_size=3
isends posted, about to recv
my recvs completed, about to waitall
Creating a new intercomm (manual dup)
Creating a new intercomm (manual dup (done))
Result of comm/intercomm compare is 1
Testing communication on intercomm 'Dup of original', remote_size=3
isends posted, about to recv
my recvs completed, about to waitall
No errors
my recvs completed, about to waitall
Creating a new intercomm (manual dup)
Creating a new intercomm (manual dup (done))
Result of comm/intercomm compare is 1
Testing communication on intercomm 'Dup of original', remote_size=3
isends posted, about to recv
my recvs completed, about to waitall

Passed Comm_create_group excl 4 rank - comm_create_group4

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 4

Test Description:

Using 4 processes, this test creates a group containing the even-ranked processes with MPI_Group_excl() and uses that group to create a communicator. Both the communicator and the group are then freed.
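
A hypothetical sketch of this pattern (not the suite's source): exclude the odd ranks and create the communicator with MPI_Comm_create_group(); the tag value 0 is an arbitrary choice:

    /* Hypothetical sketch: drop the odd ranks with MPI_Group_excl, then build a
       communicator for the remaining ranks with MPI_Comm_create_group. */
    #include <mpi.h>
    #include <stdlib.h>

    int main(int argc, char **argv)
    {
        int rank, size, i, nexcl = 0, *excl;
        MPI_Group world_group, even_group;
        MPI_Comm even_comm;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        excl = malloc((size / 2 + 1) * sizeof(int));
        for (i = 1; i < size; i += 2) excl[nexcl++] = i;   /* odd ranks to exclude */

        MPI_Comm_group(MPI_COMM_WORLD, &world_group);
        MPI_Group_excl(world_group, nexcl, excl, &even_group);

        /* Unlike MPI_Comm_create, this is collective only over the group members. */
        if (rank % 2 == 0) {
            MPI_Comm_create_group(MPI_COMM_WORLD, even_group, 0, &even_comm);
            MPI_Comm_free(&even_comm);
        }
        MPI_Group_free(&even_group);
        MPI_Group_free(&world_group);
        free(excl);
        MPI_Finalize();
        return 0;
    }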

No errors

Passed Comm_create_group excl 8 rank - comm_create_group8

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 8

Test Description:

Using 8 processes, this test creates a group containing the even-ranked processes with MPI_Group_excl() and uses that group to create a communicator. Both the communicator and the group are then freed.

No errors

Passed Comm_create_group incl 2 rank - comm_group_half2

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 2

Test Description:

Using 2 processes, this test creates a group containing the ranks less than size/2 with MPI_Group_range_incl() and uses that group to create a communicator. Both the communicator and the group are then freed.

No errors

Passed Comm_create_group incl 4 rank - comm_group_half4

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 4

Test Description:

Using 4 processes, this test creates a group containing the ranks less than size/2 with MPI_Group_range_incl() and uses that group to create a communicator. Both the communicator and the group are then freed.

No errors

Passed Comm_create_group incl 8 rank - comm_group_half8

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 8

Test Description:

Using 8 processes, this test creates a group containing the ranks less than size/2 with MPI_Group_range_incl() and uses that group to create a communicator. Both the communicator and the group are then freed.

No errors

Passed Comm_create_group random 2 rank - comm_group_rand2

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 2

Test Description:

Using 2 processes, this test creates and frees groups by randomly adding processes to a group and then creating a communicator from that group.

No errors

Passed Comm_create_group random 4 rank - comm_group_rand4

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 4

Test Description:

Using 4 processes, this test creates and frees groups by randomly adding processes to a group and then creating a communicator from that group.

No errors

Passed Comm_create_group random 8 rank - comm_group_rand8

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 8

Test Description:

Using 8 processes, this test creates and frees groups by randomly adding processes to a group and then creating a communicator from that group.

No errors

Passed Comm_dup basic - dup

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 2

Test Description:

This test exercises MPI_Comm_dup() by duplicating a communicator, checking basic properties, and communicating with this new communicator.
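
A minimal sketch of the duplication pattern (not the suite's source); the broadcast is just an arbitrary sanity check on the new communicator:

    /* Hypothetical sketch: duplicate MPI_COMM_WORLD and communicate over the copy. */
    #include <mpi.h>

    int main(int argc, char **argv)
    {
        int rank, val;
        MPI_Comm dup;

        MPI_Init(&argc, &argv);
        MPI_Comm_dup(MPI_COMM_WORLD, &dup);   /* collective over MPI_COMM_WORLD */
        MPI_Comm_rank(dup, &rank);

        /* Same group as the original, but a separate communication context. */
        val = (rank == 0) ? 17 : 0;
        MPI_Bcast(&val, 1, MPI_INT, 0, dup);

        MPI_Comm_free(&dup);
        MPI_Finalize();
        return 0;
    }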

No errors

Passed Comm_dup contexts - dupic

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 4

Test Description:

Check that communicators have separate contexts. We do this by setting up non-blocking receives on two communicators and then sending to them. If the contexts are different, tests on the unsatisfied communicator should indicate no available message. Tested using a selection of intercommunicators.
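
A simplified two-rank intracommunicator sketch of the same idea (the actual test also covers intercommunicators); the tags and payloads are arbitrary. The receive posted on the duplicate must not match a message sent on MPI_COMM_WORLD:

    /* Hypothetical sketch (2 ranks): a receive posted on a duplicated communicator
       is not matched by a message sent on MPI_COMM_WORLD. */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int rank, flag, dupbuf = -1, worldbuf = -1, msg = 7, go = 1;
        MPI_Comm dup;
        MPI_Request rreq;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_dup(MPI_COMM_WORLD, &dup);

        if (rank == 0) {
            MPI_Irecv(&dupbuf, 1, MPI_INT, 1, 0, dup, &rreq);        /* on the dup   */
            MPI_Recv(&worldbuf, 1, MPI_INT, 1, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);                             /* world msg in */
            MPI_Test(&rreq, &flag, MPI_STATUS_IGNORE);
            if (flag) printf("Error: message crossed communicator contexts\n");
            MPI_Send(&go, 1, MPI_INT, 1, 1, MPI_COMM_WORLD);         /* allow dup send */
            MPI_Wait(&rreq, MPI_STATUS_IGNORE);
        } else if (rank == 1) {
            MPI_Send(&msg, 1, MPI_INT, 0, 0, MPI_COMM_WORLD);
            MPI_Recv(&go, 1, MPI_INT, 0, 1, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            MPI_Send(&msg, 1, MPI_INT, 0, 0, dup);
        }

        MPI_Comm_free(&dup);
        MPI_Finalize();
        return 0;
    }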

No errors

Passed Comm_idup 2 rank - comm_idup2

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 2

Test Description:

Multiple tests using 2 processes that make rank 0 wait in a blocking receive until all other processes have called MPI_Comm_idup(), then call idup afterwards. Should ensure that idup doesn't deadlock. Includes a test using an intercommunicator.
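
A minimal sketch of the nonblocking duplication itself (the blocking-receive choreography of the actual test is omitted):

    /* Hypothetical sketch: MPI_Comm_idup returns immediately; the new communicator
       is usable only after the request completes. */
    #include <mpi.h>

    int main(int argc, char **argv)
    {
        MPI_Comm newcomm;
        MPI_Request req;

        MPI_Init(&argc, &argv);

        MPI_Comm_idup(MPI_COMM_WORLD, &newcomm, &req);
        /* ... unrelated work could overlap with the duplication here ... */
        MPI_Wait(&req, MPI_STATUS_IGNORE);

        MPI_Barrier(newcomm);        /* the duplicate is now safe to use */
        MPI_Comm_free(&newcomm);
        MPI_Finalize();
        return 0;
    }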

No errors

Passed Comm_idup 4 rank - comm_idup4

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 4

Test Description:

Multiple tests using 4 processes that make rank 0 wait in a blocking receive until all other processes have called MPI_Comm_idup(), then call idup afterwards. Should ensure that idup doesn't deadlock. Includes a test using an intercommunicator.

No errors

Passed Comm_idup 9 rank - comm_idup9

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 9

Test Description:

Multiple tests using 9 processes that make rank 0 wait in a blocking receive until all other processes have called MPI_Comm_idup(), then call idup afterwards. Should ensure that idup doesn't deadlock. Includes a test using an intercommunicator.

No errors

Passed Comm_idup multi - comm_idup_mul

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 2

Test Description:

Simple test creating multiple communicators with MPI_Comm_idup.

No errors

Passed Comm_idup overlap - comm_idup_overlap

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 2

Test Description:

Each pair of processes uses MPI_Comm_idup() to dup the communicator such that the dups are overlapping. If this were done with MPI_Comm_dup(), it would deadlock.

No errors

Passed Comm_split basic - cmsplit

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 4

Test Description:

Simple test for MPI_Comm_split().
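
A minimal sketch of a split into two halves; the color rule and key are arbitrary choices and not the suite's source:

    /* Hypothetical sketch: split MPI_COMM_WORLD into two halves by color. */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int rank, size, color, newrank;
        MPI_Comm halfcomm;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        color = (rank < size / 2) ? 0 : 1;               /* lower half vs. upper half */
        MPI_Comm_split(MPI_COMM_WORLD, color, rank, &halfcomm);
        MPI_Comm_rank(halfcomm, &newrank);
        printf("world rank %d -> color %d, new rank %d\n", rank, color, newrank);

        MPI_Comm_free(&halfcomm);
        MPI_Finalize();
        return 0;
    }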

No errors

Passed Comm_split intercommunicators - icsplit

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 8

Test Description:

This tests MPI_Comm_split() using a selection of intercommunicators. The split communicator is tested using simple send and receive routines.

Created intercomm Intercomm by splitting MPI_COMM_WORLD
Created intercomm Intercomm by splitting MPI_COMM_WORLD
Created intercomm Intercomm by splitting MPI_COMM_WORLD
Created intercomm Intercomm by splitting MPI_COMM_WORLD
Testing communication on intercomm 
Created intercomm Intercomm by splitting MPI_COMM_WORLD
Testing communication on intercomm 
Testing communication on intercomm 
Created intercomm Intercomm by splitting MPI_COMM_WORLD
Testing communication on intercomm 
Created intercomm Intercomm by splitting MPI_COMM_WORLD
Testing communication on intercomm 
Created intercomm Intercomm by splitting MPI_COMM_WORLD
Testing communication on intercomm 
Testing communication on intercomm 
Testing communication on intercomm 
Created intercomm Intercomm by splitting MPI_COMM_WORLD into 1, rest
Created intercomm Intercomm by splitting MPI_COMM_WORLD into 1, rest
Created intercomm Intercomm by splitting MPI_COMM_WORLD into 1, rest
Created intercomm Intercomm by splitting MPI_COMM_WORLD into 1, rest
Created intercomm Intercomm by splitting MPI_COMM_WORLD into 1, rest
Created intercomm Intercomm by splitting MPI_COMM_WORLD into 1, rest
Created intercomm Intercomm by splitting MPI_COMM_WORLD into 1, rest
Created intercomm Intercomm by splitting MPI_COMM_WORLD into 1, rest
Testing communication on intercomm 
Testing communication on intercomm 
Testing communication on intercomm 
Testing communication on intercomm 
Testing communication on intercomm 
Created intercomm Intercomm by splitting MPI_COMM_WORLD into 2, rest
Created intercomm Intercomm by splitting MPI_COMM_WORLD into 2, rest
Created intercomm Intercomm by splitting MPI_COMM_WORLD into 2, rest
Created intercomm Intercomm by splitting MPI_COMM_WORLD into 2, rest
Testing communication on intercomm 
Created intercomm Intercomm by splitting MPI_COMM_WORLD into 2, rest
Testing communication on intercomm 
Testing communication on intercomm 
Created intercomm Intercomm by splitting MPI_COMM_WORLD into 2, rest
Testing communication on intercomm 
Created intercomm Intercomm by splitting MPI_COMM_WORLD then dup'ing
Created intercomm Intercomm by splitting MPI_COMM_WORLD into 2, rest
Created intercomm Intercomm by splitting MPI_COMM_WORLD then dup'ing
Testing communication on intercomm 
Created intercomm Intercomm by splitting MPI_COMM_WORLD into 2, rest
Created intercomm Intercomm by splitting MPI_COMM_WORLD then dup'ing
Testing communication on intercomm 
Testing communication on intercomm 
Testing communication on intercomm 
Testing communication on intercomm 
Created intercomm Intercomm by splitting MPI_COMM_WORLD then dup'ing
Testing communication on intercomm 
Testing communication on intercomm 
Created intercomm Intercomm by splitting MPI_COMM_WORLD then then splitting again
Testing communication on intercomm 
Created intercomm Intercomm by splitting MPI_COMM_WORLD then then splitting again
Created intercomm Intercomm by splitting MPI_COMM_WORLD then dup'ing
Created intercomm Intercomm by splitting MPI_COMM_WORLD then then splitting again
Created intercomm Intercomm by splitting MPI_COMM_WORLD then dup'ing
Testing communication on intercomm 
Created intercomm Intercomm by splitting MPI_COMM_WORLD then dup'ing
Testing communication on intercomm 
Created intercomm Intercomm by splitting MPI_COMM_WORLD then dup'ing
Testing communication on intercomm 
Testing communication on intercomm 
Created intercomm Intercomm by splitting MPI_COMM_WORLD then then splitting again
Testing communication on intercomm 
Testing communication on intercomm 
Created intercomm Intercomm by splitting MPI_COMM_WORLD (discarding rank 0 in the left group) then MPI_Intercomm_create'ing
Testing communication on intercomm 
Created intercomm Intercomm by splitting MPI_COMM_WORLD (discarding rank 0 in the left group) then MPI_Intercomm_create'ing
Testing communication on intercomm 
Created intercomm Intercomm by splitting MPI_COMM_WORLD (discarding rank 0 in the left group) then MPI_Intercomm_create'ing
Created intercomm Intercomm by splitting MPI_COMM_WORLD then then splitting again
Testing communication on intercomm 
Created intercomm Intercomm by splitting MPI_COMM_WORLD then then splitting again
Testing communication on intercomm 
Created intercomm Intercomm by splitting MPI_COMM_WORLD then then splitting again
Testing communication on intercomm 
Testing communication on intercomm 
Created intercomm Intercomm by splitting MPI_COMM_WORLD then then splitting again
Testing communication on intercomm 
Created intercomm Intercomm by splitting MPI_COMM_WORLD then discarding 0 ranks with MPI_Comm_create
Testing communication on intercomm 
Created intercomm Intercomm by splitting MPI_COMM_WORLD (discarding rank 0 in the left group) then MPI_Intercomm_create'ing
Created intercomm Intercomm by splitting MPI_COMM_WORLD then discarding 0 ranks with MPI_Comm_create
Testing communication on intercomm 
Testing communication on intercomm 
Created intercomm Intercomm by splitting MPI_COMM_WORLD (discarding rank 0 in the left group) then MPI_Intercomm_create'ing
Testing communication on intercomm 
No errors
Created intercomm Intercomm by splitting MPI_COMM_WORLD (discarding rank 0 in the left group) then MPI_Intercomm_create'ing
Testing communication on intercomm 
Testing communication on intercomm 
Created intercomm Intercomm by splitting MPI_COMM_WORLD (discarding rank 0 in the left group) then MPI_Intercomm_create'ing
Testing communication on intercomm 
Created intercomm Intercomm by splitting MPI_COMM_WORLD then discarding 0 ranks with MPI_Comm_create
Testing communication on intercomm 
Testing communication on intercomm 
Created intercomm Intercomm by splitting MPI_COMM_WORLD then discarding 0 ranks with MPI_Comm_create
Created intercomm Intercomm by splitting MPI_COMM_WORLD then discarding 0 ranks with MPI_Comm_create
Created intercomm Intercomm by splitting MPI_COMM_WORLD then discarding 0 ranks with MPI_Comm_create
Testing communication on intercomm 
Testing communication on intercomm 
Testing communication on intercomm

Passed Comm_split key order - cmsplit2

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 12

Test Description:

This test ensures that MPI_Comm_split breaks ties in key values by using the original rank in the input communicator. This typically corresponds to the difference between using a stable sort and an unstable sort. It checks all sizes from 1 to comm_size(world)-1, so a higher-level test driver does not need to run this test multiple times at different process counts.
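
A hypothetical sketch of the tie-breaking behavior being checked: all ranks use the same color, the key is the rank modulo an arbitrary modulus, and ranks that share a key should keep their original relative order:

    /* Hypothetical sketch: everyone uses color 0 and key = rank % modulus, so ranks
       with equal keys must keep their original order in the new communicator. */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int rank, newrank, modulus = 3;    /* the modulus is an arbitrary choice */
        MPI_Comm newcomm;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        MPI_Comm_split(MPI_COMM_WORLD, 0, rank % modulus, &newcomm);
        MPI_Comm_rank(newcomm, &newrank);

        /* Ordering is by key first, then by rank in the old communicator. */
        printf("old rank %d (key %d) -> new rank %d\n", rank, rank % modulus, newrank);

        MPI_Comm_free(&newcomm);
        MPI_Finalize();
        return 0;
    }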

modulus=1 oldranks={0} keys={0}
modulus=1 oldranks={0,1} keys={0,0}
modulus=2 oldranks={0,1} keys={0,1}
modulus=1 oldranks={0,1,2} keys={0,0,0}
modulus=2 oldranks={0,2,1} keys={0,1,0}
modulus=3 oldranks={0,1,2} keys={0,1,2}
modulus=1 oldranks={0,1,2,3} keys={0,0,0,0}
modulus=2 oldranks={0,2,1,3} keys={0,1,0,1}
modulus=3 oldranks={0,3,1,2} keys={0,1,2,0}
modulus=4 oldranks={0,1,2,3} keys={0,1,2,3}
modulus=1 oldranks={0,1,2,3,4} keys={0,0,0,0,0}
modulus=2 oldranks={0,2,4,1,3} keys={0,1,0,1,0}
modulus=3 oldranks={0,3,1,4,2} keys={0,1,2,0,1}
modulus=4 oldranks={0,4,1,2,3} keys={0,1,2,3,0}
modulus=5 oldranks={0,1,2,3,4} keys={0,1,2,3,4}
modulus=1 oldranks={0,1,2,3,4,5} keys={0,0,0,0,0,0}
modulus=2 oldranks={0,2,4,1,3,5} keys={0,1,0,1,0,1}
modulus=3 oldranks={0,3,1,4,2,5} keys={0,1,2,0,1,2}
modulus=4 oldranks={0,4,1,5,2,3} keys={0,1,2,3,0,1}
modulus=5 oldranks={0,5,1,2,3,4} keys={0,1,2,3,4,0}
modulus=6 oldranks={0,1,2,3,4,5} keys={0,1,2,3,4,5}
modulus=1 oldranks={0,1,2,3,4,5,6} keys={0,0,0,0,0,0,0}
modulus=2 oldranks={0,2,4,6,1,3,5} keys={0,1,0,1,0,1,0}
modulus=3 oldranks={0,3,6,1,4,2,5} keys={0,1,2,0,1,2,0}
modulus=4 oldranks={0,4,1,5,2,6,3} keys={0,1,2,3,0,1,2}
modulus=5 oldranks={0,5,1,6,2,3,4} keys={0,1,2,3,4,0,1}
modulus=6 oldranks={0,6,1,2,3,4,5} keys={0,1,2,3,4,5,0}
modulus=7 oldranks={0,1,2,3,4,5,6} keys={0,1,2,3,4,5,6}
modulus=1 oldranks={0,1,2,3,4,5,6,7} keys={0,0,0,0,0,0,0,0}
modulus=2 oldranks={0,2,4,6,1,3,5,7} keys={0,1,0,1,0,1,0,1}
modulus=3 oldranks={0,3,6,1,4,7,2,5} keys={0,1,2,0,1,2,0,1}
modulus=4 oldranks={0,4,1,5,2,6,3,7} keys={0,1,2,3,0,1,2,3}
modulus=5 oldranks={0,5,1,6,2,7,3,4} keys={0,1,2,3,4,0,1,2}
modulus=6 oldranks={0,6,1,7,2,3,4,5} keys={0,1,2,3,4,5,0,1}
modulus=7 oldranks={0,7,1,2,3,4,5,6} keys={0,1,2,3,4,5,6,0}
modulus=8 oldranks={0,1,2,3,4,5,6,7} keys={0,1,2,3,4,5,6,7}
modulus=1 oldranks={0,1,2,3,4,5,6,7,8} keys={0,0,0,0,0,0,0,0,0}
modulus=2 oldranks={0,2,4,6,8,1,3,5,7} keys={0,1,0,1,0,1,0,1,0}
modulus=3 oldranks={0,3,6,1,4,7,2,5,8} keys={0,1,2,0,1,2,0,1,2}
modulus=4 oldranks={0,4,8,1,5,2,6,3,7} keys={0,1,2,3,0,1,2,3,0}
modulus=5 oldranks={0,5,1,6,2,7,3,8,4} keys={0,1,2,3,4,0,1,2,3}
modulus=6 oldranks={0,6,1,7,2,8,3,4,5} keys={0,1,2,3,4,5,0,1,2}
modulus=7 oldranks={0,7,1,8,2,3,4,5,6} keys={0,1,2,3,4,5,6,0,1}
modulus=8 oldranks={0,8,1,2,3,4,5,6,7} keys={0,1,2,3,4,5,6,7,0}
modulus=9 oldranks={0,1,2,3,4,5,6,7,8} keys={0,1,2,3,4,5,6,7,8}
modulus=1 oldranks={0,1,2,3,4,5,6,7,8,9} keys={0,0,0,0,0,0,0,0,0,0}
modulus=2 oldranks={0,2,4,6,8,1,3,5,7,9} keys={0,1,0,1,0,1,0,1,0,1}
modulus=3 oldranks={0,3,6,9,1,4,7,2,5,8} keys={0,1,2,0,1,2,0,1,2,0}
modulus=4 oldranks={0,4,8,1,5,9,2,6,3,7} keys={0,1,2,3,0,1,2,3,0,1}
modulus=5 oldranks={0,5,1,6,2,7,3,8,4,9} keys={0,1,2,3,4,0,1,2,3,4}
modulus=6 oldranks={0,6,1,7,2,8,3,9,4,5} keys={0,1,2,3,4,5,0,1,2,3}
modulus=7 oldranks={0,7,1,8,2,9,3,4,5,6} keys={0,1,2,3,4,5,6,0,1,2}
modulus=8 oldranks={0,8,1,9,2,3,4,5,6,7} keys={0,1,2,3,4,5,6,7,0,1}
modulus=9 oldranks={0,9,1,2,3,4,5,6,7,8} keys={0,1,2,3,4,5,6,7,8,0}
modulus=10 oldranks={0,1,2,3,4,5,6,7,8,9} keys={0,1,2,3,4,5,6,7,8,9}
modulus=1 oldranks={0,1,2,3,4,5,6,7,8,9,10} keys={0,0,0,0,0,0,0,0,0,0,0}
modulus=2 oldranks={0,2,4,6,8,10,1,3,5,7,9} keys={0,1,0,1,0,1,0,1,0,1,0}
modulus=3 oldranks={0,3,6,9,1,4,7,10,2,5,8} keys={0,1,2,0,1,2,0,1,2,0,1}
modulus=4 oldranks={0,4,8,1,5,9,2,6,10,3,7} keys={0,1,2,3,0,1,2,3,0,1,2}
modulus=5 oldranks={0,5,10,1,6,2,7,3,8,4,9} keys={0,1,2,3,4,0,1,2,3,4,0}
modulus=6 oldranks={0,6,1,7,2,8,3,9,4,10,5} keys={0,1,2,3,4,5,0,1,2,3,4}
modulus=7 oldranks={0,7,1,8,2,9,3,10,4,5,6} keys={0,1,2,3,4,5,6,0,1,2,3}
modulus=8 oldranks={0,8,1,9,2,10,3,4,5,6,7} keys={0,1,2,3,4,5,6,7,0,1,2}
modulus=9 oldranks={0,9,1,10,2,3,4,5,6,7,8} keys={0,1,2,3,4,5,6,7,8,0,1}
modulus=10 oldranks={0,10,1,2,3,4,5,6,7,8,9} keys={0,1,2,3,4,5,6,7,8,9,0}
modulus=11 oldranks={0,1,2,3,4,5,6,7,8,9,10} keys={0,1,2,3,4,5,6,7,8,9,10}
modulus=1 oldranks={0,1,2,3,4,5,6,7,8,9,10,11} keys={0,0,0,0,0,0,0,0,0,0,0,0}
modulus=2 oldranks={0,2,4,6,8,10,1,3,5,7,9,11} keys={0,1,0,1,0,1,0,1,0,1,0,1}
modulus=3 oldranks={0,3,6,9,1,4,7,10,2,5,8,11} keys={0,1,2,0,1,2,0,1,2,0,1,2}
modulus=4 oldranks={0,4,8,1,5,9,2,6,10,3,7,11} keys={0,1,2,3,0,1,2,3,0,1,2,3}
modulus=5 oldranks={0,5,10,1,6,11,2,7,3,8,4,9} keys={0,1,2,3,4,0,1,2,3,4,0,1}
modulus=6 oldranks={0,6,1,7,2,8,3,9,4,10,5,11} keys={0,1,2,3,4,5,0,1,2,3,4,5}
modulus=7 oldranks={0,7,1,8,2,9,3,10,4,11,5,6} keys={0,1,2,3,4,5,6,0,1,2,3,4}
modulus=8 oldranks={0,8,1,9,2,10,3,11,4,5,6,7} keys={0,1,2,3,4,5,6,7,0,1,2,3}
modulus=9 oldranks={0,9,1,10,2,11,3,4,5,6,7,8} keys={0,1,2,3,4,5,6,7,8,0,1,2}
modulus=10 oldranks={0,10,1,11,2,3,4,5,6,7,8,9} keys={0,1,2,3,4,5,6,7,8,9,0,1}
modulus=11 oldranks={0,11,1,2,3,4,5,6,7,8,9,10} keys={0,1,2,3,4,5,6,7,8,9,10,0}
modulus=12 oldranks={0,1,2,3,4,5,6,7,8,9,10,11} keys={0,1,2,3,4,5,6,7,8,9,10,11}
No errors

Passed Comm_split_type basic - cmsplit_type

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 4

Test Description:

Tests MPI_Comm_split_type() including a test using MPI_UNDEFINED.
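
A rough sketch of the two cases exercised (fragment only; assumes MPI is initialized and rank has been obtained): ranks passing MPI_COMM_TYPE_SHARED receive a per-node subcommunicator, while ranks passing MPI_UNDEFINED receive MPI_COMM_NULL.

    MPI_Comm nodecomm;
    int split = (rank % 2 == 0) ? MPI_COMM_TYPE_SHARED : MPI_UNDEFINED;
    MPI_Comm_split_type(MPI_COMM_WORLD, split, 0, MPI_INFO_NULL, &nodecomm);
    if (nodecomm != MPI_COMM_NULL) {
        int nsize;
        MPI_Comm_size(nodecomm, &nsize);
        printf("Created subcommunicator of size %d\n", nsize);
        MPI_Comm_free(&nodecomm);
    }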

Created subcommunicator of size 2
Created subcommunicator of size 1
No errors
Created subcommunicator of size 2
Created subcommunicator of size 1

Passed Comm_with_info dup 2 rank - dup_with_info2

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 2

Test Description:

This test exercises MPI_Comm_dup_with_info() with 2 processes by setting the info for a communicator, duplicating it, and then testing the communicator.
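
A minimal sketch of the set-info-then-duplicate pattern (fragment; the info key shown is only illustrative, and unrecognized keys are ignored by the implementation):

    MPI_Info info;
    MPI_Comm dupcomm;
    MPI_Info_create(&info);
    MPI_Info_set(info, "mpi_assert_no_any_tag", "true");  /* illustrative hint key */
    MPI_Comm_dup_with_info(MPI_COMM_WORLD, info, &dupcomm);
    MPI_Info_free(&info);
    /* ... exercise dupcomm with ordinary point-to-point and collective calls ... */
    MPI_Comm_free(&dupcomm);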

No errors

Passed Comm_with_info dup 4 rank - dup_with_info4

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 4

Test Description:

This test exercises MPI_Comm_dup_with_info() with 4 processes by setting the info for a communicator, duplicating it, and then testing the communicator.

No errors

Passed Comm_with_info dup 9 rank - dup_with_info9

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 9

Test Description:

This test exercises MPI_Comm_dup_with_info() with 9 processes by setting the info for a communicator, duplicating it, and then testing the communicator.

No errors

Passed Comm_{dup,free} contexts - ctxalloc

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 2

Test Description:

This program tests the allocation and deallocation of contexts by using MPI_Comm_dup() to create many communicators in batches and then freeing them in batches.

No errors

Passed Comm_{get,set}_name basic - commname

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 4

Test Description:

Simple test for MPI_Comm_get_name() using a selection of communicators.

No errors

Passed Context split - ctxsplit

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 4

Test Description:

This test uses MPI_Comm_split() to repeatedly create and free communicators. This check is intended to fail if there is a leak of context ids. This test needs to run longer than many tests because it tries to exhaust the number of context ids. The for loop uses 10000 iterations, which is adequate for MPICH (with only about 1k context ids available).
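
The pattern under test, roughly (fragment; assumes rank has been obtained): each iteration must free the communicator it created, otherwise the implementation eventually runs out of context ids.

    for (int i = 0; i < 10000; i++) {
        MPI_Comm newcomm;
        MPI_Comm_split(MPI_COMM_WORLD, 0, rank, &newcomm);
        MPI_Comm_free(&newcomm);   /* omit this and the split eventually fails */
    }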

After 0 (0.000000)
After 100 (7143.828704)
After 200 (13282.334482)
After 300 (18784.977628)
After 400 (23736.305783)
After 500 (27893.945212)
After 600 (31872.057910)
After 700 (35508.564311)
After 800 (38865.902676)
After 900 (41970.785815)
After 1000 (44759.445507)
After 1100 (47405.926172)
After 1200 (49744.941105)
After 1300 (52052.150008)
After 1400 (54190.482255)
After 1500 (56132.724705)
After 1600 (57983.672160)
After 1700 (59756.250038)
After 1800 (61092.977810)
After 1900 (62668.543337)
After 2000 (64101.978556)
After 2100 (65468.511844)
After 2200 (66794.280175)
After 2300 (68029.448351)
After 2400 (69167.221181)
After 2500 (70261.322092)
After 2600 (71320.998202)
After 2700 (72336.286574)
After 2800 (73345.416467)
After 2900 (74257.137359)
After 3000 (75150.791943)
After 3100 (76007.194007)
After 3200 (76830.429767)
After 3300 (77607.197941)
After 3400 (78377.517160)
After 3500 (79099.970701)
After 3600 (79797.422716)
After 3700 (80447.676972)
After 3800 (81032.732319)
After 3900 (81652.856482)
After 4000 (82254.779532)
After 4100 (82830.167005)
After 4200 (83379.852343)
After 4300 (83932.928377)
After 4400 (84433.373865)
After 4500 (84933.098576)
After 4600 (85427.581201)
After 4700 (85883.343814)
After 4800 (86341.987337)
After 4900 (86789.740659)
After 5000 (87228.183677)
After 5100 (87641.808958)
After 5200 (88037.893541)
After 5300 (88431.074726)
After 5400 (88791.592304)
After 5500 (89149.029720)
After 5600 (89493.070815)
After 5700 (89856.240892)
After 5800 (90199.054695)
After 5900 (90534.829132)
After 6000 (90865.779598)
After 6100 (91160.936923)
After 6200 (91459.204614)
After 6300 (91767.168224)
After 6400 (92047.379663)
After 6500 (92330.501547)
After 6600 (92611.659317)
After 6700 (92858.579294)
After 6800 (93126.900576)
After 6900 (93380.727436)
After 7000 (93632.243660)
After 7100 (93874.170903)
After 7200 (94128.319857)
After 7300 (94372.354164)
After 7400 (94606.174694)
After 7500 (94811.969913)
After 7600 (95019.412466)
After 7700 (95238.743121)
After 7800 (95454.175987)
After 7900 (95657.956981)
After 8000 (95852.258313)
After 8100 (96054.836924)
After 8200 (96263.445156)
After 8300 (96451.279687)
After 8400 (96630.538004)
After 8500 (96825.625410)
After 8600 (97014.820142)
After 8700 (97184.465499)
After 8800 (97339.555101)
After 8900 (97504.266551)
After 9000 (97658.276070)
After 9100 (97815.534048)
After 9200 (97964.357096)
After 9300 (98102.214765)
After 9400 (98232.733557)
After 9500 (98378.609707)
After 9600 (98518.199057)
After 9700 (98663.288409)
After 9800 (98780.037308)
After 9900 (98891.112973)
No errors

Passed Intercomm probe - probe-intercomm

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 2

Test Description:

Test MPI_Probe() with a selection of intercommunicators. Creates an intercommunicator, probes it, and then frees it.

No errors

Passed Intercomm_create basic - ic1

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 4

Test Description:

A simple test of MPI_Intercomm_create() that creates an intercommunicator and verifies that it works.
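
A sketch of how such an intercommunicator is typically built (fragment; assumes rank and size are already known and size is even): split the world into two halves, then connect the halves through their leaders using MPI_COMM_WORLD as the peer communicator.

    MPI_Comm half, inter;
    int color = (rank < size / 2) ? 0 : 1;
    int remote_leader = (color == 0) ? size / 2 : 0;   /* other group's leader in MPI_COMM_WORLD */
    MPI_Comm_split(MPI_COMM_WORLD, color, rank, &half);
    MPI_Intercomm_create(half, 0, MPI_COMM_WORLD, remote_leader, 42, &inter);
    /* ... verify communication between the two groups ... */
    MPI_Comm_free(&inter);
    MPI_Comm_free(&half);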

No errors

Passed Intercomm_create many rank 2x2 - ic2

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 33

Test Description:

Test for MPI_Intercomm_create() using at least 33 processes that exercises a loop bounds bug by creating and freeing two intercommunicators with two processes each.

No errors

Passed Intercomm_merge - icm

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 8

Test Description:

Test MPI_Intercomm_merge() using a selection of intercommunicators. Includes multiple tests with different choices for the high value.
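
A sketch of the merge call itself (fragment; intercomm and the group flag are assumed to exist): the high argument decides which group's processes are ordered first in the resulting intracommunicator.

    MPI_Comm merged;
    int high = in_left_group ? 0 : 1;   /* hypothetical flag for the local group */
    MPI_Intercomm_merge(intercomm, high, &merged);
    /* ... use merged as an ordinary intracommunicator ... */
    MPI_Comm_free(&merged);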

No errors

Passed MPI_Info_create basic - comm_info

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 6

Test Description:

Simple test for MPI_Comm_{set,get}_info.

No errors

Passed Multiple threads context dup - ctxdup

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 4

Test Description:

This test creates communicators concurrently in different threads.

No errors

Passed Multiple threads context idup - ctxidup

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 4

Test Description:

This test creates communicators concurrently in different threads using non-blocking duplication.

No errors

Passed Multiple threads dup leak - dup_leak_test

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 2

Test Description:

This test repeatedly duplicates and frees communicators with multiple threads concurrently to stress the multithreaded aspects of the context ID allocation code. Thanks to IBM for providing the original version of this test.

No errors

Passed Simple thread comm dup - comm_dup_deadlock

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 4

Test Description:

This is a simple test of threads in MPI with communicator duplication.

No errors

Passed Simple thread comm idup - comm_idup

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 4

Test Description:

This is a simple test of threads in MPI with non-blocking communicator duplication.

No Errors

Passed Thread Group creation - comm_create_threads

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 4

Test Description:

Every thread participates in a distinct MPI_Comm_create group, distinguished by its thread-id (used as the tag). Threads on even ranks join an even comm and threads on odd ranks join the odd comm.

No errors

Passed Threaded group - comm_create_group_threads

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 4

Test Description:

In this test, a number of threads are created, each joining a distinct MPI communicator group distinguished by its thread-id (used as a tag). Threads on even ranks join an even comm and threads on odd ranks join the odd comm.

No errors

Error Processing - Score: 100% Passed

This group features tests of MPI error processing.

Passed Error Handling - errors

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 1

Test Description:

Reports the default action taken on an error. It also reports if error handling can be changed to "returns", and if so, if this functions properly.
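
A minimal sketch of what the test does (fragment; assumes a single-process run): switch MPI_COMM_WORLD to MPI_ERRORS_RETURN, provoke an invalid-rank error, and decode the returned code.

    int err, len, payload = 0;
    char msg[MPI_MAX_ERROR_STRING];
    MPI_Comm_set_errhandler(MPI_COMM_WORLD, MPI_ERRORS_RETURN);
    err = MPI_Send(&payload, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);  /* rank 1 is invalid on 1 process */
    if (err != MPI_SUCCESS) {
        MPI_Error_string(err, msg, &len);
        printf("Error code: %d\nError string: %s\n", err, msg);
    }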

MPI errors are fatal by default.
MPI errors can be changed to MPI_ERRORS_RETURN.
Call MPI_Send() with a bad destination rank.
Error code: 940169478
Error string: Invalid rank, error stack:
PMPI_Send(163): MPI_Send(buf=0x7ffd96b41f7c, count=1, MPI_INT, dest=1, tag=MPI_ANY_TAG, MPI_COMM_WORLD) failed
PMPI_Send(100): Invalid rank has value 1 but must be nonnegative and less than 1
No errors

Passed File IO error handlers - userioerr

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 1

Test Description:

This test exercises MPI I/O and MPI error handling techniques.

No errors

Passed MPI_Abort() return exit - abortexit

Build: Passed

Execution: Failed

Exit Status: Intentional_failure_was_successful

MPI Processes: 1

Test Description:

This program calls MPI_Abort and confirms that the exit status in the call is returned to the invoking environment.

MPI_Abort() with return exit code:6
MPICH ERROR [Rank 0] [job id b0409dd2-ee21-49c3-b47a-bea013bf9ace] [Mon Apr 15 11:02:31 2024] [x1001c5s5b0n0] - Abort(6) (rank 0 in comm 0): application called MPI_Abort(MPI_COMM_WORLD, 6) - process 0
x1001c5s5b0n0.hsn.warhawk.afrl.hpc.mil: rank 0 exited with code 6

Passed MPI_Add_error_class basic - adderr

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 1

Test Description:

Create NCLASSES new classes, each with 5 codes (160 total).
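
The core sequence being exercised, roughly (fragment; the message text is illustrative): create a user-defined error class, hang a code off it, and attach a string to the code.

    int errclass, errcode;
    MPI_Add_error_class(&errclass);
    MPI_Add_error_code(errclass, &errcode);
    MPI_Add_error_string(errcode, "hypothetical user-defined error");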

No errors

Passed MPI_Comm_errhandler basic - commcall

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 2

Test Description:

Test comm_{set,call}_errhandler.

No errors

Passed MPI_Error_string basic - errstring

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 1

Test Description:

Test that prints out MPI error codes from 0-53.

msg for 0 is No MPI error
msg for 1 is Invalid buffer pointer
msg for 2 is Invalid count
msg for 3 is Invalid datatype
msg for 4 is Invalid tag
msg for 5 is Invalid communicator
msg for 6 is Invalid rank
msg for 7 is Invalid root
msg for 8 is Invalid group
msg for 9 is Invalid MPI_Op
msg for 10 is Invalid topology
msg for 11 is Invalid dimension argument
msg for 12 is Invalid argument
msg for 13 is Unknown error.  Please file a bug report.
msg for 14 is Message truncated
msg for 15 is Other MPI error
msg for 16 is Internal MPI error!
msg for 17 is See the MPI_ERROR field in MPI_Status for the error code
msg for 18 is Pending request (no error)
msg for 19 is Request pending due to failure
msg for 20 is Access denied to file
msg for 21 is Invalid amode value in MPI_File_open 
msg for 22 is Invalid file name
msg for 23 is An error occurred in a user-defined data conversion function
msg for 24 is The requested datarep name has already been specified to MPI_REGISTER_DATAREP
msg for 25 is File exists
msg for 26 is File in use by some process
msg for 27 is Invalid MPI_File
msg for 28 is Invalid MPI_Info
msg for 29 is Invalid key for MPI_Info 
msg for 30 is Invalid MPI_Info value 
msg for 31 is MPI_Info key is not defined 
msg for 32 is Other I/O error 
msg for 33 is Invalid service name (see MPI_Publish_name)
msg for 34 is Unable to allocate memory for MPI_Alloc_mem
msg for 35 is Inconsistent arguments to collective routine 
msg for 36 is Not enough space for file 
msg for 37 is File does not exist
msg for 38 is Invalid port
msg for 39 is Quota exceeded for files
msg for 40 is Read-only file or filesystem name
msg for 41 is Attempt to lookup an unknown service name 
msg for 42 is Error in spawn call
msg for 43 is Unsupported datarep passed to MPI_File_set_view 
msg for 44 is Unsupported file operation 
msg for 45 is Invalid MPI_Win
msg for 46 is Invalid base address
msg for 47 is Invalid lock type
msg for 48 is Invalid keyval
msg for 49 is Conflicting accesses to window 
msg for 50 is Wrong synchronization of RMA calls 
msg for 51 is Invalid size argument in RMA call
msg for 52 is Invalid displacement argument in RMA call 
msg for 53 is Invalid assert argument
No errors.

Passed MPI_Error_string error class - errstring2

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 1

Test Description:

Simple test where an MPI error class is created and an error string is introduced for that class.

No errors

Passed User error handling 1 rank - predef_eh

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 1

Test Description:

Ensure that setting a user-defined error handler on predefined communicators does not cause a problem at finalize time. Regression test for former issue. Runs on 1 rank.

No errors

Passed User error handling 2 rank - predef_eh2

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 2

Test Description:

Ensure that setting a user-defined error handler on predefined communicators does not cause a problem at finalize time. Regression test for former issue. Runs on 2 ranks.

No errors

UTK Test Suite - Score: 95% Passed

This group features the test suite developed at the University of Tennessee Knoxville for MPI-2.2 and earlier specifications. Though technically not a functional group, it was retained to allow comparison with the previous benchmark suite.

Passed Alloc_mem - alloc

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 1

Test Description:

Simple check to see if MPI_Alloc_mem() is supported.
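
A minimal sketch of the call being checked (fragment):

    void *buf = NULL;
    MPI_Alloc_mem(1024, MPI_INFO_NULL, &buf);   /* 1024 bytes, no hints */
    /* ... buf is usable as a message or RMA buffer ... */
    MPI_Free_mem(buf);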

No errors

Passed Assignment constants - process_assignment_constants

Build: NA

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 1

Test Description:

Test for Named Constants supported in MPI-1.0 and higher. The test is a Perl script that constructs a small separate main program in either C or FORTRAN for each constant. The constants for this test are used to assign a value to a const integer type in C and an integer type in Fortran. This test is the de facto test for any constant recognized by the compiler. NOTE: The constants used in this test are tested against both C and FORTRAN compilers. Some of the constants are optional and may not be supported by the MPI implementation. Failure to verify these constants does not necessarily constitute failure of the MPI implementation to satisfy the MPI specifications. ISSUE: This test may timeout if separate program executions initialize slowly.

c "MPI_ARGV_NULL" is not verified.
c "MPI_ARGVS_NULL" is not verified.
c "MPI_ANY_SOURCE" is verified by const integer.
c "MPI_ANY_TAG" is verified by const integer.
c "MPI_BAND" is verified by const integer.
c "MPI_BOR" is verified by const integer.
c "MPI_BSEND_OVERHEAD" is verified by const integer.
c "MPI_BXOR" is verified by const integer.
c "MPI_CART" is verified by const integer.
c "MPI_COMBINER_CONTIGUOUS" is verified by const integer.
c "MPI_COMBINER_DARRAY" is verified by const integer.
c "MPI_COMBINER_DUP" is verified by const integer.
c "MPI_COMBINER_F90_COMPLEX" is verified by const integer.
c "MPI_COMBINER_F90_INTEGER" is verified by const integer.
c "MPI_COMBINER_F90_REAL" is verified by const integer.
c "MPI_COMBINER_HINDEXED" is verified by const integer.
c "MPI_COMBINER_HINDEXED_INTEGER" is verified by const integer.
c "MPI_COMBINER_HVECTOR" is verified by const integer.
c "MPI_COMBINER_HVECTOR_INTEGER" is verified by const integer.
c "MPI_COMBINER_INDEXED" is verified by const integer.
c "MPI_COMBINER_INDEXED_BLOCK" is verified by const integer.
c "MPI_COMBINER_NAMED" is verified by const integer.
c "MPI_COMBINER_RESIZED" is verified by const integer.
c "MPI_COMBINER_STRUCT" is verified by const integer.
c "MPI_COMBINER_STRUCT_INTEGER" is verified by const integer.
c "MPI_COMBINER_SUBARRAY" is verified by const integer.
c "MPI_COMBINER_VECTOR" is verified by const integer.
c "MPI_COMM_NULL" is verified by const integer.
c "MPI_COMM_SELF" is verified by const integer.
c "MPI_COMM_WORLD" is verified by const integer.
c "MPI_CONGRUENT" is verified by const integer.
c "MPI_CONVERSION_FN_NULL" is not verified.
c "MPI_DATATYPE_NULL" is verified by const integer.
c "MPI_DISPLACEMENT_CURRENT" is verified by const integer.
c "MPI_DISTRIBUTE_BLOCK" is verified by const integer.
c "MPI_DISTRIBUTE_CYCLIC" is verified by const integer.
c "MPI_DISTRIBUTE_DFLT_DARG" is verified by const integer.
c "MPI_DISTRIBUTE_NONE" is verified by const integer.
c "MPI_ERRCODES_IGNORE" is not verified.
c "MPI_ERRHANDLER_NULL" is verified by const integer.
c "MPI_ERRORS_ARE_FATAL" is verified by const integer.
c "MPI_ERRORS_RETURN" is verified by const integer.
c "MPI_F_STATUS_IGNORE" is not verified.
c "MPI_F_STATUSES_IGNORE" is not verified.
c "MPI_FILE_NULL" is not verified.
c "MPI_GRAPH" is verified by const integer.
c "MPI_GROUP_NULL" is verified by const integer.
c "MPI_IDENT" is verified by const integer.
c "MPI_IN_PLACE" is not verified.
c "MPI_INFO_NULL" is verified by const integer.
c "MPI_KEYVAL_INVALID" is verified by const integer.
c "MPI_LAND" is verified by const integer.
c "MPI_LOCK_EXCLUSIVE" is verified by const integer.
c "MPI_LOCK_SHARED" is verified by const integer.
c "MPI_LOR" is verified by const integer.
c "MPI_LXOR" is verified by const integer.
c "MPI_MAX" is verified by const integer.
c "MPI_MAXLOC" is verified by const integer.
c "MPI_MIN" is verified by const integer.
c "MPI_MINLOC" is verified by const integer.
c "MPI_OP_NULL" is verified by const integer.
c "MPI_PROC_NULL" is verified by const integer.
c "MPI_PROD" is verified by const integer.
c "MPI_REPLACE" is verified by const integer.
c "MPI_REQUEST_NULL" is verified by const integer.
c "MPI_ROOT" is verified by const integer.
c "MPI_SEEK_CUR" is verified by const integer.
c "MPI_SEEK_END" is verified by const integer.
c "MPI_SEEK_SET" is verified by const integer.
c "MPI_SIMILAR" is verified by const integer.
c "MPI_STATUS_IGNORE" is not verified.
c "MPI_STATUSES_IGNORE" is not verified.
c "MPI_SUCCESS" is verified by const integer.
c "MPI_SUM" is verified by const integer.
c "MPI_UNDEFINED" is verified by const integer.
c "MPI_UNEQUAL" is verified by const integer.
F "MPI_ARGV_NULL" is not verified.
F "MPI_ARGVS_NULL" is not verified.
F "MPI_ANY_SOURCE" is verified by integer assignment.
F "MPI_ANY_TAG" is verified by integer assignment.
F "MPI_BAND" is verified by integer assignment.
F "MPI_BOR" is verified by integer assignment.
F "MPI_BSEND_OVERHEAD" is verified by integer assignment.
F "MPI_BXOR" is verified by integer assignment.
F "MPI_CART" is verified by integer assignment.
F "MPI_COMBINER_CONTIGUOUS" is verified by integer assignment.
F "MPI_COMBINER_DARRAY" is verified by integer assignment.
F "MPI_COMBINER_DUP" is verified by integer assignment.
F "MPI_COMBINER_F90_COMPLEX" is verified by integer assignment.
F "MPI_COMBINER_F90_INTEGER" is verified by integer assignment.
F "MPI_COMBINER_F90_REAL" is verified by integer assignment.
F "MPI_COMBINER_HINDEXED" is verified by integer assignment.
F "MPI_COMBINER_HINDEXED_INTEGER" is verified by integer assignment.
F "MPI_COMBINER_HVECTOR" is verified by integer assignment.
F "MPI_COMBINER_HVECTOR_INTEGER" is verified by integer assignment.
F "MPI_COMBINER_INDEXED" is verified by integer assignment.
F "MPI_COMBINER_INDEXED_BLOCK" is verified by integer assignment.
F "MPI_COMBINER_NAMED" is verified by integer assignment.
F "MPI_COMBINER_RESIZED" is verified by integer assignment.
F "MPI_COMBINER_STRUCT" is verified by integer assignment.
F "MPI_COMBINER_STRUCT_INTEGER" is verified by integer assignment.
F "MPI_COMBINER_SUBARRAY" is verified by integer assignment.
F "MPI_COMBINER_VECTOR" is verified by integer assignment.
F "MPI_COMM_NULL" is verified by integer assignment.
F "MPI_COMM_SELF" is verified by integer assignment.
F "MPI_COMM_WORLD" is verified by integer assignment.
F "MPI_CONGRUENT" is verified by integer assignment.
F "MPI_CONVERSION_FN_NULL" is not verified.
F "MPI_DATATYPE_NULL" is verified by integer assignment.
F "MPI_DISPLACEMENT_CURRENT" is verified by integer assignment.
F "MPI_DISTRIBUTE_BLOCK" is verified by integer assignment.
F "MPI_DISTRIBUTE_CYCLIC" is verified by integer assignment.
F "MPI_DISTRIBUTE_DFLT_DARG" is verified by integer assignment.
F "MPI_DISTRIBUTE_NONE" is verified by integer assignment.
F "MPI_ERRCODES_IGNORE" is not verified.
F "MPI_ERRHANDLER_NULL" is verified by integer assignment.
F "MPI_ERRORS_ARE_FATAL" is verified by integer assignment.
F "MPI_ERRORS_RETURN" is verified by integer assignment.
F "MPI_F_STATUS_IGNORE" is verified by integer assignment.
F "MPI_F_STATUSES_IGNORE" is verified by integer assignment.
F "MPI_FILE_NULL" is verified by integer assignment.
F "MPI_GRAPH" is verified by integer assignment.
F "MPI_GROUP_NULL" is verified by integer assignment.
F "MPI_IDENT" is verified by integer assignment.
F "MPI_IN_PLACE" is verified by integer assignment.
F "MPI_INFO_NULL" is verified by integer assignment.
F "MPI_KEYVAL_INVALID" is verified by integer assignment.
F "MPI_LAND" is verified by integer assignment.
F "MPI_LOCK_EXCLUSIVE" is verified by integer assignment.
F "MPI_LOCK_SHARED" is verified by integer assignment.
F "MPI_LOR" is verified by integer assignment.
F "MPI_LXOR" is verified by integer assignment.
F "MPI_MAX" is verified by integer assignment.
F "MPI_MAXLOC" is verified by integer assignment.
F "MPI_MIN" is verified by integer assignment.
F "MPI_MINLOC" is verified by integer assignment.
F "MPI_OP_NULL" is verified by integer assignment.
F "MPI_PROC_NULL" is verified by integer assignment.
F "MPI_PROD" is verified by integer assignment.
F "MPI_REPLACE" is verified by integer assignment.
F "MPI_REQUEST_NULL" is verified by integer assignment.
F "MPI_ROOT" is verified by integer assignment.
F "MPI_SEEK_CUR" is verified by integer assignment.
F "MPI_SEEK_END" is verified by integer assignment.
F "MPI_SEEK_SET" is verified by integer assignment.
F "MPI_SIMILAR" is verified by integer assignment.
F "MPI_STATUS_IGNORE" is not verified.
F "MPI_STATUSES_IGNORE" is not verified.
F "MPI_SUCCESS" is verified by integer assignment.
F "MPI_SUM" is verified by integer assignment.
F "MPI_UNDEFINED" is verified by integer assignment.
F "MPI_UNEQUAL" is verified by integer assignment.
Number of successful C constants: 66 of 76
Number of successful FORTRAN constants: 70 of 76
No errors.

Passed C/Fortran interoperability supported - interoperability

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 1

Test Description:

Checks if the C-Fortran (F77) interoperability functions are supported using the MPI-2.2 specification.

No errors

Passed Communicator attributes - attributes

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 1

Test Description:

Returns all communicator attributes that are not supported. The test is run as a single process MPI job and fails if any attributes are not supported.

No errors

Passed Compiletime constants - process_compiletime_constants

Build: NA

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 1

Test Description:

The MPI-3.0 specifications require that some named constants be known at compile time. The report includes a record for each constant of this class in the form "X MPI_CONSTANT is [not] verified by METHOD" where X is either 'c' for the C compiler, or 'F' for the FORTRAN 77 compiler. For a C language compile, the constant is used as a case label in a switch statement. For a FORTRAN language compile, the constant is assigned to a PARAMETER. The report summarizes with the number of constants for each compiler that were successfully verified.
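
The C-side verification technique, sketched (fragment; the constant chosen is just an example): using the constant as a case label compiles only if it is an integer constant expression known at compile time.

    static int verify(int n)
    {
        switch (n) {
        case MPI_MAX_ERROR_STRING:   /* must be a compile-time constant */
            return 1;
        default:
            return 0;
        }
    }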

c "MPI_MAX_PROCESSOR_NAME" is verified by switch label.
c "MPI_MAX_LIBRARY_VERSION_STRING" is verified by switch label.
c "MPI_MAX_ERROR_STRING" is verified by switch label.
c "MPI_MAX_DATAREP_STRING" is verified by switch label.
c "MPI_MAX_INFO_KEY" is verified by switch label.
c "MPI_MAX_INFO_VAL" is verified by switch label.
c "MPI_MAX_OBJECT_NAME" is verified by switch label.
c "MPI_MAX_PORT_NAME" is verified by switch label.
c "MPI_VERSION" is verified by switch label.
c "MPI_SUBVERSION" is verified by switch label.
c "MPI_MAX_LIBRARY_VERSION_STRING" is verified by switch label.
F "MPI_ADDRESS_KIND" is verified by PARAMETER.
F "MPI_ASYNC_PROTECTS_NONBLOCKING" is not verified.
F "MPI_COUNT_KIND" is verified by PARAMETER.
F "MPI_ERROR" is verified by PARAMETER.
F "MPI_ERRORS_ARE_FATAL" is verified by PARAMETER.
F "MPI_ERRORS_RETURN" is verified by PARAMETER.
F "MPI_INTEGER_KIND" is verified by PARAMETER.
F "MPI_OFFSET_KIND" is verified by PARAMETER.
F "MPI_SOURCE" is verified by PARAMETER.
F "MPI_STATUS_SIZE" is verified by PARAMETER.
F "MPI_SUBARRAYS_SUPPORTED" is not verified.
F "MPI_TAG" is verified by PARAMETER.
F "MPI_MAX_PROCESSOR_NAME" is verified by PARAMETER.
F "MPI_MAX_LIBRARY_VERSION_STRING" is verified by PARAMETER.
F "MPI_MAX_ERROR_STRING" is verified by PARAMETER.
F "MPI_MAX_DATAREP_STRING" is verified by PARAMETER.
F "MPI_MAX_INFO_KEY" is verified by PARAMETER.
F "MPI_MAX_INFO_VAL" is verified by PARAMETER.
F "MPI_MAX_OBJECT_NAME" is verified by PARAMETER.
F "MPI_MAX_PORT_NAME" is verified by PARAMETER.
F "MPI_VERSION" is verified by PARAMETER.
F "MPI_SUBVERSION" is verified by PARAMETER.
F "MPI_MAX_LIBRARY_VERSION_STRING" is verified by PARAMETER.
Number of successful C constants: 11 of 11
Number of successful FORTRAN constants: 21 out of 23
No errors.

Passed Datatypes - process_datatypes

Build: NA

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 1

Test Description:

Tests for the presence of constants from MPI-1.0 and higher. It constructs small separate main programs in either C, FORTRAN, or C++ for each datatype. It fails if any datatype is not present. ISSUE: This test may timeout if separate program executions initialize slowly.

c "MPI_2DOUBLE_PRECISION" Size = 16 is verified.
c "MPI_2INT" Size = 8 is verified.
c "MPI_2INTEGER" Size = 8 is verified.
c "MPI_2REAL" Size = 8 is verified.
c "MPI_AINT" Size = 8 is verified.
c "MPI_BYTE" Size = 1 is verified.
c "MPI_C_BOOL" Size = 1 is verified.
c "MPI_C_COMPLEX" Size = 8 is verified.
c "MPI_C_DOUBLE_COMPLEX" Size = 16 is verified.
c "MPI_C_FLOAT_COMPLEX" Size = 8 is verified.
c "MPI_C_LONG_DOUBLE_COMPLEX" is not verified: (execution).
c "MPI_CHAR" Size = 1 is verified.
c "MPI_CHARACTER" Size = 1 is verified.
c "MPI_COMPLEX" Size = 8 is verified.
c "MPI_COMPLEX2" is not verified: (compilation).
c "MPI_COMPLEX4" is not verified: (compilation).
c "MPI_COMPLEX8" Size = 8 is verified.
c "MPI_COMPLEX16" Size = 16 is verified.
c "MPI_COMPLEX32" Size = 32 is verified.
c "MPI_DOUBLE" Size = 8 is verified.
c "MPI_DOUBLE_INT" Size = 12 is verified.
c "MPI_DOUBLE_COMPLEX" Size = 16 is verified.
c "MPI_DOUBLE_PRECISION" Size = 8 is verified.
c "MPI_FLOAT" Size = 4 is verified.
c "MPI_FLOAT_INT" Size = 8 is verified.
c "MPI_INT" Size = 4 is verified.
c "MPI_INT8_T" Size = 1 is verified.
c "MPI_INT16_T" Size = 2 is verified.
c "MPI_INT32_T" Size = 4 is verified.
c "MPI_INT64_T" Size = 8 is verified.
c "MPI_INTEGER" Size = 4 is verified.
c "MPI_INTEGER1" Size = 1 is verified.
c "MPI_INTEGER2" Size = 2 is verified.
c "MPI_INTEGER4" Size = 4 is verified.
c "MPI_INTEGER8" Size = 8 is verified.
c "MPI_INTEGER16" is not verified: (execution).
c "MPI_LB" Size = 0 is verified.
c "MPI_LOGICAL" Size = 4 is verified.
c "MPI_LONG" Size = 8 is verified.
c "MPI_LONG_INT" Size = 12 is verified.
c "MPI_LONG_DOUBLE" is not verified: (execution).
c "MPI_LONG_DOUBLE_INT" is not verified: (execution).
c "MPI_LONG_LONG" Size = 8 is verified.
c "MPI_LONG_LONG_INT" Size = 8 is verified.
c "MPI_OFFSET" Size = 8 is verified.
c "MPI_PACKED" Size = 1 is verified.
c "MPI_REAL" Size = 4 is verified.
c "MPI_REAL2" is not verified: (compilation).
c "MPI_REAL4" Size = 4 is verified.
c "MPI_REAL8" Size = 8 is verified.
c "MPI_REAL16" Size = 16 is verified.
c "MPI_SHORT" Size = 2 is verified.
c "MPI_SHORT_INT" Size = 6 is verified.
c "MPI_SIGNED_CHAR" Size = 1 is verified.
c "MPI_UB" Size = 0 is verified.
c "MPI_UNSIGNED_CHAR" Size = 1 is verified.
c "MPI_UNSIGNED_SHORT" Size = 2 is verified.
c "MPI_UNSIGNED" Size = 4 is verified.
c "MPI_UNSIGNED_LONG" Size = 8 is verified.
c "MPI_WCHAR" Size = 4 is verified.
c "MPI_LONG_LONG_INT" Size = 8 is verified.
c "MPI_FLOAT_INT" Size = 8 is verified.
c "MPI_DOUBLE_INT" Size = 12 is verified.
c "MPI_LONG_INT" Size = 12 is verified.
c "MPI_LONG_DOUBLE_INT" is not verified: (execution).
c "MPI_2INT" Size = 8 is verified.
c "MPI_SHORT_INT" Size = 6 is verified.
c "MPI_LONG_DOUBLE_INT" is not verified: (execution).
c "MPI_2REAL" Size = 8 is verified.
c "MPI_2DOUBLE_PRECISION" Size = 16 is verified.
c "MPI_2INTEGER" Size = 8 is verified.
C "MPI_CXX_BOOL" Size = 1 is verified.
C "MPI_CXX_FLOAT_COMPLEX" Size = 8 is verified.
C "MPI_CXX_DOUBLE_COMPLEX" Size = 16 is verified.
C "MPI_CXX_LONG_DOUBLE_COMPLEX" is not verified: (execution).
f "MPI_BYTE" Size =1 is verified.
f "MPI_CHARACTER" Size =1 is verified.
f "MPI_COMPLEX" Size =8 is verified.
f "MPI_DOUBLE_COMPLEX" Size =16 is verified.
f "MPI_DOUBLE_PRECISION" Size =8 is verified.
f "MPI_INTEGER" Size =4 is verified.
f "MPI_INTEGER1" Size =1 is verified.
f "MPI_INTEGER2" Size =2 is verified.
f "MPI_INTEGER4" Size =4 is verified.
f "MPI_LOGICAL" Size =4 is verified.
f "MPI_REAL" Size =4 is verified.
f "MPI_REAL2" is not verified: (execution).
f "MPI_REAL4" Size =4 is verified.
f "MPI_REAL8" Size =8 is verified.
f "MPI_PACKED" Size =1 is verified.
f "MPI_2REAL" Size =8 is verified.
f "MPI_2DOUBLE_PRECISION" Size =16 is verified.
f "MPI_2INTEGER" Size =8 is verified.
No errors.

Passed Deprecated routines - deprecated

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 1

Test Description:

Checks all MPI deprecated routines as of MPI-2.2, but not including routines removed by MPI-3 if this is an MPI-3 implementation.

MPI_Attr_delete(): is functional.
MPI_Attr_get(): is functional.
MPI_Attr_put(): is functional.
MPI_Keyval_create(): is functional.
MPI_Keyval_free(): is functional.
MPI_Address(): is removed by MPI 3.0+.
MPI_Errhandler_create(): is removed by MPI 3.0+.
MPI_Errhandler_get(): is removed by MPI 3.0+.
MPI_Errhandler_set(): is removed by MPI 3.0+.
MPI_Type_extent(): is removed by MPI 3.0+.
MPI_Type_hindexed(): is removed by MPI 3.0+.
MPI_Type_hvector(): is removed by MPI 3.0+.
MPI_Type_lb(): is removed by MPI 3.0+.
MPI_Type_struct(): is removed by MPI 3.0+.
MPI_Type_ub(): is removed by MPI 3.0+.
No errors

Passed Error Handling - errors

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 1

Test Description:

Reports the default action taken on an error. It also reports if error handling can be changed to "returns", and if so, if this functions properly.

MPI errors are fatal by default.
MPI errors can be changed to MPI_ERRORS_RETURN.
Call MPI_Send() with a bad destination rank.
Error code: 940169478
Error string: Invalid rank, error stack:
PMPI_Send(163): MPI_Send(buf=0x7ffd96b41f7c, count=1, MPI_INT, dest=1, tag=MPI_ANY_TAG, MPI_COMM_WORLD) failed
PMPI_Send(100): Invalid rank has value 1 but must be nonnegative and less than 1
No errors

Passed Errorcodes - process_errorcodes

Build: NA

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 1

Test Description:

The MPI-3.0 specifications require that the same constants be available for the C language and FORTRAN. The report includes a record for each errorcode of the form "X MPI_ERRCODE is [not] verified" where X is either 'c' for the C compiler, or 'F' for the FORTRAN 77 compiler. The report summarizes with the number of errorcodes for each compiler that were successfully verified.

c "MPI_ERR_ACCESS" (20) is verified.
c "MPI_ERR_AMODE" (21) is verified.
c "MPI_ERR_ARG" (12) is verified.
c "MPI_ERR_ASSERT" (53) is verified.
c "MPI_ERR_BAD_FILE" (22) is verified.
c "MPI_ERR_BASE" (46) is verified.
c "MPI_ERR_BUFFER" (1) is verified.
c "MPI_ERR_COMM" (5) is verified.
c "MPI_ERR_CONVERSION" (23) is verified.
c "MPI_ERR_COUNT" (2) is verified.
c "MPI_ERR_DIMS" (11) is verified.
c "MPI_ERR_DISP" (52) is verified.
c "MPI_ERR_DUP_DATAREP" (24) is verified.
c "MPI_ERR_FILE" (27) is verified.
c "MPI_ERR_FILE_EXISTS" (25) is verified.
c "MPI_ERR_FILE_IN_USE" (26) is verified.
c "MPI_ERR_GROUP" (8) is verified.
c "MPI_ERR_IN_STATUS" (17) is verified.
c "MPI_ERR_INFO" (28) is verified.
c "MPI_ERR_INFO_KEY" (29) is verified.
c "MPI_ERR_INFO_NOKEY" (31) is verified.
c "MPI_ERR_INFO_VALUE" (30) is verified.
c "MPI_ERR_INTERN" (16) is verified.
c "MPI_ERR_IO" (32) is verified.
c "MPI_ERR_KEYVAL" (48) is verified.
c "MPI_ERR_LASTCODE" (1073741823) is verified.
c "MPI_ERR_LOCKTYPE" (47) is verified.
c "MPI_ERR_NAME" (33) is verified.
c "MPI_ERR_NO_MEM" (34) is verified.
c "MPI_ERR_NO_SPACE" (36) is verified.
c "MPI_ERR_NO_SUCH_FILE" (37) is verified.
c "MPI_ERR_NOT_SAME" (35) is verified.
c "MPI_ERR_OP" (9) is verified.
c "MPI_ERR_OTHER" (15) is verified.
c "MPI_ERR_PENDING" (18) is verified.
c "MPI_ERR_PORT" (38) is verified.
c "MPI_ERR_QUOTA" (39) is verified.
c "MPI_ERR_RANK" (6) is verified.
c "MPI_ERR_READ_ONLY" (40) is verified.
c "MPI_ERR_REQUEST" (19) is verified.
c "MPI_ERR_RMA_ATTACH" (56) is verified.
c "MPI_ERR_RMA_CONFLICT" (49) is verified.
c "MPI_ERR_RMA_FLAVOR" (58) is verified.
c "MPI_ERR_RMA_RANGE" (55) is verified.
c "MPI_ERR_RMA_SHARED" (57) is verified.
c "MPI_ERR_RMA_SYNC" (50) is verified.
c "MPI_ERR_ROOT" (7) is verified.
c "MPI_ERR_SERVICE" (41) is verified.
c "MPI_ERR_SIZE" (51) is verified.
c "MPI_ERR_SPAWN" (42) is verified.
c "MPI_ERR_TAG" (4) is verified.
c "MPI_ERR_TOPOLOGY" (10) is verified.
c "MPI_ERR_TRUNCATE" (14) is verified.
c "MPI_ERR_TYPE" (3) is verified.
c "MPI_ERR_UNKNOWN" (13) is verified.
c "MPI_ERR_UNSUPPORTED_DATAREP" (43) is verified.
c "MPI_ERR_UNSUPPORTED_OPERATION" (44) is verified.
c "MPI_ERR_WIN" (45) is verified.
c "MPI_SUCCESS" (0) is verified.
c "MPI_T_ERR_CANNOT_INIT" (61) is verified.
c "MPI_T_ERR_CVAR_SET_NEVER" (69) is verified.
c "MPI_T_ERR_CVAR_SET_NOT_NOW" (68) is verified.
c "MPI_T_ERR_INVALID_HANDLE" (64) is verified.
c "MPI_T_ERR_INVALID_INDEX" (62) is verified.
c "MPI_T_ERR_INVALID_ITEM" (63) is verified.
c "MPI_T_ERR_INVALID_SESSION" (67) is verified.
c "MPI_T_ERR_MEMORY" (59) is verified.
c "MPI_T_ERR_NOT_INITIALIZED" (60) is verified.
c "MPI_T_ERR_OUT_OF_HANDLES" (65) is verified.
c "MPI_T_ERR_OUT_OF_SESSIONS" (66) is verified.
c "MPI_T_ERR_PVAR_NO_ATOMIC" (72) is verified.
c "MPI_T_ERR_PVAR_NO_STARTSTOP" (70) is verified.
c "MPI_T_ERR_PVAR_NO_WRITE" (71) is verified.
F "MPI_ERR_ACCESS" (20) is verified 
F "MPI_ERR_AMODE" (21) is verified 
F "MPI_ERR_ARG" (12) is verified 
F "MPI_ERR_ASSERT" (53) is verified 
F "MPI_ERR_BAD_FILE" (22) is verified 
F "MPI_ERR_BASE" (46) is verified 
F "MPI_ERR_BUFFER" (1) is verified 
F "MPI_ERR_COMM" (5) is verified 
F "MPI_ERR_CONVERSION" (23) is verified 
F "MPI_ERR_COUNT" (2) is verified 
F "MPI_ERR_DIMS" (11) is verified 
F "MPI_ERR_DISP" (52) is verified 
F "MPI_ERR_DUP_DATAREP" (24) is verified 
F "MPI_ERR_FILE" (27) is verified 
F "MPI_ERR_FILE_EXISTS" (25) is verified 
F "MPI_ERR_FILE_IN_USE" (26) is verified 
F "MPI_ERR_GROUP" (8) is verified 
F "MPI_ERR_IN_STATUS" (17) is verified 
F "MPI_ERR_INFO" (28) is verified 
F "MPI_ERR_INFO_KEY" (29) is verified 
F "MPI_ERR_INFO_NOKEY" (31) is verified 
F "MPI_ERR_INFO_VALUE" (30) is verified 
F "MPI_ERR_INTERN" (16) is verified 
F "MPI_ERR_IO" (32) is verified 
F "MPI_ERR_KEYVAL" (48) is verified 
F "MPI_ERR_LASTCODE" (1073741823) is verified 
F "MPI_ERR_LOCKTYPE" (47) is verified 
F "MPI_ERR_NAME" (33) is verified 
F "MPI_ERR_NO_MEM" (34) is verified 
F "MPI_ERR_NO_SPACE" (36) is verified 
F "MPI_ERR_NO_SUCH_FILE" (37) is verified 
F "MPI_ERR_NOT_SAME" (35) is verified 
F "MPI_ERR_OP" (9) is verified 
F "MPI_ERR_OTHER" (15) is verified 
F "MPI_ERR_PENDING" (18) is verified 
F "MPI_ERR_PORT" (38) is verified 
F "MPI_ERR_QUOTA" (39) is verified 
F "MPI_ERR_RANK" (6) is verified 
F "MPI_ERR_READ_ONLY" (40) is verified 
F "MPI_ERR_REQUEST" (19) is verified 
F "MPI_ERR_RMA_ATTACH" (56) is verified 
F "MPI_ERR_RMA_CONFLICT" (49) is verified 
F "MPI_ERR_RMA_FLAVOR" (58) is verified 
F "MPI_ERR_RMA_RANGE" (55) is verified 
F "MPI_ERR_RMA_SHARED" (57) is verified 
F "MPI_ERR_RMA_SYNC" (50) is verified 
F "MPI_ERR_ROOT" (7) is verified 
F "MPI_ERR_SERVICE" (41) is verified 
F "MPI_ERR_SIZE" (51) is verified 
F "MPI_ERR_SPAWN" (42) is verified 
F "MPI_ERR_TAG" (4) is verified 
F "MPI_ERR_TOPOLOGY" (10) is verified 
F "MPI_ERR_TRUNCATE" (14) is verified 
F "MPI_ERR_TYPE" (3) is verified 
F "MPI_ERR_UNKNOWN" (13) is verified 
F "MPI_ERR_UNSUPPORTED_DATAREP" is not verified: (compilation).
F "MPI_ERR_UNSUPPORTED_OPERATION" is not verified: (compilation).
F "MPI_ERR_WIN" (45) is verified 
F "MPI_SUCCESS" (0) is verified 
F "MPI_T_ERR_CANNOT_INIT" is not verified: (compilation).
F "MPI_T_ERR_CVAR_SET_NEVER" is not verified: (compilation).
F "MPI_T_ERR_CVAR_SET_NOT_NOW" is not verified: (compilation).
F "MPI_T_ERR_INVALID_HANDLE" is not verified: (compilation).
F "MPI_T_ERR_INVALID_INDEX" is not verified: (compilation).
F "MPI_T_ERR_INVALID_ITEM" is not verified: (compilation).
F "MPI_T_ERR_INVALID_SESSION" is not verified: (compilation).
F "MPI_T_ERR_MEMORY" is not verified: (compilation).
F "MPI_T_ERR_NOT_INITIALIZED" is not verified: (compilation).
F "MPI_T_ERR_OUT_OF_HANDLES" is not verified: (compilation).
F "MPI_T_ERR_OUT_OF_SESSIONS" is not verified: (compilation).
F "MPI_T_ERR_PVAR_NO_ATOMIC" is not verified: (compilation).
F "MPI_T_ERR_PVAR_NO_STARTSTOP" is not verified: (compilation).
F "MPI_T_ERR_PVAR_NO_WRITE" is not verified: (compilation).
C errorcodes successful: 73 out of 73
FORTRAN errorcodes successful:57 out of 73
No errors.

Passed Extended collectives - collectives

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 4

Test Description:

Checks if "extended collectives" are supported, i.e., collective operations with MPI-2 intercommunicators.

No errors

Passed Init arguments - init_args

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 1

Test Description:

In MPI-1.1, it is explicitly stated that an implementation is allowed to require that the arguments argc and argv passed by an application to MPI_Init in C be the same arguments passed into the application as the arguments to main. In MPI-2 implementations are not allowed to impose this requirement. Conforming implementations of MPI allow applications to pass NULL for both the argc and argv arguments of MPI_Init(). This test prints the result of the error status of MPI_Init(). If the test completes without error, it reports 'No errors.'
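
The property being checked reduces to roughly this complete program (a sketch, not the test's source):

    #include <mpi.h>
    #include <stdio.h>

    int main(void)
    {
        int err = MPI_Init(NULL, NULL);   /* NULL argc/argv must be accepted */
        if (err != MPI_SUCCESS)
            fprintf(stderr, "MPI_Init(NULL, NULL) returned %d\n", err);
        MPI_Finalize();
        return 0;
    }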

MPI_INIT accepts Null arguments for MPI_init().
No errors

Passed MPI-2 replaced routines - mpi_2_functions

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 1

Test Description:

Checks the presence of all MPI-2.2 routines that replaced deprecated routines.

No errors

Passed MPI-2 type routines - mpi_2_functions_bcast

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 2

Test Description:

This test checks that a subset of MPI-2 routines that replaced MPI-1 routines work correctly.

rank:0/2 c:'C',d[6]={3.141590,1.000000,12.310000,7.980000,5.670000,12.130000},b="123456"
rank:0/2 MPI_Bcast() of struct.
No errors
rank:1/2 c:'C',d[6]={3.141590,1.000000,12.310000,7.980000,5.670000,12.130000},b="123456"
rank:1/2 MPI_Bcast() of struct.

Failed Master/slave - master

Build: Passed

Execution: Failed

Exit Status: Application_ended_with_error(s)

MPI Processes: 1

Test Description:

This test running as a single MPI process spawns four slave processes using MPI_Comm_spawn(). The master process sends and receives a message from each slave. If the test completes, it will report 'No errors.', otherwise specific error messages are listed.
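
The spawn call at the heart of the test, sketched (fragment; the child executable name "./slave" is hypothetical):

    MPI_Comm children;
    int errcodes[4];
    MPI_Comm_spawn("./slave", MPI_ARGV_NULL, 4, MPI_INFO_NULL,
                   0, MPI_COMM_SELF, &children, errcodes);
    /* The master then exchanges one message with each child over the
     * intercommunicator "children"; the failure reported below occurs
     * inside MPI_Comm_spawn itself. */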

MPI_UNIVERSE_SIZE read 1
MPI_UNIVERSE_SIZE forced to 4
master rank creating 4 slave processes.
Assertion failed in file ../src/mpid/ch4/netmod/ofi/ofi_spawn.c at line 753: 0
/opt/cray/pe/lib64/libmpi_cray.so.12(MPL_backtrace_show+0x26) [0x14c02d0ccc2b]
/opt/cray/pe/lib64/libmpi_cray.so.12(+0x1fef8d4) [0x14c02cb038d4]
/opt/cray/pe/lib64/libmpi_cray.so.12(+0x22a6fe8) [0x14c02cdbafe8]
/opt/cray/pe/lib64/libmpi_cray.so.12(+0x202ac89) [0x14c02cb3ec89]
/opt/cray/pe/lib64/libmpi_cray.so.12(+0x202b03e) [0x14c02cb3f03e]
/opt/cray/pe/lib64/libmpi_cray.so.12(MPI_Comm_spawn+0x1e2) [0x14c02c6ab212]
/var/run/palsd/0b3f3421-650f-4245-9890-f437cc2ae0ee/files/master() [0x203e09]
/lib64/libc.so.6(__libc_start_main+0xef) [0x14c029e9524d]
/var/run/palsd/0b3f3421-650f-4245-9890-f437cc2ae0ee/files/master() [0x203afa]
MPICH ERROR [Rank 0] [job id 0b3f3421-650f-4245-9890-f437cc2ae0ee] [Mon Apr 15 11:03:44 2024] [x1001c5s5b0n0] - Abort(1): Internal error
x1001c5s5b0n0.hsn.warhawk.afrl.hpc.mil: rank 0 exited with code 1

Passed One-sided communication - one_sided_modes

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 1

Test Description:

Checks MPI-2.2 one-sided communication modes, reporting those that are not defined. If the test compiles, "No errors" is reported; otherwise, all undefined modes are reported as "not defined."

No errors

Passed One-sided fences - one_sided_fences

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 2

Test Description:

Verifies that one-sided communication with active target synchronization with fences functions properly. If all operations succeed, one-sided communication with active target synchronization with fences is reported as supported. If one or more operations fail, the failures are reported and one-sided-communication with active target synchronization with fences is reported as NOT supported.

No errors

Passed One-sided passiv - one_sided_passive

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 2

Test Description:

Verifies one-sided-communication with passive target synchronization functions properly. If all operations succeed, one-sided-communication with passive target synchronization is reported as supported. If one or more operations fail, the failures are reported and one-sided-communication with passive target synchronization is reported as NOT supported.

No errors

Passed One-sided post - one_sided_post

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 2

Test Description:

Verifies one-sided-communication with active target synchronization with post/start/complete/wait functions properly. If all operations succeed, one-sided communication with active target synchronization with post/start/complete/wait is reported as supported. If one or more operations fail, the failures are reported and one-sided-communication with active target synchronization with post/start/complete/wait is reported as NOT supported.

No errors

Passed One-sided routines - one_sided_routines

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 1

Test Description:

Reports if one-sided communication routines are defined. If this test compiles, one-sided communication is reported as defined, otherwise "not supported".

No errors

Passed Thread support - thread_safety

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 1

Test Description:

Reports the level of thread support provided. This is either MPI_THREAD_SINGLE, MPI_THREAD_FUNNELED, MPI_THREAD_SERIALIZED, or MPI_THREAD_MULTIPLE.

MPI_THREAD_MULTIPLE requested.
MPI_THREAD_MULTIPLE is supported.
No errors

Group Communicator - Score: 100% Passed

This group features tests of MPI communicator group calls.

Passed MPI_Group irregular - gtranks

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 8

Test Description:

This test compares small groups against larger groups and uses groups with irregular members (to bypass optimizations in group_translate_ranks for simple groups).

No errors

Passed MPI_Group_Translate_ranks perf - gtranksperf

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 20

Test Description:

Measure and compare the relative performance of MPI_Group_translate_ranks with small and large group2 sizes but a constant number of ranks. This serves as a performance sanity check for the Scalasca use case where we translate to MPI_COMM_WORLD ranks. The performance should only depend on the number of ranks passed, not the size of either group (especially group2). This test is probably only meaningful for large-ish process counts.

No errors

Passed MPI_Group_excl basic - grouptest

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 8

Test Description:

This is a test of MPI_Group_excl().

No errors

Passed MPI_Group_incl basic - groupcreate

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 4

Test Description:

This is a simple test of creating a group array.

No errors

Passed MPI_Group_incl empty - groupnullincl

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 4

Test Description:

This is a test to determine if an empty group can be created.

No errors

Passed MPI_Group_translate_ranks - grouptest2

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 4

Test Description:

This is a test of MPI_Group_translate_ranks().

No errors

Passed Win_get_group basic - getgroup

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 4

Test Description:

This is a simple test of MPI_Win_get_group() for a selection of communicators.

No errors

Parallel Input/Output - Score: 100% Passed

This group features tests that involve MPI parallel input/output operations.

Passed Asynchronous IO basic - async_any

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 4

Test Description:

Test asynchronous I/O with multiple completion. Each process writes to separate files and reads them back.

No errors

Passed Asynchronous IO collective - async_all

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 4

Test Description:

Test asynchronous collective reading and writing. Each process asynchronously writes to a file and then reads it back.

No errors

Passed Asynchronous IO contig - async

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 4

Test Description:

Test contiguous asynchronous I/O. Each process writes to separate files and reads them back. The file name is taken as a command-line argument, and the process rank is appended to it.

No errors

Passed Asynchronous IO non-contig - i_noncontig

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 2

Test Description:

Tests noncontiguous reads/writes using non-blocking I/O.

No errors

Passed File IO error handlers - userioerr

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 1

Test Description:

This test exercises MPI I/O and MPI error handling techniques.

No errors

Passed MPI_File_get_type_extent - getextent

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 2

Test Description:

Test file_get_extent.

No errors

Passed MPI_File_set_view displacement_current - setviewcur

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 4

Test Description:

Test set_view with DISPLACEMENT_CURRENT. This test reads a header then sets the view to every "size" int, using set view and current displacement. The file is first written using a combination of collective and ordered writes.

No errors

Passed MPI_File_write_ordered basic - rdwrord

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 4

Test Description:

Test reading and writing ordered output.

No errors

Passed MPI_File_write_ordered zero - rdwrzero

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 4

Test Description:

Test reading and writing data with zero length. The test then looks for errors in the MPI IO routines and reports any that were found, otherwise "No errors" is reported.

No errors

Passed MPI_Info_set file view - setinfo

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 4

Test Description:

Test file_set_view. Access style is explicitly described as modifiable. Values include read_once, read_mostly, write_once, write_mostly, random.

No errors

Passed MPI_Type_create_resized basic - resized

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 1

Test Description:

Test file views with MPI_Type_create_resized.

No errors

Passed MPI_Type_create_resized x2 - resized2

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 1

Test Description:

Test file views with MPI_Type_create_resized, with a resizing of the resized type.

No errors

Datatypes - Score: 95% Passed

This group features tests that involve named MPI and user defined datatypes.

Passed Aint add and diff - aintmath

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 1

Test Description:

Tests the MPI 3.1 standard functions MPI_Aint_diff and MPI_Aint_add.
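
A small sketch of the two functions (fragment): they perform address arithmetic on MPI_Aint values without relying on integer casts.

    int a[2];
    MPI_Aint base, next, disp;
    MPI_Get_address(&a[0], &base);
    MPI_Get_address(&a[1], &next);
    disp = MPI_Aint_diff(next, base);   /* displacement between the two, i.e. sizeof(int) */
    next = MPI_Aint_add(base, disp);    /* recovers the address of a[1] */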

No errors

Passed Blockindexed contiguous convert - blockindexed-misc

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 1

Test Description:

This test converts a block indexed datatype to a contiguous datatype.

No errors

Passed Blockindexed contiguous zero - blockindexed-zero-count

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 1

Test Description:

This tests the behavior with a zero-count blockindexed datatype.

No errors

Passed C++ datatypes - cxx-types

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 1

Test Description:

This test checks for the existence of four new C++ named predefined datatypes that should be accessible from C and Fortran.

No errors

Passed Datatype commit-free-commit - zeroparms

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 1

Test Description:

This test creates a valid datatype, commits and frees the datatype, then repeats the process for a second datatype of the same size.

No errors

Passed Datatype get structs - get-struct

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 2

Test Description:

This test was motivated by the failure of an example program for RMA involving simple operations on a struct that included a struct. The observed failure was a SEGV in the MPI_Get.

No errors

Failed Datatype inclusive typename - typename

Build: Passed

Execution: Failed

Exit Status: Application_ended_with_error(s)

MPI Processes: 1

Test Description:

Sample some datatypes. See 8.4, "Naming Objects" in MPI-2. The default name is the same as the datatype name.

Checking type MPI_CHAR
Checking type MPI_SIGNED_CHAR
Checking type MPI_UNSIGNED_CHAR
Checking type MPI_BYTE
Checking type MPI_WCHAR
Checking type MPI_SHORT
Checking type MPI_UNSIGNED_SHORT
Checking type MPI_INT
Checking type MPI_UNSIGNED
Checking type MPI_LONG
Checking type MPI_UNSIGNED_LONG
Checking type MPI_FLOAT
Checking type MPI_DOUBLE
Checking type MPI_AINT
Checking type MPI_OFFSET
Checking type MPI_PACKED
Checking type MPI_FLOAT_INT
Checking type MPI_DOUBLE_INT
Checking type MPI_LONG_INT
Checking type MPI_SHORT_INT
Checking type MPI_2INT
Checking type MPI_COMPLEX
Checking type MPI_DOUBLE_COMPLEX
Checking type MPI_LOGICAL
Checking type MPI_REAL
Checking type MPI_DOUBLE_PRECISION
Checking type MPI_INTEGER
Checking type MPI_2INTEGER
Checking type MPI_2REAL
Checking type MPI_2DOUBLE_PRECISION
Checking type MPI_CHARACTER
Checking type MPI_INT8_T
Checking type MPI_INT16_T
Checking type MPI_INT32_T
Checking type MPI_INT64_T
Checking type MPI_UINT8_T
Checking type MPI_UINT16_T
Checking type MPI_UINT32_T
Checking type MPI_UINT64_T
Checking type MPI_C_BOOL
Checking type MPI_C_FLOAT_COMPLEX
Checking type MPI_C_DOUBLE_COMPLEX
Checking type MPI_AINT
Checking type MPI_OFFSET
Checking type MPI_REAL4
Checking type MPI_REAL8
Checking type MPI_REAL16
Checking type MPI_COMPLEX8
Checking type MPI_COMPLEX16
Checking type MPI_COMPLEX32
Checking type MPI_INTEGER1
Checking type MPI_INTEGER2
Checking type MPI_INTEGER4
Checking type MPI_INTEGER8
Checking type MPI_LONG_LONG_INT
Checking type MPI_LONG_LONG
Checking type MPI_UNSIGNED_LONG_LONG
Checking type MPI_AINT
Checking type MPI_OFFSET
Checking type MPI_COUNT
Found 2 errors
Expected MPI_C_FLOAT_COMPLEX but got MPI_C_COMPLEX
Expected MPI_LONG_LONG but got MPI_LONG_LONG_INT
x1001c5s5b0n0.hsn.warhawk.afrl.hpc.mil: rank 0 exited with code 1

Passed Datatype match size - tmatchsize

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 1

Test Description:

Test of MPI_Type_match_size(). Check the most likely cases. Note that it is an error to free the type returned by MPI_Type_match_size. Also note that it is an error to request a size not supported by the compiler, so Type_match_size should generate an error in that case.

No errors
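
A minimal sketch of the call the test exercises (illustration only, not the test source; the requested 8-byte REAL is an example):

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    MPI_Datatype t;
    int size;

    /* Ask for a predefined REAL type that is exactly 8 bytes wide.
     * The returned type is predefined and must not be freed. */
    MPI_Type_match_size(MPI_TYPECLASS_REAL, 8, &t);
    MPI_Type_size(t, &size);
    printf("matched type size = %d\n", size);   /* expected: 8 */

    MPI_Finalize();
    return 0;
}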

Passed Datatype reference count - tfree

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 2

Test Description:

Test to check if freed datatypes have reference count semantics. The idea here is to create a simple but non-contiguous datatype, perform an irecv with it, free it, and then create many new datatypes. If the datatype was freed and the space was reused, this test may detect an error.

No errors

Passed Datatypes - process_datatypes

Build: NA

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 1

Test Description:

Tests for the presence of constants from MPI-1.0 and higher. It constructs small separate main programs in either C, FORTRAN, or C++ for each datatype. It fails if any datatype is not present. ISSUE: This test may time out if separate program executions initialize slowly.

c "MPI_2DOUBLE_PRECISION" Size = 16 is verified.
c "MPI_2INT" Size = 8 is verified.
c "MPI_2INTEGER" Size = 8 is verified.
c "MPI_2REAL" Size = 8 is verified.
c "MPI_AINT" Size = 8 is verified.
c "MPI_BYTE" Size = 1 is verified.
c "MPI_C_BOOL" Size = 1 is verified.
c "MPI_C_COMPLEX" Size = 8 is verified.
c "MPI_C_DOUBLE_COMPLEX" Size = 16 is verified.
c "MPI_C_FLOAT_COMPLEX" Size = 8 is verified.
c "MPI_C_LONG_DOUBLE_COMPLEX" is not verified: (execution).
c "MPI_CHAR" Size = 1 is verified.
c "MPI_CHARACTER" Size = 1 is verified.
c "MPI_COMPLEX" Size = 8 is verified.
c "MPI_COMPLEX2" is not verified: (compilation).
c "MPI_COMPLEX4" is not verified: (compilation).
c "MPI_COMPLEX8" Size = 8 is verified.
c "MPI_COMPLEX16" Size = 16 is verified.
c "MPI_COMPLEX32" Size = 32 is verified.
c "MPI_DOUBLE" Size = 8 is verified.
c "MPI_DOUBLE_INT" Size = 12 is verified.
c "MPI_DOUBLE_COMPLEX" Size = 16 is verified.
c "MPI_DOUBLE_PRECISION" Size = 8 is verified.
c "MPI_FLOAT" Size = 4 is verified.
c "MPI_FLOAT_INT" Size = 8 is verified.
c "MPI_INT" Size = 4 is verified.
c "MPI_INT8_T" Size = 1 is verified.
c "MPI_INT16_T" Size = 2 is verified.
c "MPI_INT32_T" Size = 4 is verified.
c "MPI_INT64_T" Size = 8 is verified.
c "MPI_INTEGER" Size = 4 is verified.
c "MPI_INTEGER1" Size = 1 is verified.
c "MPI_INTEGER2" Size = 2 is verified.
c "MPI_INTEGER4" Size = 4 is verified.
c "MPI_INTEGER8" Size = 8 is verified.
c "MPI_INTEGER16" is not verified: (execution).
c "MPI_LB" Size = 0 is verified.
c "MPI_LOGICAL" Size = 4 is verified.
c "MPI_LONG" Size = 8 is verified.
c "MPI_LONG_INT" Size = 12 is verified.
c "MPI_LONG_DOUBLE" is not verified: (execution).
c "MPI_LONG_DOUBLE_INT" is not verified: (execution).
c "MPI_LONG_LONG" Size = 8 is verified.
c "MPI_LONG_LONG_INT" Size = 8 is verified.
c "MPI_OFFSET" Size = 8 is verified.
c "MPI_PACKED" Size = 1 is verified.
c "MPI_REAL" Size = 4 is verified.
c "MPI_REAL2" is not verified: (compilation).
c "MPI_REAL4" Size = 4 is verified.
c "MPI_REAL8" Size = 8 is verified.
c "MPI_REAL16" Size = 16 is verified.
c "MPI_SHORT" Size = 2 is verified.
c "MPI_SHORT_INT" Size = 6 is verified.
c "MPI_SIGNED_CHAR" Size = 1 is verified.
c "MPI_UB" Size = 0 is verified.
c "MPI_UNSIGNED_CHAR" Size = 1 is verified.
c "MPI_UNSIGNED_SHORT" Size = 2 is verified.
c "MPI_UNSIGNED" Size = 4 is verified.
c "MPI_UNSIGNED_LONG" Size = 8 is verified.
c "MPI_WCHAR" Size = 4 is verified.
c "MPI_LONG_LONG_INT" Size = 8 is verified.
c "MPI_FLOAT_INT" Size = 8 is verified.
c "MPI_DOUBLE_INT" Size = 12 is verified.
c "MPI_LONG_INT" Size = 12 is verified.
c "MPI_LONG_DOUBLE_INT" is not verified: (execution).
c "MPI_2INT" Size = 8 is verified.
c "MPI_SHORT_INT" Size = 6 is verified.
c "MPI_LONG_DOUBLE_INT" is not verified: (execution).
c "MPI_2REAL" Size = 8 is verified.
c "MPI_2DOUBLE_PRECISION" Size = 16 is verified.
c "MPI_2INTEGER" Size = 8 is verified.
C "MPI_CXX_BOOL" Size = 1 is verified.
C "MPI_CXX_FLOAT_COMPLEX" Size = 8 is verified.
C "MPI_CXX_DOUBLE_COMPLEX" Size = 16 is verified.
C "MPI_CXX_LONG_DOUBLE_COMPLEX" is not verified: (execution).
f "MPI_BYTE" Size =1 is verified.
f "MPI_CHARACTER" Size =1 is verified.
f "MPI_COMPLEX" Size =8 is verified.
f "MPI_DOUBLE_COMPLEX" Size =16 is verified.
f "MPI_DOUBLE_PRECISION" Size =8 is verified.
f "MPI_INTEGER" Size =4 is verified.
f "MPI_INTEGER1" Size =1 is verified.
f "MPI_INTEGER2" Size =2 is verified.
f "MPI_INTEGER4" Size =4 is verified.
f "MPI_LOGICAL" Size =4 is verified.
f "MPI_REAL" Size =4 is verified.
f "MPI_REAL2" is not verified: (execution).
f "MPI_REAL4" Size =4 is verified.
f "MPI_REAL8" Size =8 is verified.
f "MPI_PACKED" Size =1 is verified.
f "MPI_2REAL" Size =8 is verified.
f "MPI_2DOUBLE_PRECISION" Size =16 is verified.
f "MPI_2INTEGER" Size =8 is verified.
No errors.

Failed Datatypes basic and derived - sendrecvt2

Build: Passed

Execution: Failed

Exit Status: Failed with signal 127

MPI Processes: 2

Test Description:

This program is derived from one in the MPICH-1 test suite. It tests a wide variety of basic and derived datatypes.

MPICH ERROR [Rank 0] [job id 86ec025f-cd3e-4884-826d-302cdf5320b4] [Mon Apr 15 11:03:25 2024] [x1001c5s5b0n0] - Abort(201983491) (rank 0 in comm 0): Fatal error in PMPI_Type_contiguous: Invalid datatype, error stack:
PMPI_Type_contiguous(303): MPI_Type_contiguous(count=10, MPI_DATATYPE_NULL, new_type_p=0x11b3d0c) failed
PMPI_Type_contiguous(271): Datatype for argument datatype is a null datatype
aborting job:
Fatal error in PMPI_Type_contiguous: Invalid datatype, error stack:
PMPI_Type_contiguous(303): MPI_Type_contiguous(count=10, MPI_DATATYPE_NULL, new_type_p=0x11b3d0c) failed
PMPI_Type_contiguous(271): Datatype for argument datatype is a null datatype
MPICH ERROR [Rank 1] [job id 86ec025f-cd3e-4884-826d-302cdf5320b4] [Mon Apr 15 11:03:25 2024] [x1001c5s7b0n0] - Abort(738854403) (rank 1 in comm 0): Fatal error in PMPI_Type_contiguous: Invalid datatype, error stack:
PMPI_Type_contiguous(303): MPI_Type_contiguous(count=10, MPI_DATATYPE_NULL, new_type_p=0x22bbfec) failed
PMPI_Type_contiguous(271): Datatype for argument datatype is a null datatype
aborting job:
Fatal error in PMPI_Type_contiguous: Invalid datatype, error stack:
PMPI_Type_contiguous(303): MPI_Type_contiguous(count=10, MPI_DATATYPE_NULL, new_type_p=0x22bbfec) failed
PMPI_Type_contiguous(271): Datatype for argument datatype is a null datatype
x1001c5s7b0n0.hsn.warhawk.afrl.hpc.mil: rank 1 exited with code 255
x1001c5s5b0n0.hsn.warhawk.afrl.hpc.mil: rank 0 exited with code 255

Failed Datatypes comprehensive - sendrecvt4

Build: Passed

Execution: Failed

Exit Status: Failed with signal 127

MPI Processes: 2

Test Description:

This program is derived from one in the MPICH-1 test suite. This test sends and receives EVERYTHING from MPI_BOTTOM, by putting the data into a structure.

MPICH ERROR [Rank 0] [job id 77221a50-b944-42b8-bc50-ae16eed7f9c6] [Mon Apr 15 11:03:25 2024] [x1001c5s5b0n0] - Abort(873072131) (rank 0 in comm 0): Fatal error in PMPI_Type_contiguous: Invalid datatype, error stack:
PMPI_Type_contiguous(303): MPI_Type_contiguous(count=10, MPI_DATATYPE_NULL, new_type_p=0x15efd0c) failed
PMPI_Type_contiguous(271): Datatype for argument datatype is a null datatype
aborting job:
Fatal error in PMPI_Type_contiguous: Invalid datatype, error stack:
PMPI_Type_contiguous(303): MPI_Type_contiguous(count=10, MPI_DATATYPE_NULL, new_type_p=0x15efd0c) failed
PMPI_Type_contiguous(271): Datatype for argument datatype is a null datatype
MPICH ERROR [Rank 1] [job id 77221a50-b944-42b8-bc50-ae16eed7f9c6] [Mon Apr 15 11:03:25 2024] [x1001c5s7b0n0] - Abort(671745539) (rank 1 in comm 0): Fatal error in PMPI_Type_contiguous: Invalid datatype, error stack:
PMPI_Type_contiguous(303): MPI_Type_contiguous(count=10, MPI_DATATYPE_NULL, new_type_p=0xf10fec) failed
PMPI_Type_contiguous(271): Datatype for argument datatype is a null datatype
aborting job:
Fatal error in PMPI_Type_contiguous: Invalid datatype, error stack:
PMPI_Type_contiguous(303): MPI_Type_contiguous(count=10, MPI_DATATYPE_NULL, new_type_p=0xf10fec) failed
PMPI_Type_contiguous(271): Datatype for argument datatype is a null datatype
x1001c5s7b0n0.hsn.warhawk.afrl.hpc.mil: rank 1 exited with code 255
x1001c5s5b0n0.hsn.warhawk.afrl.hpc.mil: rank 0 exited with code 255

Passed Get_address math - gaddress

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 1

Test Description:

This routine shows how math can be used on MPI addresses and verifies that it produces the correct result.

No errors

Passed Get_elements contig - get-elements

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 1

Test Description:

Uses a contig of a struct in order to satisfy two properties: (A) a type that contains more than one element type (the struct portion) (B) a type that has an odd number of ints in its "type contents" (1 in this case). This triggers a specific bug in some versions of MPICH.

No errors

Passed Get_elements pair - get-elements-pairtype

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 1

Test Description:

Send a {double, int, double} tuple and receive it as a pair of MPI_DOUBLE_INTs. This should (a) be valid, and (b) result in an element count of 3.

No errors
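
A minimal sketch of this pattern (illustration only, not the test source; the single-process self-sendrecv on MPI_COMM_SELF and the receive-struct layout are assumptions made for the example):

#include <mpi.h>
#include <stdio.h>

struct dint { double d; int i; };

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    /* Build the {double, int, double} send type from actual addresses. */
    struct send_s { double d1; int i; double d2; } sbuf = { 1.0, 2, 3.0 };
    int          blens[3] = { 1, 1, 1 };
    MPI_Aint     disps[3], base;
    MPI_Datatype types[3] = { MPI_DOUBLE, MPI_INT, MPI_DOUBLE }, stype;

    MPI_Get_address(&sbuf, &base);
    MPI_Get_address(&sbuf.d1, &disps[0]);
    MPI_Get_address(&sbuf.i,  &disps[1]);
    MPI_Get_address(&sbuf.d2, &disps[2]);
    for (int k = 0; k < 3; k++) disps[k] = MPI_Aint_diff(disps[k], base);

    MPI_Type_create_struct(3, blens, disps, types, &stype);
    MPI_Type_commit(&stype);

    struct dint rbuf[2];
    MPI_Status  st;
    int         elements;

    /* Single process sends to itself and receives as MPI_DOUBLE_INT pairs. */
    MPI_Sendrecv(&sbuf, 1, stype, 0, 0,
                 rbuf, 2, MPI_DOUBLE_INT, 0, 0,
                 MPI_COMM_SELF, &st);
    MPI_Get_elements(&st, MPI_DOUBLE_INT, &elements);
    printf("basic elements received: %d\n", elements);  /* expected: 3 */

    MPI_Type_free(&stype);
    MPI_Finalize();
    return 0;
}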

Passed Get_elements partial - getpartelm

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 2

Test Description:

Receive partial datatypes and check that MPI_Get_elements returns the correct count.

No errors

Passed LONG_DOUBLE size - longdouble

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 1

Test Description:

This test ensures that simplistic build logic/configuration did not result in a defined, yet incorrectly sized, MPI predefined datatype for long double and long double Complex. Based on a test suggested by Jim Hoekstra @ Iowa State University. The test also considers other datatypes that are optional in the MPI-3 specification.

No errors

Passed Large counts for types - large-count

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 1

Test Description:

This test checks for large count functionality ("MPI_Count") mandated by MPI-3, as well as behavior of corresponding pre-MPI-3 interfaces that have better defined behavior when an "int" quantity would overflow.

No errors
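
A minimal sketch of the MPI_Count interface the test exercises (illustration only, not the test source; the 4 GiB contiguous type is an arbitrary example chosen to overflow an "int" byte count):

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    MPI_Datatype big;
    MPI_Count    size_x;

    /* 2^30 ints = 4 GiB, larger than INT_MAX bytes. */
    MPI_Type_contiguous(1024 * 1024 * 1024, MPI_INT, &big);
    MPI_Type_commit(&big);

    /* The _x variant reports the size in an MPI_Count, which does not overflow. */
    MPI_Type_size_x(big, &size_x);
    printf("type size = %lld bytes\n", (long long)size_x);

    MPI_Type_free(&big);
    MPI_Finalize();
    return 0;
}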

Passed Large types - large_type

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 1

Test Description:

This test checks that MPI can handle large datatypes.

No errors

Passed Local pack/unpack basic - localpack

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 1

Test Description:

This test uses MPI_Pack() on a communication buffer, then calls MPI_Unpack() to confirm that the unpacked data matches the original. This routine performs all work within a single process.

No errors
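
A minimal single-process pack/unpack sketch (illustration only, not the test source; the array contents and buffer size are arbitrary):

#include <mpi.h>
#include <stdio.h>
#include <string.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int  in[4] = { 1, 2, 3, 4 }, out[4] = { 0 };
    char packbuf[64];
    int  packsize = 0, unpackpos = 0;

    /* Pack the ints into a byte buffer, then unpack and compare. */
    MPI_Pack(in, 4, MPI_INT, packbuf, sizeof(packbuf), &packsize, MPI_COMM_WORLD);
    MPI_Unpack(packbuf, packsize, &unpackpos, out, 4, MPI_INT, MPI_COMM_WORLD);

    if (memcmp(in, out, sizeof(in)) != 0)
        printf("Found 1 error\n");
    else
        printf("No errors\n");

    MPI_Finalize();
    return 0;
}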

Passed Noncontiguous datatypes - unusual-noncontigs

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 1

Test Description:

This test uses a structure datatype that describes data that is contiguous, but is manipulated as if it is noncontiguous. The test is designed to expose flaws in MPI memory management should they exist.

No errors

Passed Pack basic - simple-pack

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 1

Test Description:

Tests functionality of MPI_Type_get_envelope() and MPI_Type_get_contents() on an MPI_FLOAT. Returns the number of errors encountered.

No errors

Passed Pack/Unpack matrix transpose - transpose-pack

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 1

Test Description:

This test confirms that an MPI packed matrix can be unpacked correctly by the MPI infrastructure.

No errors

Passed Pack/Unpack multi-struct - struct-pack

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 1

Test Description:

This test confirms that packed structures, including array-of-struct and struct-of-struct unpack properly.

No errors

Passed Pack/Unpack sliced - slice-pack

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 1

Test Description:

This test confirms that a sliced array packs and unpacks properly.

No errors

Passed Pack/Unpack struct - structpack2

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 1

Test Description:

This test confirms that a packed structure unpacks properly.

No errors

Passed Pack_external_size - simple-pack-external

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 1

Test Description:

Tests functionality of MPI_Type_get_envelope() and MPI_Type_get_contents() on a packed-external MPI_FLOAT. Returns the number of errors encountered.

No errors

Passed Pair types optional - pairtype-size-extent

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 1

Test Description:

Check for optional datatypes such as LONG_DOUBLE_INT.

No errors

Passed Simple contig datatype - contigstruct

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 1

Test Description:

This test checks to see if we can create a simple datatype made from many contiguous copies of a single struct. The struct is built with monotone decreasing displacements to avoid any struct->contig optimizations.

No errors

Passed Simple zero contig - contig-zero-count

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 1

Test Description:

Tests behavior with a zero-count contig datatype.

No errors

Passed Struct zero count - struct-zero-count

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 1

Test Description:

Tests behavior with a zero-count struct of builtins.

No errors

Passed Type_commit basic - simple-commit

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 1

Test Description:

Test that verifies that MPI_Type_commit succeeds.

No errors

Passed Type_create_darray cyclic - darray-cyclic

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 12

Test Description:

Several cyclic checks of a custom struct darray.

No errors

Passed Type_create_darray pack - darray-pack

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 9

Test Description:

Performs a sequence of tests building darrays with single-element blocks, running through all the various positions that the element might come from.

No errors

Passed Type_create_darray pack many rank - darray-pack_72

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 32

Test Description:

Performs a sequence of tests building darrays with single-element blocks, running through all the various positions that the element might come from. Should be run with many ranks (at least 32).

No errors

Passed Type_create_hindexed_block - hindexed_block

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 1

Test Description:

Tests behavior with a hindexed_block that can be converted to a contig easily. This is specifically for coverage. Returns the number of errors encountered.

No errors

Passed Type_create_hindexed_block contents - hindexed_block_contents

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 1

Test Description:

This test is a simple check of MPI_Type_create_hindexed_block() using MPI_Type_get_envelope() and MPI_Type_get_contents().

No errors

Passed Type_create_resized - simple-resized

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 1

Test Description:

Tests behavior with resizing of a simple derived type.

No errors
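
A minimal resizing sketch (illustration only, not the test source; the vector shape and the 4-double extent are arbitrary choices):

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    MPI_Datatype vec, resized;
    MPI_Aint     lb, extent;

    /* Two blocks of one double, stride of two doubles. */
    MPI_Type_vector(2, 1, 2, MPI_DOUBLE, &vec);

    /* Stretch the extent to four doubles so counts > 1 stride as intended. */
    MPI_Type_create_resized(vec, 0, 4 * sizeof(double), &resized);
    MPI_Type_commit(&resized);

    MPI_Type_get_extent(resized, &lb, &extent);
    printf("lb=%ld extent=%ld\n", (long)lb, (long)extent);  /* lb=0, extent=32 */

    MPI_Type_free(&vec);
    MPI_Type_free(&resized);
    MPI_Finalize();
    return 0;
}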

Passed Type_create_resized 0 lower bound - tresized

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 2

Test Description:

Test of MPI datatype resized with 0 lower bound.

No errors

Passed Type_create_resized lower bound - tresized2

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 2

Test Description:

Test of MPI datatype resized with non-zero lower bound.

No errors

Passed Type_create_subarray basic - subarray

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 2

Test Description:

This test creates a subarray and confirms its contents.

No errors
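
A minimal subarray sketch (illustration only, not the test source; the 4x4 array and 2x2 interior block are arbitrary):

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int sizes[2]    = { 4, 4 };   /* full array dimensions */
    int subsizes[2] = { 2, 2 };   /* selected block        */
    int starts[2]   = { 1, 1 };   /* block origin          */
    MPI_Datatype sub;
    int size;

    MPI_Type_create_subarray(2, sizes, subsizes, starts,
                             MPI_ORDER_C, MPI_INT, &sub);
    MPI_Type_commit(&sub);

    MPI_Type_size(sub, &size);
    printf("subarray carries %d bytes\n", size);  /* 4 ints = 16 bytes */

    MPI_Type_free(&sub);
    MPI_Finalize();
    return 0;
}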

Passed Type_create_subarray pack/unpack - subarray-pack

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 1

Test Description:

This test confirms that a packed sub-array can be properly unpacked.

No errors

Passed Type_free memory - typefree

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 1

Test Description:

This test is used to confirm that memory is properly recovered from freed datatypes. The test may be run with valgrind or similar tools, or it may be run with MPI implementation specific options. For this test it is run only with standard MPI error checking enabled.

No errors

Passed Type_get_envelope basic - contents

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 1

Test Description:

This tests the functionality of MPI_Type_get_envelope() and MPI_Type_get_contents().

No errors

Passed Type_hindexed zero - hindexed-zeros

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 1

Test Description:

Tests hindexed types with all zero length blocks.

No errors

Passed Type_hvector counts - struct-derived-zeros

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 1

Test Description:

Tests vector and struct type creation and commits with varying counts and odd displacements.

No errors

Passed Type_hvector_blklen loop - hvecblklen

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 1

Test Description:

Inspired by the Intel MPI_Type_hvector_blklen test. Added to include a test of a dataloop optimization that failed.

No errors

Passed Type_indexed many - lots-of-types

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 1

Test Description:

No errors

Passed Type_indexed not compacted - indexed-misc

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 1

Test Description:

Tests behavior with an indexed array that can be compacted but should continue to be stored as an indexed type. Specifically for coverage. Returns the number of errors encountered.

No errors

Passed Type_struct basic - struct-empty-el

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 1

Test Description:

This test creates an MPI_Type_struct() datatype, assigns data and sends the structure to a second process. The second process receives the structure and confirms that the information contained in the structure agrees with the original data.

No errors

Passed Type_struct() alignment - dataalign

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 2

Test Description:

This routine checks the alignment of a custom datatype.

No errors

Passed Type_vector blklen - vecblklen

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 1

Test Description:

This test is inspired by the Intel MPI_Type_vector_blklen test. The test fundamentally tries to deceive MPI into scrambling the data using padded struct types, and MPI_Pack() and MPI_Unpack(). The data is then checked to make sure the original data was not lost in the process. If "No errors" is reported, then the MPI functions that manipulated the data did not corrupt the test data.

No errors

Passed Type_{lb,ub,extent} - typelb

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 1

Test Description:

This test checks that both the upper and lower bounds of an hindexed MPI type are correct.

No errors

Passed Zero sized blocks - zeroblks

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 1

Test Description:

This test creates an empty packed indexed type, and then checks that the last 40 entries of the unpacked recv_buffer have the corresponding elements from the send buffer.

No errors

Collectives - Score: 97% Passed

This group features tests of utilizing MPI collectives.

Passed Allgather basic - allgatherv3

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 10

Test Description:

Gather data from a vector to a contiguous vector for a selection of communicators. This is the trivial version based on the allgather test (allgatherv but with constant data sizes).

No errors

Passed Allgather double zero - allgather3

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 10

Test Description:

This test is similar to "Allgather in-place null", but uses MPI_DOUBLE with separate input and output arrays and performs an additional test for a zero byte gather operation.

No errors

Passed Allgather in-place null - allgather2

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 10

Test Description:

This is a test of MPI_Allgather() using MPI_IN_PLACE and MPI_DATATYPE_NULL to repeatedly gather data from a vector that increases in size each iteration for a selection of communicators.

No errors

Passed Allgather intercommunicators - icallgather

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 5

Test Description:

Allgather tests using a selection of intercommunicators and increasing array sizes. Processes are split into two groups and MPI_Allgather() is used to have each group send data to the other group and to send data from one group to the other.

No errors

Passed Allgatherv 2D - coll6

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 5

Test Description:

This test uses MPI_Allgatherv() to define a two-dimensional table.

No errors

Passed Allgatherv in-place - allgatherv2

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 10

Test Description:

Gather data from a vector to a contiguous vector using MPI_IN_PLACE for a selection of communicators. This is the trivial version based on the coll/allgather tests with constant data sizes.

No errors

Passed Allgatherv intercommunicators - icallgatherv

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 7

Test Description:

Allgatherv test using a selection of intercommunicators and increasing array sizes. Processes are split into two groups and MPI_Allgatherv() is used to have each group send data to the other group and to send data from one group to the other. Similar to Allgather test (coll/icallgather).

No errors

Passed Allgatherv large - coll7

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 5

Test Description:

This test is the same as Allgatherv basic (coll/coll6) except the size of the table is greater than the number of processors.

No errors

Passed Allreduce flood - allredmany

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 4

Test Description:

Tests the ability of the implementation to handle a flood of one-way messages by repeatedly calling MPI_Allreduce(). Test should be run with 2 processes.

No errors

Passed Allreduce in-place - allred2

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 4

Test Description:

MPI_Allreduce() Test using MPI_IN_PLACE for a selection of communicators.

No errors

Passed Allreduce intercommunicators - icallreduce

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 5

Test Description:

Allreduce test using a selection of intercommunicators and increasing array sizes.

No errors

Passed Allreduce mat-mult - allred3

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 10

Test Description:

This test implements a simple matrix-matrix multiply for a selection of communicators using a user-defined operation for MPI_Allreduce(). This is an associative but not commutative operation. The number of matrices is the count argument, which is currently set to 1. Each matrix is matSize x matSize and is stored in C order, so that c(i,j) = cin[j+i*matSize].

No errors

Passed Allreduce non-commutative - allred6

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 10

Test Description:

This tests MPI_Allreduce() using apparent non-commutative operators using a selection of communicators. This forces MPI to run code used for non-commutative operators.

No errors

Passed Allreduce operations - allred

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 7

Test Description:

This tests all possible MPI operation codes using the MPI_Allreduce() routine.

No errors

Passed Allreduce user-defined - allred4

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 4

Test Description:

This example tests MPI_Allreduce() with user-defined operations using a selection of communicators similar to coll/allred3, but uses 3x3 matrices with integer-valued entries. This is an associative but not commutative operation. The number of matrices is the count argument. Tests using separate input and output matrices and using MPI_IN_PLACE. The matrix is stored in C order.

No errors
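
The test itself uses 3x3 integer matrices; as an illustration of the same machinery (MPI_Op_create with commute = 0 and MPI_Allreduce over a user-defined matrix type), here is a minimal 2x2 sketch that is not the test source:

#include <mpi.h>
#include <stdio.h>

#define N 2
typedef struct { int m[N][N]; } mat_t;

/* inoutvec[i] = invec[i] * inoutvec[i] (matrix product, not commutative). */
static void matmul_op(void *invec, void *inoutvec, int *len, MPI_Datatype *dt)
{
    mat_t *a = (mat_t *)invec, *b = (mat_t *)inoutvec;
    (void)dt;
    for (int k = 0; k < *len; k++) {
        mat_t c = { { {0, 0}, {0, 0} } };
        for (int i = 0; i < N; i++)
            for (int j = 0; j < N; j++)
                for (int l = 0; l < N; l++)
                    c.m[i][j] += a[k].m[i][l] * b[k].m[l][j];
        b[k] = c;
    }
}

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    MPI_Datatype mat_type;
    MPI_Type_contiguous(N * N, MPI_INT, &mat_type);
    MPI_Type_commit(&mat_type);

    MPI_Op op;
    MPI_Op_create(matmul_op, 0 /* not commutative */, &op);

    /* Each rank contributes a shear matrix [[1, rank], [0, 1]]. */
    mat_t mine = { { {1, rank}, {0, 1} } }, result;
    MPI_Allreduce(&mine, &result, 1, mat_type, op, MPI_COMM_WORLD);

    if (rank == 0)
        printf("result[0][1] = %d\n", result.m[0][1]);  /* sum of all ranks */

    MPI_Op_free(&op);
    MPI_Type_free(&mat_type);
    MPI_Finalize();
    return 0;
}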

Passed Allreduce user-defined long - longuser

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 4

Test Description:

Tests user-defined operation on a long value. Tests proper handling of possible pipelining in the implementation of reductions with user-defined operations.

No errors

Passed Allreduce vector size - allred5

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 5

Test Description:

This tests MPI_Allreduce() using vectors with size greater than the number of processes for a selection of communicators.

No errors

Passed Alltoall basic - coll13

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 4

Test Description:

Simple test for MPI_Alltoall().

No errors

Passed Alltoall communicators - alltoall1

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 8

Test Description:

Tests MPI_Alltoall() by calling it with a selection of communicators and datatypes. Includes test using MPI_IN_PLACE.

No errors

Passed Alltoall intercommunicators - icalltoall

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 7

Test Description:

Alltoall test using a selection of intercommunicators and increasing array sizes.

No errors

Passed Alltoall threads - alltoall

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 4

Test Description:

The listener thread waits for communication from any source (including the calling thread), accepting messages tagged REQ_TAG. Each thread enters an infinite loop that stops only when every node in MPI_COMM_WORLD sends a message containing -1.

No errors

Passed Alltoallv communicators - alltoallv

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 10

Test Description:

This program tests MPI_Alltoallv() by having each processor send different amounts of data to each processor using a selection of communicators. The test uses only MPI_INT which is adequate for testing systems that use point-to-point operations. Includes test using MPI_IN_PLACE.

No errors

Passed Alltoallv halo exchange - alltoallv0

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 10

Test Description:

This tests MPI_Alltoallv() by having each processor send data to two neighbors only, using counts of 0 for the other neighbors for a selection of communicators. This idiom is sometimes used for halo exchange operations. The test uses MPI_INT which is adequate for testing systems that use point-to-point operations.

No errors
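
A minimal sketch of the zero-count halo-exchange idiom described above (illustration only, not the test source; a ring over MPI_COMM_WORLD with one int per neighbor is assumed):

#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    int *scounts = calloc(size, sizeof(int));
    int *rcounts = calloc(size, sizeof(int));
    int *sdispls = calloc(size, sizeof(int));
    int *rdispls = calloc(size, sizeof(int));
    int  sbuf[2] = { rank, rank }, rbuf[2] = { -1, -1 };

    int left  = (rank - 1 + size) % size;
    int right = (rank + 1) % size;

    /* Send one int to each ring neighbor; every other count stays 0. */
    scounts[left]  = 1; sdispls[left]  = 0;
    scounts[right] = 1; sdispls[right] = 1;
    rcounts[left]  = 1; rdispls[left]  = 0;
    rcounts[right] = 1; rdispls[right] = 1;

    MPI_Alltoallv(sbuf, scounts, sdispls, MPI_INT,
                  rbuf, rcounts, rdispls, MPI_INT, MPI_COMM_WORLD);

    printf("rank %d received %d (left) and %d (right)\n", rank, rbuf[0], rbuf[1]);

    free(scounts); free(rcounts); free(sdispls); free(rdispls);
    MPI_Finalize();
    return 0;
}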

Passed Alltoallv intercommunicators - icalltoallv

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 5

Test Description:

This program tests MPI_Alltoallv using an int array and a selection of intercommunicators by having each process send different amounts of data to each process. This test sends i items to process i from all processes.

No errors

Passed Alltoallw intercommunicators - icalltoallw

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 7

Test Description:

This program tests MPI_Alltoallw by having each process send different amounts of data to each process. This test is similar to the Alltoallv test (coll/icalltoallv), but with displacements in bytes rather than units of the datatype. This test sends i items to process i from all processes.

No errors

Passed Alltoallw matrix transpose - alltoallw1

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 10

Test Description:

Tests MPI_Alltoallw() by performing a blocked matrix transpose operation. This more detailed example test was taken from MPI - The Complete Reference, Vol 1, p 222-224. Please refer to this reference for more details of the test.

Allocated local arrays
M = 20, N = 30
Allocated local arrays
M = 20, N = 30
Begin Alltoallw...
Allocated local arrays
M = 20, N = 30
Begin Alltoallw...
Allocated local arrays
M = 20, N = 30
Begin Alltoallw...
Allocated local arrays
M = 20, N = 30
Begin Alltoallw...
Begin Alltoallw...
Allocated local arrays
M = 20, N = 30
Allocated local arrays
M = 20, N = 30
Begin Alltoallw...
Allocated local arrays
M = 20, N = 30
Begin Alltoallw...
Allocated local arrays
M = 20, N = 30
Begin Alltoallw...
Allocated local arrays
M = 20, N = 30
Begin Alltoallw...
Begin Alltoallw...
Done with Alltoallw
Done with Alltoallw
Done with Alltoallw
No errors
Done with Alltoallw
Done with Alltoallw
Done with Alltoallw
Done with Alltoallw
Done with Alltoallw
Done with Alltoallw
Done with Alltoallw

Passed Alltoallw matrix transpose comm - alltoallw2

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 10

Test Description:

This program tests MPI_Alltoallw() by having each processor send different amounts of data to all processors. This is similar to the "Alltoallv communicators" test, but with displacements in bytes rather than units of the datatype. Currently, the test uses only MPI_INT which is adequate for testing systems that use point-to-point operations. Includes test using MPI_IN_PLACE.

No errors

Passed Alltoallw zero types - alltoallw_zeros

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 8

Test Description:

This test makes sure that counts with non-zero-sized types on the send (recv) side match and don't cause a problem with non-zero counts and zero-sized types on the recv (send) side when using MPI_Alltoallw and MPI_Alltoallv. Includes tests using MPI_IN_PLACE.

No errors

Passed BAND operations - opband

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 4

Test Description:

Test MPI_BAND (bitwise and) operations using MPI_Reduce() on optional datatypes. Note that failing this test does not mean that there is something wrong with the MPI implementation.

Reduce of MPI_CHAR
Reduce of MPI_CHAR
Reduce of MPI_SIGNED_CHAR
Reduce of MPI_UNSIGNED_CHAR
Reduce of MPI_BYTE
Reduce of MPI_SHORT
Reduce of MPI_UNSIGNED_SHORT
Reduce of MPI_UNSIGNED
Reduce of MPI_LONG
Reduce of MPI_UNSIGNED_LONG
Reduce of MPI_LONG_LONG
Reduce of MPI_CHAR
Reduce of MPI_SIGNED_CHAR
Reduce of MPI_UNSIGNED_CHAR
Reduce of MPI_BYTE
Reduce of MPI_SHORT
Reduce of MPI_UNSIGNED_SHORT
Reduce of MPI_UNSIGNED
Reduce of MPI_LONG
Reduce of MPI_UNSIGNED_LONG
Reduce of MPI_LONG_LONG
No errors
Reduce of MPI_CHAR
Reduce of MPI_SIGNED_CHAR
Reduce of MPI_UNSIGNED_CHAR
Reduce of MPI_BYTE
Reduce of MPI_SHORT
Reduce of MPI_UNSIGNED_SHORT
Reduce of MPI_UNSIGNED
Reduce of MPI_LONG
Reduce of MPI_UNSIGNED_LONG
Reduce of MPI_LONG_LONG
Reduce of MPI_SIGNED_CHAR
Reduce of MPI_UNSIGNED_CHAR
Reduce of MPI_BYTE
Reduce of MPI_SHORT
Reduce of MPI_UNSIGNED_SHORT
Reduce of MPI_UNSIGNED
Reduce of MPI_LONG
Reduce of MPI_UNSIGNED_LONG
Reduce of MPI_LONG_LONG

Passed BOR operations - opbor

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 4

Test Description:

Test MPI_BOR (bitwise or) operations using MPI_Reduce() on optional datatypes. Note that failing this test does not mean that there is something wrong with the MPI implementation.

Reduce of MPI_CHAR
Reduce of MPI_CHAR
Reduce of MPI_SIGNED_CHAR
Reduce of MPI_UNSIGNED_CHAR
Reduce of MPI_BYTE
Reduce of MPI_SHORT
Reduce of MPI_UNSIGNED_SHORT
Reduce of MPI_UNSIGNED
Reduce of MPI_INT
Reduce of MPI_LONG
Reduce of MPI_UNSIGNED_LONG
Reduce of MPI_LONG_LONG
Reduce of MPI_CHAR
Reduce of MPI_SIGNED_CHAR
Reduce of MPI_UNSIGNED_CHAR
Reduce of MPI_BYTE
Reduce of MPI_SHORT
Reduce of MPI_UNSIGNED_SHORT
Reduce of MPI_UNSIGNED
Reduce of MPI_INT
Reduce of MPI_LONG
Reduce of MPI_UNSIGNED_LONG
Reduce of MPI_LONG_LONG
No errors
Reduce of MPI_CHAR
Reduce of MPI_SIGNED_CHAR
Reduce of MPI_UNSIGNED_CHAR
Reduce of MPI_BYTE
Reduce of MPI_SHORT
Reduce of MPI_UNSIGNED_SHORT
Reduce of MPI_UNSIGNED
Reduce of MPI_INT
Reduce of MPI_LONG
Reduce of MPI_UNSIGNED_LONG
Reduce of MPI_LONG_LONG
Reduce of MPI_SIGNED_CHAR
Reduce of MPI_UNSIGNED_CHAR
Reduce of MPI_BYTE
Reduce of MPI_SHORT
Reduce of MPI_UNSIGNED_SHORT
Reduce of MPI_UNSIGNED
Reduce of MPI_INT
Reduce of MPI_LONG
Reduce of MPI_UNSIGNED_LONG
Reduce of MPI_LONG_LONG

Passed BXOR Operations - opbxor

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 4

Test Description:

Test MPI_BXOR (bitwise excl or) operations using MPI_Reduce() on optional datatypes. Note that failing this test does not mean that there is something wrong with the MPI implementation.

Reduce of MPI_CHAR
Reduce of MPI_CHAR
Reduce of MPI_SIGNED_CHAR
Reduce of MPI_UNSIGNED_CHAR
Reduce of MPI_BYTE
Reduce of MPI_SHORT
Reduce of MPI_UNSIGNED_SHORT
Reduce of MPI_UNSIGNED
Reduce of MPI_INT
Reduce of MPI_LONG
Reduce of MPI_UNSIGNED_LONG
Reduce of MPI_LONG_LONG
Reduce of MPI_CHAR
Reduce of MPI_SIGNED_CHAR
Reduce of MPI_UNSIGNED_CHAR
Reduce of MPI_BYTE
Reduce of MPI_SHORT
Reduce of MPI_UNSIGNED_SHORT
Reduce of MPI_UNSIGNED
Reduce of MPI_INT
Reduce of MPI_LONG
Reduce of MPI_UNSIGNED_LONG
Reduce of MPI_LONG_LONG
No errors
Reduce of MPI_CHAR
Reduce of MPI_SIGNED_CHAR
Reduce of MPI_UNSIGNED_CHAR
Reduce of MPI_BYTE
Reduce of MPI_SHORT
Reduce of MPI_UNSIGNED_SHORT
Reduce of MPI_UNSIGNED
Reduce of MPI_INT
Reduce of MPI_LONG
Reduce of MPI_UNSIGNED_LONG
Reduce of MPI_LONG_LONG
Reduce of MPI_SIGNED_CHAR
Reduce of MPI_UNSIGNED_CHAR
Reduce of MPI_BYTE
Reduce of MPI_SHORT
Reduce of MPI_UNSIGNED_SHORT
Reduce of MPI_UNSIGNED
Reduce of MPI_INT
Reduce of MPI_LONG
Reduce of MPI_UNSIGNED_LONG
Reduce of MPI_LONG_LONG

Passed Barrier intercommunicators - icbarrier

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 7

Test Description:

This test checks that MPI_Barrier() accepts intercommunicators. It does not check the semantics of an intercomm barrier (all processes in the local group can exit when, but not before, all processes in the remote group enter the barrier).

No errors

Passed Bcast basic - bcast2

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 10

Test Description:

Test broadcast with various roots, datatypes, and communicators.

No errors

Passed Bcast intercommunicators - icbcast

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 10

Test Description:

Broadcast test using a selection of intercommunicators and increasing array sizes.

No errors

Passed Bcast intermediate - bcast3

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 10

Test Description:

Test broadcast with various roots, datatypes, sizes that are not powers of two, larger message sizes, and communicators.

No errors

Passed Bcast sizes - bcasttest

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 10

Test Description:

Tests MPI_Bcast() repeatedly using MPI_INT with a selection of data sizes.

No errors

Passed Bcast zero types - bcastzerotype

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 10

Test Description:

Tests broadcast behavior with non-zero counts but zero-sized types.

No errors

Passed Collectives array-of-struct - coll12

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 4

Test Description:

Tests various calls to MPI_Reduce(), MPI_Bcast(), and MPI_Allreduce() using arrays of structs.

No errors

Passed Exscan basic - exscan2

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 5

Test Description:

Simple test of MPI_Exscan() using single element int arrays.

No errors
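
A minimal exclusive-scan sketch (illustration only, not the test source; summing ranks is an arbitrary example):

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    int rank, sum = 0;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* Exclusive prefix sum; on rank 0 the result buffer is left undefined. */
    MPI_Exscan(&rank, &sum, 1, MPI_INT, MPI_SUM, MPI_COMM_WORLD);

    if (rank > 0)   /* expected: 0 + 1 + ... + (rank-1) */
        printf("rank %d: exclusive prefix sum = %d\n", rank, sum);

    MPI_Finalize();
    return 0;
}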

Passed Exscan communicators - exscan

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 10

Test Description:

Tests MPI_Exscan() using int arrays and a selection of communicators and array sizes. Includes tests using MPI_IN_PLACE.

No errors

Passed Extended collectives - collectives

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 4

Test Description:

Checks if "extended collectives" are supported, i.e., collective operations with MPI-2 intercommunicators.

No errors

Passed Gather 2D - coll2

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 5

Test Description:

This test uses MPI_Gather() to define a two-dimensional table.

No errors

Passed Gather basic - gather2

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 4

Test Description:

This tests gathers data from a vector to contiguous datatype using doubles for a selection of communicators and array sizes. Includes test for zero length gather using MPI_IN_PLACE.

No errors

Passed Gather communicators - gather

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 4

Test Description:

This test gathers data from a vector to contiguous datatype using a double vector for a selection of communicators. Includes a zero length gather and a test to ensure aliasing is disallowed correctly.

No errors

Passed Gather intercommunicators - icgather

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 7

Test Description:

Gather test using a selection of intercommunicators and increasing array sizes.

No errors

Passed Gatherv 2D - coll3

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 5

Test Description:

This test uses MPI_Gatherv() to define a two-dimensional table. This test is similar to Gather test (coll/coll2).

No errors

Passed Gatherv intercommunicators - icgatherv

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 7

Test Description:

Gatherv test using a selection of intercommunicators and increasing array sizes.

No errors

Passed Iallreduce basic - iallred

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 2

Test Description:

Simple test for MPI_Iallreduce() and MPI_Allreduce().

No errors

Passed Ibarrier - ibarrier

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 2

Test Description:

This test calls MPI_Ibarrier() followed by a conditional loop containing usleep(1000) and MPI_Test(). This test hung indefinitely under some MPI implementations.

No errors
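
A minimal sketch of the polling pattern the description refers to (illustration only, not the test source; the 1 ms back-off is arbitrary):

#include <mpi.h>
#include <stdio.h>
#include <unistd.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    MPI_Request req;
    int done = 0;

    MPI_Ibarrier(MPI_COMM_WORLD, &req);
    while (!done) {
        usleep(1000);                           /* back off for ~1 ms */
        MPI_Test(&req, &done, MPI_STATUS_IGNORE);
    }
    printf("No errors\n");

    MPI_Finalize();
    return 0;
}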

Failed LAND operations - opland

Build: Passed

Execution: Failed

Exit Status: Failed with signal 15

MPI Processes: 4

Test Description:

Test MPI_LAND (logical and) operations using MPI_Reduce() on optional datatypes. Note that failing this test does not mean that there is something wrong with the MPI implementation.

Reduce of MPI_CHAR
Reduce of MPI_CHAR
Reduce of MPI_SIGNED_CHAR
Reduce of MPI_UNSIGNED_CHAR
Reduce of MPI_FLOAT
Reduce of MPI_CHAR
Reduce of MPI_SIGNED_CHAR
Reduce of MPI_UNSIGNED_CHAR
Reduce of MPI_FLOAT
Reduce of MPI_DOUBLE
Reduce of MPI_LONG_LONG
Assertion failed in file ../src/mpi/coll/op/opland.c at line 65: 0
Reduce of MPI_SIGNED_CHAR
Reduce of MPI_UNSIGNED_CHAR
Reduce of MPI_FLOAT
Reduce of MPI_CHAR
Reduce of MPI_SIGNED_CHAR
Reduce of MPI_UNSIGNED_CHAR
Reduce of MPI_FLOAT
Reduce of MPI_DOUBLE
Reduce of MPI_LONG_LONG
/opt/cray/pe/lib64/libmpi_cray.so.12(MPL_backtrace_show+0x26) [0x14e7218f5c2b]
/opt/cray/pe/lib64/libmpi_cray.so.12(+0x1fef8d4) [0x14e72132c8d4]
/opt/cray/pe/lib64/libmpi_cray.so.12(+0x463aa9) [0x14e71f7a0aa9]
/opt/cray/pe/lib64/libmpi_cray.so.12(+0x1bcf125) [0x14e720f0c125]
/opt/cray/pe/lib64/libmpi_cray.so.12(+0x354510) [0x14e71f691510]
/opt/cray/pe/lib64/libmpi_cray.so.12(+0x3b24ef) [0x14e71f6ef4ef]
/opt/cray/pe/lib64/libmpi_cray.so.12(PMPI_Reduce+0x578) [0x14e71f6f0368]
/var/run/palsd/645c690f-71d1-4903-a5d8-502215b098d5/files/opland() [0x203ff6]
/lib64/libc.so.6(__libc_start_main+0xef) [0x14e71e6be24d]
/var/run/palsd/645c690f-71d1-4903-a5d8-502215b098d5/files/opland() [0x203aea]
MPICH ERROR [Rank 2] [job id 645c690f-71d1-4903-a5d8-502215b098d5] [Mon Apr 15 11:03:07 2024] [x1001c5s7b0n0] - Abort(1): Internal error
x1001c5s7b0n0.hsn.warhawk.afrl.hpc.mil: rank 2 exited with code 1
x1001c5s5b0n0.hsn.warhawk.afrl.hpc.mil: rank 1 died from signal 15
x1001c5s7b0n0.hsn.warhawk.afrl.hpc.mil: rank 3 died from signal 15
x1001c5s5b0n0.hsn.warhawk.afrl.hpc.mil: rank 0 died from signal 15

Failed LOR operations - oplor

Build: Passed

Execution: Failed

Exit Status: Failed with signal 15

MPI Processes: 4

Test Description:

Test MPI_LOR (logical or) operations using MPI_Reduce() on optional datatypes. Note that failing this test does not mean that there is something wrong with the MPI implementation.

Reduce of MPI_CHAR
Reduce of MPI_CHAR
Reduce of MPI_SIGNED_CHAR
Reduce of MPI_UNSIGNED_CHAR
Reduce of MPI_FLOAT
Reduce of MPI_DOUBLE
Reduce of MPI_LONG_LONG
Reduce of MPI_CHAR
Reduce of MPI_SIGNED_CHAR
Reduce of MPI_UNSIGNED_CHAR
Reduce of MPI_FLOAT
Reduce of MPI_CHAR
Reduce of MPI_SIGNED_CHAR
Reduce of MPI_UNSIGNED_CHAR
Reduce of MPI_FLOAT
Reduce of MPI_DOUBLE
Reduce of MPI_LONG_LONG
Reduce of MPI_SIGNED_CHAR
Reduce of MPI_UNSIGNED_CHAR
Reduce of MPI_FLOAT
Assertion failed in file ../src/mpi/coll/op/oplor.c at line 65: 0
/opt/cray/pe/lib64/libmpi_cray.so.12(MPL_backtrace_show+0x26) [0x14989ed88c2b]
/opt/cray/pe/lib64/libmpi_cray.so.12(+0x1fef8d4) [0x14989e7bf8d4]
/opt/cray/pe/lib64/libmpi_cray.so.12(+0x463aa9) [0x14989cc33aa9]
/opt/cray/pe/lib64/libmpi_cray.so.12(+0x1bcf125) [0x14989e39f125]
/opt/cray/pe/lib64/libmpi_cray.so.12(+0x354510) [0x14989cb24510]
/opt/cray/pe/lib64/libmpi_cray.so.12(+0x3b24ef) [0x14989cb824ef]
/opt/cray/pe/lib64/libmpi_cray.so.12(PMPI_Reduce+0x578) [0x14989cb83368]
/var/run/palsd/c6262c01-5fe9-4a40-8c69-4f6ce0dcd202/files/oplor() [0x203fcd]
/lib64/libc.so.6(__libc_start_main+0xef) [0x14989bb5124d]
/var/run/palsd/c6262c01-5fe9-4a40-8c69-4f6ce0dcd202/files/oplor() [0x203aca]
MPICH ERROR [Rank 2] [job id c6262c01-5fe9-4a40-8c69-4f6ce0dcd202] [Mon Apr 15 11:03:07 2024] [x1001c5s7b0n0] - Abort(1): Internal error
x1001c5s7b0n0.hsn.warhawk.afrl.hpc.mil: rank 2 exited with code 1
x1001c5s7b0n0.hsn.warhawk.afrl.hpc.mil: rank 3 died from signal 15
x1001c5s5b0n0.hsn.warhawk.afrl.hpc.mil: rank 1 died from signal 15
x1001c5s5b0n0.hsn.warhawk.afrl.hpc.mil: rank 0 died from signal 15

Passed LXOR operations - oplxor

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 5

Test Description:

Test MPI_LXOR (logical excl or) operations using MPI_Reduce() on optional datatypes. Note that failing this test does not mean that there is something wrong with the MPI implementation.

Reduce of MPI_CHAR
Reduce of MPI_CHAR
Reduce of MPI_SIGNED_CHAR
Reduce of MPI_UNSIGNED_CHAR
Reduce of MPI_FLOAT
Reduce of MPI_DOUBLE
Reduce of MPI_LONG_LONG
Reduce of MPI_CHAR
Reduce of MPI_CHAR
Reduce of MPI_SIGNED_CHAR
Reduce of MPI_UNSIGNED_CHAR
Reduce of MPI_FLOAT
Reduce of MPI_DOUBLE
Reduce of MPI_LONG_LONG
No errors
Reduce of MPI_CHAR
Reduce of MPI_SIGNED_CHAR
Reduce of MPI_UNSIGNED_CHAR
Reduce of MPI_FLOAT
Reduce of MPI_DOUBLE
Reduce of MPI_LONG_LONG
Reduce of MPI_SIGNED_CHAR
Reduce of MPI_UNSIGNED_CHAR
Reduce of MPI_FLOAT
Reduce of MPI_DOUBLE
Reduce of MPI_LONG_LONG
Reduce of MPI_SIGNED_CHAR
Reduce of MPI_UNSIGNED_CHAR
Reduce of MPI_FLOAT
Reduce of MPI_DOUBLE
Reduce of MPI_LONG_LONG

Passed MAX operations - opmax

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 5

Test Description:

Test MPI_MAX operations using MPI_Reduce() on optional datatypes. Note that failing this test does not mean that there is something wrong with the MPI implementation.

Reduce of MPI_CHAR
Reduce of MPI_CHAR
Reduce of MPI_CHAR
Reduce of MPI_SIGNED_CHAR
Reduce of MPI_UNSIGNED_CHAR
Reduce of MPI_LONG_LONG
Reduce of MPI_CHAR
Reduce of MPI_SIGNED_CHAR
Reduce of MPI_UNSIGNED_CHAR
Reduce of MPI_LONG_LONG
Reduce of MPI_SIGNED_CHAR
Reduce of MPI_UNSIGNED_CHAR
Reduce of MPI_LONG_LONG
No errors
Reduce of MPI_SIGNED_CHAR
Reduce of MPI_UNSIGNED_CHAR
Reduce of MPI_LONG_LONG
Reduce of MPI_CHAR
Reduce of MPI_SIGNED_CHAR
Reduce of MPI_UNSIGNED_CHAR
Reduce of MPI_LONG_LONG

Passed MAXLOC operations - opmaxloc

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 5

Test Description:

Test MPI_MAXLOC operations using MPI_Reduce() on optional datatypes. Note that failing this test does not mean that there is something wrong with the MPI implementation.

No errors

Passed MIN operations - opmin

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 4

Test Description:

Test MPI_Min operations using MPI_Reduce() on optional datatypes. Note that failing this test does not mean that there is something wrong with the MPI implementation.

Reduce of MPI_CHAR
Reduce of MPI_CHAR
Reduce of MPI_SIGNED_CHAR
Reduce of MPI_UNSIGNED_CHAR
Reduce of MPI_LONG_LONG
Reduce of MPI_CHAR
Reduce of MPI_SIGNED_CHAR
Reduce of MPI_UNSIGNED_CHAR
Reduce of MPI_LONG_LONG
No errors
Reduce of MPI_CHAR
Reduce of MPI_SIGNED_CHAR
Reduce of MPI_UNSIGNED_CHAR
Reduce of MPI_LONG_LONG
Reduce of MPI_SIGNED_CHAR
Reduce of MPI_UNSIGNED_CHAR
Reduce of MPI_LONG_LONG

Passed MINLOC operations - opminloc

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 4

Test Description:

Test MPI_MINLOC operations using MPI_Reduce() on optional datatypes. Note that failing this test does not mean that there is something wrong with the MPI implementation.

No errors

Passed MScan - coll11

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 4

Test Description:

Tests user defined collective operations for MPI_Scan(). The operations are inoutvec[i] += invec[i] op inoutvec[i] and inoutvec[i] = invec[i] op inoutvec[i] (see MPI-1.3 Message-Passing Interface section 4.9.4). The order of operation is important. Note that the computation is in process rank (in the communicator) order, independent of the root process.

No errors

Passed Non-blocking basic - nonblocking4

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 4

Test Description:

This is a weak sanity test that all non-blocking collectives specified by MPI-3 are present in the library and accept arguments as expected. This test does not check for progress, matching issues, or sensible output buffer values.

No errors

Passed Non-blocking intracommunicator - nonblocking2

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 5

Test Description:

This is a basic test of all 17 non-blocking collective operations specified by the MPI-3 standard. It only exercises the intracommunicator functionality, does not use MPI_IN_PLACE, and only transmits/receives simple integer types with relatively small counts. It does check a few fancier issues, such as ensuring that "premature user releases" of MPI_Op and MPI_Datatype objects do not result in a segfault.

No errors

Passed Non-blocking overlapping - nonblocking3

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 5

Test Description:

This test attempts to execute multiple simultaneous non-blocking collective (NBC) MPI routines at the same time, and manages their completion with a variety of routines (MPI_{Wait,Test}{,_all,_any,_some}). The test also exercises a few point-to-point operations.

No errors

Passed Non-blocking wait - nonblocking

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 10

Test Description:

This is a weak sanity test that all non-blocking collectives specified by MPI-3 are present in the library and take arguments as expected. Includes calls to MPI_Wait() immediately following non-blocking collective calls. This test does not check for progress, matching issues, or sensible output buffer values.

No errors

Passed Op_{create,commute,free} - op_commutative

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 2

Test Description:

A simple test of MPI_Op_create()/MPI_Op_commutative()/MPI_Op_free() on predefined reduction operations and both commutative and non-commutative user-defined operations.

No errors

Passed PROD operations - opprod

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 6

Test Description:

Test MPI_PROD operations using MPI_Reduce() on optional datatypes. Note that failing this test does not mean that there is something wrong with the MPI implementation.

Reduce of MPI_CHAR
Reduce of MPI_CHAR
Reduce of MPI_SIGNED_CHAR
Reduce of MPI_UNSIGNED_CHAR
Reduce of MPI_CHAR
Reduce of MPI_SIGNED_CHAR
Reduce of MPI_UNSIGNED_CHAR
Reduce of MPI_CHAR
Reduce of MPI_SIGNED_CHAR
Reduce of MPI_UNSIGNED_CHAR
No errors
Reduce of MPI_CHAR
Reduce of MPI_SIGNED_CHAR
Reduce of MPI_UNSIGNED_CHAR
Reduce of MPI_CHAR
Reduce of MPI_SIGNED_CHAR
Reduce of MPI_UNSIGNED_CHAR
Reduce of MPI_SIGNED_CHAR
Reduce of MPI_UNSIGNED_CHAR

Passed Reduce any-root user-defined - red4

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 10

Test Description:

This test implements a simple matrix-matrix multiply with an arbitrary root using MPI_Reduce() on user-defined operations for a selection of communicators. This is an associative but not commutative operation. For a matrix size of matsize, the matrix is stored in C order where c(i,j) is cin[j+i*matSize].

No errors

Passed Reduce basic - reduce

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 10

Test Description:

A simple test of MPI_Reduce() with the rank of the root process shifted through each possible value using a selection of communicators.

No errors

Passed Reduce communicators user-defined - red3

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 10

Test Description:

This test implements a simple matrix-matrix multiply using MPI_Reduce() on user-defined operations for a selection of communicators. This is an associative but not commutative operation. For a matrix size of matsize, the matrix is stored in C order where c(i,j) is cin[j+i*matSize].

No errors

Passed Reduce intercommunicators - icreduce

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 7

Test Description:

Reduce test using a selection of intercommunicators and increasing array sizes.

No errors

Passed Reduce/Bcast multi-operation - coll8

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 4

Test Description:

This test repeats pairs of calls to MPI_Reduce() and MPI_Bcast() using different reduction operations and checks for errors.

No errors

Passed Reduce/Bcast user-defined - coll9

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 4

Test Description:

This test calls MPI_Reduce() and MPI_Bcast() with a user defined operation.

No errors

Passed Reduce_Scatter intercomm. large - redscatbkinter

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 10

Test Description:

Test of reduce scatter block with large data on a selection of intercommunicators (needed in MPICH to trigger the long-data algorithm). Each processor contributes its rank + the index to the reduction, then receives the ith sum. Can be called with any number of processors.

No errors

Passed Reduce_Scatter large data - redscat3

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 8

Test Description:

Test of reduce scatter with large data (needed to trigger the long-data algorithm). Each processor contributes its rank + index to the reduction, then receives the "ith" sum. Can be run with any number of processors.

No errors

Passed Reduce_Scatter user-defined - redscat2

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 10

Test Description:

Test of reduce scatter using user-defined operations. Checks that the non-commutative operations are not commuted and that all of the operations are performed.

No errors

Passed Reduce_Scatter_block large data - redscatblk3

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 10

Test Description:

Test of reduce scatter block with large data (needed in MPICH to trigger the long-data algorithm). Each processor contributes its rank + the index to the reduction, then receives the ith sum. Can be called with any number of processors.

No errors

Passed Reduce_local basic - reduce_local

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 2

Test Description:

A simple test of MPI_Reduce_local(). Uses MPI_SUM as well as user defined operators on arrays of increasing size.

No errors
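
A minimal sketch of the local (communication-free) reduction this test covers (illustration only, not the test source; the buffers are arbitrary):

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int in[3]    = { 1, 2, 3 };
    int inout[3] = { 10, 20, 30 };

    /* inout[i] = in[i] op inout[i], performed entirely within this process. */
    MPI_Reduce_local(in, inout, 3, MPI_INT, MPI_SUM);

    printf("%d %d %d\n", inout[0], inout[1], inout[2]);  /* 11 22 33 */

    MPI_Finalize();
    return 0;
}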

Passed Reduce_scatter basic - redscat

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 6

Test Description:

Test of reduce scatter. Each processor contributes its rank plus the index to the reduction, then receives the ith sum. Can be called with any number of processors.

No errors
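
A minimal sketch of the contribution pattern the description gives (illustration only, not the test source; one element is scattered back to each rank):

#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    int *sendbuf = malloc(size * sizeof(int));
    int *counts  = malloc(size * sizeof(int));
    int  recvd = 0;

    for (int i = 0; i < size; i++) {
        sendbuf[i] = rank + i;   /* each rank contributes rank + index */
        counts[i]  = 1;          /* one element of the sum lands on each rank */
    }

    MPI_Reduce_scatter(sendbuf, &recvd, counts, MPI_INT, MPI_SUM, MPI_COMM_WORLD);

    /* Expected on rank r: sum over ranks of (rank + r) = size*r + size*(size-1)/2. */
    printf("rank %d received %d\n", rank, recvd);

    free(sendbuf);
    free(counts);
    MPI_Finalize();
    return 0;
}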

Passed Reduce_scatter intercommunicators - redscatinter

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 8

Test Description:

Test of reduce scatter with large data on a selection of intercommunicators (needed in MPICH to trigger the long-data algorithm). Each processor contributes its rank + the index to the reduction, then receives the ith sum. Can be called with any number of processors.

No errors

Failed Reduce_scatter_block basic - red_scat_block

Build: Passed

Execution: Failed

Exit Status: Application_ended_with_error(s)

MPI Processes: 8

Test Description:

Test of reduce scatter block. Each process contributes its rank plus the index to the reduction, then receives the ith sum. Can be called with any number of processors.

Found 1 errors
x1001c5s5b0n0.hsn.warhawk.afrl.hpc.mil: rank 1 exited with code 1
x1001c5s5b0n0.hsn.warhawk.afrl.hpc.mil: rank 0 exited with code 1
x1001c5s5b0n0.hsn.warhawk.afrl.hpc.mil: rank 2 exited with code 1
x1001c5s5b0n0.hsn.warhawk.afrl.hpc.mil: rank 3 exited with code 1
x1001c5s7b0n0.hsn.warhawk.afrl.hpc.mil: rank 7 exited with code 1
x1001c5s7b0n0.hsn.warhawk.afrl.hpc.mil: rank 5 exited with code 1
x1001c5s7b0n0.hsn.warhawk.afrl.hpc.mil: rank 4 exited with code 1
x1001c5s7b0n0.hsn.warhawk.afrl.hpc.mil: rank 6 exited with code 1

Passed Reduce_scatter_block user-def - red_scat_block2

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 10

Test Description:

Test of reduce scatter block using user-defined operations to check that non-commutative operations are not commuted and that all operations are performed. Can be called with any number of processors.

No errors

Passed SUM operations - opsum

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 4

Test Description:

This test uses MPI_Reduce() on integer and integer-related datatypes that are not required by the MPI-3.0 standard (e.g., long long). Note that failure to support these datatypes is not an indication of a non-compliant MPI implementation.

Reduce of MPI_CHAR
Reduce of MPI_CHAR
Reduce of MPI_SIGNED_CHAR
Reduce of MPI_UNSIGNED_CHAR
Reduce of MPI_DOUBLE_COMPLEX
Reduce of MPI_LONG_LONG
Reduce of MPI_CHAR
Reduce of MPI_SIGNED_CHAR
Reduce of MPI_UNSIGNED_CHAR
Reduce of MPI_DOUBLE_COMPLEX
Reduce of MPI_LONG_LONG
No errors
Reduce of MPI_CHAR
Reduce of MPI_SIGNED_CHAR
Reduce of MPI_UNSIGNED_CHAR
Reduce of MPI_DOUBLE_COMPLEX
Reduce of MPI_LONG_LONG
Reduce of MPI_SIGNED_CHAR
Reduce of MPI_UNSIGNED_CHAR
Reduce of MPI_DOUBLE_COMPLEX
Reduce of MPI_LONG_LONG

Passed Scan basic - scantst

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 4

Test Description:

A simple test of MPI_Scan() on predefined operations and on user-defined operations with inoutvec[i] = invec[i] op inoutvec[i] (see Section 4.9.4 of the MPI 1.3 standard) and inoutvec[i] += invec[i] op inoutvec[i]. The order is important. Note that the computation is in process rank (in the communicator) order, independent of the root.
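
A minimal sketch of the first form is shown below: prefix sums computed once with MPI_SUM and once with an equivalent commutative user-defined operation. The values are illustrative.

    /* minimal sketch: MPI_Scan with a predefined and a user-defined operation */
    #include <mpi.h>
    #include <stdio.h>

    /* MPI applies the user function as: inoutvec[i] = invec[i] op inoutvec[i] */
    static void mysum(void *invec, void *inoutvec, int *len, MPI_Datatype *dtype)
    {
        int *in = (int *)invec, *inout = (int *)inoutvec;
        for (int i = 0; i < *len; i++)
            inout[i] = in[i] + inout[i];
    }

    int main(int argc, char **argv)
    {
        int rank, val, pre1, pre2;
        MPI_Op op;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        val = rank + 1;

        MPI_Scan(&val, &pre1, 1, MPI_INT, MPI_SUM, MPI_COMM_WORLD);
        MPI_Op_create(mysum, 1 /* commutative */, &op);
        MPI_Scan(&val, &pre2, 1, MPI_INT, op, MPI_COMM_WORLD);
        MPI_Op_free(&op);

        printf("rank %d: MPI_SUM scan = %d, user-op scan = %d\n", rank, pre1, pre2);
        MPI_Finalize();
        return 0;
    }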

No errors

Passed Scatter 2D - coll4

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 4

Test Description:

This test uses MPI_Scatter() to define a two-dimensional table. See also Gather test (coll/coll2) and Gatherv test (coll/coll3) for similar tests.

No errors

Passed Scatter basic - scatter2

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 4

Test Description:

This MPI_Scatter() test sends a vector and receives individual elements, except for the root process that does not receive any data.

No errors

Passed Scatter contiguous - scatter3

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 4

Test Description:

This MPI_Scatter() test sends contiguous data and receives a vector on some nodes and contiguous data on others. There is some evidence that some MPI implementations do not check recvcount on the root process. This test checks for that case.

No errors

Passed Scatter intercommunicators - icscatter

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 7

Test Description:

Scatter test using a selection of intercommunicators and increasing array sizes.

No errors

Passed Scatter vector-to-1 - scattern

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 4

Test Description:

This MPI_Scatter() test sends a vector and receives individual elements.

No errors

Passed Scatterv 2D - coll5

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 4

Test Description:

This test uses MPI_Scatterv() to define a two-dimensional table.

No errors

Passed Scatterv intercommunicators - icscatterv

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 7

Test Description:

Scatterv test using a selection of intercommunicators and increasing array sizes.

No errors

Passed Scatterv matrix - scatterv

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 4

Test Description:

This is an example of using scatterv to send a matrix from one process to all others, with the matrix stored in Fortran order. Note the use of an explicit upper bound (UB) to enable the source rows to overlap. This test uses scatterv to make sure that the datatype size and extent are used correctly; the process layout is obtained from MPI_Dims_create using the number of processes.
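
The extent trick the description refers to can be sketched as follows. Assuming a column-major matrix with one row per rank, the sketch uses MPI_Type_create_resized (the MPI-2 replacement for an explicit MPI_UB marker) so that consecutive displacements select overlapping source rows; NCOLS and the buffer contents are illustrative and this is not the test's actual source.

    /* minimal sketch: scatter the rows of a Fortran-order matrix using a resized row type */
    #include <mpi.h>
    #include <stdlib.h>
    #define NCOLS 8                                /* illustrative row length */

    int main(int argc, char **argv)
    {
        int rank, size;
        MPI_Datatype rowvec, rowtype;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        int nrows = size;                          /* one row per rank */
        double *mat = NULL, row[NCOLS];
        int *counts = malloc(size * sizeof(int));
        int *displs = malloc(size * sizeof(int));

        if (rank == 0) {                           /* column-major: a(i,j) = mat[i + j*nrows] */
            mat = malloc(nrows * NCOLS * sizeof(double));
            for (int j = 0; j < NCOLS; j++)
                for (int i = 0; i < nrows; i++)
                    mat[i + j * nrows] = i;
        }
        /* a row is NCOLS elements with stride nrows; shrinking the extent to one double
           makes displacement k (in extents) start at row k, so the source rows overlap */
        MPI_Type_vector(NCOLS, 1, nrows, MPI_DOUBLE, &rowvec);
        MPI_Type_create_resized(rowvec, 0, sizeof(double), &rowtype);
        MPI_Type_commit(&rowtype);

        for (int k = 0; k < size; k++) { counts[k] = 1; displs[k] = k; }
        MPI_Scatterv(mat, counts, displs, rowtype, row, NCOLS, MPI_DOUBLE, 0, MPI_COMM_WORLD);

        MPI_Type_free(&rowtype);
        MPI_Type_free(&rowvec);
        if (rank == 0) free(mat);
        free(counts); free(displs);
        MPI_Finalize();
        return 0;
    }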

No errors

Passed User-defined many elements - uoplong

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 16

Test Description:

Test user-defined operations for MPI_Reduce() with a large number of elements. Added because a talk at EuroMPI'12 claimed that these failed with more than 64k elements.

Count = 1
Count = 2
Count = 4
Count = 8
Count = 16
Count = 32
Count = 64
Count = 128
Count = 256
Count = 512
Count = 1024
Count = 2048
Count = 4096
Count = 8192
Count = 16384
Count = 32768
Count = 65536
Count = 131072
Count = 262144
Count = 524288
Count = 1048576
No errors

MPI_Info Objects - Score: 100% Passed

The info tests emphasize the MPI Info object functionality.

Passed MPI_Info_delete basic - infodel

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 1

Test Description:

This test exercises the MPI_Info_delete() function.

No errors

Passed MPI_Info_dup basic - infodup

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 1

Test Description:

This test exercises the MPI_Info_dup() function.

No errors

Passed MPI_Info_get basic - infoenv

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 1

Test Description:

This is a simple test of the MPI_Info_get() function.

No errors

Passed MPI_Info_get ext. ins/del - infomany2

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 1

Test Description:

Test of info that makes use of the extended handles, including inserts and deletes.

No errors

Passed MPI_Info_get extended - infomany

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 1

Test Description:

Test of info that makes use of the extended handles.

No errors

Passed MPI_Info_get ordered - infoorder

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 1

Test Description:

This is a simple test that illustrates how named keys are ordered.

No errors

Passed MPI_Info_get_valuelen basic - infovallen

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 1

Test Description:

Simple info set and get_valuelen test.

No errors

Passed MPI_Info_set/get basic - infotest

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 1

Test Description:

Simple info set and get test.
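
For reference, the MPI_Info calls exercised throughout this section can be combined in a few lines; the key/value pair below is an illustrative assumption, not something the tests require.

    /* minimal sketch: create, set, query, duplicate, and delete MPI_Info key/value pairs */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        MPI_Info info, dup;
        char value[MPI_MAX_INFO_VAL];
        int flag, vlen;

        MPI_Init(&argc, &argv);
        MPI_Info_create(&info);
        MPI_Info_set(info, "host", "warhawk");
        MPI_Info_get_valuelen(info, "host", &vlen, &flag);
        MPI_Info_get(info, "host", MPI_MAX_INFO_VAL - 1, value, &flag);
        if (flag) printf("host = %s (length %d)\n", value, vlen);
        MPI_Info_dup(info, &dup);
        MPI_Info_delete(info, "host");
        MPI_Info_free(&dup);
        MPI_Info_free(&info);
        MPI_Finalize();
        return 0;
    }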

No errors

Dynamic Process Management - Score: 93% Passed

This group features tests that add processes to a running communicator, join separately started applications, and handle faults/failures.

Passed Creation group intercomm test - pgroup_intercomm_test

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 4

Test Description:

In this test processes create an intracommunicator, and creation is collective only on the members of the new communicator, not on the parent communicator. This is accomplished by building up and merging intercommunicators starting with MPI_COMM_SELF for each process involved.

No errors

Passed MPI spawn test with threads - taskmaster

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 1

Test Description:

Create a thread for each task. Each thread will spawn a child process to perform its task.

No errors

Passed MPI spawn-connect-accept - spaconacc

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 1

Test Description:

Spawns two processes, one connecting and one accepting. It synchronizes with each, then waits for them to connect and accept.

init.
No errors

Passed MPI spawn-connect-accept send/recv - spaconacc2

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 1

Test Description:

Spawns two processes, one connecting and one accepting. It synchronizes with each, then waits for them to connect and accept. The connector and acceptor respectively send and receive some data.

init.
No errors

Passed MPI_Comm_accept basic - selfconacc

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 2

Test Description:

This test exercises MPI_Open_port(), MPI_Comm_accept(), and MPI_Comm_disconnect().
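
In outline, the accept/connect handshake looks like the sketch below (run with two ranks); this is a hedged illustration of the calls involved, not the test source, and the actual test additionally prints the trace shown in the output.

    /* minimal sketch: rank 0 opens a port and accepts, rank 1 connects, both disconnect */
    #include <mpi.h>

    int main(int argc, char **argv)
    {
        int rank;
        char port[MPI_MAX_PORT_NAME];
        MPI_Comm inter;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        if (rank == 0) {
            MPI_Open_port(MPI_INFO_NULL, port);
            MPI_Send(port, MPI_MAX_PORT_NAME, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
            MPI_Comm_accept(port, MPI_INFO_NULL, 0, MPI_COMM_SELF, &inter);
            MPI_Close_port(port);
        } else if (rank == 1) {
            MPI_Recv(port, MPI_MAX_PORT_NAME, MPI_CHAR, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            MPI_Comm_connect(port, MPI_INFO_NULL, 0, MPI_COMM_SELF, &inter);
        }
        if (rank <= 1)
            MPI_Comm_disconnect(&inter);
        MPI_Finalize();
        return 0;
    }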

init.
init.
size.
rank.
open_port.
0: opened port: <746167233024636f6e6e656e74727923303230303938373430413936303142413030303030303030303030303030303024>
send.
size.
rank.
recv.
accept.
1: received port: <746167233024636f6e6e656e74727923303230303938373430413936303142413030303030303030303030303030303024>
connect.
close_port.
disconnect.
disconnect.
No errors

Passed MPI_Comm_connect 2 processes - multiple_ports

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 3

Test Description:

This test checks to make sure that two MPI_Comm_connects to two different MPI ports match their corresponding MPI_Comm_accepts.

0: opening ports.
1: receiving port.
1: received port1: <746167233024636f6e6e656e74727923303230304230313830413936303142413030303030303030303030303030303024>
1: connecting.
0: opened port1: <746167233024636f6e6e656e74727923303230304230313830413936303142413030303030303030303030303030303024>
0: opened port2: <746167233124636f6e6e656e74727923303230304230313830413936303142413030303030303030303030303030303024>
0: sending ports.
2: receiving port.
0: accepting port2.
2: received port2: <746167233124636f6e6e656e74727923303230304230313830413936303142413030303030303030303030303030303024>
2: connecting.
0: accepting port1.
2: disconnecting.
0: closing ports.
0: sending 1 to process 1.
0: sending 2 to process 2.
0: disconnecting.
1: disconnecting.
No errors

Passed MPI_Comm_connect 3 processes - multiple_ports2

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 4

Test Description:

This test checks to make sure that three MPI_Comm_connect calls to three different MPI ports match their corresponding MPI_Comm_accept calls.

0: opening ports.
1: receiving port.
1: received port1: <746167233024636f6e6e656e74727923303230304230384630413936303142413030303030303030303030303030303024>
1: connecting.
2: receiving port.
0: opened port1: <746167233024636f6e6e656e74727923303230304230384630413936303142413030303030303030303030303030303024>
0: opened port2: <746167233124636f6e6e656e74727923303230304230384630413936303142413030303030303030303030303030303024>
0: opened port3: <746167233224636f6e6e656e74727923303230304230384630413936303142413030303030303030303030303030303024>
0: sending ports.
3: receiving port.
0: accepting port3.
2: received port2: <746167233124636f6e6e656e74727923303230304230384630413936303142413030303030303030303030303030303024>
2: connecting.
2: received port2: <ffffffffffffffffffffffffffffffffffffffffffffffffffffffff>
3: connecting.
0: accepting port2.
0: accepting port1.
0: closing ports.
0: sending 1 to process 1.
0: sending 2 to process 2.
0: sending 3 to process 3.
0: disconnecting.
2: disconnecting.
1: disconnecting.
3: disconnecting.
No errors

Passed MPI_Comm_disconnect basic - disconnect

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 3

Test Description:

A simple test of Comm_disconnect with a master and 2 spawned ranks.

calling finalize
No errors
calling finalize
calling finalize

Passed MPI_Comm_disconnect send0-1 - disconnect2

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 3

Test Description:

A test of Comm_disconnect with a master and 2 spawned ranks, after sending from rank 0 to 1.

No errors
calling finalize
calling finalize
calling finalize

Passed MPI_Comm_disconnect send1-2 - disconnect3

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 3

Test Description:

A test of Comm_disconnect with a master and 2 spawned ranks, after sending from rank 1 to 2.

calling finalize
No errors
calling finalize
calling finalize

Passed MPI_Comm_disconnect-reconnect basic - disconnect_reconnect

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 3

Test Description:

A simple test of Comm_connect/accept/disconnect.

No errors
[2113488] calling finalize
[2113488] calling finalize
[2113488] calling finalize

Passed MPI_Comm_disconnect-reconnect groups - disconnect_reconnect3

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 3

Test Description:

This test exercises the disconnect code for processes that span process groups. It spawns a group of processes and merges them into a single communicator, which is then split into two communicators, one containing the even ranks and the other the odd ranks. The two new communicators then perform MPI_Comm_accept/connect/disconnect calls in a loop, with the even group accepting while the odd group connects.

calling finalize
No errors
calling finalize
calling finalize

Passed MPI_Comm_disconnect-reconnect repeat - disconnect_reconnect2

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 3

Test Description:

This test spawns two child jobs and has them open a port and connect to each other. The two children repeatedly connect, accept, and disconnect from each other.

init.
init.
init.
No errors
No errors
No errors

Passed MPI_Comm_join basic - join

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 2

Test Description:

A simple test of Comm_join.

No errors

Passed MPI_Comm_spawn basic - spawn1

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 1

Test Description:

A simple test of Comm_spawn.
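
The basic call sequence is sketched below; "./spawned_child" is an assumed executable name (the suite's tests typically respawn their own binary), and the child tells itself apart from the parent by checking MPI_Comm_get_parent().

    /* minimal sketch: spawn two children and obtain an intercommunicator to them */
    #include <mpi.h>

    int main(int argc, char **argv)
    {
        MPI_Comm parent, children;
        MPI_Init(&argc, &argv);
        MPI_Comm_get_parent(&parent);
        if (parent == MPI_COMM_NULL) {
            /* parent job: spawn the children */
            MPI_Comm_spawn("./spawned_child", MPI_ARGV_NULL, 2, MPI_INFO_NULL, 0,
                           MPI_COMM_WORLD, &children, MPI_ERRCODES_IGNORE);
            MPI_Comm_disconnect(&children);
        } else {
            /* child job: parent is the intercommunicator back to the spawning job */
            MPI_Comm_disconnect(&parent);
        }
        MPI_Finalize();
        return 0;
    }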

No errors

Passed MPI_Comm_spawn complex args - spawnargv

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 1

Test Description:

A simple test of Comm_spawn, with complex arguments.

No errors

Passed MPI_Comm_spawn inter-merge - spawnintra

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 2

Test Description:

A simple test of Comm_spawn, followed by intercomm merge.

No errors

Passed MPI_Comm_spawn many args - spawnmanyarg

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 1

Test Description:

A simple test of Comm_spawn, with many arguments.

No errors

Passed MPI_Comm_spawn repeat - spawn2

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 1

Test Description:

A simple test of Comm_spawn, called twice.

No errors

Passed MPI_Comm_spawn with info - spawninfo1

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 1

Test Description:

A simple test of Comm_spawn with info.

No errors

Passed MPI_Comm_spawn_multiple appnum - spawnmult2

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 2

Test Description:

This test exercises MPI_Comm_spawn_multiple() using the same executable and no command-line options. The attribute MPI_APPNUM is used to determine which executable is running.
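
A minimal sketch of the MPI_APPNUM query is shown below; the attribute value is what distinguishes the commands in an MPI_Comm_spawn_multiple or MPMD launch.

    /* minimal sketch: read the MPI_APPNUM attribute from MPI_COMM_WORLD */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int *appnum, flag, rank;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_get_attr(MPI_COMM_WORLD, MPI_APPNUM, &appnum, &flag);
        if (flag)
            printf("rank %d belongs to application number %d\n", rank, *appnum);
        MPI_Finalize();
        return 0;
    }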

No errors

Passed MPI_Comm_spawn_multiple basic - spawnminfo1

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 1

Test Description:

A simple test of Comm_spawn_multiple with info.

No errors

Passed MPI_Intercomm_create - spaiccreate

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 2

Test Description:

Use Spawn to create an intercomm, then create a new intercomm that includes processes not in the initial spawn intercomm. This test ensures that spawned processes are able to communicate with processes that were not in the communicator from which they were spawned.

No errors

Passed MPI_Publish_name basic - namepub

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 2

Test Description:

This test confirms the functionality of MPI_Open_port() and MPI_Publish_name().
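
The publish/lookup sequence is roughly the sketch below; the service name is an illustrative assumption and this is not the test source. The MPICH_DPM_DIR warnings in the output simply note that Cray MPICH fell back to the HOME directory for its name-publishing directory.

    /* minimal sketch: publish an opened port under a service name, then look it up */
    #include <mpi.h>

    int main(int argc, char **argv)
    {
        int rank;
        char port[MPI_MAX_PORT_NAME], found[MPI_MAX_PORT_NAME];
        const char *service = "mpi-test-service";

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        if (rank == 0) {
            MPI_Open_port(MPI_INFO_NULL, port);
            MPI_Publish_name(service, MPI_INFO_NULL, port);
        }
        MPI_Barrier(MPI_COMM_WORLD);              /* the name is now published */
        if (rank == 1)
            MPI_Lookup_name(service, MPI_INFO_NULL, found);
        MPI_Barrier(MPI_COMM_WORLD);              /* lookup is done */
        if (rank == 0) {
            MPI_Unpublish_name(service, MPI_INFO_NULL, port);
            MPI_Close_port(port);
        }
        MPI_Finalize();
        return 0;
    }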

PE 0: MPICH Warning: MPICH_DPM_DIR not set, trying HOME directory. See intro_mpi man page for more details.
PE 1: MPICH Warning: MPICH_DPM_DIR not set, trying HOME directory. See intro_mpi man page for more details.
No errors

Failed Multispawn - multispawn

Build: Passed

Execution: Failed

Exit Status: Application_ended_with_error(s)

MPI Processes: 1

Test Description:

This (is a placeholder for a) test that creates 4 threads, each of which does a concurrent spawn of 4 more processes, for a total of 17 MPI processes. The resulting intercomms are tested for consistency (to ensure that the spawns didn't get confused among the threads). As an option, it will time the Spawn calls. If the spawn calls block the calling thread, this may show up in the timing of the calls.

Assertion failed in file ../src/mpid/ch4/netmod/ofi/ofi_spawn.c at line 753: 0
/opt/cray/pe/lib64/libmpi_cray.so.12(MPL_backtrace_show+0x26) [0x14e43ecbbc2b]
/opt/cray/pe/lib64/libmpi_cray.so.12(+0x1fef8d4) [0x14e43e6f28d4]
/opt/cray/pe/lib64/libmpi_cray.so.12(+0x22a6fe8) [0x14e43e9a9fe8]
/opt/cray/pe/lib64/libmpi_cray.so.12(+0x202ac89) [0x14e43e72dc89]
/opt/cray/pe/lib64/libmpi_cray.so.12(+0x202b03e) [0x14e43e72e03e]
/opt/cray/pe/lib64/libmpi_cray.so.12(MPI_Comm_spawn+0x1e2) [0x14e43e29a212]
/var/run/palsd/53ab20c7-1677-4a01-8358-553a94e183e1/files/multispawn() [0x203f53]
/var/run/palsd/53ab20c7-1677-4a01-8358-553a94e183e1/files/multispawn() [0x2040b3]
/lib64/libc.so.6(__libc_start_main+0xef) [0x14e43ba8424d]
/var/run/palsd/53ab20c7-1677-4a01-8358-553a94e183e1/files/multispawn() [0x203e4a]
MPICH ERROR [Rank 0] [job id 53ab20c7-1677-4a01-8358-553a94e183e1] [Mon Apr 15 11:03:45 2024] [x1001c5s5b0n0] - Abort(1): Internal error
x1001c5s5b0n0.hsn.warhawk.afrl.hpc.mil: rank 0 exited with code 1

Passed Process group creation - pgroup_connect_test

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 4

Test Description:

In this test, processes create an intracommunicator, and creation is collective only on the members of the new communicator, not on the parent communicator. This is accomplished by building up and merging intercommunicators using Connect/Accept to merge with a master/controller process.

No errors

Failed Taskmaster threaded - th_taskmaster

Build: Passed

Execution: Failed

Exit Status: Application_ended_with_error(s)

MPI Processes: 1

Test Description:

This is a simple test that creates threads to verify compatibility between MPI and the pthread library.

Assertion failed in file ../src/mpid/ch4/netmod/ofi/ofi_spawn.c at line 753: 0
/opt/cray/pe/lib64/libmpi_cray.so.12(MPL_backtrace_show+0x26) [0x14c4bfcdbc2b]
/opt/cray/pe/lib64/libmpi_cray.so.12(+0x1fef8d4) [0x14c4bf7128d4]
/opt/cray/pe/lib64/libmpi_cray.so.12(+0x22a6fe8) [0x14c4bf9c9fe8]
/opt/cray/pe/lib64/libmpi_cray.so.12(+0x202ac89) [0x14c4bf74dc89]
/opt/cray/pe/lib64/libmpi_cray.so.12(+0x202b03e) [0x14c4bf74e03e]
/opt/cray/pe/lib64/libmpi_cray.so.12(MPI_Comm_spawn+0x1e2) [0x14c4bf2ba212]
/var/run/palsd/21854d6c-d848-4083-a5e7-bf55022f37f6/files/th_taskmaster() [0x2040a6]
/var/run/palsd/21854d6c-d848-4083-a5e7-bf55022f37f6/files/th_taskmaster() [0x20422f]
/lib64/libpthread.so.0(+0xa6ea) [0x14c4bcc706ea]
/lib64/libc.so.6(clone+0x3f) [0x14c4bcb8694f]
MPICH ERROR [Rank 0] [job id 21854d6c-d848-4083-a5e7-bf55022f37f6] [Mon Apr 15 11:03:52 2024] [x1001c5s5b0n0] - Abort(1): Internal error
x1001c5s5b0n0.hsn.warhawk.afrl.hpc.mil: rank 0 exited with code 1

Threads - Score: 92% Passed

This group features tests that utilize thread-compliant MPI implementations. This includes the threaded environment provided by MPI-3.0, as well as POSIX-compliant threading libraries such as Pthreads.

Passed Alltoall threads - alltoall

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 4

Test Description:

The listener thread waits for messages with tag REQ_TAG from any source, including the calling thread. Each thread enters an infinite loop that stops only after every rank in MPI_COMM_WORLD has sent a message containing -1.

No errors

Passed MPI_T multithreaded - mpit_threading

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 1

Test Description:

This test is adapted from test/mpi/mpi_t/mpit_vars.c. But this is a multithreading version in which multiple threads will call MPI_T routines.

With verbose set, thread 0 prints out the MPI_T control variables, performance variables, and their categories.
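
Each thread's work amounts to the enumeration loop sketched below (shown single-threaded here for brevity; the buffer sizes are illustrative).

    /* minimal sketch: enumerate MPI_T control variables by index */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int provided, ncvar;
        MPI_T_init_thread(MPI_THREAD_SINGLE, &provided);
        MPI_Init(&argc, &argv);
        MPI_T_cvar_get_num(&ncvar);
        printf("%d MPI Control Variables\n", ncvar);
        for (int i = 0; i < ncvar; i++) {
            char name[256], desc[1024];
            int namelen = sizeof(name), desclen = sizeof(desc);
            int verbosity, binding, scope;
            MPI_Datatype dtype;
            MPI_T_enum enumtype;
            MPI_T_cvar_get_info(i, name, &namelen, &verbosity, &dtype, &enumtype,
                                desc, &desclen, &binding, &scope);
            printf("\t%s\n", name);
        }
        MPI_Finalize();
        MPI_T_finalize();
        return 0;
    }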

129 MPI Control Variables
	MPIR_CVAR_ALLREDUCE_MAX_SMP_SIZE=262144	SCOPE_ALL_EQ	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_ALLREDUCE_GPU_MAX_SMP_SIZE=1024	SCOPE_ALL_EQ	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_IALLGATHERV_PIPELINE_MSG_SIZE=524288	SCOPE_ALL_EQ	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_REDUCE_SCATTER_COMMUTATIVE_LONG_MSG_SIZE=524288	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_REDUCE_SCATTER_MAX_COMMSIZE=1000	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_ALLTOALL_SYNC_FREQ=-1	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_ALLTOALL_BLK_SIZE=16384	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_ALLTOALL_CHUNKING_MAX_NODES=90	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_ALLGATHER_VSHORT_MSG=-1	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_GPU_ALLGATHER_VSHORT_MSG_ALGORITHM=1	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_ALLGATHERV_VSHORT_MSG=-1	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_ALLREDUCE_NO_SMP=0	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_ALLTOALL_SHORT_MSG=-1	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_ALLTOALLV_THROTTLE=8	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_BCAST_ONLY_TREE=1	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_BCAST_INTERNODE_RADIX=4	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_BCAST_INTRANODE_RADIX=4	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_COLL_OPT_OFF	SCOPE_READONLY	NO_OBJECT	MPI_CHAR	VERBOSITY_USER_BASIC	
	MPIR_CVAR_COLL_SYNC	SCOPE_READONLY	NO_OBJECT	MPI_CHAR	VERBOSITY_USER_BASIC	
	MPIR_CVAR_GATHERV_SHORT_MSG=131072	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_REDUCE_NO_SMP=0	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_SCATTERV_SYNCHRONOUS=0	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_SHARED_MEM_COLL_OPT	SCOPE_READONLY	NO_OBJECT	MPI_CHAR	VERBOSITY_USER_BASIC	
	MPIR_CVAR_GPU_ALLREDUCE_STAGING_THRESHOLD=256	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_GPU_COLL_STAGING_BUF_SIZE=1048576	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_GPU_COLL_REGISTER_SMALL_STAGING_BUFFERS=0	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_IALLGATHERV_THROTTLE=6	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_GATHERV_SYNC_FREQ=16	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_GATHERV_MAX_TMP_SIZE	SCOPE_READONLY	NO_OBJECT	Invalid:MPI_LONG	VERBOSITY_USER_BASIC	
	MPIR_CVAR_GATHERV_MIN_COMM_SIZE=64	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_IGATHERV_MIN_COMM_SIZE=1000	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_IGATHERV_SYNC_FREQ=100	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_SCATTERV_SHORT_MSG=-1	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_SCATTERV_SYNC_FREQ=16	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_SCATTERV_MAX_TMP_SIZE	SCOPE_READONLY	NO_OBJECT	Invalid:MPI_LONG	VERBOSITY_USER_BASIC	
	MPIR_CVAR_SCATTERV_MIN_COMM_SIZE=64	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_MPIIO_ABORT_ON_RW_ERROR	SCOPE_READONLY	NO_OBJECT	MPI_CHAR	VERBOSITY_USER_BASIC	
	MPIR_CVAR_MPIIO_AGGREGATOR_PLACEMENT_DISPLAY=0	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_MPIIO_AGGREGATOR_PLACEMENT_STRIDE=-1	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_MPIIO_CB_ALIGN=2	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_MPIIO_DVS_MAXNODES=-1	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_MPIIO_HINTS	SCOPE_READONLY	NO_OBJECT	MPI_CHAR	VERBOSITY_USER_BASIC	
	MPIR_CVAR_MPIIO_HINTS_DISPLAY=0	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_MPIIO_MAX_NUM_IRECV=50	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_MPIIO_MAX_NUM_ISEND=50	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_MPIIO_MAX_SIZE_ISEND=10485760	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_MPIIO_STATS=0	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_MPIIO_STATS_FILE	SCOPE_READONLY	NO_OBJECT	MPI_CHAR	VERBOSITY_USER_BASIC	
	MPIR_CVAR_MPIIO_STATS_INTERVAL_MSEC	SCOPE_READONLY	NO_OBJECT	MPI_UNSIGNED_LONG	VERBOSITY_USER_BASIC	
	MPIR_CVAR_MPIIO_TIMERS=0	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_MPIIO_TIMERS_SCALE=0	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_MPIIO_TIME_WAITS=1	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_MPIIO_WRITE_EXIT_BARRIER=1	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_MPIIO_DS_WRITE_CRAY=1	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_MPIIO_OFI_STARTUP_CONNECT	SCOPE_READONLY	NO_OBJECT	MPI_CHAR	VERBOSITY_USER_BASIC	
	MPIR_CVAR_MPIIO_OFI_STARTUP_NODES_AGGREGATOR=2	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_DPM_DIR	SCOPE_READONLY	NO_OBJECT	MPI_CHAR	VERBOSITY_USER_BASIC	
	MPIR_CVAR_SINGLE_HOST_ENABLED=1	SCOPE_ALL_EQ	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_OFI_CXI_COUNTER_REPORT=1	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_OFI_CXI_COUNTER_FILE	SCOPE_READONLY	NO_OBJECT	MPI_CHAR	VERBOSITY_USER_BASIC	
	MPIR_CVAR_OFI_CXI_COUNTER_REPORT_FILE	SCOPE_READONLY	NO_OBJECT	MPI_CHAR	VERBOSITY_USER_BASIC	
	MPIR_CVAR_OFI_CXI_COUNTER_VERBOSE=0	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_OFI_VERBOSE=0	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_OFI_XRCD_BASE_DIR	SCOPE_READONLY	NO_OBJECT	MPI_CHAR	VERBOSITY_USER_BASIC	
	MPIR_CVAR_OFI_NIC_VERBOSE=0	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_OFI_NIC_POLICY	SCOPE_READONLY	NO_OBJECT	MPI_CHAR	VERBOSITY_USER_BASIC	
	MPIR_CVAR_OFI_NIC_MAPPING	SCOPE_READONLY	NO_OBJECT	MPI_CHAR	VERBOSITY_USER_BASIC	
	MPIR_CVAR_OFI_NUM_NICS	SCOPE_READONLY	NO_OBJECT	MPI_CHAR	VERBOSITY_USER_BASIC	
	MPIR_CVAR_OFI_SKIP_NIC_SYMMETRY_TEST=0	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_OFI_STARTUP_CONNECT=0	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_OFI_RMA_STARTUP_CONNECT=0	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_OFI_DEFAULT_TCLASS	SCOPE_READONLY	NO_OBJECT	MPI_CHAR	VERBOSITY_USER_BASIC	
	MPIR_CVAR_OFI_TCLASS_ERRORS	SCOPE_READONLY	NO_OBJECT	MPI_CHAR	VERBOSITY_USER_BASIC	
	MPIR_CVAR_OFI_CXI_PID_BASE=0	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_OFI_CXI_VNI_INDEX=1	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_OFI_USE_SCALABLE_STARTUP=1	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_OFI_RMA_LATENCY_TCLASS=0	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_OFI_RMA_LATENCY_TCLASS_THRESHOLD=2048	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_UCX_VERBOSE=0	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_UCX_RC_MAX_RANKS=7	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_SMP_SINGLE_COPY_MODE	SCOPE_READONLY	NO_OBJECT	MPI_CHAR	VERBOSITY_USER_BASIC	
	MPIR_CVAR_SMP_SINGLE_COPY_SIZE=8192	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_SHM_PROGRESS_MAX_BATCH_SIZE=8	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_CH4_PROGRESS_POKE=1	SCOPE_LOCAL	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_SPAWN_USE_RANKPOOL=1	SCOPE_ALL_EQ	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_CH4_RMA_THREAD_HOT=0	SCOPE_GROUP_EQ	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_ABORT_ON_ERROR=0	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_CPUMASK_DISPLAY=0	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_ENV_DISPLAY=0	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_OPTIMIZED_MEMCPY=1	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_STATS_DISPLAY=0	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_STATS_VERBOSITY=1	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_STATS_FILE	SCOPE_READONLY	NO_OBJECT	MPI_CHAR	VERBOSITY_USER_BASIC	
	MPIR_CVAR_RANK_REORDER_DISPLAY=0	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_RANK_REORDER_METHOD=1	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_USE_SYSTEM_MEMCPY=0	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_VERSION_DISPLAY=0	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_USE_GPU_STREAM_TRIGGERED=0	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_NUM_MAX_GPU_STREAMS=27	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_MEMCPY_MEM_CHECK=0	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_MSG_QUEUE_DBG=0	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_NO_BUFFER_ALIAS_CHECK=0	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_ALLOC_MEM_AFFINITY	SCOPE_READONLY	NO_OBJECT	MPI_CHAR	VERBOSITY_USER_BASIC	
	MPIR_CVAR_INTERNAL_MEM_AFFINITY	SCOPE_READONLY	NO_OBJECT	MPI_CHAR	VERBOSITY_USER_BASIC	
	MPIR_CVAR_ALLOC_MEM_POLICY	SCOPE_READONLY	NO_OBJECT	MPI_CHAR	VERBOSITY_USER_BASIC	
	MPIR_CVAR_ALLOC_MEM_PG_SZ	SCOPE_READONLY	NO_OBJECT	Invalid:MPI_LONG	VERBOSITY_USER_BASIC	
	MPIR_CVAR_OPT_THREAD_SYNC=1	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_THREAD_YIELD_FREQ=10000	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_MEM_DEBUG_FNAME	SCOPE_READONLY	NO_OBJECT	MPI_CHAR	VERBOSITY_USER_BASIC	
	MPIR_CVAR_MALLOC_FALLBACK=0	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_NO_DIRECT_GPU_ACCESS=0	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_G2G_PIPELINE=8	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_GPU_SUPPORT_ENABLED=0	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_GPU_MANAGED_MEMORY_SUPPORT_ENABLED=1	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_GPU_COLL_STAGING_AREA_OPT=1	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_GPU_COLL_REGISTER_SHARED_MEM_REGION=1	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_GPU_IPC_ENABLED=-1	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_GPU_IPC_THRESHOLD=1024	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_GPU_IPC_PROTOCOL	SCOPE_READONLY	NO_OBJECT	MPI_CHAR	VERBOSITY_USER_BASIC	
	MPIR_CVAR_GPU_NO_ASYNC_COPY=0	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_ENABLE_YAKSA=0	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_GPU_EAGER_DEVICE_MEM=0	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_GPU_EAGER_REGISTER_HOST_MEM=1	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_GPU_ALLREDUCE_USE_KERNEL=1	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_GPU_ALLREDUCE_BLK_SIZE=8388608	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_GPU_ALLREDUCE_KERNEL_THRESHOLD=131072	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_RMA_MAX_PENDING=128	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_RMA_SHM_ACCUMULATE=0	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_LOCAL_SPAWN_SERVER=0	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
0 MPI Performance Variables
22 MPI_T categories
Category COLLECTIVE has 39 control variables, 0 performance variables, 0 subcategories
Category COMMUNICATOR has 0 control variables, 0 performance variables, 0 subcategories
Category DATALOOP has 0 control variables, 0 performance variables, 0 subcategories
Category ERROR_HANDLING has 0 control variables, 0 performance variables, 0 subcategories
Category THREADS has 0 control variables, 0 performance variables, 0 subcategories
Category DEBUGGER has 0 control variables, 0 performance variables, 0 subcategories
Category DEVELOPER has 0 control variables, 0 performance variables, 0 subcategories
Category CRAY_MPIIO has 20 control variables, 0 performance variables, 0 subcategories
Category DIMS has 0 control variables, 0 performance variables, 0 subcategories
Category PROCESS_MANAGER has 2 control variables, 0 performance variables, 0 subcategories
Category MEMORY has 0 control variables, 0 performance variables, 0 subcategories
Category NODEMAP has 1 control variables, 0 performance variables, 0 subcategories
Category REQUEST has 0 control variables, 0 performance variables, 0 subcategories
Category NEMESIS has 0 control variables, 0 performance variables, 0 subcategories
Category FT has 0 control variables, 0 performance variables, 0 subcategories
Category CH3 has 0 control variables, 0 performance variables, 0 subcategories
Category CH4_OFI has 20 control variables, 0 performance variables, 0 subcategories
Category CH4 has 3 control variables, 0 performance variables, 0 subcategories
Category CRAY_CONTROL has 17 control variables, 0 performance variables, 0 subcategories
Category CRAY_DISPLAY has 7 control variables, 0 performance variables, 0 subcategories
Category CRAY_GPU has 18 control variables, 0 performance variables, 0 subcategories
Category CH4_UCX has 2 control variables, 0 performance variables, 0 subcategories
No errors

Passed Multi-target basic - multisend

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 2

Test Description:

Run concurrent sends to a single target process. Stresses an implementation that permits concurrent sends to different targets.

No errors

Passed Multi-target many - multisend2

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 5

Test Description:

Run concurrent sends to different target processes. Stresses an implementation that permits concurrent sends to different targets.

buf size 1: time 0.000076
buf size 2: time 0.000006
buf size 4: time 0.000007
buf size 8: time 0.000007
buf size 16: time 0.000007
buf size 32: time 0.000007
buf size 64: time 0.000024
buf size 128: time 0.000010
buf size 256: time 0.000010
buf size 512: time 0.000011
buf size 1024: time 0.000013
buf size 2048: time 0.000023
buf size 4096: time 0.000026
buf size 8192: time 0.000046
buf size 16384: time 0.000048
buf size 32768: time 0.000053
buf size 65536: time 0.000064
buf size 131072: time 0.000054
buf size 262144: time 0.000086
buf size 524288: time 0.000149
No errors

Passed Multi-target non-blocking - multisend3

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 5

Test Description:

Run concurrent sends to different target processes. Stresses an implementation that permits concurrent sends to different targets. Uses non-blocking sends and has a single thread complete all I/O.

buf address 0x154d88000b60 (size 2640000)
buf address 0x154d80000b60 (size 2640000)
buf address 0x154d84000b60 (size 2640000)
buf address 0x154d78000b60 (size 2640000)
buf size 4: time 0.001991
buf size 8: time 0.000012
buf size 16: time 0.000013
buf size 32: time 0.000012
buf size 64: time 0.000012
buf size 128: time 0.000012
buf size 256: time 0.000266
buf size 512: time 0.000016
buf size 1024: time 0.000017
buf size 2048: time 0.000018
buf size 4096: time 0.000026
buf size 8192: time 0.000023
buf size 16384: time 0.000025
buf size 32768: time 0.000261
buf size 65536: time 0.000040
buf size 131072: time 0.000068
buf size 262144: time 0.000119
buf size 524288: time 0.000337
buf size 1048576: time 0.000332
buf size 2097152: time 0.000915
No errors

Passed Multi-target non-blocking send/recv - multisend4

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 5

Test Description:

Run concurrent sends to different target processes. Stresses an implementation that permits concurrent sends to different targets. Uses non-blocking sends and receives and has a single thread complete all I/O.

buf size 1: time 0.000564
buf size 1: time 0.000570
buf size 1: time 0.000579
buf size 1: time 0.000569
buf size 1: time 0.000580
buf size 2: time 0.000034
buf size 2: time 0.000033
buf size 2: time 0.000034
buf size 2: time 0.000035
buf size 2: time 0.000035
buf size 4: time 0.000032
buf size 4: time 0.000031
buf size 4: time 0.000031
buf size 4: time 0.000032
buf size 4: time 0.000032
buf size 8: time 0.000031
buf size 8: time 0.000030
buf size 8: time 0.000031
buf size 8: time 0.000031
buf size 8: time 0.000031
buf size 16: time 0.000031
buf size 16: time 0.000031
buf size 16: time 0.000031
buf size 16: time 0.000030
buf size 16: time 0.000030
buf size 32: time 0.000028
buf size 32: time 0.000028
buf size 32: time 0.000028
buf size 32: time 0.000028
buf size 32: time 0.000028
buf size 64: time 0.000303
buf size 64: time 0.000303
buf size 64: time 0.000304
buf size 64: time 0.000304
buf size 64: time 0.000304
buf size 128: time 0.000033
buf size 128: time 0.000033
buf size 128: time 0.000033
buf size 128: time 0.000034
buf size 128: time 0.000033
buf size 256: time 0.000045
buf size 256: time 0.000047
buf size 256: time 0.000046
buf size 256: time 0.000046
buf size 256: time 0.000047
buf size 512: time 0.000036
buf size 512: time 0.000037
buf size 512: time 0.000036
buf size 512: time 0.000037
buf size 512: time 0.000037
buf size 1024: time 0.000050
buf size 1024: time 0.000051
buf size 1024: time 0.000051
buf size 1024: time 0.000051
buf size 1024: time 0.000051
buf size 2048: time 0.000054
buf size 2048: time 0.000053
buf size 2048: time 0.000056
buf size 2048: time 0.000057
buf size 2048: time 0.000055
buf size 4096: time 0.000059
buf size 4096: time 0.000061
buf size 4096: time 0.000060
buf size 4096: time 0.000060
buf size 4096: time 0.000061
buf size 8192: time 0.000423
buf size 8192: time 0.000425
buf size 8192: time 0.000424
buf size 8192: time 0.000425
buf size 8192: time 0.000426
buf size 16384: time 0.000147
buf size 16384: time 0.000148
buf size 16384: time 0.000151
buf size 16384: time 0.000147
buf size 16384: time 0.000145
buf size 32768: time 0.000264
buf size 32768: time 0.000261
buf size 32768: time 0.000270
buf size 32768: time 0.000269
buf size 32768: time 0.000267
buf size 65536: time 0.000393
buf size 65536: time 0.000407
buf size 65536: time 0.000405
buf size 65536: time 0.000410
buf size 65536: time 0.000411
buf size 131072: time 0.000499
buf size 131072: time 0.000502
buf size 131072: time 0.000497
buf size 131072: time 0.000519
buf size 131072: time 0.000517
buf size 262144: time 0.000933
buf size 262144: time 0.000936
buf size 262144: time 0.000931
buf size 262144: time 0.000933
buf size 262144: time 0.000931
buf size 524288: time 0.001535
buf size 524288: time 0.001549
buf size 524288: time 0.001549
No errors
buf size 524288: time 0.001551
buf size 524288: time 0.001547

Passed Multi-target self - sendselfth

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 1

Test Description:

Send to self in a threaded program.

No errors

Passed Multi-threaded [non]blocking - threads

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 2

Test Description:

This tests blocking and non-blocking communication capability within MPI.

Using MPI_PROC_NULL
-------------------
Threads: 1; Latency: 0.014; Mrate: 72.666
Threads: 2; Latency: 0.726; Mrate: 2.756
Threads: 3; Latency: 0.697; Mrate: 4.306
Threads: 4; Latency: 1.019; Mrate: 3.924
Blocking communication with message size      0 bytes
------------------------------------------------------
Threads: 1; Latency: 0.398; Mrate: 2.512
Threads: 2; Latency: 5.022; Mrate: 0.398
Threads: 3; Latency: 6.375; Mrate: 0.471
Threads: 4; Latency: 7.934; Mrate: 0.504
Blocking communication with message size      1 bytes
------------------------------------------------------
Threads: 1; Latency: 0.411; Mrate: 2.433
Threads: 2; Latency: 4.752; Mrate: 0.421
Threads: 3; Latency: 6.324; Mrate: 0.474
Threads: 4; Latency: 7.902; Mrate: 0.506
Blocking communication with message size      4 bytes
------------------------------------------------------
Threads: 1; Latency: 0.412; Mrate: 2.430
Threads: 2; Latency: 4.673; Mrate: 0.428
Threads: 3; Latency: 7.914; Mrate: 0.379
Threads: 4; Latency: 7.716; Mrate: 0.518
Blocking communication with message size     16 bytes
------------------------------------------------------
Threads: 1; Latency: 0.405; Mrate: 2.471
Threads: 2; Latency: 4.891; Mrate: 0.409
Threads: 3; Latency: 8.807; Mrate: 0.341
Threads: 4; Latency: 9.447; Mrate: 0.423
Blocking communication with message size     64 bytes
------------------------------------------------------
Threads: 1; Latency: 0.433; Mrate: 2.308
Threads: 2; Latency: 5.226; Mrate: 0.383
Threads: 3; Latency: 6.945; Mrate: 0.432
Threads: 4; Latency: 11.606; Mrate: 0.345
Blocking communication with message size    256 bytes
------------------------------------------------------
Threads: 1; Latency: 3.540; Mrate: 0.282
Threads: 2; Latency: 6.460; Mrate: 0.310
Threads: 3; Latency: 10.764; Mrate: 0.279
Threads: 4; Latency: 12.832; Mrate: 0.312
Blocking communication with message size   1024 bytes
------------------------------------------------------
Threads: 1; Latency: 3.747; Mrate: 0.267
Threads: 2; Latency: 6.653; Mrate: 0.301
Threads: 3; Latency: 9.835; Mrate: 0.305
Threads: 4; Latency: 14.778; Mrate: 0.271
Non-blocking communication with message size      0 bytes
----------------------------------------------------------
Threads: 1; Latency: 0.591; Mrate: 1.692
Threads: 2; Latency: 4.116; Mrate: 0.486
Threads: 3; Latency: 5.821; Mrate: 0.515
Threads: 4; Latency: 12.166; Mrate: 0.329
Non-blocking communication with message size      1 bytes
----------------------------------------------------------
Threads: 1; Latency: 0.621; Mrate: 1.611
Threads: 2; Latency: 5.449; Mrate: 0.367
Threads: 3; Latency: 8.340; Mrate: 0.360
Threads: 4; Latency: 12.669; Mrate: 0.316
Non-blocking communication with message size      4 bytes
----------------------------------------------------------
Threads: 1; Latency: 0.609; Mrate: 1.641
Threads: 2; Latency: 5.442; Mrate: 0.368
Threads: 3; Latency: 8.791; Mrate: 0.341
Threads: 4; Latency: 12.415; Mrate: 0.322
Non-blocking communication with message size     16 bytes
----------------------------------------------------------
Threads: 1; Latency: 0.609; Mrate: 1.643
Threads: 2; Latency: 5.477; Mrate: 0.365
Threads: 3; Latency: 8.579; Mrate: 0.350
Threads: 4; Latency: 12.613; Mrate: 0.317
Non-blocking communication with message size     64 bytes
----------------------------------------------------------
Threads: 1; Latency: 0.613; Mrate: 1.631
Threads: 2; Latency: 5.547; Mrate: 0.361
Threads: 3; Latency: 8.595; Mrate: 0.349
Threads: 4; Latency: 11.554; Mrate: 0.346
Non-blocking communication with message size    256 bytes
----------------------------------------------------------
Threads: 1; Latency: 0.680; Mrate: 1.472
Threads: 2; Latency: 5.593; Mrate: 0.358
Threads: 3; Latency: 9.282; Mrate: 0.323
Threads: 4; Latency: 11.466; Mrate: 0.349
Non-blocking communication with message size   1024 bytes
----------------------------------------------------------
Threads: 1; Latency: 0.821; Mrate: 1.218
Threads: 2; Latency: 5.582; Mrate: 0.358
Threads: 3; Latency: 9.636; Mrate: 0.311
Threads: 4; Latency: 12.252; Mrate: 0.326
No errors

Passed Multi-threaded send/recv - threaded_sr

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 2

Test Description:

The buffer size needs to be large enough to cause the rndv protocol to be used. If the MPI provider doesn't use a rndv protocol then the size doesn't matter.

No errors

Passed Multiple threads context dup - ctxdup

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 4

Test Description:

This test creates communicators concurrently in different threads.

No errors

Passed Multiple threads context idup - ctxidup

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 4

Test Description:

This test creates communicators concurrently, non-blocking, in different threads.

No errors

Passed Multiple threads dup leak - dup_leak_test

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 2

Test Description:

This test repeatedly duplicates and frees communicators with multiple threads concurrently to stress the multithreaded aspects of the context ID allocation code. Thanks to IBM for providing the original version of this test.

No errors

Failed Multispawn - multispawn

Build: Passed

Execution: Failed

Exit Status: Application_ended_with_error(s)

MPI Processes: 1

Test Description:

This (is a placeholder for a) test that creates 4 threads, each of which does a concurrent spawn of 4 more processes, for a total of 17 MPI processes. The resulting intercomms are tested for consistency (to ensure that the spawns didn't get confused among the threads). As an option, it will time the Spawn calls. If the spawn calls block the calling thread, this may show up in the timing of the calls.

Assertion failed in file ../src/mpid/ch4/netmod/ofi/ofi_spawn.c at line 753: 0
/opt/cray/pe/lib64/libmpi_cray.so.12(MPL_backtrace_show+0x26) [0x14e43ecbbc2b]
/opt/cray/pe/lib64/libmpi_cray.so.12(+0x1fef8d4) [0x14e43e6f28d4]
/opt/cray/pe/lib64/libmpi_cray.so.12(+0x22a6fe8) [0x14e43e9a9fe8]
/opt/cray/pe/lib64/libmpi_cray.so.12(+0x202ac89) [0x14e43e72dc89]
/opt/cray/pe/lib64/libmpi_cray.so.12(+0x202b03e) [0x14e43e72e03e]
/opt/cray/pe/lib64/libmpi_cray.so.12(MPI_Comm_spawn+0x1e2) [0x14e43e29a212]
/var/run/palsd/53ab20c7-1677-4a01-8358-553a94e183e1/files/multispawn() [0x203f53]
/var/run/palsd/53ab20c7-1677-4a01-8358-553a94e183e1/files/multispawn() [0x2040b3]
/lib64/libc.so.6(__libc_start_main+0xef) [0x14e43ba8424d]
/var/run/palsd/53ab20c7-1677-4a01-8358-553a94e183e1/files/multispawn() [0x203e4a]
MPICH ERROR [Rank 0] [job id 53ab20c7-1677-4a01-8358-553a94e183e1] [Mon Apr 15 11:03:45 2024] [x1001c5s5b0n0] - Abort(1): Internal error
x1001c5s5b0n0.hsn.warhawk.afrl.hpc.mil: rank 0 exited with code 1

Passed Simple thread comm dup - comm_dup_deadlock

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 4

Test Description:

This is a simple test of threads in MPI with communicator duplication.

No errors

Passed Simple thread comm idup - comm_idup

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 4

Test Description:

This is a simple test of threads in MPI with non-blocking communicator duplication.

No Errors

Passed Simple thread finalize - initth

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 1

Test Description:

This is a simple test that MPI_Finalize() exits cleanly after a thread has been created, so the only action is to report no errors.

No errors

Passed Simple thread initialize - initth2

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 2

Test Description:

The test initializes a thread, then calls MPI_Finalize() and prints "No errors".
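
The essential call is MPI_Init_thread(); the sketch below requests the strongest thread level and checks what the library actually granted.

    /* minimal sketch: initialize MPI with a requested thread level */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int provided;
        MPI_Init_thread(&argc, &argv, MPI_THREAD_MULTIPLE, &provided);
        if (provided < MPI_THREAD_MULTIPLE)
            printf("requested MPI_THREAD_MULTIPLE, library provided level %d\n", provided);
        MPI_Finalize();
        printf("No errors\n");
        return 0;
    }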

No errors

Failed Taskmaster threaded - th_taskmaster

Build: Passed

Execution: Failed

Exit Status: Application_ended_with_error(s)

MPI Processes: 1

Test Description:

This is a simple test that creates threads to verify compatibility between MPI and the pthread library.

Assertion failed in file ../src/mpid/ch4/netmod/ofi/ofi_spawn.c at line 753: 0
/opt/cray/pe/lib64/libmpi_cray.so.12(MPL_backtrace_show+0x26) [0x14c4bfcdbc2b]
/opt/cray/pe/lib64/libmpi_cray.so.12(+0x1fef8d4) [0x14c4bf7128d4]
/opt/cray/pe/lib64/libmpi_cray.so.12(+0x22a6fe8) [0x14c4bf9c9fe8]
/opt/cray/pe/lib64/libmpi_cray.so.12(+0x202ac89) [0x14c4bf74dc89]
/opt/cray/pe/lib64/libmpi_cray.so.12(+0x202b03e) [0x14c4bf74e03e]
/opt/cray/pe/lib64/libmpi_cray.so.12(MPI_Comm_spawn+0x1e2) [0x14c4bf2ba212]
/var/run/palsd/21854d6c-d848-4083-a5e7-bf55022f37f6/files/th_taskmaster() [0x2040a6]
/var/run/palsd/21854d6c-d848-4083-a5e7-bf55022f37f6/files/th_taskmaster() [0x20422f]
/lib64/libpthread.so.0(+0xa6ea) [0x14c4bcc706ea]
/lib64/libc.so.6(clone+0x3f) [0x14c4bcb8694f]
MPICH ERROR [Rank 0] [job id 21854d6c-d848-4083-a5e7-bf55022f37f6] [Mon Apr 15 11:03:52 2024] [x1001c5s5b0n0] - Abort(1): Internal error
x1001c5s5b0n0.hsn.warhawk.afrl.hpc.mil: rank 0 exited with code 1

Passed Thread Group creation - comm_create_threads

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 4

Test Description:

Every thread participates in a distinct MPI_Comm_create group, distinguished by its thread-id (used as the tag). Threads on even ranks join an even comm and threads on odd ranks join the odd comm.

No errors

Passed Thread/RMA interaction - multirma

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 2

Test Description:

This is a simple test of threads in MPI.

No errors

Passed Threaded group - comm_create_group_threads

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 4

Test Description:

In this test a number of threads are created with a distinct MPI communicator (or comm) group distinguished by its thread-id (used as a tag). Threads on even ranks join an even comm and threads on odd ranks join the odd comm.

No errors

Passed Threaded ibsend - ibsend

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 2

Test Description:

This program performs a short test of MPI_Bsend in a multithreaded environment. It starts a single receiver thread that expects NUMSENDS messages, and NUMSENDS sender threads that use MPI_Bsend to send a message of size MSGSIZE to their right neighbour, or to rank 0 if (my_rank==comm_size-1), i.e., target_rank = (my_rank+1)%size.

After all messages have been received, the receiver thread prints a message, the threads are joined into the main thread and the application terminates.
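
A hedged sketch of the same pattern follows; NUMSENDS and MSGSIZE are illustrative values, and the real test also prints a message from the receiver and performs the error checking described above.

    /* minimal sketch: buffered sends from several threads, one receiver thread;
       assumes the library grants MPI_THREAD_MULTIPLE */
    #include <mpi.h>
    #include <pthread.h>
    #include <stdlib.h>
    #define NUMSENDS 4
    #define MSGSIZE  1024

    static int rank, size;

    static void *sender(void *arg)
    {
        char msg[MSGSIZE] = {0};
        MPI_Bsend(msg, MSGSIZE, MPI_CHAR, (rank + 1) % size, 0, MPI_COMM_WORLD);
        return NULL;
    }

    static void *receiver(void *arg)
    {
        char msg[MSGSIZE];
        for (int i = 0; i < NUMSENDS; i++)
            MPI_Recv(msg, MSGSIZE, MPI_CHAR, (rank + size - 1) % size, 0,
                     MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        return NULL;
    }

    int main(int argc, char **argv)
    {
        int provided, bufsize = NUMSENDS * (MSGSIZE + MPI_BSEND_OVERHEAD);
        char *buf = malloc(bufsize);
        pthread_t recv_thread, send_threads[NUMSENDS];

        MPI_Init_thread(&argc, &argv, MPI_THREAD_MULTIPLE, &provided);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);
        MPI_Buffer_attach(buf, bufsize);       /* Bsend needs a user-attached buffer */

        pthread_create(&recv_thread, NULL, receiver, NULL);
        for (int i = 0; i < NUMSENDS; i++)
            pthread_create(&send_threads[i], NULL, sender, NULL);
        for (int i = 0; i < NUMSENDS; i++)
            pthread_join(send_threads[i], NULL);
        pthread_join(recv_thread, NULL);

        MPI_Buffer_detach(&buf, &bufsize);
        free(buf);
        MPI_Finalize();
        return 0;
    }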

No Errors

Passed Threaded request - greq_test

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 1

Test Description:

Threaded generalized request tests.

Post Init ...
Testing ...
Starting work in thread ...
Work in thread done !!!
Testing ...
Starting work in thread ...
Work in thread done !!!
Testing ...
Starting work in thread ...
Work in thread done !!!
Goodbye !!!
No errors

Passed Threaded wait/test - greq_wait

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 1

Test Description:

Threaded wait/test request tests.

Post Init ...
Waiting ...
Starting work in thread ...
Work in thread done !!!
Waiting ...
Starting work in thread ...
Work in thread done !!!
Waiting ...
Starting work in thread ...
Work in thread done !!!
Goodbye !!!
No errors

MPI-Toolkit Interface - Score: 80% Passed

This group features tests that involve the MPI Tool interface available in MPI-3.0 and higher.

Passed MPI_T 3.1 get index call - mpit_get_index

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 1

Test Description:

Tests that the MPI 3.1 Toolkit interface *_get_index name lookup functions work as expected.
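
A minimal sketch of one such lookup is shown below; the control-variable name is taken from the Cray MPICH listings elsewhere in this report and is implementation specific.

    /* minimal sketch: MPI 3.1 name-to-index lookup for a control variable */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int provided, idx;
        MPI_T_init_thread(MPI_THREAD_SINGLE, &provided);
        if (MPI_T_cvar_get_index("MPIR_CVAR_OFI_VERBOSE", &idx) == MPI_SUCCESS)
            printf("MPIR_CVAR_OFI_VERBOSE has index %d\n", idx);
        MPI_T_finalize();
        return 0;
    }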

No errors

Passed MPI_T cycle variables - mpit_vars

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 1

Test Description:

Prints out all MPI_T control variables, performance variables, and their categories in the MPI implementation.

129 MPI Control Variables
	MPIR_CVAR_ALLREDUCE_MAX_SMP_SIZE=262144	SCOPE_ALL_EQ	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_ALLREDUCE_GPU_MAX_SMP_SIZE=1024	SCOPE_ALL_EQ	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_IALLGATHERV_PIPELINE_MSG_SIZE=524288	SCOPE_ALL_EQ	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_REDUCE_SCATTER_COMMUTATIVE_LONG_MSG_SIZE=524288	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_REDUCE_SCATTER_MAX_COMMSIZE=1000	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_ALLTOALL_SYNC_FREQ=-1	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_ALLTOALL_BLK_SIZE=16384	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_ALLTOALL_CHUNKING_MAX_NODES=90	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_ALLGATHER_VSHORT_MSG=-1	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_GPU_ALLGATHER_VSHORT_MSG_ALGORITHM=1	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_ALLGATHERV_VSHORT_MSG=-1	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_ALLREDUCE_NO_SMP=0	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_ALLTOALL_SHORT_MSG=-1	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_ALLTOALLV_THROTTLE=8	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_BCAST_ONLY_TREE=1	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_BCAST_INTERNODE_RADIX=4	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_BCAST_INTRANODE_RADIX=4	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_COLL_OPT_OFF	SCOPE_READONLY	NO_OBJECT	MPI_CHAR	VERBOSITY_USER_BASIC	
	MPIR_CVAR_COLL_SYNC	SCOPE_READONLY	NO_OBJECT	MPI_CHAR	VERBOSITY_USER_BASIC	
	MPIR_CVAR_GATHERV_SHORT_MSG=131072	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_REDUCE_NO_SMP=0	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_SCATTERV_SYNCHRONOUS=0	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_SHARED_MEM_COLL_OPT	SCOPE_READONLY	NO_OBJECT	MPI_CHAR	VERBOSITY_USER_BASIC	
	MPIR_CVAR_GPU_ALLREDUCE_STAGING_THRESHOLD=256	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_GPU_COLL_STAGING_BUF_SIZE=1048576	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_GPU_COLL_REGISTER_SMALL_STAGING_BUFFERS=0	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_IALLGATHERV_THROTTLE=6	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_GATHERV_SYNC_FREQ=16	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_GATHERV_MAX_TMP_SIZE	SCOPE_READONLY	NO_OBJECT	Invalid:MPI_LONG	VERBOSITY_USER_BASIC	
	MPIR_CVAR_GATHERV_MIN_COMM_SIZE=64	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_IGATHERV_MIN_COMM_SIZE=1000	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_IGATHERV_SYNC_FREQ=100	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_SCATTERV_SHORT_MSG=-1	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_SCATTERV_SYNC_FREQ=16	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_SCATTERV_MAX_TMP_SIZE	SCOPE_READONLY	NO_OBJECT	Invalid:MPI_LONG	VERBOSITY_USER_BASIC	
	MPIR_CVAR_SCATTERV_MIN_COMM_SIZE=64	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_MPIIO_ABORT_ON_RW_ERROR	SCOPE_READONLY	NO_OBJECT	MPI_CHAR	VERBOSITY_USER_BASIC	
	MPIR_CVAR_MPIIO_AGGREGATOR_PLACEMENT_DISPLAY=0	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_MPIIO_AGGREGATOR_PLACEMENT_STRIDE=-1	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_MPIIO_CB_ALIGN=2	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_MPIIO_DVS_MAXNODES=-1	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_MPIIO_HINTS	SCOPE_READONLY	NO_OBJECT	MPI_CHAR	VERBOSITY_USER_BASIC	
	MPIR_CVAR_MPIIO_HINTS_DISPLAY=0	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_MPIIO_MAX_NUM_IRECV=50	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_MPIIO_MAX_NUM_ISEND=50	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_MPIIO_MAX_SIZE_ISEND=10485760	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_MPIIO_STATS=0	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_MPIIO_STATS_FILE	SCOPE_READONLY	NO_OBJECT	MPI_CHAR	VERBOSITY_USER_BASIC	
	MPIR_CVAR_MPIIO_STATS_INTERVAL_MSEC	SCOPE_READONLY	NO_OBJECT	MPI_UNSIGNED_LONG	VERBOSITY_USER_BASIC	
	MPIR_CVAR_MPIIO_TIMERS=0	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_MPIIO_TIMERS_SCALE=0	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_MPIIO_TIME_WAITS=1	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_MPIIO_WRITE_EXIT_BARRIER=1	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_MPIIO_DS_WRITE_CRAY=1	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_MPIIO_OFI_STARTUP_CONNECT	SCOPE_READONLY	NO_OBJECT	MPI_CHAR	VERBOSITY_USER_BASIC	
	MPIR_CVAR_MPIIO_OFI_STARTUP_NODES_AGGREGATOR=2	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_DPM_DIR	SCOPE_READONLY	NO_OBJECT	MPI_CHAR	VERBOSITY_USER_BASIC	
	MPIR_CVAR_SINGLE_HOST_ENABLED=1	SCOPE_ALL_EQ	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_OFI_CXI_COUNTER_REPORT=1	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_OFI_CXI_COUNTER_FILE	SCOPE_READONLY	NO_OBJECT	MPI_CHAR	VERBOSITY_USER_BASIC	
	MPIR_CVAR_OFI_CXI_COUNTER_REPORT_FILE	SCOPE_READONLY	NO_OBJECT	MPI_CHAR	VERBOSITY_USER_BASIC	
	MPIR_CVAR_OFI_CXI_COUNTER_VERBOSE=0	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_OFI_VERBOSE=0	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_OFI_XRCD_BASE_DIR	SCOPE_READONLY	NO_OBJECT	MPI_CHAR	VERBOSITY_USER_BASIC	
	MPIR_CVAR_OFI_NIC_VERBOSE=0	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_OFI_NIC_POLICY	SCOPE_READONLY	NO_OBJECT	MPI_CHAR	VERBOSITY_USER_BASIC	
	MPIR_CVAR_OFI_NIC_MAPPING	SCOPE_READONLY	NO_OBJECT	MPI_CHAR	VERBOSITY_USER_BASIC	
	MPIR_CVAR_OFI_NUM_NICS	SCOPE_READONLY	NO_OBJECT	MPI_CHAR	VERBOSITY_USER_BASIC	
	MPIR_CVAR_OFI_SKIP_NIC_SYMMETRY_TEST=0	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_OFI_STARTUP_CONNECT=0	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_OFI_RMA_STARTUP_CONNECT=0	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_OFI_DEFAULT_TCLASS	SCOPE_READONLY	NO_OBJECT	MPI_CHAR	VERBOSITY_USER_BASIC	
	MPIR_CVAR_OFI_TCLASS_ERRORS	SCOPE_READONLY	NO_OBJECT	MPI_CHAR	VERBOSITY_USER_BASIC	
	MPIR_CVAR_OFI_CXI_PID_BASE=0	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_OFI_CXI_VNI_INDEX=1	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_OFI_USE_SCALABLE_STARTUP=1	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_OFI_RMA_LATENCY_TCLASS=0	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_OFI_RMA_LATENCY_TCLASS_THRESHOLD=2048	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_UCX_VERBOSE=0	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_UCX_RC_MAX_RANKS=7	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_SMP_SINGLE_COPY_MODE	SCOPE_READONLY	NO_OBJECT	MPI_CHAR	VERBOSITY_USER_BASIC	
	MPIR_CVAR_SMP_SINGLE_COPY_SIZE=8192	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_SHM_PROGRESS_MAX_BATCH_SIZE=8	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_CH4_PROGRESS_POKE=1	SCOPE_LOCAL	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_SPAWN_USE_RANKPOOL=1	SCOPE_ALL_EQ	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_CH4_RMA_THREAD_HOT=0	SCOPE_GROUP_EQ	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_ABORT_ON_ERROR=0	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_CPUMASK_DISPLAY=0	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_ENV_DISPLAY=0	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_OPTIMIZED_MEMCPY=1	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_STATS_DISPLAY=0	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_STATS_VERBOSITY=1	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_STATS_FILE	SCOPE_READONLY	NO_OBJECT	MPI_CHAR	VERBOSITY_USER_BASIC	
	MPIR_CVAR_RANK_REORDER_DISPLAY=0	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_RANK_REORDER_METHOD=1	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_USE_SYSTEM_MEMCPY=0	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_VERSION_DISPLAY=0	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_USE_GPU_STREAM_TRIGGERED=0	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_NUM_MAX_GPU_STREAMS=27	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_MEMCPY_MEM_CHECK=0	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_MSG_QUEUE_DBG=0	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_NO_BUFFER_ALIAS_CHECK=0	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_ALLOC_MEM_AFFINITY	SCOPE_READONLY	NO_OBJECT	MPI_CHAR	VERBOSITY_USER_BASIC	
	MPIR_CVAR_INTERNAL_MEM_AFFINITY	SCOPE_READONLY	NO_OBJECT	MPI_CHAR	VERBOSITY_USER_BASIC	
	MPIR_CVAR_ALLOC_MEM_POLICY	SCOPE_READONLY	NO_OBJECT	MPI_CHAR	VERBOSITY_USER_BASIC	
	MPIR_CVAR_ALLOC_MEM_PG_SZ	SCOPE_READONLY	NO_OBJECT	Invalid:MPI_LONG	VERBOSITY_USER_BASIC	
	MPIR_CVAR_OPT_THREAD_SYNC=1	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_THREAD_YIELD_FREQ=10000	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_MEM_DEBUG_FNAME	SCOPE_READONLY	NO_OBJECT	MPI_CHAR	VERBOSITY_USER_BASIC	
	MPIR_CVAR_MALLOC_FALLBACK=0	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_NO_DIRECT_GPU_ACCESS=0	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_G2G_PIPELINE=8	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_GPU_SUPPORT_ENABLED=0	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_GPU_MANAGED_MEMORY_SUPPORT_ENABLED=1	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_GPU_COLL_STAGING_AREA_OPT=1	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_GPU_COLL_REGISTER_SHARED_MEM_REGION=1	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_GPU_IPC_ENABLED=-1	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_GPU_IPC_THRESHOLD=1024	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_GPU_IPC_PROTOCOL	SCOPE_READONLY	NO_OBJECT	MPI_CHAR	VERBOSITY_USER_BASIC	
	MPIR_CVAR_GPU_NO_ASYNC_COPY=0	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_ENABLE_YAKSA=0	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_GPU_EAGER_DEVICE_MEM=0	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_GPU_EAGER_REGISTER_HOST_MEM=1	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_GPU_ALLREDUCE_USE_KERNEL=1	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_GPU_ALLREDUCE_BLK_SIZE=8388608	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_GPU_ALLREDUCE_KERNEL_THRESHOLD=131072	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_RMA_MAX_PENDING=128	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_RMA_SHM_ACCUMULATE=0	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_LOCAL_SPAWN_SERVER=0	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
0 MPI Performance Variables
22 MPI_T categories
Category COLLECTIVE has 39 control variables, 0 performance variables, 0 subcategories
	Description: A category for collective communication variables.
Category COMMUNICATOR has 0 control variables, 0 performance variables, 0 subcategories
	Description: cvars that control communicator construction and operation
Category DATALOOP has 0 control variables, 0 performance variables, 0 subcategories
	Description: Dataloop-related CVARs
Category ERROR_HANDLING has 0 control variables, 0 performance variables, 0 subcategories
	Description: cvars that control error handling behavior (stack traces, aborts, etc)
Category THREADS has 0 control variables, 0 performance variables, 0 subcategories
	Description: multi-threading cvars
Category DEBUGGER has 0 control variables, 0 performance variables, 0 subcategories
	Description: cvars relevant to the "MPIR" debugger interface
Category DEVELOPER has 0 control variables, 0 performance variables, 0 subcategories
	Description: useful for developers working on MPICH itself
Category CRAY_MPIIO has 20 control variables, 0 performance variables, 0 subcategories
	Description: A category for variables that control Cray's MPI-IO technology.
Category DIMS has 0 control variables, 0 performance variables, 0 subcategories
	Description: Dims_create cvars
Category PROCESS_MANAGER has 2 control variables, 0 performance variables, 0 subcategories
	Description: cvars that control the client-side process manager code
Category MEMORY has 0 control variables, 0 performance variables, 0 subcategories
	Description: affects memory allocation and usage, including MPI object handles
Category NODEMAP has 1 control variables, 0 performance variables, 0 subcategories
	Description: cvars that control behavior of nodemap
Category REQUEST has 0 control variables, 0 performance variables, 0 subcategories
	Description: A category for requests mangement variables
Category NEMESIS has 0 control variables, 0 performance variables, 0 subcategories
	Description: cvars that control behavior of the ch3:nemesis channel
Category FT has 0 control variables, 0 performance variables, 0 subcategories
	Description: cvars that control behavior of fault tolerance
Category CH3 has 0 control variables, 0 performance variables, 0 subcategories
	Description: cvars that control behavior of ch3
Category CH4_OFI has 20 control variables, 0 performance variables, 0 subcategories
	Description: A category for CH4 OFI netmod variables
Category CH4 has 3 control variables, 0 performance variables, 0 subcategories
	Description: cvars that control behavior of the CH4 device
Category CRAY_CONTROL has 17 control variables, 0 performance variables, 0 subcategories
	Description: A category for variables that control the flow of Cray MPICH
Category CRAY_DISPLAY has 7 control variables, 0 performance variables, 0 subcategories
	Description: A category for variables that enable displaying of system details. Has no effect on the flow of Cray MPICH.
Category CRAY_GPU has 18 control variables, 0 performance variables, 0 subcategories
	Description: A category for variables that affect Cray's GPU support
Category CH4_UCX has 2 control variables, 0 performance variables, 0 subcategories
	Description: 
No errors
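
The listing above is produced by iterating over the exposed control variables; a minimal sketch of that enumeration pattern (an assumed shape, not the test's actual code) is:

    #include <mpi.h>
    #include <stdio.h>

    int main(void)
    {
        int provided, ncvar;
        MPI_T_init_thread(MPI_THREAD_SINGLE, &provided);
        MPI_T_cvar_get_num(&ncvar);
        for (int i = 0; i < ncvar; i++) {
            char name[256], desc[1024];
            int namelen = sizeof(name), desclen = sizeof(desc);
            int verbosity, bind, scope;
            MPI_Datatype dtype;
            MPI_T_enum enumtype;
            /* Query each control variable's name, type, binding, and scope. */
            MPI_T_cvar_get_info(i, name, &namelen, &verbosity, &dtype,
                                &enumtype, desc, &desclen, &bind, &scope);
            printf("%s\n", name);
        }
        MPI_T_finalize();
        return 0;
    }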

Passed MPI_T multithreaded - mpit_threading

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 1

Test Description:

This test is adapted from test/mpi/mpi_t/mpit_vars.c, but is a multithreaded version in which multiple threads call MPI_T routines concurrently.

With verbose set, thread 0 prints all MPI_T control variables, performance variables, and their categories.

129 MPI Control Variables
	MPIR_CVAR_ALLREDUCE_MAX_SMP_SIZE=262144	SCOPE_ALL_EQ	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_ALLREDUCE_GPU_MAX_SMP_SIZE=1024	SCOPE_ALL_EQ	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_IALLGATHERV_PIPELINE_MSG_SIZE=524288	SCOPE_ALL_EQ	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_REDUCE_SCATTER_COMMUTATIVE_LONG_MSG_SIZE=524288	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_REDUCE_SCATTER_MAX_COMMSIZE=1000	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_ALLTOALL_SYNC_FREQ=-1	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_ALLTOALL_BLK_SIZE=16384	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_ALLTOALL_CHUNKING_MAX_NODES=90	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_ALLGATHER_VSHORT_MSG=-1	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_GPU_ALLGATHER_VSHORT_MSG_ALGORITHM=1	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_ALLGATHERV_VSHORT_MSG=-1	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_ALLREDUCE_NO_SMP=0	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_ALLTOALL_SHORT_MSG=-1	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_ALLTOALLV_THROTTLE=8	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_BCAST_ONLY_TREE=1	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_BCAST_INTERNODE_RADIX=4	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_BCAST_INTRANODE_RADIX=4	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_COLL_OPT_OFF	SCOPE_READONLY	NO_OBJECT	MPI_CHAR	VERBOSITY_USER_BASIC	
	MPIR_CVAR_COLL_SYNC	SCOPE_READONLY	NO_OBJECT	MPI_CHAR	VERBOSITY_USER_BASIC	
	MPIR_CVAR_GATHERV_SHORT_MSG=131072	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_REDUCE_NO_SMP=0	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_SCATTERV_SYNCHRONOUS=0	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_SHARED_MEM_COLL_OPT	SCOPE_READONLY	NO_OBJECT	MPI_CHAR	VERBOSITY_USER_BASIC	
	MPIR_CVAR_GPU_ALLREDUCE_STAGING_THRESHOLD=256	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_GPU_COLL_STAGING_BUF_SIZE=1048576	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_GPU_COLL_REGISTER_SMALL_STAGING_BUFFERS=0	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_IALLGATHERV_THROTTLE=6	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_GATHERV_SYNC_FREQ=16	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_GATHERV_MAX_TMP_SIZE	SCOPE_READONLY	NO_OBJECT	Invalid:MPI_LONG	VERBOSITY_USER_BASIC	
	MPIR_CVAR_GATHERV_MIN_COMM_SIZE=64	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_IGATHERV_MIN_COMM_SIZE=1000	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_IGATHERV_SYNC_FREQ=100	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_SCATTERV_SHORT_MSG=-1	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_SCATTERV_SYNC_FREQ=16	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_SCATTERV_MAX_TMP_SIZE	SCOPE_READONLY	NO_OBJECT	Invalid:MPI_LONG	VERBOSITY_USER_BASIC	
	MPIR_CVAR_SCATTERV_MIN_COMM_SIZE=64	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_MPIIO_ABORT_ON_RW_ERROR	SCOPE_READONLY	NO_OBJECT	MPI_CHAR	VERBOSITY_USER_BASIC	
	MPIR_CVAR_MPIIO_AGGREGATOR_PLACEMENT_DISPLAY=0	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_MPIIO_AGGREGATOR_PLACEMENT_STRIDE=-1	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_MPIIO_CB_ALIGN=2	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_MPIIO_DVS_MAXNODES=-1	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_MPIIO_HINTS	SCOPE_READONLY	NO_OBJECT	MPI_CHAR	VERBOSITY_USER_BASIC	
	MPIR_CVAR_MPIIO_HINTS_DISPLAY=0	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_MPIIO_MAX_NUM_IRECV=50	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_MPIIO_MAX_NUM_ISEND=50	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_MPIIO_MAX_SIZE_ISEND=10485760	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_MPIIO_STATS=0	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_MPIIO_STATS_FILE	SCOPE_READONLY	NO_OBJECT	MPI_CHAR	VERBOSITY_USER_BASIC	
	MPIR_CVAR_MPIIO_STATS_INTERVAL_MSEC	SCOPE_READONLY	NO_OBJECT	MPI_UNSIGNED_LONG	VERBOSITY_USER_BASIC	
	MPIR_CVAR_MPIIO_TIMERS=0	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_MPIIO_TIMERS_SCALE=0	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_MPIIO_TIME_WAITS=1	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_MPIIO_WRITE_EXIT_BARRIER=1	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_MPIIO_DS_WRITE_CRAY=1	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_MPIIO_OFI_STARTUP_CONNECT	SCOPE_READONLY	NO_OBJECT	MPI_CHAR	VERBOSITY_USER_BASIC	
	MPIR_CVAR_MPIIO_OFI_STARTUP_NODES_AGGREGATOR=2	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_DPM_DIR	SCOPE_READONLY	NO_OBJECT	MPI_CHAR	VERBOSITY_USER_BASIC	
	MPIR_CVAR_SINGLE_HOST_ENABLED=1	SCOPE_ALL_EQ	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_OFI_CXI_COUNTER_REPORT=1	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_OFI_CXI_COUNTER_FILE	SCOPE_READONLY	NO_OBJECT	MPI_CHAR	VERBOSITY_USER_BASIC	
	MPIR_CVAR_OFI_CXI_COUNTER_REPORT_FILE	SCOPE_READONLY	NO_OBJECT	MPI_CHAR	VERBOSITY_USER_BASIC	
	MPIR_CVAR_OFI_CXI_COUNTER_VERBOSE=0	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_OFI_VERBOSE=0	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_OFI_XRCD_BASE_DIR	SCOPE_READONLY	NO_OBJECT	MPI_CHAR	VERBOSITY_USER_BASIC	
	MPIR_CVAR_OFI_NIC_VERBOSE=0	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_OFI_NIC_POLICY	SCOPE_READONLY	NO_OBJECT	MPI_CHAR	VERBOSITY_USER_BASIC	
	MPIR_CVAR_OFI_NIC_MAPPING	SCOPE_READONLY	NO_OBJECT	MPI_CHAR	VERBOSITY_USER_BASIC	
	MPIR_CVAR_OFI_NUM_NICS	SCOPE_READONLY	NO_OBJECT	MPI_CHAR	VERBOSITY_USER_BASIC	
	MPIR_CVAR_OFI_SKIP_NIC_SYMMETRY_TEST=0	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_OFI_STARTUP_CONNECT=0	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_OFI_RMA_STARTUP_CONNECT=0	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_OFI_DEFAULT_TCLASS	SCOPE_READONLY	NO_OBJECT	MPI_CHAR	VERBOSITY_USER_BASIC	
	MPIR_CVAR_OFI_TCLASS_ERRORS	SCOPE_READONLY	NO_OBJECT	MPI_CHAR	VERBOSITY_USER_BASIC	
	MPIR_CVAR_OFI_CXI_PID_BASE=0	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_OFI_CXI_VNI_INDEX=1	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_OFI_USE_SCALABLE_STARTUP=1	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_OFI_RMA_LATENCY_TCLASS=0	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_OFI_RMA_LATENCY_TCLASS_THRESHOLD=2048	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_UCX_VERBOSE=0	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_UCX_RC_MAX_RANKS=7	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_SMP_SINGLE_COPY_MODE	SCOPE_READONLY	NO_OBJECT	MPI_CHAR	VERBOSITY_USER_BASIC	
	MPIR_CVAR_SMP_SINGLE_COPY_SIZE=8192	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_SHM_PROGRESS_MAX_BATCH_SIZE=8	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_CH4_PROGRESS_POKE=1	SCOPE_LOCAL	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_SPAWN_USE_RANKPOOL=1	SCOPE_ALL_EQ	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_CH4_RMA_THREAD_HOT=0	SCOPE_GROUP_EQ	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_ABORT_ON_ERROR=0	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_CPUMASK_DISPLAY=0	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_ENV_DISPLAY=0	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_OPTIMIZED_MEMCPY=1	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_STATS_DISPLAY=0	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_STATS_VERBOSITY=1	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_STATS_FILE	SCOPE_READONLY	NO_OBJECT	MPI_CHAR	VERBOSITY_USER_BASIC	
	MPIR_CVAR_RANK_REORDER_DISPLAY=0	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_RANK_REORDER_METHOD=1	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_USE_SYSTEM_MEMCPY=0	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_VERSION_DISPLAY=0	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_USE_GPU_STREAM_TRIGGERED=0	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_NUM_MAX_GPU_STREAMS=27	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_MEMCPY_MEM_CHECK=0	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_MSG_QUEUE_DBG=0	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_NO_BUFFER_ALIAS_CHECK=0	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_ALLOC_MEM_AFFINITY	SCOPE_READONLY	NO_OBJECT	MPI_CHAR	VERBOSITY_USER_BASIC	
	MPIR_CVAR_INTERNAL_MEM_AFFINITY	SCOPE_READONLY	NO_OBJECT	MPI_CHAR	VERBOSITY_USER_BASIC	
	MPIR_CVAR_ALLOC_MEM_POLICY	SCOPE_READONLY	NO_OBJECT	MPI_CHAR	VERBOSITY_USER_BASIC	
	MPIR_CVAR_ALLOC_MEM_PG_SZ	SCOPE_READONLY	NO_OBJECT	Invalid:MPI_LONG	VERBOSITY_USER_BASIC	
	MPIR_CVAR_OPT_THREAD_SYNC=1	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_THREAD_YIELD_FREQ=10000	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_MEM_DEBUG_FNAME	SCOPE_READONLY	NO_OBJECT	MPI_CHAR	VERBOSITY_USER_BASIC	
	MPIR_CVAR_MALLOC_FALLBACK=0	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_NO_DIRECT_GPU_ACCESS=0	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_G2G_PIPELINE=8	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_GPU_SUPPORT_ENABLED=0	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_GPU_MANAGED_MEMORY_SUPPORT_ENABLED=1	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_GPU_COLL_STAGING_AREA_OPT=1	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_GPU_COLL_REGISTER_SHARED_MEM_REGION=1	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_GPU_IPC_ENABLED=-1	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_GPU_IPC_THRESHOLD=1024	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_GPU_IPC_PROTOCOL	SCOPE_READONLY	NO_OBJECT	MPI_CHAR	VERBOSITY_USER_BASIC	
	MPIR_CVAR_GPU_NO_ASYNC_COPY=0	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_ENABLE_YAKSA=0	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_GPU_EAGER_DEVICE_MEM=0	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_GPU_EAGER_REGISTER_HOST_MEM=1	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_GPU_ALLREDUCE_USE_KERNEL=1	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_GPU_ALLREDUCE_BLK_SIZE=8388608	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_GPU_ALLREDUCE_KERNEL_THRESHOLD=131072	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_RMA_MAX_PENDING=128	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_RMA_SHM_ACCUMULATE=0	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_LOCAL_SPAWN_SERVER=0	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
0 MPI Performance Variables
22 MPI_T categories
Category COLLECTIVE has 39 control variables, 0 performance variables, 0 subcategories
Category COMMUNICATOR has 0 control variables, 0 performance variables, 0 subcategories
Category DATALOOP has 0 control variables, 0 performance variables, 0 subcategories
Category ERROR_HANDLING has 0 control variables, 0 performance variables, 0 subcategories
Category THREADS has 0 control variables, 0 performance variables, 0 subcategories
Category DEBUGGER has 0 control variables, 0 performance variables, 0 subcategories
Category DEVELOPER has 0 control variables, 0 performance variables, 0 subcategories
Category CRAY_MPIIO has 20 control variables, 0 performance variables, 0 subcategories
Category DIMS has 0 control variables, 0 performance variables, 0 subcategories
Category PROCESS_MANAGER has 2 control variables, 0 performance variables, 0 subcategories
Category MEMORY has 0 control variables, 0 performance variables, 0 subcategories
Category NODEMAP has 1 control variables, 0 performance variables, 0 subcategories
Category REQUEST has 0 control variables, 0 performance variables, 0 subcategories
Category NEMESIS has 0 control variables, 0 performance variables, 0 subcategories
Category FT has 0 control variables, 0 performance variables, 0 subcategories
Category CH3 has 0 control variables, 0 performance variables, 0 subcategories
Category CH4_OFI has 20 control variables, 0 performance variables, 0 subcategories
Category CH4 has 3 control variables, 0 performance variables, 0 subcategories
Category CRAY_CONTROL has 17 control variables, 0 performance variables, 0 subcategories
Category CRAY_DISPLAY has 7 control variables, 0 performance variables, 0 subcategories
Category CRAY_GPU has 18 control variables, 0 performance variables, 0 subcategories
Category CH4_UCX has 2 control variables, 0 performance variables, 0 subcategories
No errors

Passed MPI_T string handling - mpi_t_str

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 1

Test Description:

Tests that MPI_T string handling works as expected.

No errors

Failed MPI_T write variable - cvarwrite

Build: Passed

Execution: Failed

Exit Status: Failed with signal 15

MPI Processes: 1

Test Description:

This test writes to control variables exposed by the MPI_T functionality introduced in MPI-3.0.

Total 129 MPI control variables
INTERNAL ERROR: invalid error code 44 (Ring ids do not match) in PMPI_T_cvar_write:129
MPICH ERROR [Rank 0] [job id 3ef3ec8f-7f4e-465d-bd28-8311a17d6e1b] [Mon Apr 15 11:03:35 2024] [x1001c5s5b0n0] - Abort(940175119) (rank 0 in comm 0): Fatal error in PMPI_T_cvar_write: Other MPI error, error stack:
PMPI_T_cvar_write(143):  MPI_T_cvar_write(handle=0x1463170, buf=0x7ffe0c202a64)
PMPI_T_cvar_write(129): 
x1001c5s5b0n0.hsn.warhawk.afrl.hpc.mil: rank 0 exited with code 15
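
For context, the failing call chain corresponds to the standard cvar-write pattern sketched below (illustrative only; the cvar name and value are placeholders, not the test's actual choices):

    #include <mpi.h>

    int main(void)
    {
        int provided, idx, count, val = 4;
        MPI_T_cvar_handle handle;
        MPI_T_init_thread(MPI_THREAD_SINGLE, &provided);
        MPI_T_cvar_get_index("MPIR_CVAR_BCAST_INTERNODE_RADIX", &idx);
        /* Bind to no MPI object, then attempt the write that aborts above. */
        MPI_T_cvar_handle_alloc(idx, NULL, &handle, &count);
        MPI_T_cvar_write(handle, &val);
        MPI_T_cvar_handle_free(&handle);
        MPI_T_finalize();
        return 0;
    }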

MPI-3.0 - Score: 99% Passed

This group features tests that exercise MPI-3.0 and higher functionality. Note that the test suite was designed to be compiled and executed under all versions of MPI. If the MPI version available to the test suite is less than MPI-3.0, the executed code reports "MPI-3.0 or higher required" and exits.

Passed Aint add and diff - aintmath

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 1

Test Description:

Tests the MPI 3.1 standard functions MPI_Aint_diff and MPI_Aint_add.

No errors
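
A minimal sketch of the address arithmetic these functions provide (illustrative only):

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int buf[16];
        MPI_Aint base, shifted;
        MPI_Init(&argc, &argv);
        MPI_Get_address(buf, &base);
        /* Portable pointer arithmetic on MPI_Aint values. */
        shifted = MPI_Aint_add(base, 4 * sizeof(int));
        printf("diff = %ld\n", (long) MPI_Aint_diff(shifted, base));  /* 4*sizeof(int) */
        MPI_Finalize();
        return 0;
    }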

Passed C++ datatypes - cxx-types

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 1

Test Description:

This test checks for the existence of four new C++ named predefined datatypes that should be accessible from C and Fortran.

No errors
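
The four datatypes in question are MPI_CXX_BOOL, MPI_CXX_FLOAT_COMPLEX, MPI_CXX_DOUBLE_COMPLEX, and MPI_CXX_LONG_DOUBLE_COMPLEX; a minimal sketch of touching them from C (illustrative only, not the test's source):

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int size;
        MPI_Init(&argc, &argv);
        /* Each C++ named datatype should be usable from C, e.g. for size queries. */
        MPI_Type_size(MPI_CXX_BOOL, &size);
        printf("MPI_CXX_BOOL size = %d\n", size);
        MPI_Type_size(MPI_CXX_FLOAT_COMPLEX, &size);
        MPI_Type_size(MPI_CXX_DOUBLE_COMPLEX, &size);
        MPI_Type_size(MPI_CXX_LONG_DOUBLE_COMPLEX, &size);
        MPI_Finalize();
        return 0;
    }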

Passed Comm_create_group excl 4 rank - comm_create_group4

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 4

Test Description:

This test using 4 processes creates a group with the even processes using MPI_Group_excl() and uses this group to create a communicator. Then both the communicator and group are freed.

No errors
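
A minimal sketch of the excl-group pattern (illustrative only; the real test's tag choice and cleanup order may differ):

    #include <mpi.h>

    int main(int argc, char **argv)
    {
        int rank, size, nexcl = 0, excl[64];
        MPI_Group world_grp, even_grp;
        MPI_Comm even_comm = MPI_COMM_NULL;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);
        MPI_Comm_group(MPI_COMM_WORLD, &world_grp);
        /* Exclude the odd ranks, leaving a group of the even processes. */
        for (int r = 1; r < size && nexcl < 64; r += 2)
            excl[nexcl++] = r;
        MPI_Group_excl(world_grp, nexcl, excl, &even_grp);
        /* MPI_Comm_create_group is collective only over the group's members. */
        if (rank % 2 == 0)
            MPI_Comm_create_group(MPI_COMM_WORLD, even_grp, 0, &even_comm);
        if (even_comm != MPI_COMM_NULL)
            MPI_Comm_free(&even_comm);
        MPI_Group_free(&even_grp);
        MPI_Group_free(&world_grp);
        MPI_Finalize();
        return 0;
    }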

Passed Comm_create_group excl 8 rank - comm_create_group8

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 8

Test Description:

This test using 8 processes creates a group with the even processes using MPI_Group_excl() and uses this group to create a communicator. Then both the communicator and group are freed.

No errors

Passed Comm_create_group incl 2 rank - comm_group_half2

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 2

Test Description:

This test using 2 processes creates a group with ranks less than size/2 using MPI_Group_range_incl() and uses this group to create a communicator. Then both the communicator and group are freed.

No errors

Passed Comm_create_group incl 4 rank - comm_group_half4

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 4

Test Description:

This test using 4 processes creates a group with ranks less than size/2 using MPI_Group_range_incl() and uses this group to create a communicator. Then both the communicator and group are freed.

No errors

Passed Comm_create_group incl 8 rank - comm_group_half8

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 8

Test Description:

This test using 8 processes creates a group with ranks less than size/2 using MPI_Group_range_incl() and uses this group to create a communicator. Then both the communicator and group are freed.

No errors

Passed Comm_create_group random 2 rank - comm_group_rand2

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 2

Test Description:

This test using 2 processes creates and frees groups by randomly adding processes to a group, then creating a communicator with the group.

No errors

Passed Comm_create_group random 4 rank - comm_group_rand4

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 4

Test Description:

This test using 4 processes creates and frees groups by randomly adding processes to a group, then creating a communicator with the group.

No errors

Passed Comm_create_group random 8 rank - comm_group_rand8

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 8

Test Description:

This test using 8 processes creates and frees groups by randomly adding processes to a group, then creating a communicator with the group.

No errors

Passed Comm_idup 2 rank - comm_idup2

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 2

Test Description:

Multiple tests using 2 processes that make rank 0 wait in a blocking receive until all other processes have called MPI_Comm_idup(), with rank 0 calling idup afterwards. This should ensure that idup does not deadlock. Includes a test using an intercommunicator.

No errors
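
A minimal sketch of the nonblocking duplication itself (illustrative only; the deadlock-avoidance choreography described above is omitted):

    #include <mpi.h>

    int main(int argc, char **argv)
    {
        MPI_Comm dup;
        MPI_Request req;
        MPI_Init(&argc, &argv);
        /* Start the duplication; completion is deferred to the wait. */
        MPI_Comm_idup(MPI_COMM_WORLD, &dup, &req);
        /* ... unrelated work could overlap here ... */
        MPI_Wait(&req, MPI_STATUS_IGNORE);
        MPI_Comm_free(&dup);
        MPI_Finalize();
        return 0;
    }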

Passed Comm_idup 4 rank - comm_idup4

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 4

Test Description:

Multiple tests using 4 processes that make rank 0 wait in a blocking receive until all other processes have called MPI_Comm_idup(), then call idup afterwards. Should ensure that idup doesn't deadlock. Includes a test using an intercommunicator.

No errors

Passed Comm_idup 9 rank - comm_idup9

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 9

Test Description:

Multiple tests using 9 processes that make rank 0 wait in a blocking receive until all other processes have called MPI_Comm_idup(), with rank 0 calling idup afterwards. This should ensure that idup does not deadlock. Includes a test using an intercommunicator.

No errors

Passed Comm_idup multi - comm_idup_mul

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 2

Test Description:

Simple test creating multiple communicators with MPI_Comm_idup.

No errors

Passed Comm_idup overlap - comm_idup_overlap

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 2

Test Description:

Each pair of processes uses MPI_Comm_idup() to dup the communicator such that the dups are overlapping. If this were done with MPI_Comm_dup(), it would deadlock.

No errors

Passed Comm_split_type basic - cmsplit_type

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 4

Test Description:

Tests MPI_Comm_split_type() including a test using MPI_UNDEFINED.

Created subcommunicator of size 2
Created subcommunicator of size 1
No errors
Created subcommunicator of size 2
Created subcommunicator of size 1
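
A minimal sketch of the shared-memory split reflected in the output above (illustrative only; the MPI_UNDEFINED case is omitted):

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int nodesize;
        MPI_Comm nodecomm;
        MPI_Init(&argc, &argv);
        /* Group ranks that can share memory (typically the ranks on one node). */
        MPI_Comm_split_type(MPI_COMM_WORLD, MPI_COMM_TYPE_SHARED, 0,
                            MPI_INFO_NULL, &nodecomm);
        MPI_Comm_size(nodecomm, &nodesize);
        printf("Created subcommunicator of size %d\n", nodesize);
        MPI_Comm_free(&nodecomm);
        MPI_Finalize();
        return 0;
    }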

Passed Comm_with_info dup 2 rank - dup_with_info2

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 2

Test Description:

This test exercises MPI_Comm_dup_with_info() with 2 processes by setting the info for a communicator, duplicating it, and then testing the communicator.

No errors

Passed Comm_with_info dup 4 rank - dup_with_info4

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 4

Test Description:

This test exercises MPI_Comm_dup_with_info() with 4 processes by setting the info for a communicator, duplicating it, and then testing the communicator.

No errors

Passed Comm_with_info dup 9 rank - dup_with_info9

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 9

Test Description:

This test exercises MPI_Comm_dup_with_info() with 9 processes by setting the info for a communicator, duplicating it, and then testing the communicator.

No errors

Passed Compare_and_swap contention - compare_and_swap

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 4

Test Description:

Tests MPI_Compare_and_swap using self communication, neighbor communication, and communication with the root causing contention.

No errors
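
A minimal sketch of a single compare-and-swap against rank 0 (illustrative only; the test's self/neighbor/contention patterns are omitted):

    #include <mpi.h>

    int main(int argc, char **argv)
    {
        int rank;
        long value = 0, compare = 0, swap, result;
        MPI_Win win;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Win_create(&value, sizeof(long), sizeof(long), MPI_INFO_NULL,
                       MPI_COMM_WORLD, &win);
        swap = rank + 1;
        MPI_Win_lock(MPI_LOCK_EXCLUSIVE, 0, 0, win);
        /* If the target value equals 'compare', replace it with 'swap';
           the prior value is returned in 'result'. */
        MPI_Compare_and_swap(&swap, &compare, &result, MPI_LONG, 0, 0, win);
        MPI_Win_unlock(0, win);
        MPI_Win_free(&win);
        MPI_Finalize();
        return 0;
    }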

Passed Datatype get structs - get-struct

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 2

Test Description:

This test was motivated by the failure of an example program for RMA involving simple operations on a struct that included a struct. The observed failure was a SEGV in the MPI_Get.

No errors

Passed Fetch_and_op basic - fetch_and_op

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 4

Test Description:

This simple set of tests executes MPI_Fetch_and_op() calls on RMA windows using a selection of datatypes with multiple different communicators, communication patterns, and operations.

No errors
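
A minimal sketch of a fetch-and-add with MPI_Fetch_and_op (illustrative only; the test cycles through many datatypes, communicators, and operations):

    #include <mpi.h>

    int main(int argc, char **argv)
    {
        long counter = 0, one = 1, prev;
        MPI_Win win;
        MPI_Init(&argc, &argv);
        MPI_Win_create(&counter, sizeof(long), sizeof(long), MPI_INFO_NULL,
                       MPI_COMM_WORLD, &win);
        MPI_Win_lock(MPI_LOCK_SHARED, 0, 0, win);
        /* Atomically add 1 to the counter on rank 0 and fetch its old value. */
        MPI_Fetch_and_op(&one, &prev, MPI_LONG, 0, 0, MPI_SUM, win);
        MPI_Win_unlock(0, win);
        MPI_Win_free(&win);
        MPI_Finalize();
        return 0;
    }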

Passed Get_accumulate basic - get_acc_local

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 1

Test Description:

Get Accumulate Test. This is a simple test of MPI_Get_accumulate() on a local window.

No errors

Passed Get_accumulate communicators - get_accumulate

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 4

Test Description:

Get Accumulate Test. This simple set of tests executes MPI_Get_accumulate on RMA windows using a selection of datatypes with multiple different communicators, communication patterns, and operations.

No errors

Passed Iallreduce basic - iallred

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 2

Test Description:

Simple test for MPI_Iallreduce() and MPI_Allreduce().

No errors
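
A minimal sketch of the nonblocking allreduce (illustrative only):

    #include <mpi.h>

    int main(int argc, char **argv)
    {
        int in = 1, out = 0;
        MPI_Request req;
        MPI_Init(&argc, &argv);
        MPI_Iallreduce(&in, &out, 1, MPI_INT, MPI_SUM, MPI_COMM_WORLD, &req);
        MPI_Wait(&req, MPI_STATUS_IGNORE);   /* out now holds the global sum */
        MPI_Finalize();
        return 0;
    }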

Passed Ibarrier - ibarrier

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 2

Test Description:

This test calls MPI_Ibarrier() followed by a conditional loop containing usleep(1000) and MPI_Test(). This test hung indefinitely under some MPI implementations.

No errors
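
A minimal sketch of that polling loop (illustrative only; the sleep interval follows the description above):

    #include <mpi.h>
    #include <unistd.h>

    int main(int argc, char **argv)
    {
        int done = 0;
        MPI_Request req;
        MPI_Init(&argc, &argv);
        MPI_Ibarrier(MPI_COMM_WORLD, &req);
        while (!done) {
            usleep(1000);                      /* do other work, then poll */
            MPI_Test(&req, &done, MPI_STATUS_IGNORE);
        }
        MPI_Finalize();
        return 0;
    }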

Passed Large counts for types - large-count

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 1

Test Description:

This test checks for large count functionality ("MPI_Count") mandated by MPI-3, as well as behavior of corresponding pre-MPI-3 interfaces that have better defined behavior when an "int" quantity would overflow.

No errors

Passed Large types - large_type

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 1

Test Description:

This test checks that MPI can handle large datatypes.

No errors

Passed Linked list construction fetch/op - linked_list_fop

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 4

Test Description:

This test constructs a distributed shared linked list using MPI-3 dynamic windows with MPI_Fetch_and_op. Initially process 0 creates the head of the list, attaches it to an RMA window, and broadcasts the pointer to all processes. All processes then concurrently append N new elements to the list. When a process attempts to attach its element to the tail of the list, it may discover that its tail pointer is stale and it must chase ahead to the new tail before the element can be attached.

No errors
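
A minimal sketch of the dynamic-window machinery these linked-list tests build on (illustrative only; the pointer-chasing with fetch-and-op is omitted):

    #include <mpi.h>
    #include <stdlib.h>

    int main(int argc, char **argv)
    {
        MPI_Win win;
        MPI_Aint head_disp;
        int *elem;
        MPI_Init(&argc, &argv);
        /* A dynamic window starts empty; memory is attached as the list grows. */
        MPI_Win_create_dynamic(MPI_INFO_NULL, MPI_COMM_WORLD, &win);
        elem = malloc(sizeof(int));
        *elem = 42;
        MPI_Win_attach(win, elem, sizeof(int));
        /* Rank 0 publishes its element's address so others can target it with RMA. */
        MPI_Get_address(elem, &head_disp);
        MPI_Bcast(&head_disp, 1, MPI_AINT, 0, MPI_COMM_WORLD);
        MPI_Win_detach(win, elem);
        MPI_Win_free(&win);
        free(elem);
        MPI_Finalize();
        return 0;
    }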

Passed Linked list construction lockall - linked_list_lockall

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 4

Test Description:

Construct a distributed shared linked list using MPI-3 dynamic RMA windows. Initially process 0 creates the head of the list, attaches it to the window, and broadcasts the pointer to all processes. All processes then concurrently append N new elements to the list. When a process attempts to attach its element to the tail of the list, it may discover that its tail pointer is stale and it must chase ahead to the new tail before the element can be attached. This version of the test uses MPI_Win_lock_all() instead of MPI_Win_lock(MPI_LOCK_EXCLUSIVE, ...).

No errors

Passed Linked-list construction lock shr - linked_list_bench_lock_shr

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 4

Test Description:

This test constructs a distributed shared linked list using MPI-3 dynamic windows. Initially process 0 creates the head of the list, attaches it to the window, and broadcasts the pointer to all processes. Each process "p" then appends N new elements to the list when the tail reaches process "p-1". This test is similar to Linked_list construction test 2 (rma/linked_list_bench_lock_excl) but uses an MPI_LOCK_SHARED parameter to MPI_Win_lock().

No errors

Passed Linked_list construction - linked_list_bench_lock_all

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 4

Test Description:

Construct a distributed shared linked list using MPI-3 dynamic windows. Initially process 0 creates the head of the list, attaches it to the window, and broadcasts the pointer to all processes. Each process "p" then appends N new elements to the list when the tail reaches process "p-1".

No errors

Passed Linked_list construction lock excl - linked_list_bench_lock_excl

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 4

Test Description:

MPI-3 distributed linked list construction example. Construct a distributed shared linked list using MPI-3 dynamic windows. Initially process 0 creates the head of the list, attaches it to the window, and broadcasts the pointer to all processes. Each process "p" then appends N new elements to the list when the tail reaches process "p-1". The test uses the MPI_LOCK_EXCLUSIVE argument with MPI_Win_lock().

No errors

Passed Linked_list construction put/get - linked_list

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 4

Test Description:

This test constructs a distributed shared linked list using MPI-3 dynamic windows with MPI_Put and MPI_Get. Initially process 0 creates the head of the list, attaches it to an RMA window, and broadcasts the pointer to all processes. All processes then concurrently append N new elements to the list. When a process attempts to attach its element to the tail of the list, it may discover that its tail pointer is stale and it must chase ahead to the new tail before the element can be attached.

No errors

Passed MCS_Mutex_trylock - mutex_bench

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 4

Test Description:

This test exercises the MCS_Mutex_lock calls by having multiple competing processes repeatedly lock and unlock a mutex.

No errors

Passed MPI RMA read-and-ops - reqops

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 4

Test Description:

This test exercises atomic, one-sided read-and-operation calls. Includes multiple tests for different RMA request-based operations, communicators, and wait patterns.

No errors

Passed MPI_Dist_graph_create - distgraph1

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 4

Test Description:

This test exercises MPI_Dist_graph_create() and MPI_Dist_graph_create_adjacent().

using graph layout 'deterministic complete graph'
testing MPI_Dist_graph_create_adjacent
testing MPI_Dist_graph_create w/ outgoing only
testing MPI_Dist_graph_create w/ incoming only
testing MPI_Dist_graph_create w/ rank 0 specifies only
testing MPI_Dist_graph_create w/ rank 0 specifies only -- NULLs
using graph layout 'every other edge deleted'
testing MPI_Dist_graph_create_adjacent
testing MPI_Dist_graph_create w/ outgoing only
testing MPI_Dist_graph_create w/ incoming only
testing MPI_Dist_graph_create w/ rank 0 specifies only
testing MPI_Dist_graph_create w/ rank 0 specifies only -- NULLs
using graph layout 'only self-edges'
testing MPI_Dist_graph_create_adjacent
testing MPI_Dist_graph_create w/ outgoing only
testing MPI_Dist_graph_create w/ incoming only
testing MPI_Dist_graph_create w/ rank 0 specifies only
testing MPI_Dist_graph_create w/ rank 0 specifies only -- NULLs
using graph layout 'no edges'
testing MPI_Dist_graph_create_adjacent
testing MPI_Dist_graph_create w/ outgoing only
testing MPI_Dist_graph_create w/ incoming only
testing MPI_Dist_graph_create w/ rank 0 specifies only
testing MPI_Dist_graph_create w/ rank 0 specifies only -- NULLs
using graph layout 'a random incomplete graph'
testing MPI_Dist_graph_create_adjacent
testing MPI_Dist_graph_create w/ outgoing only
testing MPI_Dist_graph_create w/ incoming only
testing MPI_Dist_graph_create w/ rank 0 specifies only
testing MPI_Dist_graph_create w/ rank 0 specifies only -- NULLs
using graph layout 'a random incomplete graph'
testing MPI_Dist_graph_create_adjacent
testing MPI_Dist_graph_create w/ outgoing only
testing MPI_Dist_graph_create w/ incoming only
testing MPI_Dist_graph_create w/ rank 0 specifies only
testing MPI_Dist_graph_create w/ rank 0 specifies only -- NULLs
using graph layout 'a random incomplete graph'
testing MPI_Dist_graph_create_adjacent
testing MPI_Dist_graph_create w/ outgoing only
testing MPI_Dist_graph_create w/ incoming only
testing MPI_Dist_graph_create w/ rank 0 specifies only
testing MPI_Dist_graph_create w/ rank 0 specifies only -- NULLs
using graph layout 'a random incomplete graph'
testing MPI_Dist_graph_create_adjacent
testing MPI_Dist_graph_create w/ outgoing only
testing MPI_Dist_graph_create w/ incoming only
testing MPI_Dist_graph_create w/ rank 0 specifies only
testing MPI_Dist_graph_create w/ rank 0 specifies only -- NULLs
using graph layout 'a random incomplete graph'
testing MPI_Dist_graph_create_adjacent
testing MPI_Dist_graph_create w/ outgoing only
testing MPI_Dist_graph_create w/ incoming only
testing MPI_Dist_graph_create w/ rank 0 specifies only
testing MPI_Dist_graph_create w/ rank 0 specifies only -- NULLs
using graph layout 'a random incomplete graph'
testing MPI_Dist_graph_create_adjacent
testing MPI_Dist_graph_create w/ outgoing only
testing MPI_Dist_graph_create w/ incoming only
testing MPI_Dist_graph_create w/ rank 0 specifies only
testing MPI_Dist_graph_create w/ rank 0 specifies only -- NULLs
testing MPI_Dist_graph_create w/ no graph
testing MPI_Dist_graph_create w/ no graph
testing MPI_Dist_graph_create w/ no graph -- NULLs
testing MPI_Dist_graph_create w/ no graph -- NULLs+MPI_UNWEIGHTED
testing MPI_Dist_graph_create_adjacent w/ no graph
testing MPI_Dist_graph_create_adjacent w/ no graph -- MPI_WEIGHTS_EMPTY
testing MPI_Dist_graph_create_adjacent w/ no graph -- NULLs
testing MPI_Dist_graph_create_adjacent w/ no graph -- NULLs+MPI_UNWEIGHTED
No errors
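
A minimal sketch of the adjacent-specification variant on a ring (illustrative only; the test also drives MPI_Dist_graph_create over the layouts listed above):

    #include <mpi.h>

    int main(int argc, char **argv)
    {
        int rank, size;
        MPI_Comm ring;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);
        int left  = (rank - 1 + size) % size;   /* who sends to me */
        int right = (rank + 1) % size;          /* whom I send to  */
        int sources[1]      = { left  };
        int destinations[1] = { right };
        MPI_Dist_graph_create_adjacent(MPI_COMM_WORLD,
                                       1, sources, MPI_UNWEIGHTED,
                                       1, destinations, MPI_UNWEIGHTED,
                                       MPI_INFO_NULL, 0 /* reorder */, &ring);
        MPI_Comm_free(&ring);
        MPI_Finalize();
        return 0;
    }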

Passed MPI_Get_library_version test - library_version

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 1

Test Description:

This MPI-3.0 test returns the MPI library version.

MPI VERSION    : CRAY MPICH version 8.1.25.17 (ANL base 3.4a2)
MPI BUILD INFO : Sun Feb 26 14:33 2023 (git hash aecd99f)
No errors

Passed MPI_Info_create basic - comm_info

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 6

Test Description:

Simple test for MPI_Comm_{set,get}_info.

No errors

Passed MPI_Info_get basic - infoenv

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 1

Test Description:

This is a simple test of the MPI_Info_get() function.

No errors

Passed MPI_Mprobe() series - mprobe1

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 2

Test Description:

This tests MPI_Mprobe() using a series of tests. Includes tests with send and Mprobe+Mrecv, send and Mprobe+Imrecv, send and Improbe+Mrecv, send and Improbe+Irecv, Mprobe+Mrecv with MPI_PROC_NULL, Mprobe+Imrecv with MPI_PROC_NULL, Improbe+Mrecv with MPI_PROC_NULL, Improbe+Imrecv, and test to verify MPI_Message_c2f() and MPI_Message_f2c() are present.

No errors
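
A minimal sketch of the matched probe/receive pairing (illustrative only; the Improbe and MPI_PROC_NULL variants covered by the test are omitted):

    #include <mpi.h>

    int main(int argc, char **argv)
    {
        int rank, data = 7;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        if (rank == 0) {
            MPI_Send(&data, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
        } else if (rank == 1) {
            MPI_Message msg;
            MPI_Status status;
            /* Mprobe removes the message from matching; Mrecv must complete it. */
            MPI_Mprobe(0, 0, MPI_COMM_WORLD, &msg, &status);
            MPI_Mrecv(&data, 1, MPI_INT, &msg, &status);
        }
        MPI_Finalize();
        return 0;
    }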

Passed MPI_Status large count - big_count_status

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 1

Test Description:

This test manipulates an MPI status object using MPI_Status_set_elements_x() with various large count values and verifies that MPI_Get_elements_x() and MPI_Test_cancelled() produce the correct values.

No errors

Passed MPI_T 3.1 get index call - mpit_get_index

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 1

Test Description:

Tests that the MPI 3.1 Toolkit interface *_get_index name lookup functions work as expected.

No errors

Passed MPI_T cycle variables - mpit_vars

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 1

Test Description:

Prints all MPI_T control variables, performance variables, and their categories exposed by the MPI implementation.

129 MPI Control Variables
	MPIR_CVAR_ALLREDUCE_MAX_SMP_SIZE=262144	SCOPE_ALL_EQ	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_ALLREDUCE_GPU_MAX_SMP_SIZE=1024	SCOPE_ALL_EQ	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_IALLGATHERV_PIPELINE_MSG_SIZE=524288	SCOPE_ALL_EQ	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_REDUCE_SCATTER_COMMUTATIVE_LONG_MSG_SIZE=524288	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_REDUCE_SCATTER_MAX_COMMSIZE=1000	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_ALLTOALL_SYNC_FREQ=-1	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_ALLTOALL_BLK_SIZE=16384	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_ALLTOALL_CHUNKING_MAX_NODES=90	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_ALLGATHER_VSHORT_MSG=-1	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_GPU_ALLGATHER_VSHORT_MSG_ALGORITHM=1	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_ALLGATHERV_VSHORT_MSG=-1	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_ALLREDUCE_NO_SMP=0	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_ALLTOALL_SHORT_MSG=-1	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_ALLTOALLV_THROTTLE=8	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_BCAST_ONLY_TREE=1	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_BCAST_INTERNODE_RADIX=4	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_BCAST_INTRANODE_RADIX=4	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_COLL_OPT_OFF	SCOPE_READONLY	NO_OBJECT	MPI_CHAR	VERBOSITY_USER_BASIC	
	MPIR_CVAR_COLL_SYNC	SCOPE_READONLY	NO_OBJECT	MPI_CHAR	VERBOSITY_USER_BASIC	
	MPIR_CVAR_GATHERV_SHORT_MSG=131072	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_REDUCE_NO_SMP=0	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_SCATTERV_SYNCHRONOUS=0	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_SHARED_MEM_COLL_OPT	SCOPE_READONLY	NO_OBJECT	MPI_CHAR	VERBOSITY_USER_BASIC	
	MPIR_CVAR_GPU_ALLREDUCE_STAGING_THRESHOLD=256	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_GPU_COLL_STAGING_BUF_SIZE=1048576	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_GPU_COLL_REGISTER_SMALL_STAGING_BUFFERS=0	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_IALLGATHERV_THROTTLE=6	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_GATHERV_SYNC_FREQ=16	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_GATHERV_MAX_TMP_SIZE	SCOPE_READONLY	NO_OBJECT	Invalid:MPI_LONG	VERBOSITY_USER_BASIC	
	MPIR_CVAR_GATHERV_MIN_COMM_SIZE=64	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_IGATHERV_MIN_COMM_SIZE=1000	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_IGATHERV_SYNC_FREQ=100	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_SCATTERV_SHORT_MSG=-1	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_SCATTERV_SYNC_FREQ=16	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_SCATTERV_MAX_TMP_SIZE	SCOPE_READONLY	NO_OBJECT	Invalid:MPI_LONG	VERBOSITY_USER_BASIC	
	MPIR_CVAR_SCATTERV_MIN_COMM_SIZE=64	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_MPIIO_ABORT_ON_RW_ERROR	SCOPE_READONLY	NO_OBJECT	MPI_CHAR	VERBOSITY_USER_BASIC	
	MPIR_CVAR_MPIIO_AGGREGATOR_PLACEMENT_DISPLAY=0	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_MPIIO_AGGREGATOR_PLACEMENT_STRIDE=-1	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_MPIIO_CB_ALIGN=2	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_MPIIO_DVS_MAXNODES=-1	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_MPIIO_HINTS	SCOPE_READONLY	NO_OBJECT	MPI_CHAR	VERBOSITY_USER_BASIC	
	MPIR_CVAR_MPIIO_HINTS_DISPLAY=0	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_MPIIO_MAX_NUM_IRECV=50	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_MPIIO_MAX_NUM_ISEND=50	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_MPIIO_MAX_SIZE_ISEND=10485760	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_MPIIO_STATS=0	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_MPIIO_STATS_FILE	SCOPE_READONLY	NO_OBJECT	MPI_CHAR	VERBOSITY_USER_BASIC	
	MPIR_CVAR_MPIIO_STATS_INTERVAL_MSEC	SCOPE_READONLY	NO_OBJECT	MPI_UNSIGNED_LONG	VERBOSITY_USER_BASIC	
	MPIR_CVAR_MPIIO_TIMERS=0	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_MPIIO_TIMERS_SCALE=0	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_MPIIO_TIME_WAITS=1	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_MPIIO_WRITE_EXIT_BARRIER=1	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_MPIIO_DS_WRITE_CRAY=1	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_MPIIO_OFI_STARTUP_CONNECT	SCOPE_READONLY	NO_OBJECT	MPI_CHAR	VERBOSITY_USER_BASIC	
	MPIR_CVAR_MPIIO_OFI_STARTUP_NODES_AGGREGATOR=2	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_DPM_DIR	SCOPE_READONLY	NO_OBJECT	MPI_CHAR	VERBOSITY_USER_BASIC	
	MPIR_CVAR_SINGLE_HOST_ENABLED=1	SCOPE_ALL_EQ	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_OFI_CXI_COUNTER_REPORT=1	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_OFI_CXI_COUNTER_FILE	SCOPE_READONLY	NO_OBJECT	MPI_CHAR	VERBOSITY_USER_BASIC	
	MPIR_CVAR_OFI_CXI_COUNTER_REPORT_FILE	SCOPE_READONLY	NO_OBJECT	MPI_CHAR	VERBOSITY_USER_BASIC	
	MPIR_CVAR_OFI_CXI_COUNTER_VERBOSE=0	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_OFI_VERBOSE=0	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_OFI_XRCD_BASE_DIR	SCOPE_READONLY	NO_OBJECT	MPI_CHAR	VERBOSITY_USER_BASIC	
	MPIR_CVAR_OFI_NIC_VERBOSE=0	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_OFI_NIC_POLICY	SCOPE_READONLY	NO_OBJECT	MPI_CHAR	VERBOSITY_USER_BASIC	
	MPIR_CVAR_OFI_NIC_MAPPING	SCOPE_READONLY	NO_OBJECT	MPI_CHAR	VERBOSITY_USER_BASIC	
	MPIR_CVAR_OFI_NUM_NICS	SCOPE_READONLY	NO_OBJECT	MPI_CHAR	VERBOSITY_USER_BASIC	
	MPIR_CVAR_OFI_SKIP_NIC_SYMMETRY_TEST=0	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_OFI_STARTUP_CONNECT=0	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_OFI_RMA_STARTUP_CONNECT=0	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_OFI_DEFAULT_TCLASS	SCOPE_READONLY	NO_OBJECT	MPI_CHAR	VERBOSITY_USER_BASIC	
	MPIR_CVAR_OFI_TCLASS_ERRORS	SCOPE_READONLY	NO_OBJECT	MPI_CHAR	VERBOSITY_USER_BASIC	
	MPIR_CVAR_OFI_CXI_PID_BASE=0	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_OFI_CXI_VNI_INDEX=1	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_OFI_USE_SCALABLE_STARTUP=1	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_OFI_RMA_LATENCY_TCLASS=0	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_OFI_RMA_LATENCY_TCLASS_THRESHOLD=2048	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_UCX_VERBOSE=0	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_UCX_RC_MAX_RANKS=7	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_SMP_SINGLE_COPY_MODE	SCOPE_READONLY	NO_OBJECT	MPI_CHAR	VERBOSITY_USER_BASIC	
	MPIR_CVAR_SMP_SINGLE_COPY_SIZE=8192	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_SHM_PROGRESS_MAX_BATCH_SIZE=8	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_CH4_PROGRESS_POKE=1	SCOPE_LOCAL	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_SPAWN_USE_RANKPOOL=1	SCOPE_ALL_EQ	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_CH4_RMA_THREAD_HOT=0	SCOPE_GROUP_EQ	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_ABORT_ON_ERROR=0	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_CPUMASK_DISPLAY=0	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_ENV_DISPLAY=0	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_OPTIMIZED_MEMCPY=1	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_STATS_DISPLAY=0	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_STATS_VERBOSITY=1	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_STATS_FILE	SCOPE_READONLY	NO_OBJECT	MPI_CHAR	VERBOSITY_USER_BASIC	
	MPIR_CVAR_RANK_REORDER_DISPLAY=0	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_RANK_REORDER_METHOD=1	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_USE_SYSTEM_MEMCPY=0	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_VERSION_DISPLAY=0	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_USE_GPU_STREAM_TRIGGERED=0	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_NUM_MAX_GPU_STREAMS=27	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_MEMCPY_MEM_CHECK=0	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_MSG_QUEUE_DBG=0	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_NO_BUFFER_ALIAS_CHECK=0	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_ALLOC_MEM_AFFINITY	SCOPE_READONLY	NO_OBJECT	MPI_CHAR	VERBOSITY_USER_BASIC	
	MPIR_CVAR_INTERNAL_MEM_AFFINITY	SCOPE_READONLY	NO_OBJECT	MPI_CHAR	VERBOSITY_USER_BASIC	
	MPIR_CVAR_ALLOC_MEM_POLICY	SCOPE_READONLY	NO_OBJECT	MPI_CHAR	VERBOSITY_USER_BASIC	
	MPIR_CVAR_ALLOC_MEM_PG_SZ	SCOPE_READONLY	NO_OBJECT	Invalid:MPI_LONG	VERBOSITY_USER_BASIC	
	MPIR_CVAR_OPT_THREAD_SYNC=1	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_THREAD_YIELD_FREQ=10000	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_MEM_DEBUG_FNAME	SCOPE_READONLY	NO_OBJECT	MPI_CHAR	VERBOSITY_USER_BASIC	
	MPIR_CVAR_MALLOC_FALLBACK=0	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_NO_DIRECT_GPU_ACCESS=0	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_G2G_PIPELINE=8	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_GPU_SUPPORT_ENABLED=0	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_GPU_MANAGED_MEMORY_SUPPORT_ENABLED=1	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_GPU_COLL_STAGING_AREA_OPT=1	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_GPU_COLL_REGISTER_SHARED_MEM_REGION=1	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_GPU_IPC_ENABLED=-1	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_GPU_IPC_THRESHOLD=1024	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_GPU_IPC_PROTOCOL	SCOPE_READONLY	NO_OBJECT	MPI_CHAR	VERBOSITY_USER_BASIC	
	MPIR_CVAR_GPU_NO_ASYNC_COPY=0	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_ENABLE_YAKSA=0	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_GPU_EAGER_DEVICE_MEM=0	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_GPU_EAGER_REGISTER_HOST_MEM=1	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_GPU_ALLREDUCE_USE_KERNEL=1	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_GPU_ALLREDUCE_BLK_SIZE=8388608	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_GPU_ALLREDUCE_KERNEL_THRESHOLD=131072	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_RMA_MAX_PENDING=128	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_RMA_SHM_ACCUMULATE=0	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_LOCAL_SPAWN_SERVER=0	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
0 MPI Performance Variables
22 MPI_T categories
Category COLLECTIVE has 39 control variables, 0 performance variables, 0 subcategories
	Description: A category for collective communication variables.
Category COMMUNICATOR has 0 control variables, 0 performance variables, 0 subcategories
	Description: cvars that control communicator construction and operation
Category DATALOOP has 0 control variables, 0 performance variables, 0 subcategories
	Description: Dataloop-related CVARs
Category ERROR_HANDLING has 0 control variables, 0 performance variables, 0 subcategories
	Description: cvars that control error handling behavior (stack traces, aborts, etc)
Category THREADS has 0 control variables, 0 performance variables, 0 subcategories
	Description: multi-threading cvars
Category DEBUGGER has 0 control variables, 0 performance variables, 0 subcategories
	Description: cvars relevant to the "MPIR" debugger interface
Category DEVELOPER has 0 control variables, 0 performance variables, 0 subcategories
	Description: useful for developers working on MPICH itself
Category CRAY_MPIIO has 20 control variables, 0 performance variables, 0 subcategories
	Description: A category for variables that control Cray's MPI-IO technology.
Category DIMS has 0 control variables, 0 performance variables, 0 subcategories
	Description: Dims_create cvars
Category PROCESS_MANAGER has 2 control variables, 0 performance variables, 0 subcategories
	Description: cvars that control the client-side process manager code
Category MEMORY has 0 control variables, 0 performance variables, 0 subcategories
	Description: affects memory allocation and usage, including MPI object handles
Category NODEMAP has 1 control variables, 0 performance variables, 0 subcategories
	Description: cvars that control behavior of nodemap
Category REQUEST has 0 control variables, 0 performance variables, 0 subcategories
	Description: A category for requests mangement variables
Category NEMESIS has 0 control variables, 0 performance variables, 0 subcategories
	Description: cvars that control behavior of the ch3:nemesis channel
Category FT has 0 control variables, 0 performance variables, 0 subcategories
	Description: cvars that control behavior of fault tolerance
Category CH3 has 0 control variables, 0 performance variables, 0 subcategories
	Description: cvars that control behavior of ch3
Category CH4_OFI has 20 control variables, 0 performance variables, 0 subcategories
	Description: A category for CH4 OFI netmod variables
Category CH4 has 3 control variables, 0 performance variables, 0 subcategories
	Description: cvars that control behavior of the CH4 device
Category CRAY_CONTROL has 17 control variables, 0 performance variables, 0 subcategories
	Description: A category for variables that control the flow of Cray MPICH
Category CRAY_DISPLAY has 7 control variables, 0 performance variables, 0 subcategories
	Description: A category for variables that enable displaying of system details. Has no effect on the flow of Cray MPICH.
Category CRAY_GPU has 18 control variables, 0 performance variables, 0 subcategories
	Description: A category for variables that affect Cray's GPU support
Category CH4_UCX has 2 control variables, 0 performance variables, 0 subcategories
	Description: 
No errors

Passed MPI_T multithreaded - mpit_threading

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 1

Test Description:

This test is adapted from test/mpi/mpi_t/mpit_vars.c, but it is a multithreaded version in which multiple threads call MPI_T routines.

With verbose set, thread 0 prints out the MPI_T control variables, performance variables, and their categories.
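
The listing below comes from enumerating the MPI_T control variables. A minimal, single-threaded sketch of that enumeration loop (the buffer sizes and printed fields are illustrative, not taken from the test source) might look like:

#include <mpi.h>
#include <stdio.h>

int main(void)
{
    int provided, ncvar, i;
    MPI_T_init_thread(MPI_THREAD_SINGLE, &provided);
    MPI_T_cvar_get_num(&ncvar);
    for (i = 0; i < ncvar; i++) {
        char name[256], desc[1024];
        int namelen = sizeof(name), desclen = sizeof(desc);
        int verbosity, bind, scope;
        MPI_Datatype dtype;
        MPI_T_enum enumtype;
        MPI_T_cvar_get_info(i, name, &namelen, &verbosity, &dtype,
                            &enumtype, desc, &desclen, &bind, &scope);
        printf("%s\n", name);   /* the test also prints scope, type, and verbosity */
    }
    MPI_T_finalize();
    return 0;
}

The multithreaded version simply runs this kind of query loop from several threads at once.
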

129 MPI Control Variables
	MPIR_CVAR_ALLREDUCE_MAX_SMP_SIZE=262144	SCOPE_ALL_EQ	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_ALLREDUCE_GPU_MAX_SMP_SIZE=1024	SCOPE_ALL_EQ	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_IALLGATHERV_PIPELINE_MSG_SIZE=524288	SCOPE_ALL_EQ	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_REDUCE_SCATTER_COMMUTATIVE_LONG_MSG_SIZE=524288	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_REDUCE_SCATTER_MAX_COMMSIZE=1000	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_ALLTOALL_SYNC_FREQ=-1	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_ALLTOALL_BLK_SIZE=16384	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_ALLTOALL_CHUNKING_MAX_NODES=90	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_ALLGATHER_VSHORT_MSG=-1	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_GPU_ALLGATHER_VSHORT_MSG_ALGORITHM=1	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_ALLGATHERV_VSHORT_MSG=-1	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_ALLREDUCE_NO_SMP=0	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_ALLTOALL_SHORT_MSG=-1	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_ALLTOALLV_THROTTLE=8	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_BCAST_ONLY_TREE=1	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_BCAST_INTERNODE_RADIX=4	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_BCAST_INTRANODE_RADIX=4	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_COLL_OPT_OFF	SCOPE_READONLY	NO_OBJECT	MPI_CHAR	VERBOSITY_USER_BASIC	
	MPIR_CVAR_COLL_SYNC	SCOPE_READONLY	NO_OBJECT	MPI_CHAR	VERBOSITY_USER_BASIC	
	MPIR_CVAR_GATHERV_SHORT_MSG=131072	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_REDUCE_NO_SMP=0	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_SCATTERV_SYNCHRONOUS=0	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_SHARED_MEM_COLL_OPT	SCOPE_READONLY	NO_OBJECT	MPI_CHAR	VERBOSITY_USER_BASIC	
	MPIR_CVAR_GPU_ALLREDUCE_STAGING_THRESHOLD=256	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_GPU_COLL_STAGING_BUF_SIZE=1048576	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_GPU_COLL_REGISTER_SMALL_STAGING_BUFFERS=0	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_IALLGATHERV_THROTTLE=6	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_GATHERV_SYNC_FREQ=16	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_GATHERV_MAX_TMP_SIZE	SCOPE_READONLY	NO_OBJECT	Invalid:MPI_LONG	VERBOSITY_USER_BASIC	
	MPIR_CVAR_GATHERV_MIN_COMM_SIZE=64	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_IGATHERV_MIN_COMM_SIZE=1000	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_IGATHERV_SYNC_FREQ=100	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_SCATTERV_SHORT_MSG=-1	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_SCATTERV_SYNC_FREQ=16	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_SCATTERV_MAX_TMP_SIZE	SCOPE_READONLY	NO_OBJECT	Invalid:MPI_LONG	VERBOSITY_USER_BASIC	
	MPIR_CVAR_SCATTERV_MIN_COMM_SIZE=64	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_MPIIO_ABORT_ON_RW_ERROR	SCOPE_READONLY	NO_OBJECT	MPI_CHAR	VERBOSITY_USER_BASIC	
	MPIR_CVAR_MPIIO_AGGREGATOR_PLACEMENT_DISPLAY=0	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_MPIIO_AGGREGATOR_PLACEMENT_STRIDE=-1	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_MPIIO_CB_ALIGN=2	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_MPIIO_DVS_MAXNODES=-1	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_MPIIO_HINTS	SCOPE_READONLY	NO_OBJECT	MPI_CHAR	VERBOSITY_USER_BASIC	
	MPIR_CVAR_MPIIO_HINTS_DISPLAY=0	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_MPIIO_MAX_NUM_IRECV=50	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_MPIIO_MAX_NUM_ISEND=50	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_MPIIO_MAX_SIZE_ISEND=10485760	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_MPIIO_STATS=0	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_MPIIO_STATS_FILE	SCOPE_READONLY	NO_OBJECT	MPI_CHAR	VERBOSITY_USER_BASIC	
	MPIR_CVAR_MPIIO_STATS_INTERVAL_MSEC	SCOPE_READONLY	NO_OBJECT	MPI_UNSIGNED_LONG	VERBOSITY_USER_BASIC	
	MPIR_CVAR_MPIIO_TIMERS=0	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_MPIIO_TIMERS_SCALE=0	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_MPIIO_TIME_WAITS=1	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_MPIIO_WRITE_EXIT_BARRIER=1	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_MPIIO_DS_WRITE_CRAY=1	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_MPIIO_OFI_STARTUP_CONNECT	SCOPE_READONLY	NO_OBJECT	MPI_CHAR	VERBOSITY_USER_BASIC	
	MPIR_CVAR_MPIIO_OFI_STARTUP_NODES_AGGREGATOR=2	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_DPM_DIR	SCOPE_READONLY	NO_OBJECT	MPI_CHAR	VERBOSITY_USER_BASIC	
	MPIR_CVAR_SINGLE_HOST_ENABLED=1	SCOPE_ALL_EQ	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_OFI_CXI_COUNTER_REPORT=1	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_OFI_CXI_COUNTER_FILE	SCOPE_READONLY	NO_OBJECT	MPI_CHAR	VERBOSITY_USER_BASIC	
	MPIR_CVAR_OFI_CXI_COUNTER_REPORT_FILE	SCOPE_READONLY	NO_OBJECT	MPI_CHAR	VERBOSITY_USER_BASIC	
	MPIR_CVAR_OFI_CXI_COUNTER_VERBOSE=0	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_OFI_VERBOSE=0	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_OFI_XRCD_BASE_DIR	SCOPE_READONLY	NO_OBJECT	MPI_CHAR	VERBOSITY_USER_BASIC	
	MPIR_CVAR_OFI_NIC_VERBOSE=0	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_OFI_NIC_POLICY	SCOPE_READONLY	NO_OBJECT	MPI_CHAR	VERBOSITY_USER_BASIC	
	MPIR_CVAR_OFI_NIC_MAPPING	SCOPE_READONLY	NO_OBJECT	MPI_CHAR	VERBOSITY_USER_BASIC	
	MPIR_CVAR_OFI_NUM_NICS	SCOPE_READONLY	NO_OBJECT	MPI_CHAR	VERBOSITY_USER_BASIC	
	MPIR_CVAR_OFI_SKIP_NIC_SYMMETRY_TEST=0	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_OFI_STARTUP_CONNECT=0	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_OFI_RMA_STARTUP_CONNECT=0	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_OFI_DEFAULT_TCLASS	SCOPE_READONLY	NO_OBJECT	MPI_CHAR	VERBOSITY_USER_BASIC	
	MPIR_CVAR_OFI_TCLASS_ERRORS	SCOPE_READONLY	NO_OBJECT	MPI_CHAR	VERBOSITY_USER_BASIC	
	MPIR_CVAR_OFI_CXI_PID_BASE=0	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_OFI_CXI_VNI_INDEX=1	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_OFI_USE_SCALABLE_STARTUP=1	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_OFI_RMA_LATENCY_TCLASS=0	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_OFI_RMA_LATENCY_TCLASS_THRESHOLD=2048	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_UCX_VERBOSE=0	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_UCX_RC_MAX_RANKS=7	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_SMP_SINGLE_COPY_MODE	SCOPE_READONLY	NO_OBJECT	MPI_CHAR	VERBOSITY_USER_BASIC	
	MPIR_CVAR_SMP_SINGLE_COPY_SIZE=8192	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_SHM_PROGRESS_MAX_BATCH_SIZE=8	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_CH4_PROGRESS_POKE=1	SCOPE_LOCAL	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_SPAWN_USE_RANKPOOL=1	SCOPE_ALL_EQ	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_CH4_RMA_THREAD_HOT=0	SCOPE_GROUP_EQ	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_ABORT_ON_ERROR=0	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_CPUMASK_DISPLAY=0	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_ENV_DISPLAY=0	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_OPTIMIZED_MEMCPY=1	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_STATS_DISPLAY=0	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_STATS_VERBOSITY=1	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_STATS_FILE	SCOPE_READONLY	NO_OBJECT	MPI_CHAR	VERBOSITY_USER_BASIC	
	MPIR_CVAR_RANK_REORDER_DISPLAY=0	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_RANK_REORDER_METHOD=1	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_USE_SYSTEM_MEMCPY=0	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_VERSION_DISPLAY=0	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_USE_GPU_STREAM_TRIGGERED=0	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_NUM_MAX_GPU_STREAMS=27	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_MEMCPY_MEM_CHECK=0	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_MSG_QUEUE_DBG=0	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_NO_BUFFER_ALIAS_CHECK=0	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_ALLOC_MEM_AFFINITY	SCOPE_READONLY	NO_OBJECT	MPI_CHAR	VERBOSITY_USER_BASIC	
	MPIR_CVAR_INTERNAL_MEM_AFFINITY	SCOPE_READONLY	NO_OBJECT	MPI_CHAR	VERBOSITY_USER_BASIC	
	MPIR_CVAR_ALLOC_MEM_POLICY	SCOPE_READONLY	NO_OBJECT	MPI_CHAR	VERBOSITY_USER_BASIC	
	MPIR_CVAR_ALLOC_MEM_PG_SZ	SCOPE_READONLY	NO_OBJECT	Invalid:MPI_LONG	VERBOSITY_USER_BASIC	
	MPIR_CVAR_OPT_THREAD_SYNC=1	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_THREAD_YIELD_FREQ=10000	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_MEM_DEBUG_FNAME	SCOPE_READONLY	NO_OBJECT	MPI_CHAR	VERBOSITY_USER_BASIC	
	MPIR_CVAR_MALLOC_FALLBACK=0	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_NO_DIRECT_GPU_ACCESS=0	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_G2G_PIPELINE=8	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_GPU_SUPPORT_ENABLED=0	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_GPU_MANAGED_MEMORY_SUPPORT_ENABLED=1	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_GPU_COLL_STAGING_AREA_OPT=1	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_GPU_COLL_REGISTER_SHARED_MEM_REGION=1	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_GPU_IPC_ENABLED=-1	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_GPU_IPC_THRESHOLD=1024	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_GPU_IPC_PROTOCOL	SCOPE_READONLY	NO_OBJECT	MPI_CHAR	VERBOSITY_USER_BASIC	
	MPIR_CVAR_GPU_NO_ASYNC_COPY=0	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_ENABLE_YAKSA=0	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_GPU_EAGER_DEVICE_MEM=0	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_GPU_EAGER_REGISTER_HOST_MEM=1	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_GPU_ALLREDUCE_USE_KERNEL=1	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_GPU_ALLREDUCE_BLK_SIZE=8388608	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_GPU_ALLREDUCE_KERNEL_THRESHOLD=131072	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_RMA_MAX_PENDING=128	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_RMA_SHM_ACCUMULATE=0	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
	MPIR_CVAR_LOCAL_SPAWN_SERVER=0	SCOPE_READONLY	NO_OBJECT	MPI_INT	VERBOSITY_USER_BASIC	
0 MPI Performance Variables
22 MPI_T categories
Category COLLECTIVE has 39 control variables, 0 performance variables, 0 subcategories
Category COMMUNICATOR has 0 control variables, 0 performance variables, 0 subcategories
Category DATALOOP has 0 control variables, 0 performance variables, 0 subcategories
Category ERROR_HANDLING has 0 control variables, 0 performance variables, 0 subcategories
Category THREADS has 0 control variables, 0 performance variables, 0 subcategories
Category DEBUGGER has 0 control variables, 0 performance variables, 0 subcategories
Category DEVELOPER has 0 control variables, 0 performance variables, 0 subcategories
Category CRAY_MPIIO has 20 control variables, 0 performance variables, 0 subcategories
Category DIMS has 0 control variables, 0 performance variables, 0 subcategories
Category PROCESS_MANAGER has 2 control variables, 0 performance variables, 0 subcategories
Category MEMORY has 0 control variables, 0 performance variables, 0 subcategories
Category NODEMAP has 1 control variables, 0 performance variables, 0 subcategories
Category REQUEST has 0 control variables, 0 performance variables, 0 subcategories
Category NEMESIS has 0 control variables, 0 performance variables, 0 subcategories
Category FT has 0 control variables, 0 performance variables, 0 subcategories
Category CH3 has 0 control variables, 0 performance variables, 0 subcategories
Category CH4_OFI has 20 control variables, 0 performance variables, 0 subcategories
Category CH4 has 3 control variables, 0 performance variables, 0 subcategories
Category CRAY_CONTROL has 17 control variables, 0 performance variables, 0 subcategories
Category CRAY_DISPLAY has 7 control variables, 0 performance variables, 0 subcategories
Category CRAY_GPU has 18 control variables, 0 performance variables, 0 subcategories
Category CH4_UCX has 2 control variables, 0 performance variables, 0 subcategories
No errors

Passed MPI_T string handling - mpi_t_str

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 1

Test Description:

A test that MPI_T string handling is working as expected.

No errors

Failed MPI_T write variable - cvarwrite

Build: Passed

Execution: Failed

Exit Status: Failed with signal 15

MPI Processes: 1

Test Description:

This test writes to control variables exposed by the MPI_T functionality of MPI-3.0.
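
As a rough sketch (not the test source), writing a control variable through MPI_T means allocating a handle for a cvar index between MPI_T_init_thread and MPI_T_finalize and then calling MPI_T_cvar_write; the abort reported below occurs inside that call. The index and value here are hypothetical:

/* Hedged sketch: write one integer cvar by index; error handling omitted. */
int idx = 0;                          /* hypothetical cvar index */
int count, newval = 1;
MPI_T_cvar_handle handle;
MPI_T_cvar_handle_alloc(idx, NULL, &handle, &count);   /* NULL: cvar is not bound to an object */
MPI_T_cvar_write(handle, &newval);    /* the call that aborts in the run below */
MPI_T_cvar_handle_free(&handle);
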

Total 129 MPI control variables
INTERNAL ERROR: invalid error code 44 (Ring ids do not match) in PMPI_T_cvar_write:129
MPICH ERROR [Rank 0] [job id 3ef3ec8f-7f4e-465d-bd28-8311a17d6e1b] [Mon Apr 15 11:03:35 2024] [x1001c5s5b0n0] - Abort(940175119) (rank 0 in comm 0): Fatal error in PMPI_T_cvar_write: Other MPI error, error stack:
PMPI_T_cvar_write(143):  MPI_T_cvar_write(handle=0x1463170, buf=0x7ffe0c202a64)
PMPI_T_cvar_write(129): 
x1001c5s5b0n0.hsn.warhawk.afrl.hpc.mil: rank 0 exited with code 15

Passed MPI_Win_allocate_shared - win_large_shm

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 4

Test Description:

Test MPI_Win_allocate and MPI_Win_allocate_shared when allocating memory with a size of 1GB per process. Also tests having every other process allocate zero bytes, and having every other process allocate 0.5GB.
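
A minimal sketch of the allocation pattern being exercised (sizes and variable names are illustrative, and MPI_Win_allocate_shared assumes all participating processes share a node):

MPI_Aint size;
void *base;
MPI_Win win;
int rank;
MPI_Comm_rank(MPI_COMM_WORLD, &rank);
/* e.g. every other process asks for 1 GB, the rest for 0 bytes */
size = (rank % 2 == 0) ? (MPI_Aint)1 << 30 : 0;
MPI_Win_allocate_shared(size, 1, MPI_INFO_NULL, MPI_COMM_WORLD, &base, &win);
MPI_Win_free(&win);
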

No errors

Passed Matched Probe - mprobe

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 2

Test Description:

This routine is designed to test the MPI-3.0 matched probe support. The support provided in MPI-2.2 was not thread safe, allowing other threads to usurp messages probed in other threads.

The rank=0 process generates a random array of floats that is sent to MPI rank 1. Rank 1 sends a message back to rank 0 with the message length of the received array. Rank 1 spawns 2 or more threads that each attempt to read the message sent by rank 0. In general, all of the threads have equal access to the data, but the first one to probe the data will eventually end up processing it, and all the others will relent. The threads use MPI_Improbe(), so if there is nothing to read, the thread rests for 0.1 seconds before reprobing. If nothing is probed within a fixed number of cycles, the thread exits and sets its thread exit status to 1. If a thread is able to read the message, it returns an exit status of 0.
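
The probing loop described above is built on the MPI-3 matched-probe calls. A simplified sketch of one receiving thread follows (the source rank, tag, and retry count are illustrative, and the usual stdlib.h/unistd.h headers are assumed):

/* Sketch only: one thread on rank 1 probing for the float array from rank 0. */
int flag = 0, tries = 0, count;
MPI_Message msg;
MPI_Status status;
while (!flag && tries++ < 20) {                   /* 20 retries is illustrative */
    MPI_Improbe(0, 99, MPI_COMM_WORLD, &flag, &msg, &status);
    if (!flag) usleep(100000);                    /* rest ~0.1 s before reprobing */
}
if (flag) {
    MPI_Get_count(&status, MPI_FLOAT, &count);
    float *buf = malloc(count * sizeof(float));
    MPI_Mrecv(buf, count, MPI_FLOAT, &msg, &status);   /* only the matching thread receives */
    free(buf);
}

Because the message is attached to the MPI_Message handle returned by MPI_Improbe, no other thread can usurp it between the probe and the MPI_Mrecv.
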

mpi_rank:1 thread 0 MPI_rank:1
mpi_rank:1 thread 1 MPI_rank:1
mpi_rank:1 thread 2 MPI_rank:1
mpi_rank:1 thread 3 MPI_rank:1
mpi_rank:1 thread 2 used 4 read cycle.
mpi_rank:1 thread 2 local memory request (bytes):400 of local allocation:800
mpi_rank:1 thread 2 recv'd 100 MPI_FLOATs from rank:0.
mpi_rank:1 thread 2 sending rank:0 the number of MPI_FLOATs received:100
mpi_rank:1 thread 3 giving up reading data.
mpi_rank:1 thread 1 giving up reading data.
mpi_rank:1 thread 0 giving up reading data.
mpi_rank:1 main() thread 0 exit status:1
mpi_rank:1 main() thread 1 exit status:1
mpi_rank:1 main() thread 2 exit status:0
mpi_rank:1 main() thread 3 exit status:1
mpi_rank:0 main() received message from rank:1 that the received message length was 400 bytes long.
No errors.

Passed Multiple threads context dup - ctxdup

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 4

Test Description:

This test creates communicators concurrently in different threads.

No errors

Passed Multiple threads context idup - ctxidup

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 4

Test Description:

This test creates communicators concurrently in different threads using non-blocking duplication.
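
The non-blocking duplication used per thread looks roughly like the sketch below (it assumes each thread already has its own parent communicator, here called parent):

MPI_Comm newcomm;
MPI_Request req;
MPI_Comm_idup(parent, &newcomm, &req);   /* returns immediately */
MPI_Wait(&req, MPI_STATUS_IGNORE);       /* newcomm is usable only after completion */
MPI_Comm_free(&newcomm);
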

No errors

Passed Non-blocking basic - nonblocking4

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 4

Test Description:

This is a weak sanity test that all non-blocking collectives specified by MPI-3 are present in the library and accept arguments as expected. This test does not check for progress, matching issues, or sensible output buffer values.

No errors

Passed Non-blocking intracommunicator - nonblocking2

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 5

Test Description:

This is a basic test of all 17 non-blocking collective operations specified by the MPI-3 standard. It only exercises the intracommunicator functionality, does not use MPI_IN_PLACE, and only transmits/receives simple integer types with relatively small counts. It does check a few fancier issues, such as ensuring that "premature user releases" of MPI_Op and MPI_Datatype objects do not result in a segfault.
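
Each of the 17 operations follows the same start/complete pairing; a hedged sketch with MPI_Iallreduce (counts and values are illustrative):

int in = 1, out = 0;
MPI_Request req;
MPI_Iallreduce(&in, &out, 1, MPI_INT, MPI_SUM, MPI_COMM_WORLD, &req);
/* ... unrelated work may proceed here ... */
MPI_Wait(&req, MPI_STATUS_IGNORE);   /* out is valid only after completion */
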

No errors

Passed Non-blocking overlapping - nonblocking3

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 5

Test Description:

This test attempts to execute multiple simultaneous non-blocking collective (NBC) MPI routines at the same time, and manages their completion with a variety of routines (MPI_{Wait,Test}{,_all,_any,_some}). The test also exercises a few point-to-point operations.

No errors

Passed Non-blocking wait - nonblocking

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 10

Test Description:

This is a weak sanity test that all non-blocking collectives specified by MPI-3 are present in the library and take arguments as expected. Includes calls to MPI_Wait() immediately following non-blocking collective calls. This test does not check for progress, matching issues, or sensible output buffer values.

No errors

Passed One-Sided get-accumulate indexed - strided_getacc_indexed

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 4

Test Description:

This code performs N strided get accumulate operations into a 2d patch of a shared array. The array has dimensions [X, Y] and the subarray has dimensions [SUB_X, SUB_Y] and begins at index [0, 0]. The input and output buffers are specified using an MPI indexed type.
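
A hedged sketch of how such a strided access can be expressed with an indexed type and MPI_Get_accumulate; SUB_X, SUB_Y, X, src, dst, target, and win are placeholders (not from the test), and a row-major layout with row length X is assumed:

/* Describe SUB_Y rows of SUB_X doubles inside an X-by-Y row-major array. */
int i, displs[SUB_Y];
MPI_Datatype subarray;
for (i = 0; i < SUB_Y; i++)
    displs[i] = i * X;                          /* start of each strided row */
MPI_Type_create_indexed_block(SUB_Y, SUB_X, displs, MPI_DOUBLE, &subarray);
MPI_Type_commit(&subarray);
MPI_Win_lock(MPI_LOCK_SHARED, target, 0, win);
MPI_Get_accumulate(src, 1, subarray, dst, 1, subarray,
                   target, 0, 1, subarray, MPI_SUM, win);
MPI_Win_unlock(target, win);
MPI_Type_free(&subarray);
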

No errors

Passed One-Sided get-accumulate shared - strided_getacc_indexed_shared

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 4

Test Description:

This code performs N strided get accumulate operations into a 2d patch of a shared array. The array has dimensions [X, Y] and the subarray has dimensions [SUB_X, SUB_Y] and begins at index [0, 0]. The input and output buffers are specified using an MPI indexed type. Shared buffers are created by MPI_Win_allocate_shared.

No errors

Passed One-Sided put-get shared - strided_putget_indexed_shared

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 4

Test Description:

This code performs N strided put operations followed by get operations into a 2-D patch of a shared array. The array has dimensions [X, Y] and the subarray has dimensions [SUB_X, SUB_Y] and begins at index [0, 0]. The input and output buffers are specified using an MPI indexed type. Shared buffers are created by MPI_Win_allocate_shared.

No errors

Passed RMA MPI_PROC_NULL target - rmanull

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 2

Test Description:

Test MPI_PROC_NULL as a valid target for many RMA operations using active target synchronization, passive target synchronization, and request-based passive target synchronization.

No errors

Passed RMA Shared Memory - fence_shm

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 2

Test Description:

This simple RMA shared memory test uses MPI_Win_allocate_shared() with MPI_Win_fence() and MPI_Put() calls with and without assert MPI_MODE_NOPRECEDE.

No errors

Passed RMA zero-byte transfers - rmazero

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 2

Test Description:

Tests zero-byte transfers for a selection of communicators for many RMA operations using active target synchronization and request-based passive target synchronization.

No errors

Passed RMA zero-size compliance - badrma

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 2

Test Description:

The test uses various combinations of either zero size datatypes or zero size counts for Put, Get, Accumulate, and Get_Accumulate. All tests should pass to be compliant with the MPI-3.0 specification.

No errors

Passed Request-based operations - req_example

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 4

Test Description:

Example 11.21 from the MPI 3.0 spec. The following example shows how RMA request-based operations can be used to overlap communication with computation. Each process fetches, processes, and writes the result for NSTEPS chunks of data. Instead of a single buffer, M local buffers are used to allow up to M communication operations to overlap with computation.
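
A condensed, hedged sketch of the request-based pattern (buf, COUNT, target, win, do_other_work, and process are placeholders; a passive-target epoch is assumed to be open on win):

MPI_Request req;
MPI_Rget(buf, COUNT, MPI_DOUBLE, target, 0, COUNT, MPI_DOUBLE, win, &req);
do_other_work();                        /* overlap: compute on other buffers meanwhile */
MPI_Wait(&req, MPI_STATUS_IGNORE);      /* buf now holds the fetched chunk */
process(buf);
MPI_Rput(buf, COUNT, MPI_DOUBLE, target, 0, COUNT, MPI_DOUBLE, win, &req);
MPI_Wait(&req, MPI_STATUS_IGNORE);      /* local buffer reusable after completion */

Using M such buffers lets up to M transfers remain in flight while other chunks are processed, which is the overlap the example demonstrates.
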

No errors

Passed Simple thread comm idup - comm_idup

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 4

Test Description:

This is a simple test of threads in MPI with non-blocking communicator duplication.

No Errors

Passed Thread/RMA interaction - multirma

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 2

Test Description:

This is a simple test of threads in MPI.

No errors

Passed Threaded group - comm_create_group_threads

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 4

Test Description:

In this test a number of threads are created with a distinct MPI communicator (or comm) group distinguished by its thread-id (used as a tag). Threads on even ranks join an even comm and threads on odd ranks join the odd comm.

No errors

Passed Type_create_hindexed_block - hindexed_block

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 1

Test Description:

Tests behavior with a hindexed_block type that can easily be converted to a contiguous type. This is specifically for coverage. Returns the number of errors encountered.

No errors

Passed Type_create_hindexed_block contents - hindexed_block_contents

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 1

Test Description:

This test is a simple check of MPI_Type_create_hindexed_block() using MPI_Type_get_envelope() and MPI_Type_get_contents().

No errors

Passed Win_allocate_shared zero - win_zero

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 4

Test Description:

Test MPI_Win_allocate_shared when the size of the shared memory region is 0, and when the size is 0 on every other process and 1 on the others.

No errors

Passed Win_create_dynamic - win_dynamic_acc

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 4

Test Description:

This test exercises dynamic RMA windows using the MPI_Win_create_dynamic() and MPI_Accumulate() operations.
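
A hedged sketch of the dynamic-window pattern (the peer rank is illustrative, and the address exchange between target and origin is only indicated by a comment):

MPI_Win win;
MPI_Aint my_disp, target_disp;
int local = 0, one = 1, target = 1;              /* target = peer rank (illustrative) */
MPI_Win_create_dynamic(MPI_INFO_NULL, MPI_COMM_WORLD, &win);
MPI_Win_attach(win, &local, sizeof(int));        /* expose memory after window creation */
MPI_Get_address(&local, &my_disp);
/* each origin must obtain target_disp (the target's my_disp), e.g. via MPI_Sendrecv */
MPI_Win_lock(MPI_LOCK_SHARED, target, 0, win);
MPI_Accumulate(&one, 1, MPI_INT, target, target_disp, 1, MPI_INT, MPI_SUM, win);
MPI_Win_unlock(target, win);
MPI_Win_detach(win, &local);
MPI_Win_free(&win);
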

No errors

Passed Win_flush basic - flush

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 4

Test Description:

Window Flush. This simple test flushes a shared window using MPI_Win_flush() and MPI_Win_flush_all().
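
A small sketch of the flush pattern inside a passive-target epoch (window creation is omitted; value, target, and win are illustrative names):

MPI_Win_lock_all(0, win);
MPI_Put(&value, 1, MPI_INT, target, 0, 1, MPI_INT, win);
MPI_Win_flush(target, win);       /* complete the operation at origin and target */
MPI_Win_flush_all(win);           /* complete outstanding operations to all targets */
MPI_Win_unlock_all(win);
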

No errors

Passed Win_flush_local basic - flush_local

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 4

Test Description:

Window Flush. This simple test flushes a shared window using MPI_Win_flush_local() and MPI_Win_flush_local_all().

No errors

Passed Win_get_attr - win_flavors

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 4

Test Description:

This test determines which "flavor" of RMA is created by creating windows and using MPI_Win_get_attr to access the attributes of each window.
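
Checking the flavor attribute looks roughly like this sketch (the window variable win is assumed to exist already):

int *flavor, flag;
MPI_Win_get_attr(win, MPI_WIN_CREATE_FLAVOR, &flavor, &flag);
if (flag && *flavor == MPI_WIN_FLAVOR_SHARED)
    printf("window was created with MPI_Win_allocate_shared\n");
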

No errors

Passed Win_info - win_info

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 4

Test Description:

This test creates an RMA info object, sets key/value pairs on the object, then duplicates the info object and confirms that the key/value pairs are in the same order as the original.

No errors

Passed Win_shared_query basic - win_shared

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 4

Test Description:

This simple test exercises MPI_Win_shared_query() by querying a shared window and verifying that it produced the correct results.
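
The query itself, which corresponds to the size/baseptr lines printed below, follows this pattern (a hedged sketch; win is assumed to be a window created with MPI_Win_allocate_shared):

MPI_Aint size;
int disp_unit;
void *peer_base;
/* ask where rank 0's segment of the shared window starts in this process's address space */
MPI_Win_shared_query(win, 0, &size, &disp_unit, &peer_base);
printf("rank 0 segment: size=%ld baseptr=%p\n", (long)size, peer_base);
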

1 -- size = 40000 baseptr = 0x150e693c4000 my_baseptr = 0x150e693cdc40
0 -- size = 40000 baseptr = 0x14d5133c3000 my_baseptr = 0x14d5133c3000
No errors
0 -- size = 40000 baseptr = 0x14c529bfa000 my_baseptr = 0x14c529bfa000
1 -- size = 40000 baseptr = 0x14589a996000 my_baseptr = 0x14589a99fc40

Passed Win_shared_query non-contig put - win_shared_noncontig_put

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 4

Test Description:

MPI_Put test with noncontiguous datatypes using MPI_Win_shared_query() to query windows on different ranks and verify they produced the correct results.

No errors

Passed Win_shared_query non-contiguous - win_shared_noncontig

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 4

Test Description:

This test exercises MPI_Win_shared_query() by querying windows on different ranks and verifying they produced the correct results.

No errors

Passed Window same_disp_unit - win_same_disp_unit

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 2

Test Description:

Test the acceptance of the MPI 3.1 standard same_disp_unit info key for window creation.

No errors

MPI-2.2 - Score: 95% Passed

This group features tests that exercise MPI functionality of MPI-2.2 and earlier.

Passed Alloc_mem - alloc

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 1

Test Description:

Simple check to see if MPI_Alloc_mem() is supported.

No errors

Passed C/Fortran interoperability supported - interoperability

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 1

Test Description:

Checks if the C-Fortran (F77) interoperability functions are supported using the MPI-2.2 specification.

No errors

Passed Comm_create intercommunicators - iccreate

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 8

Test Description:

This program tests MPI_Comm_create() using a selection of intercommunicators. Creates a new communicator from an intercommunicator, duplicates the communicator, and verifies that it works. Includes a test with one side of the intercommunicator set with MPI_GROUP_EMPTY.

Creating a new intercomm 0-0
Creating a new intercomm 0-0
Creating a new intercomm (manual dup)
Creating a new intercomm 0-0
Creating a new intercomm (manual dup)
Creating a new intercomm 0-0
Testing communication on intercomm 'Single rank in each group', remote_size=1
Creating a new intercomm 0-0
Creating a new intercomm (manual dup)
Creating a new intercomm 0-0
Creating a new intercomm (manual dup)
Creating a new intercomm 0-0
Creating a new intercomm (manual dup)
Testing communication on intercomm 'Single rank in each group', remote_size=1
Creating a new intercomm 0-0
Creating a new intercomm (manual dup)
isends posted, about to recv
isends posted, about to recv
my recvs completed, about to waitall
Creating a new intercomm (manual dup)
my recvs completed, about to waitall
Creating a new intercomm (manual dup)
Creating a new intercomm (manual dup (done))
Result of comm/intercomm compare is 1
Testing communication on intercomm 'Dup of original', remote_size=4
Creating a new intercomm (manual dup (done))
Result of comm/intercomm compare is 1
Testing communication on intercomm 'Dup of original', remote_size=4
Creating a new intercomm (manual dup (done))
Result of comm/intercomm compare is 1
Testing communication on intercomm 'Dup of original', remote_size=4
Creating a new intercomm (manual dup (done))
Result of comm/intercomm compare is 1
Testing communication on intercomm 'Dup of original', remote_size=4
Creating a new intercomm (manual dup (done))
Result of comm/intercomm compare is 1
Testing communication on intercomm 'Dup of original', remote_size=4
Creating a new intercomm (manual dup (done))
Result of comm/intercomm compare is 1
Testing communication on intercomm 'Dup of original', remote_size=4
Creating a new intercomm (manual dup (done))
Result of comm/intercomm compare is 1
Testing communication on intercomm 'Dup of original', remote_size=4
Creating a new intercomm (manual dup (done))
Result of comm/intercomm compare is 1
Testing communication on intercomm 'Dup of original', remote_size=4
isends posted, about to recv
my recvs completed, about to waitall
isends posted, about to recv
my recvs completed, about to waitall
isends posted, about to recv
my recvs completed, about to waitall
isends posted, about to recv
my recvs completed, about to waitall
isends posted, about to recv
isends posted, about to recv
my recvs completed, about to waitall
isends posted, about to recv
my recvs completed, about to waitall
my recvs completed, about to waitall
isends posted, about to recv
my recvs completed, about to waitall
Creating a new intercomm 0-0
Creating a new intercomm 0-0
Creating a new intercomm 0-0
Creating a new intercomm 0-0
Testing communication on intercomm 'Single rank in each group', remote_size=1
isends posted, about to recv
Creating a new intercomm 0-0
Creating a new intercomm 0-0
Creating a new intercomm 0-0
Creating a new intercomm 0-0
Testing communication on intercomm 'Single rank in each group', remote_size=1
isends posted, about to recv
Creating a new intercomm (manual dup)
Creating a new intercomm (manual dup)
Creating a new intercomm (manual dup)
Creating a new intercomm (manual dup)
Creating a new intercomm (manual dup)
Creating a new intercomm (manual dup)
my recvs completed, about to waitall
Creating a new intercomm (manual dup)
Creating a new intercomm (manual dup (done))
Result of comm/intercomm compare is 1
Testing communication on intercomm 'Dup of original', remote_size=1
isends posted, about to recv
my recvs completed, about to waitall
my recvs completed, about to waitall
Creating a new intercomm (manual dup)
Creating a new intercomm (manual dup (done))
Result of comm/intercomm compare is 1
Testing communication on intercomm 'Dup of original', remote_size=1
isends posted, about to recv
my recvs completed, about to waitall
Creating a new intercomm (manual dup (done))
Result of comm/intercomm compare is 1
Testing communication on intercomm 'Dup of original', remote_size=7
isends posted, about to recv
Creating a new intercomm (manual dup (done))
Result of comm/intercomm compare is 1
Testing communication on intercomm 'Dup of original', remote_size=1
isends posted, about to recv
my recvs completed, about to waitall
Creating a new intercomm (manual dup (done))
Result of comm/intercomm compare is 1
Testing communication on intercomm 'Dup of original', remote_size=1
isends posted, about to recv
my recvs completed, about to waitall
Creating a new intercomm (manual dup (done))
Result of comm/intercomm compare is 1
Testing communication on intercomm 'Dup of original', remote_size=1
isends posted, about to recv
my recvs completed, about to waitall
Creating a new intercomm (manual dup (done))
Result of comm/intercomm compare is 1
Testing communication on intercomm 'Dup of original', remote_size=1
isends posted, about to recv
my recvs completed, about to waitall
Creating a new intercomm 0-0
Creating a new intercomm (manual dup (done))
Result of comm/intercomm compare is 1
Testing communication on intercomm 'Dup of original', remote_size=1
isends posted, about to recv
my recvs completed, about to waitall
Creating a new intercomm 0-0
my recvs completed, about to waitall
Creating a new intercomm 0-0
Creating a new intercomm 0-0
Creating a new intercomm 0-0
Creating a new intercomm 0-0
Creating a new intercomm (manual dup)
Creating a new intercomm 0-0
Creating a new intercomm (manual dup)
Creating a new intercomm (manual dup (done))
Result of comm/intercomm compare is 1
Testing communication on intercomm 'Dup of original', remote_size=2
isends posted, about to recv
my recvs completed, about to waitall
Creating a new intercomm 0-0
Creating a new intercomm (manual dup)
Creating a new intercomm (manual dup (done))
Result of comm/intercomm compare is 1
Testing communication on intercomm 'Dup of original', remote_size=2
isends posted, about to recv
my recvs completed, about to waitall
Testing communication on intercomm 'Single rank in each group', remote_size=1
isends posted, about to recv
Creating a new intercomm (manual dup (done))
Result of comm/intercomm compare is 1
Testing communication on intercomm 'Dup of original', remote_size=2
isends posted, about to recv
my recvs completed, about to waitall
Creating a new intercomm (manual dup)
Creating a new intercomm (manual dup)
Creating a new intercomm (manual dup (done))
Result of comm/intercomm compare is 1
Testing communication on intercomm 'Dup of original', remote_size=2
isends posted, about to recv
my recvs completed, about to waitall
my recvs completed, about to waitall
Creating a new intercomm (manual dup)
Creating a new intercomm 0-0
Testing communication on intercomm 'Single rank in each group', remote_size=1
isends posted, about to recv
my recvs completed, about to waitall
Creating a new intercomm (manual dup)
Creating a new intercomm (manual dup (done))
Result of comm/intercomm compare is 1
Testing communication on intercomm 'Dup of original', remote_size=2
isends posted, about to recv
my recvs completed, about to waitall
Creating a new intercomm 0-0
Creating a new intercomm (manual dup)
Creating a new intercomm (manual dup (done))
Result of comm/intercomm compare is 1
Testing communication on intercomm 'Dup of original', remote_size=2
isends posted, about to recv
my recvs completed, about to waitall
Creating a new intercomm 0-0
Creating a new intercomm (manual dup (done))
Result of comm/intercomm compare is 1
Testing communication on intercomm 'Dup of original', remote_size=6
isends posted, about to recv
my recvs completed, about to waitall
Creating a new intercomm 0-0
Creating a new intercomm (manual dup (done))
Result of comm/intercomm compare is 1
Testing communication on intercomm 'Dup of original', remote_size=6
isends posted, about to recv
my recvs completed, about to waitall
Testing communication on intercomm 'Single rank in each group', remote_size=1
isends posted, about to recv
my recvs completed, about to waitall
Creating a new intercomm (manual dup)
Creating a new intercomm (manual dup (done))
Result of comm/intercomm compare is 1
Testing communication on intercomm 'Dup of original', remote_size=4
isends posted, about to recv
my recvs completed, about to waitall
Creating a new intercomm 0-0
Creating a new intercomm (manual dup)
Creating a new intercomm (manual dup (done))
Result of comm/intercomm compare is 1
Testing communication on intercomm 'Dup of original', remote_size=4
isends posted, about to recv
my recvs completed, about to waitall
Creating a new intercomm 0-0
Creating a new intercomm (manual dup)
Creating a new intercomm (manual dup (done))
Result of comm/intercomm compare is 1
Testing communication on intercomm 'Dup of original', remote_size=4
isends posted, about to recv
my recvs completed, about to waitall
Creating a new intercomm 0-0
Creating a new intercomm (manual dup)
Creating a new intercomm (manual dup (done))
Result of comm/intercomm compare is 1
Testing communication on intercomm 'Dup of original', remote_size=4
isends posted, about to recv
my recvs completed, about to waitall
Creating a new intercomm 0-0
Creating a new intercomm (manual dup)
Creating a new intercomm 0-0
Testing communication on intercomm 'Single rank in each group', remote_size=1
isends posted, about to recv
my recvs completed, about to waitall
Creating a new intercomm (manual dup)
Creating a new intercomm (manual dup (done))
Result of comm/intercomm compare is 1
Testing communication on intercomm 'Dup of original', remote_size=4
isends posted, about to recv
my recvs completed, about to waitall
Creating a new intercomm 0-0
Creating a new intercomm (manual dup)
Creating a new intercomm (manual dup (done))
Result of comm/intercomm compare is 1
Testing communication on intercomm 'Dup of original', remote_size=4
isends posted, about to recv
my recvs completed, about to waitall
Creating a new intercomm 0-0
Creating a new intercomm (manual dup)
Creating a new intercomm (manual dup (done))
Result of comm/intercomm compare is 1
Testing communication on intercomm 'Dup of original', remote_size=4
isends posted, about to recv
my recvs completed, about to waitall
Creating a new intercomm 0-0
Creating a new intercomm (manual dup (done))
Result of comm/intercomm compare is 1
Testing communication on intercomm 'Dup of original', remote_size=4
isends posted, about to recv
my recvs completed, about to waitall
Testing communication on intercomm 'Single rank in each group', remote_size=1
isends posted, about to recv
my recvs completed, about to waitall
Creating a new intercomm (manual dup)
Creating a new intercomm (manual dup (done))
Result of comm/intercomm compare is 1
Testing communication on intercomm 'Dup of original', remote_size=4
isends posted, about to recv
my recvs completed, about to waitall
Creating a new intercomm 0-0
Creating a new intercomm (manual dup)
Creating a new intercomm (manual dup (done))
Result of comm/intercomm compare is 1
Testing communication on intercomm 'Dup of original', remote_size=4
isends posted, about to recv
my recvs completed, about to waitall
Creating a new intercomm 0-0
Creating a new intercomm (manual dup)
Creating a new intercomm (manual dup (done))
Result of comm/intercomm compare is 1
Testing communication on intercomm 'Dup of original', remote_size=4
isends posted, about to recv
my recvs completed, about to waitall
Creating a new intercomm 0-0
Creating a new intercomm (manual dup)
Creating a new intercomm (manual dup (done))
Result of comm/intercomm compare is 1
Testing communication on intercomm 'Dup of original', remote_size=4
isends posted, about to recv
my recvs completed, about to waitall
Creating a new intercomm 0-0
Creating a new intercomm 0-0
Creating a new intercomm 0-0
Testing communication on intercomm 'Single rank in each group', remote_size=1
isends posted, about to recv
my recvs completed, about to waitall
Creating a new intercomm (manual dup)
Creating a new intercomm (manual dup (done))
Result of comm/intercomm compare is 1
Testing communication on intercomm 'Dup of original', remote_size=4
isends posted, about to recv
my recvs completed, about to waitall
Creating a new intercomm 0-0
Creating a new intercomm (manual dup)
Creating a new intercomm (manual dup (done))
Result of comm/intercomm compare is 1
Testing communication on intercomm 'Dup of original', remote_size=3
isends posted, about to recv
my recvs completed, about to waitall
Creating a new intercomm (manual dup)
Creating a new intercomm (manual dup (done))
Result of comm/intercomm compare is 1
Testing communication on intercomm 'Dup of original', remote_size=4
isends posted, about to recv
my recvs completed, about to waitall
Creating a new intercomm 0-0
Creating a new intercomm (manual dup)
Creating a new intercomm (manual dup (done))
Result of comm/intercomm compare is 1
Testing communication on intercomm 'Dup of original', remote_size=3
isends posted, about to recv
my recvs completed, about to waitall
Creating a new intercomm (manual dup)
Creating a new intercomm (manual dup (done))
Result of comm/intercomm compare is 1
Testing communication on intercomm 'Dup of original', remote_size=4
isends posted, about to recv
my recvs completed, about to waitall
Testing communication on intercomm 'Single rank in each group', remote_size=1
isends posted, about to recv
my recvs completed, about to waitall
Creating a new intercomm (manual dup)
Creating a new intercomm (manual dup (done))
Result of comm/intercomm compare is 1
Testing communication on intercomm 'Dup of original', remote_size=3
isends posted, about to recv
my recvs completed, about to waitall
Creating a new intercomm (manual dup)
Creating a new intercomm (manual dup (done))
Result of comm/intercomm compare is 1
Testing communication on intercomm 'Dup of original', remote_size=4
isends posted, about to recv
my recvs completed, about to waitall
Creating a new intercomm (manual dup)
Creating a new intercomm (manual dup (done))
Result of comm/intercomm compare is 1
Testing communication on intercomm 'Dup of original', remote_size=3
isends posted, about to recv
my recvs completed, about to waitall
Creating a new intercomm 0-0
Creating a new intercomm 0-0
Creating a new intercomm 0-0
Creating a new intercomm 0-0
Creating a new intercomm 0-0
Testing communication on intercomm 'Single rank in each group', remote_size=1
isends posted, about to recv
my recvs completed, about to waitall
Creating a new intercomm (manual dup)
Creating a new intercomm (manual dup (done))
Result of comm/intercomm compare is 1
Testing communication on intercomm 'Dup of original', remote_size=3
isends posted, about to recv
my recvs completed, about to waitall
Testing communication on intercomm 'Single rank in each group', remote_size=1
isends posted, about to recv
my recvs completed, about to waitall
Creating a new intercomm (manual dup)
Creating a new intercomm (manual dup)
Creating a new intercomm (manual dup (done))
Result of comm/intercomm compare is 1
Testing communication on intercomm 'Dup of original', remote_size=3
isends posted, about to recv
my recvs completed, about to waitall
Creating a new intercomm (manual dup)
Creating a new intercomm (manual dup (done))
Result of comm/intercomm compare is 1
Testing communication on intercomm 'Dup of original', remote_size=4
isends posted, about to recv
my recvs completed, about to waitall
Creating a new intercomm 0-0
Creating a new intercomm (manual dup)
Creating a new intercomm (manual dup (done))
Result of comm/intercomm compare is 1
Testing communication on intercomm 'Dup of original', remote_size=3
isends posted, about to recv
my recvs completed, about to waitall
Creating a new intercomm (manual dup)
Creating a new intercomm (manual dup (done))
Result of comm/intercomm compare is 1
Testing communication on intercomm 'Dup of original', remote_size=4
isends posted, about to recv
my recvs completed, about to waitall
Creating a new intercomm (manual dup (done))
Result of comm/intercomm compare is 1
Testing communication on intercomm 'Dup of original', remote_size=4
isends posted, about to recv
my recvs completed, about to waitall
Creating a new intercomm 0-0
Creating a new intercomm 0-0
Creating a new intercomm 0-0
Testing communication on intercomm 'Single rank in each group', remote_size=1
isends posted, about to recv
Creating a new intercomm (manual dup)
Creating a new intercomm (manual dup (done))
Result of comm/intercomm compare is 1
Testing communication on intercomm 'Dup of original', remote_size=3
isends posted, about to recv
my recvs completed, about to waitall
Creating a new intercomm (manual dup)
Creating a new intercomm (manual dup (done))
Result of comm/intercomm compare is 1
Testing communication on intercomm 'Dup of original', remote_size=3
isends posted, about to recv
my recvs completed, about to waitall
No errors
my recvs completed, about to waitall
Creating a new intercomm (manual dup)
Creating a new intercomm (manual dup (done))
Result of comm/intercomm compare is 1
Testing communication on intercomm 'Dup of original', remote_size=3
isends posted, about to recv
my recvs completed, about to waitall

Passed Comm_split intercommunicators - icsplit

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 8

Test Description:

This tests MPI_Comm_split() using a selection of intercommunicators. The split communicator is tested using simple send and receive routines.

Created intercomm Intercomm by splitting MPI_COMM_WORLD
Created intercomm Intercomm by splitting MPI_COMM_WORLD
Created intercomm Intercomm by splitting MPI_COMM_WORLD
Created intercomm Intercomm by splitting MPI_COMM_WORLD
Testing communication on intercomm 
Created intercomm Intercomm by splitting MPI_COMM_WORLD
Testing communication on intercomm 
Testing communication on intercomm 
Created intercomm Intercomm by splitting MPI_COMM_WORLD
Testing communication on intercomm 
Created intercomm Intercomm by splitting MPI_COMM_WORLD
Testing communication on intercomm 
Created intercomm Intercomm by splitting MPI_COMM_WORLD
Testing communication on intercomm 
Testing communication on intercomm 
Testing communication on intercomm 
Created intercomm Intercomm by splitting MPI_COMM_WORLD into 1, rest
Created intercomm Intercomm by splitting MPI_COMM_WORLD into 1, rest
Created intercomm Intercomm by splitting MPI_COMM_WORLD into 1, rest
Created intercomm Intercomm by splitting MPI_COMM_WORLD into 1, rest
Created intercomm Intercomm by splitting MPI_COMM_WORLD into 1, rest
Created intercomm Intercomm by splitting MPI_COMM_WORLD into 1, rest
Created intercomm Intercomm by splitting MPI_COMM_WORLD into 1, rest
Created intercomm Intercomm by splitting MPI_COMM_WORLD into 1, rest
Testing communication on intercomm 
Testing communication on intercomm 
Testing communication on intercomm 
Testing communication on intercomm 
Testing communication on intercomm 
Created intercomm Intercomm by splitting MPI_COMM_WORLD into 2, rest
Created intercomm Intercomm by splitting MPI_COMM_WORLD into 2, rest
Created intercomm Intercomm by splitting MPI_COMM_WORLD into 2, rest
Created intercomm Intercomm by splitting MPI_COMM_WORLD into 2, rest
Testing communication on intercomm 
Created intercomm Intercomm by splitting MPI_COMM_WORLD into 2, rest
Testing communication on intercomm 
Testing communication on intercomm 
Created intercomm Intercomm by splitting MPI_COMM_WORLD into 2, rest
Testing communication on intercomm 
Created intercomm Intercomm by splitting MPI_COMM_WORLD then dup'ing
Created intercomm Intercomm by splitting MPI_COMM_WORLD into 2, rest
Created intercomm Intercomm by splitting MPI_COMM_WORLD then dup'ing
Testing communication on intercomm 
Created intercomm Intercomm by splitting MPI_COMM_WORLD into 2, rest
Created intercomm Intercomm by splitting MPI_COMM_WORLD then dup'ing
Testing communication on intercomm 
Testing communication on intercomm 
Testing communication on intercomm 
Testing communication on intercomm 
Created intercomm Intercomm by splitting MPI_COMM_WORLD then dup'ing
Testing communication on intercomm 
Testing communication on intercomm 
Created intercomm Intercomm by splitting MPI_COMM_WORLD then then splitting again
Testing communication on intercomm 
Created intercomm Intercomm by splitting MPI_COMM_WORLD then then splitting again
Created intercomm Intercomm by splitting MPI_COMM_WORLD then dup'ing
Created intercomm Intercomm by splitting MPI_COMM_WORLD then then splitting again
Created intercomm Intercomm by splitting MPI_COMM_WORLD then dup'ing
Testing communication on intercomm 
Created intercomm Intercomm by splitting MPI_COMM_WORLD then dup'ing
Testing communication on intercomm 
Created intercomm Intercomm by splitting MPI_COMM_WORLD then dup'ing
Testing communication on intercomm 
Testing communication on intercomm 
Created intercomm Intercomm by splitting MPI_COMM_WORLD then then splitting again
Testing communication on intercomm 
Testing communication on intercomm 
Created intercomm Intercomm by splitting MPI_COMM_WORLD (discarding rank 0 in the left group) then MPI_Intercomm_create'ing
Testing communication on intercomm 
Created intercomm Intercomm by splitting MPI_COMM_WORLD (discarding rank 0 in the left group) then MPI_Intercomm_create'ing
Testing communication on intercomm 
Created intercomm Intercomm by splitting MPI_COMM_WORLD (discarding rank 0 in the left group) then MPI_Intercomm_create'ing
Created intercomm Intercomm by splitting MPI_COMM_WORLD then then splitting again
Testing communication on intercomm 
Created intercomm Intercomm by splitting MPI_COMM_WORLD then then splitting again
Testing communication on intercomm 
Created intercomm Intercomm by splitting MPI_COMM_WORLD then then splitting again
Testing communication on intercomm 
Testing communication on intercomm 
Created intercomm Intercomm by splitting MPI_COMM_WORLD then then splitting again
Testing communication on intercomm 
Created intercomm Intercomm by splitting MPI_COMM_WORLD then discarding 0 ranks with MPI_Comm_create
Testing communication on intercomm 
Created intercomm Intercomm by splitting MPI_COMM_WORLD (discarding rank 0 in the left group) then MPI_Intercomm_create'ing
Created intercomm Intercomm by splitting MPI_COMM_WORLD then discarding 0 ranks with MPI_Comm_create
Testing communication on intercomm 
Testing communication on intercomm 
Created intercomm Intercomm by splitting MPI_COMM_WORLD (discarding rank 0 in the left group) then MPI_Intercomm_create'ing
Testing communication on intercomm 
No errors
Created intercomm Intercomm by splitting MPI_COMM_WORLD (discarding rank 0 in the left group) then MPI_Intercomm_create'ing
Testing communication on intercomm 
Testing communication on intercomm 
Created intercomm Intercomm by splitting MPI_COMM_WORLD (discarding rank 0 in the left group) then MPI_Intercomm_create'ing
Testing communication on intercomm 
Created intercomm Intercomm by splitting MPI_COMM_WORLD then discarding 0 ranks with MPI_Comm_create
Testing communication on intercomm 
Testing communication on intercomm 
Created intercomm Intercomm by splitting MPI_COMM_WORLD then discarding 0 ranks with MPI_Comm_create
Created intercomm Intercomm by splitting MPI_COMM_WORLD then discarding 0 ranks with MPI_Comm_create
Created intercomm Intercomm by splitting MPI_COMM_WORLD then discarding 0 ranks with MPI_Comm_create
Testing communication on intercomm 
Testing communication on intercomm 
Testing communication on intercomm

Passed Communicator attributes - attributes

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 1

Test Description:

Reports any communicator attributes that are not supported. The test is run as a single-process MPI job and fails if any attributes are not supported.

No errors

Passed Deprecated routines - deprecated

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 1

Test Description:

Checks all MPI deprecated routines as of MPI-2.2, but not including routines removed by MPI-3 if this is an MPI-3 implementation.

MPI_Attr_delete(): is functional.
MPI_Attr_get(): is functional.
MPI_Attr_put(): is functional.
MPI_Keyval_create(): is functional.
MPI_Keyval_free(): is functional.
MPI_Address(): is removed by MPI 3.0+.
MPI_Errhandler_create(): is removed by MPI 3.0+.
MPI_Errhandler_get(): is removed by MPI 3.0+.
MPI_Errhandler_set(): is removed by MPI 3.0+.
MPI_Type_extent(): is removed by MPI 3.0+.
MPI_Type_hindexed(): is removed by MPI 3.0+.
MPI_Type_hvector(): is removed by MPI 3.0+.
MPI_Type_lb(): is removed by MPI 3.0+.
MPI_Type_struct(): is removed by MPI 3.0+.
MPI_Type_ub(): is removed by MPI 3.0+.
No errors

Passed Error Handling - errors

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 1

Test Description:

Reports the default action taken on an error. It also reports whether error handling can be changed to "returns" (MPI_ERRORS_RETURN), and if so, whether this functions properly.

MPI errors are fatal by default.
MPI errors can be changed to MPI_ERRORS_RETURN.
Call MPI_Send() with a bad destination rank.
Error code: 940169478
Error string: Invalid rank, error stack:
PMPI_Send(163): MPI_Send(buf=0x7ffd96b41f7c, count=1, MPI_INT, dest=1, tag=MPI_ANY_TAG, MPI_COMM_WORLD) failed
PMPI_Send(100): Invalid rank has value 1 but must be nonnegative and less than 1
No errors
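
For reference, a minimal C sketch (not part of the test suite) of the pattern this test exercises: switch MPI_COMM_WORLD to MPI_ERRORS_RETURN, issue a deliberately invalid send in a single-process job, and decode the returned error.

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int buf = 0, err, len;
    char msg[MPI_MAX_ERROR_STRING];

    MPI_Init(&argc, &argv);
    MPI_Comm_set_errhandler(MPI_COMM_WORLD, MPI_ERRORS_RETURN);

    /* dest=1 does not exist in a 1-process job; with MPI_ERRORS_RETURN the
     * call should return an error code instead of aborting. */
    err = MPI_Send(&buf, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
    if (err != MPI_SUCCESS) {
        MPI_Error_string(err, msg, &len);
        printf("MPI_Send returned: %s\n", msg);
    }

    MPI_Finalize();
    return 0;
}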

Passed Extended collectives - collectives

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 4

Test Description:

Checks if "extended collectives" are supported, i.e., collective operations with MPI-2 intercommunicators.

No errors

Passed Init arguments - init_args

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 1

Test Description:

In MPI-1.1, it is explicitly stated that an implementation is allowed to require that the arguments argc and argv passed by an application to MPI_Init in C be the same arguments passed into the application as the arguments to main. In MPI-2, implementations are not allowed to impose this requirement. Conforming MPI implementations allow applications to pass NULL for both the argc and argv arguments of MPI_Init(). This test prints the result of the error status of MPI_Init(). If the test completes without error, it reports 'No errors.'

MPI_INIT accepts Null arguments for MPI_init().
No errors
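
A minimal sketch of the NULL-argument initialization this test checks; any conforming MPI-2 or later implementation should accept it.

#include <mpi.h>
#include <stdio.h>

int main(void)
{
    /* MPI-2 and later allow NULL for both the argc and argv arguments. */
    int err = MPI_Init(NULL, NULL);
    printf(err == MPI_SUCCESS ? "MPI_Init(NULL, NULL) succeeded\n"
                              : "MPI_Init(NULL, NULL) failed\n");
    MPI_Finalize();
    return 0;
}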

Passed MPI-2 replaced routines - mpi_2_functions

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 1

Test Description:

Checks the presence of all MPI-2.2 routines that replaced deprecated routines.

No errors

Passed MPI-2 type routines - mpi_2_functions_bcast

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 2

Test Description:

This test checks that a subset of MPI-2 routines that replaced MPI-1 routines work correctly.

rank:0/2 c:'C',d[6]={3.141590,1.000000,12.310000,7.980000,5.670000,12.130000},b="123456"
rank:0/2 MPI_Bcast() of struct.
No errors
rank:1/2 c:'C',d[6]={3.141590,1.000000,12.310000,7.980000,5.670000,12.130000},b="123456"
rank:1/2 MPI_Bcast() of struct.

Passed MPI_Topo_test dgraph - dgraph_unwgt

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 4

Test Description:

Specifies a distributed graph forming a bidirectional ring over the MPI_COMM_WORLD communicator, so each node in the graph has a left and a right neighbor.

No errors

Failed Master/slave - master

Build: Passed

Execution: Failed

Exit Status: Application_ended_with_error(s)

MPI Processes: 1

Test Description:

This test, running as a single MPI process, spawns four slave processes using MPI_Comm_spawn(). The master process sends a message to and receives a message from each slave. If the test completes, it will report 'No errors.'; otherwise, specific error messages are listed.

MPI_UNIVERSE_SIZE read 1
MPI_UNIVERSE_SIZE forced to 4
master rank creating 4 slave processes.
Assertion failed in file ../src/mpid/ch4/netmod/ofi/ofi_spawn.c at line 753: 0
/opt/cray/pe/lib64/libmpi_cray.so.12(MPL_backtrace_show+0x26) [0x14c02d0ccc2b]
/opt/cray/pe/lib64/libmpi_cray.so.12(+0x1fef8d4) [0x14c02cb038d4]
/opt/cray/pe/lib64/libmpi_cray.so.12(+0x22a6fe8) [0x14c02cdbafe8]
/opt/cray/pe/lib64/libmpi_cray.so.12(+0x202ac89) [0x14c02cb3ec89]
/opt/cray/pe/lib64/libmpi_cray.so.12(+0x202b03e) [0x14c02cb3f03e]
/opt/cray/pe/lib64/libmpi_cray.so.12(MPI_Comm_spawn+0x1e2) [0x14c02c6ab212]
/var/run/palsd/0b3f3421-650f-4245-9890-f437cc2ae0ee/files/master() [0x203e09]
/lib64/libc.so.6(__libc_start_main+0xef) [0x14c029e9524d]
/var/run/palsd/0b3f3421-650f-4245-9890-f437cc2ae0ee/files/master() [0x203afa]
MPICH ERROR [Rank 0] [job id 0b3f3421-650f-4245-9890-f437cc2ae0ee] [Mon Apr 15 11:03:44 2024] [x1001c5s5b0n0] - Abort(1): Internal error
x1001c5s5b0n0.hsn.warhawk.afrl.hpc.mil: rank 0 exited with code 1
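
For context, a hedged sketch of the spawn pattern this test attempts (the MPI_Comm_spawn call is where the abort above occurs). The executable name "slave" and the message exchange are illustrative only, not the test's actual code.

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Comm intercomm;
    int errcodes[4];

    MPI_Init(&argc, &argv);

    /* Ask the runtime to start 4 copies of a hypothetical "slave" binary;
     * on success the children are reachable through the new intercommunicator. */
    MPI_Comm_spawn("slave", MPI_ARGV_NULL, 4, MPI_INFO_NULL, 0,
                   MPI_COMM_SELF, &intercomm, errcodes);

    for (int i = 0; i < 4; i++) {
        int msg = i;
        MPI_Send(&msg, 1, MPI_INT, i, 0, intercomm);                     /* to slave i */
        MPI_Recv(&msg, 1, MPI_INT, i, 0, intercomm, MPI_STATUS_IGNORE);  /* and back   */
    }

    MPI_Comm_disconnect(&intercomm);
    MPI_Finalize();
    return 0;
}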

Passed One-sided communication - one_sided_modes

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 1

Test Description:

Checks MPI-2.2 one-sided communication modes, reporting those that are not defined. If the test compiles, "No errors" is reported; otherwise, all undefined modes are reported as "not defined".

No errors

Passed One-sided fences - one_sided_fences

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 2

Test Description:

Verifies that one-sided communication with active target synchronization with fences functions properly. If all operations succeed, one-sided communication with active target synchronization with fences is reported as supported. If one or more operations fail, the failures are reported and one-sided-communication with active target synchronization with fences is reported as NOT supported.

No errors

Passed One-sided passiv - one_sided_passive

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 2

Test Description:

Verifies that one-sided communication with passive target synchronization functions properly. If all operations succeed, one-sided communication with passive target synchronization is reported as supported. If one or more operations fail, the failures are reported and one-sided communication with passive target synchronization is reported as NOT supported.

No errors

Passed One-sided post - one_sided_post

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 2

Test Description:

Verifies one-sided-communication with active target synchronization with post/start/complete/wait functions properly. If all operations succeed, one-sided communication with active target synchronization with post/start/complete/wait is reported as supported. If one or more operations fail, the failures are reported and one-sided-communication with active target synchronization with post/start/complete/wait is reported as NOT supported.

No errors

Passed One-sided routines - one_sided_routines

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 1

Test Description:

Reports if one-sided communication routines are defined. If this test compiles, one-sided communication is reported as defined, otherwise "not supported".

No errors

Passed Reduce_local basic - reduce_local

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 2

Test Description:

A simple test of MPI_Reduce_local(). Uses MPI_SUM as well as user defined operators on arrays of increasing size.

No errors
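
A minimal sketch of MPI_Reduce_local with MPI_SUM on a small fixed array; the test itself also uses user-defined operators and arrays of increasing size.

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int in[4]    = {1, 2, 3, 4};
    int inout[4] = {10, 20, 30, 40};

    MPI_Init(&argc, &argv);

    /* inout[i] = in[i] + inout[i]; no communication takes place. */
    MPI_Reduce_local(in, inout, 4, MPI_INT, MPI_SUM);

    printf("%d %d %d %d\n", inout[0], inout[1], inout[2], inout[3]);
    MPI_Finalize();
    return 0;
}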

Passed Thread support - thread_safety

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 1

Test Description:

Reports the level of thread support provided. This is either MPI_THREAD_SINGLE, MPI_THREAD_FUNNELED, MPI_THREAD_SERIALIZED, or MPI_THREAD_MULTIPLE.

MPI_THREAD_MULTIPLE requested.
MPI_THREAD_MULTIPLE is supported.
No errors
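
A minimal sketch of how a program requests and checks the thread level reported here.

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int provided;

    /* Request the highest level; 'provided' holds what the library grants. */
    MPI_Init_thread(&argc, &argv, MPI_THREAD_MULTIPLE, &provided);

    if (provided >= MPI_THREAD_MULTIPLE)
        printf("MPI_THREAD_MULTIPLE is supported\n");
    else
        printf("provided thread level is %d\n", provided);

    MPI_Finalize();
    return 0;
}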

RMA - Score: 100% Passed

This group features tests that involve Remote Memory Access, sometimes called one-sided communication. Remote Memory Access is similar in functionality to shared memory access.

Passed ADLB mimic - adlb_mimic1

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 3

Test Description:

This test uses one server process (S), one target process (T) and a bunch of origin processes (O). 'O' PUTs (LOCK/PUT/UNLOCK) data to a distinct part of the window, and sends a message to 'S' once the UNLOCK has completed. The server forwards this message to 'T'. 'T' GETS the data from this buffer (LOCK/GET/UNLOCK) after it receives the message from 'S', to see if it contains the correct contents.

diagram showing communication steps between the S, O, and T processes
No errors

Passed Accumulate fence sum alloc_mem - accfence2_am

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 4

Test Description:

Test MPI_Accumulate with fence. This test is the same as "Accumulate with fence sum" except that it uses MPI_Alloc_mem() to allocate memory.

No errors

Passed Accumulate parallel pi - ircpi

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 2

Test Description:

This test calculates pi by integrating the function 4/(1+x*x) using MPI_Accumulate and other RMA functions.

Enter the number of intervals: (0 quits) 
Number if intervals used: 10
pi is approximately 3.1424259850010983, Error is 0.0008333314113051
Enter the number of intervals: (0 quits) 
Number if intervals used: 100
pi is approximately 3.1416009869231241, Error is 0.0000083333333309
Enter the number of intervals: (0 quits) 
Number if intervals used: 1000
pi is approximately 3.1415927369231254, Error is 0.0000000833333322
Enter the number of intervals: (0 quits) 
Number if intervals used: 10000
pi is approximately 3.1415926544231318, Error is 0.0000000008333387
Enter the number of intervals: (0 quits) 
Number if intervals used: 100000
pi is approximately 3.1415926535981016, Error is 0.0000000000083085
Enter the number of intervals: (0 quits) 
Number if intervals used: 1000000
pi is approximately 3.1415926535899388, Error is 0.0000000000001457
Enter the number of intervals: (0 quits) 
Number if intervals used: 10000000
pi is approximately 3.1415926535899850, Error is 0.0000000000001918
Enter the number of intervals: (0 quits) 
Number if intervals used: 0
No errors.

Passed Accumulate with Lock - acc-loc

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 4

Test Description:

Accumulate Lock. This test uses MPI_MAXLOC and MPI_MINLOC with MPI_Accumulate on an MPI_2INT datatype, with and without MPI_Win_lock set with MPI_LOCK_SHARED.

No errors

Passed Accumulate with fence comms - accfence1

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 4

Test Description:

Simple test of Accumulate/Replace with fence for a selection of communicators and datatypes.

No errors

Passed Accumulate with fence sum - accfence2

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 4

Test Description:

Tests MPI_Accumulate with MPI_SUM and fence synchronization over a selection of communicators and datatypes, verifying that the operations produce the correct result.

No errors

Passed Alloc_mem - alloc

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 1

Test Description:

Simple check to see if MPI_Alloc_mem() is supported.

No errors

Passed Alloc_mem basic - allocmem

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 2

Test Description:

Allocate Memory. Simple test where MPI_Alloc_mem() and MPI_Free_mem() work together.

No errors
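
A minimal sketch of the MPI_Alloc_mem/MPI_Free_mem pairing this test checks; the buffer size and element type are arbitrary choices.

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    double *buf = NULL;

    MPI_Init(&argc, &argv);

    /* Request 1024 doubles of MPI-managed memory, use it, then release it. */
    MPI_Alloc_mem(1024 * sizeof(double), MPI_INFO_NULL, &buf);
    buf[0] = 3.14;
    MPI_Free_mem(buf);

    printf("No errors\n");
    MPI_Finalize();
    return 0;
}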

Passed Compare_and_swap contention - compare_and_swap

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 4

Test Description:

Tests MPI_Compare_and_swap using self communication, neighbor communication, and communication with the root causing contention.

No errors
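
A hedged sketch of a single compare-and-swap against rank 0's window, one of the patterns this test exercises; the window setup is simplified relative to the test.

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, *win_buf, compare = 0, origin, result;
    MPI_Win win;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    MPI_Win_allocate(sizeof(int), sizeof(int), MPI_INFO_NULL,
                     MPI_COMM_WORLD, &win_buf, &win);
    *win_buf = 0;
    MPI_Barrier(MPI_COMM_WORLD);     /* every target is initialized */
    origin = rank + 1;

    MPI_Win_lock(MPI_LOCK_SHARED, 0, 0, win);
    /* If the int at rank 0 equals 'compare' (0), replace it with 'origin';
     * the previous value is returned in 'result' either way. */
    MPI_Compare_and_swap(&origin, &compare, &result, MPI_INT, 0, 0, win);
    MPI_Win_unlock(0, win);

    printf("rank %d saw old value %d\n", rank, result);
    MPI_Win_free(&win);
    MPI_Finalize();
    return 0;
}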

Passed Contention Put - contention_put

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 4

Test Description:

Contended RMA put test. Each process issues COUNT put operations to non-overlapping locations on every other process and checks that the correct result was returned.

No errors

Passed Contention Put/Get - contention_putget

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 4

Test Description:

Contended RMA put/get test. Each process issues COUNT put and get operations to non-overlapping locations on every other process.

No errors

Passed Contiguous Get - contig_displ

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 1

Test Description:

This program calls MPI_Get with an indexed datatype. The datatype comprises a single integer at an initial displacement of 1 integer. That is, the first integer in the array is to be skipped. This program found a bug in IBM's MPI in which MPI_Get ignored the displacement and got the first integer instead of the second. Run with one (1) process.

No errors

Passed Fetch_and_add allocmem - fetchandadd_am

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 7

Test Description:

Fetch and add example from Using MPI-2 (the non-scalable version, Fig. 6.12). This test is the same as fetch_and_add test 1 (rma/fetchandadd) but uses MPI_Alloc_mem and MPI_Free_mem.

No errors

Passed Fetch_and_add basic - fetchandadd

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 7

Test Description:

Fetch and add example from Using MPI-2 (the non-scalable version, Fig. 6.12). Root provides a shared counter array that other processes fetch and increment. Each process records the sum of values in the counter array after each fetch; the root then gathers these sums and verifies that each counter state was observed.

No errors

Passed Fetch_and_add tree allocmem - fetchandadd_tree_am

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 7

Test Description:

Scalable tree-based fetch and add example from Using MPI-2, pg 206-207. This test is the same as fetch_and_add test 3 but uses MPI_Alloc_mem and MPI_Free_mem.

No errors

Passed Fetch_and_add tree atomic - fetchandadd_tree

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 7

Test Description:

Scalable tree-based fetch and add example from the book Using MPI-2, p. 206-207. This test is functionally attempting to perform an atomic read-modify-write sequence using MPI-2 one-sided operations. This version uses a tree instead of a simple array, where internal nodes of the tree hold the sums of the contributions of their children. The code in the book (Fig 6.16) has bugs that are fixed in this test.

No errors

Passed Fetch_and_op basic - fetch_and_op

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 4

Test Description:

This simple set of tests executes MPI_Fetch_and_op() calls on RMA windows using a selection of datatypes with multiple different communicators, communication patterns, and operations.

No errors
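
A minimal sketch of a fetch-and-add on a counter held by rank 0, one of the patterns this test drives across many datatypes, communicators, and operations.

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, *counter, one = 1, prev;
    MPI_Win win;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    MPI_Win_allocate(sizeof(int), sizeof(int), MPI_INFO_NULL,
                     MPI_COMM_WORLD, &counter, &win);
    *counter = 0;
    MPI_Barrier(MPI_COMM_WORLD);     /* counters initialized before access */

    MPI_Win_lock(MPI_LOCK_SHARED, 0, 0, win);
    /* Atomically add 1 to rank 0's counter and fetch its previous value. */
    MPI_Fetch_and_op(&one, &prev, MPI_INT, 0, 0, MPI_SUM, win);
    MPI_Win_unlock(0, win);

    printf("rank %d got previous counter value %d\n", rank, prev);
    MPI_Win_free(&win);
    MPI_Finalize();
    return 0;
}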

Passed Get series - test5

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 2

Test Description:

Tests a series of Gets. Runs using exactly two processors.

No errors

Passed Get series allocmem - test5_am

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 2

Test Description:

Tests a series of Gets. Run with 2 processors. Same as the "Get series" test (rma/test5) but uses MPI_Alloc_mem.

No errors

Passed Get with fence basic - getfence1

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 4

Test Description:

Get with Fence. This is a simple test using MPI_Get() with fence for a selection of communicators and datatypes.

No errors
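
A minimal sketch of an MPI_Get bracketed by fences, assuming two or more ranks; the test itself iterates over many communicators and datatypes.

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, *win_buf, value = -1;
    MPI_Win win;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    MPI_Win_allocate(sizeof(int), sizeof(int), MPI_INFO_NULL,
                     MPI_COMM_WORLD, &win_buf, &win);
    *win_buf = rank * 100;           /* each rank exposes its own value */

    MPI_Win_fence(0, win);           /* open the access/exposure epoch  */
    if (rank != 0)
        MPI_Get(&value, 1, MPI_INT, 0, 0, 1, MPI_INT, win);
    MPI_Win_fence(0, win);           /* close the epoch; 'value' is now usable */

    if (rank != 0)
        printf("rank %d read %d from rank 0\n", rank, value);

    MPI_Win_free(&win);
    MPI_Finalize();
    return 0;
}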

Passed Get_accumulate basic - get_acc_local

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 1

Test Description:

Get Accumulated Test. This is a simple test of MPI_Get_accumulate() on a local window.

No errors

Passed Get_accumulate communicators - get_accumulate

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 4

Test Description:

Get Accumulate Test. This simple set of tests executes MPI_Get_accumulate on RMA windows using a selection of datatypes with multiple different communicators, communication patterns, and operations.

No errors

Passed Keyvalue create/delete - fkeyvalwin

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 1

Test Description:

Free keyval window. Tests freeing keyvals while still attached to an RMA window, then makes sure that the keyval delete code is still executed. Tested with a selection of windows.

No errors

Passed Linked list construction fetch/op - linked_list_fop

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 4

Test Description:

This test constructs a distributed shared linked list using MPI-3 dynamic windows with MPI_Fetch_and_op. Initially process 0 creates the head of the list, attaches it to an RMA window, and broadcasts the pointer to all processes. All processes then concurrently append N new elements to the list. When a process attempts to attach its element to the tail of the list, it may discover that its tail pointer is stale and it must chase ahead to the new tail before the element can be attached.

No errors

Passed Linked list construction lockall - linked_list_lockall

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 4

Test Description:

Construct a distributed shared linked list using MPI-3 dynamic RMA windows. Initially process 0 creates the head of the list, attaches it to the window, and broadcasts the pointer to all processes. All processes then concurrently append N new elements to the list. When a process attempts to attach its element to the tail of the list, it may discover that its tail pointer is stale and it must chase ahead to the new tail before the element can be attached. This version of the test uses MPI_Win_lock_all() instead of MPI_Win_lock(MPI_LOCK_EXCLUSIVE, ...).

No errors

Passed Linked-list construction lock shr - linked_list_bench_lock_shr

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 4

Test Description:

This test constructs a distributed shared linked list using MPI-3 dynamic windows. Initially process 0 creates the head of the list, attaches it to the window, and broadcasts the pointer to all processes. Each process "p" then appends N new elements to the list when the tail reaches process "p-1". This test is similar to "Linked_list construction lock excl" (rma/linked_list_bench_lock_excl) but passes MPI_LOCK_SHARED to MPI_Win_lock().

No errors

Passed Linked_list construction - linked_list_bench_lock_all

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 4

Test Description:

Construct a distributed shared linked list using MPI-3 dynamic windows. Initially process 0 creates the head of the list, attaches it to the window, and broadcasts the pointer to all processes. Each process "p" then appends N new elements to the list when the tail reaches process "p-1".

No errors

Passed Linked_list construction lock excl - linked_list_bench_lock_excl

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 4

Test Description:

MPI-3 distributed linked list construction example. Construct a distributed shared linked list using MPI-3 dynamic windows. Initially process 0 creates the head of the list, attaches it to the window, and broadcasts the pointer to all processes. Each process "p" then appends N new elements to the list when the tail reaches process "p-1". The test uses the MPI_LOCK_EXCLUSIVE argument with MPI_Win_lock().

No errors

Passed Linked_list construction put/get - linked_list

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 4

Test Description:

This test constructs a distributed shared linked list using MPI-3 dynamic windows with MPI_Put and MPI_Get. Initially process 0 creates the head of the list, attaches it to an RMA window, and broadcasts the pointer to all processes. All processes then concurrently append N new elements to the list. When a process attempts to attach its element to the tail of the list, it may discover that its tail pointer is stale and it must chase ahead to the new tail before the element can be attached.

No errors

Passed Lock-single_op-unlock - lockopts

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 2

Test Description:

Test passive target RMA on 2 processes with the origin datatype derived from the target datatype. Includes multiple tests for MPI_Accumulate, MPI_Put, MPI_Put with MPI_Get move-to-end optimization, and MPI_Put with an MPI_Get already at the end move-to-end optimization.

No errors
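
A minimal sketch of the passive-target lock/single-op/unlock pattern this test exercises, assuming at least two ranks with rank 1 as origin and rank 0 as target.

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, *win_buf, value = 42;
    MPI_Win win;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    MPI_Win_allocate(sizeof(int), sizeof(int), MPI_INFO_NULL,
                     MPI_COMM_WORLD, &win_buf, &win);
    *win_buf = 0;
    MPI_Barrier(MPI_COMM_WORLD);

    if (rank == 1) {
        /* The target does not participate in the synchronization. */
        MPI_Win_lock(MPI_LOCK_EXCLUSIVE, 0, 0, win);
        MPI_Put(&value, 1, MPI_INT, 0, 0, 1, MPI_INT, win);
        MPI_Win_unlock(0, win);
    }

    MPI_Barrier(MPI_COMM_WORLD);     /* the put is complete past this point */
    if (rank == 0) {
        MPI_Win_lock(MPI_LOCK_SHARED, 0, 0, win);   /* local read access */
        printf("target now holds %d\n", *win_buf);
        MPI_Win_unlock(0, win);
    }

    MPI_Win_free(&win);
    MPI_Finalize();
    return 0;
}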

Passed Locks with no RMA ops - locknull

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 2

Test Description:

This test creates a window, clears the memory in it using memset(), locks and unlocks it, then terminates.

No errors

Passed MCS_Mutex_trylock - mutex_bench

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 4

Test Description:

This test exercises the MCS_Mutex_lock calls by having multiple competing processes repeatedly lock and unlock a mutex.

No errors

Passed MPI RMA read-and-ops - reqops

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 4

Test Description:

This test exercises atomic, one-sided read-and-operation calls. Includes multiple tests for different RMA request-based operations, communicators, and wait patterns.

No errors

Passed MPI_Win_allocate_shared - win_large_shm

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 4

Test Description:

Test MPI_Win_allocate and MPI_Win_allocate_shared when allocating memory with a size of 1 GB per process. Also tests having every other process allocate zero bytes, and having every other process allocate 0.5 GB.

No errors
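
A hedged sketch of allocating a shared window and locating another rank's segment, using a tiny size instead of the 1 GB the test allocates; it assumes all ranks share a node.

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, *my_base, *peer_base, peer_disp_unit;
    MPI_Aint peer_size;
    MPI_Win win;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* Each rank contributes one int to the node-shared window. */
    MPI_Win_allocate_shared(sizeof(int), sizeof(int), MPI_INFO_NULL,
                            MPI_COMM_WORLD, &my_base, &win);

    MPI_Win_lock_all(MPI_MODE_NOCHECK, win);
    *my_base = rank;
    MPI_Win_sync(win);               /* publish the local store            */
    MPI_Barrier(MPI_COMM_WORLD);     /* all segments written               */
    MPI_Win_sync(win);               /* refresh this process's view        */

    /* Find rank 0's segment and read it with an ordinary load. */
    MPI_Win_shared_query(win, 0, &peer_size, &peer_disp_unit, &peer_base);
    printf("rank %d sees rank 0's value %d\n", rank, peer_base[0]);
    MPI_Win_unlock_all(win);

    MPI_Win_free(&win);
    MPI_Finalize();
    return 0;
}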

Passed Matrix transpose PSCW - transpose3

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 2

Test Description:

Transposes a matrix using post/start/complete/wait and derived datatypes. Uses vector and hvector (Example 3.32 from MPI 1.1 Standard). Run using 2 processors.

No errors

Passed Matrix transpose accum - transpose5

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 2

Test Description:

This does a transpose-accumulate operation. Uses vector and hvector datatypes (Example 3.32 from MPI 1.1 Standard). Run using 2 processors.

No errors

Passed Matrix transpose get hvector - transpose7

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 2

Test Description:

This test transposes a matrix with a get operation, fence, and derived datatypes. Uses vector and hvector (Example 3.32 from MPI 1.1 Standard). Run using exactly 2 processors.

No errors

Passed Matrix transpose local accum - transpose6

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 1

Test Description:

This does a local transpose-accumulate operation. Uses vector and hvector datatypes (Example 3.32 from MPI 1.1 Standard). Run using exactly 1 processor.

No errors

Passed Matrix transpose passive - transpose4

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 2

Test Description:

Transposes a matrix using passive target RMA and derived datatypes. Uses vector and hvector (Example 3.32 from MPI 1.1 Standard). Run using 2 processors.

No errors

Passed Matrix transpose put hvector - transpose1

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 2

Test Description:

Transposes a matrix using put, fence, and derived datatypes. Uses vector and hvector (Example 3.32 from MPI 1.1 Standard). Run using 2 processors.

No errors

Passed Matrix transpose put struct - transpose2

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 2

Test Description:

Transposes a matrix using put, fence, and derived datatypes. Uses vector and struct (Example 3.33 from MPI 1.1 Standard). We could use vector and type_create_resized instead. Run using exactly 2 processors.

No errors

Passed Mixed synchronization test - mixedsync

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 4

Test Description:

Performs several RMA communication operations, mixing synchronization types. Uses multiple communication operations to avoid the single-operation optimization that may be present.

Beginning loop 0 of mixed sync put operations
Beginning loop 0 of mixed sync put operations
About to perform exclusive lock
Beginning loop 0 of mixed sync put operations
Beginning loop 0 of mixed sync put operations
About to start fence
About to start fence
Released exclusive lock
About to start fence
Finished with fence sync
Beginning loop 1 of mixed sync put operations
About to perform exclusive lock
About to start fence
Finished with fence sync
Beginning loop 1 of mixed sync put operations
Finished with fence sync
Beginning loop 1 of mixed sync put operations
About to start fence
Finished with fence sync
Beginning loop 1 of mixed sync put operations
Released exclusive lock
About to start fence
Finished with fence sync
Begining loop 0 of mixed sync put/acc operations
About to start fence
Finished with fence sync
Begining loop 0 of mixed sync put/acc operations
Finished with fence sync
Begining loop 0 of mixed sync put/acc operations
About to start fence
Finished with fence sync
Begining loop 0 of mixed sync put/acc operations
Begining loop 1 of mixed sync put/acc operations
Begining loop 1 of mixed sync put/acc operations
Begining loop 1 of mixed sync put/acc operations
Begining loop 1 of mixed sync put/acc operations
Begining loop 0 of mixed sync put/get/acc operations
Begining loop 0 of mixed sync put/get/acc operations
Begining loop 0 of mixed sync put/get/acc operations
Begining loop 0 of mixed sync put/get/acc operations
Begining loop 1 of mixed sync put/get/acc operations
Begining loop 1 of mixed sync put/get/acc operations
Begining loop 1 of mixed sync put/get/acc operations
Begining loop 1 of mixed sync put/get/acc operations
Freeing the window
Freeing the window
Freeing the window
Freeing the window
No errors

Passed One-Sided accumulate indexed - strided_acc_indexed

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 2

Test Description:

This code performs N accumulates into a 2d patch of a shared array. The array has dimensions [X, Y] and the subarray has dimensions [SUB_X, SUB_Y] and begins at index [0, 0]. The input and output buffers are specified using an MPI indexed type.

No errors

Passed One-Sided accumulate one lock - strided_acc_onelock

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 2

Test Description:

This code performs one-sided accumulate into a 2-D patch of a shared array.

No errors

Passed One-Sided accumulate subarray - strided_acc_subarray

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 2

Test Description:

This code performs N accumulates into a 2d patch of a shared array. The array has dimensions [X, Y] and the subarray has dimensions [SUB_X, SUB_Y] and begins at index [0, 0]. The input and output buffers are specified using an MPI subarray type.

No errors

Passed One-Sided get indexed - strided_get_indexed

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 2

Test Description:

This code performs N strided get operations into a 2d patch of a shared array. The array has dimensions [X, Y] and the subarray has dimensions [SUB_X, SUB_Y] and begins at index [0, 0]. The input and output buffers are specified using an MPI indexed type.

No errors

Passed One-Sided get-accumulate indexed - strided_getacc_indexed

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 4

Test Description:

This code performs N strided get accumulate operations into a 2d patch of a shared array. The array has dimensions [X, Y] and the subarray has dimensions [SUB_X, SUB_Y] and begins at index [0, 0]. The input and output buffers are specified using an MPI indexed type.

No errors

Passed One-Sided get-accumulate shared - strided_getacc_indexed_shared

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 4

Test Description:

This code performs N strided get accumulate operations into a 2d patch of a shared array. The array has dimensions [X, Y] and the subarray has dimensions [SUB_X, SUB_Y] and begins at index [0, 0]. The input and output buffers are specified using an MPI indexed type. Shared buffers are created by MPI_Win_allocate_shared.

No errors

Passed One-Sided put-get indexed - strided_putget_indexed

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 4

Test Description:

This code performs N strided put operations followed by get operations into a 2-D patch of a shared array. The array has dimensions [X, Y] and the subarray has dimensions [SUB_X, SUB_Y] and begins at index [0, 0]. The input and output buffers are specified using an MPI indexed datatype.

No errors

Passed One-Sided put-get shared - strided_putget_indexed_shared

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 4

Test Description:

This code performs N strided put operations followed by get operations into a 2-D patch of a shared array. The array has dimensions [X, Y] and the subarray has dimensions [SUB_X, SUB_Y] and begins at index [0, 0]. The input and output buffers are specified using an MPI indexed type. Shared buffers are created by MPI_Win_allocate_shared.

No errors

Passed One-sided communication - one_sided_modes

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 1

Test Description:

Checks MPI-2.2 one-sided communication modes, reporting those that are not defined. If the test compiles, "No errors" is reported; otherwise, all undefined modes are reported as "not defined".

No errors

Passed One-sided fences - one_sided_fences

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 2

Test Description:

Verifies that one-sided communication with active target synchronization with fences functions properly. If all operations succeed, one-sided communication with active target synchronization with fences is reported as supported. If one or more operations fail, the failures are reported and one-sided-communication with active target synchronization with fences is reported as NOT supported.

No errors

Passed One-sided passiv - one_sided_passive

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 2

Test Description:

Verifies that one-sided communication with passive target synchronization functions properly. If all operations succeed, one-sided communication with passive target synchronization is reported as supported. If one or more operations fail, the failures are reported and one-sided communication with passive target synchronization is reported as NOT supported.

No errors

Passed One-sided post - one_sided_post

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 2

Test Description:

Verifies one-sided-communication with active target synchronization with post/start/complete/wait functions properly. If all operations succeed, one-sided communication with active target synchronization with post/start/complete/wait is reported as supported. If one or more operations fail, the failures are reported and one-sided-communication with active target synchronization with post/start/complete/wait is reported as NOT supported.

No errors

Passed One-sided routines - one_sided_routines

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 1

Test Description:

Reports if one-sided communication routines are defined. If this test compiles, one-sided communication is reported as defined, otherwise "not supported".

No errors

Passed Put with fences - epochtest

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 4

Test Description:

Put with Fences used to separate epochs. This test looks at the behavior of MPI_Win_fence and epochs. Each MPI_Win_fence may both begin and end both the exposure and access epochs; thus, it is not necessary to use MPI_Win_fence in pairs. Tested with a selection of communicators and datatypes.

The tests have the following form:

      Process A             Process B
        fence                 fence
        put,put
        fence                 fence
                              put,put
        fence                 fence
        put,put               put,put
        fence                 fence
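
A minimal sketch (assuming exactly 2 ranks and a 2-int window) of the fence/epoch pattern diagrammed above, where each fence both closes one epoch and opens the next:

#include <mpi.h>

int main(int argc, char **argv)
{
    int rank, size, val, *win_buf;
    MPI_Win win;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    if (size != 2)
        MPI_Abort(MPI_COMM_WORLD, 1);            /* sketch assumes 2 ranks   */

    MPI_Win_allocate(2 * sizeof(int), sizeof(int), MPI_INFO_NULL,
                     MPI_COMM_WORLD, &win_buf, &win);
    val = rank;

    MPI_Win_fence(0, win);                       /* epoch 1 opens            */
    if (rank == 0)
        MPI_Put(&val, 1, MPI_INT, 1, 0, 1, MPI_INT, win);
    MPI_Win_fence(0, win);                       /* epoch 1 closes, 2 opens  */
    if (rank == 1)
        MPI_Put(&val, 1, MPI_INT, 0, 0, 1, MPI_INT, win);
    MPI_Win_fence(0, win);                       /* epoch 2 closes, 3 opens  */
    MPI_Put(&val, 1, MPI_INT, 1 - rank, 1, 1, MPI_INT, win);   /* both put   */
    MPI_Win_fence(0, win);                       /* epoch 3 closes           */

    MPI_Win_free(&win);
    MPI_Finalize();
    return 0;
}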
      
Putting count = 1 of sendtype MPI_INT receive type MPI_INT
Putting count = 1 of sendtype MPI_DOUBLE receive type MPI_DOUBLE
Putting count = 1 of sendtype MPI_FLOAT_INT receive type MPI_FLOAT_INT
Putting count = 1 of sendtype dup of MPI_INT receive type dup of MPI_INT
Putting count = 1 of sendtype int-vector receive type MPI_INT
Putting count = 1 of sendtype int-indexed(4-int) receive type MPI_INT
Putting count = 1 of sendtype int-indexed(2 blocks) receive type MPI_INT
Putting count = 1 of sendtype MPI_INT receive type recv-int-indexed(4-int)
Putting count = 1 of sendtype MPI_SHORT receive type MPI_SHORT
Putting count = 1 of sendtype MPI_LONG receive type MPI_LONG
Putting count = 1 of sendtype MPI_CHAR receive type MPI_CHAR
Putting count = 1 of sendtype MPI_UINT64_T receive type MPI_UINT64_T
Putting count = 1 of sendtype MPI_FLOAT receive type MPI_FLOAT
Putting count = 1 of sendtype MPI_INT receive type MPI_BYTE
Putting count = 2 of sendtype MPI_INT receive type MPI_INT
Putting count = 2 of sendtype MPI_DOUBLE receive type MPI_DOUBLE
Putting count = 2 of sendtype MPI_FLOAT_INT receive type MPI_FLOAT_INT
Putting count = 2 of sendtype dup of MPI_INT receive type dup of MPI_INT
Putting count = 2 of sendtype int-vector receive type MPI_INT
Putting count = 2 of sendtype int-indexed(4-int) receive type MPI_INT
Putting count = 2 of sendtype int-indexed(2 blocks) receive type MPI_INT
Putting count = 2 of sendtype MPI_INT receive type recv-int-indexed(4-int)
Putting count = 2 of sendtype MPI_SHORT receive type MPI_SHORT
Putting count = 2 of sendtype MPI_LONG receive type MPI_LONG
Putting count = 2 of sendtype MPI_CHAR receive type MPI_CHAR
Putting count = 2 of sendtype MPI_UINT64_T receive type MPI_UINT64_T
Putting count = 2 of sendtype MPI_FLOAT receive type MPI_FLOAT
Putting count = 2 of sendtype MPI_INT receive type MPI_BYTE
Putting count = 4 of sendtype MPI_INT receive type MPI_INT
Putting count = 4 of sendtype MPI_DOUBLE receive type MPI_DOUBLE
Putting count = 4 of sendtype MPI_FLOAT_INT receive type MPI_FLOAT_INT
Putting count = 4 of sendtype dup of MPI_INT receive type dup of MPI_INT
Putting count = 4 of sendtype int-vector receive type MPI_INT
Putting count = 4 of sendtype int-indexed(4-int) receive type MPI_INT
Putting count = 4 of sendtype int-indexed(2 blocks) receive type MPI_INT
Putting count = 4 of sendtype MPI_INT receive type recv-int-indexed(4-int)
Putting count = 4 of sendtype MPI_SHORT receive type MPI_SHORT
Putting count = 4 of sendtype MPI_LONG receive type MPI_LONG
Putting count = 4 of sendtype MPI_CHAR receive type MPI_CHAR
Putting count = 4 of sendtype MPI_UINT64_T receive type MPI_UINT64_T
Putting count = 4 of sendtype MPI_FLOAT receive type MPI_FLOAT
Putting count = 4 of sendtype MPI_INT receive type MPI_BYTE
Putting count = 8 of sendtype MPI_INT receive type MPI_INT
Putting count = 8 of sendtype MPI_DOUBLE receive type MPI_DOUBLE
Putting count = 8 of sendtype MPI_FLOAT_INT receive type MPI_FLOAT_INT
Putting count = 8 of sendtype dup of MPI_INT receive type dup of MPI_INT
Putting count = 8 of sendtype int-vector receive type MPI_INT
Putting count = 8 of sendtype int-indexed(4-int) receive type MPI_INT
Putting count = 8 of sendtype int-indexed(2 blocks) receive type MPI_INT
Putting count = 8 of sendtype MPI_INT receive type recv-int-indexed(4-int)
Putting count = 8 of sendtype MPI_SHORT receive type MPI_SHORT
Putting count = 8 of sendtype MPI_LONG receive type MPI_LONG
Putting count = 8 of sendtype MPI_CHAR receive type MPI_CHAR
Putting count = 8 of sendtype MPI_UINT64_T receive type MPI_UINT64_T
Putting count = 8 of sendtype MPI_FLOAT receive type MPI_FLOAT
Putting count = 8 of sendtype MPI_INT receive type MPI_BYTE
Putting count = 16 of sendtype MPI_INT receive type MPI_INT
Putting count = 16 of sendtype MPI_DOUBLE receive type MPI_DOUBLE
Putting count = 16 of sendtype MPI_FLOAT_INT receive type MPI_FLOAT_INT
Putting count = 16 of sendtype dup of MPI_INT receive type dup of MPI_INT
Putting count = 16 of sendtype int-vector receive type MPI_INT
Putting count = 16 of sendtype int-indexed(4-int) receive type MPI_INT
Putting count = 16 of sendtype int-indexed(2 blocks) receive type MPI_INT
Putting count = 16 of sendtype MPI_INT receive type recv-int-indexed(4-int)
Putting count = 16 of sendtype MPI_SHORT receive type MPI_SHORT
Putting count = 16 of sendtype MPI_LONG receive type MPI_LONG
Putting count = 16 of sendtype MPI_CHAR receive type MPI_CHAR
Putting count = 16 of sendtype MPI_UINT64_T receive type MPI_UINT64_T
Putting count = 16 of sendtype MPI_FLOAT receive type MPI_FLOAT
Putting count = 16 of sendtype MPI_INT receive type MPI_BYTE
Putting count = 32 of sendtype MPI_INT receive type MPI_INT
Putting count = 32 of sendtype MPI_DOUBLE receive type MPI_DOUBLE
Putting count = 32 of sendtype MPI_FLOAT_INT receive type MPI_FLOAT_INT
Putting count = 32 of sendtype dup of MPI_INT receive type dup of MPI_INT
Putting count = 32 of sendtype int-vector receive type MPI_INT
Putting count = 32 of sendtype int-indexed(4-int) receive type MPI_INT
Putting count = 32 of sendtype int-indexed(2 blocks) receive type MPI_INT
Putting count = 32 of sendtype MPI_INT receive type recv-int-indexed(4-int)
Putting count = 32 of sendtype MPI_SHORT receive type MPI_SHORT
Putting count = 32 of sendtype MPI_LONG receive type MPI_LONG
Putting count = 32 of sendtype MPI_CHAR receive type MPI_CHAR
Putting count = 32 of sendtype MPI_UINT64_T receive type MPI_UINT64_T
Putting count = 32 of sendtype MPI_FLOAT receive type MPI_FLOAT
Putting count = 32 of sendtype MPI_INT receive type MPI_BYTE
Putting count = 64 of sendtype MPI_INT receive type MPI_INT
Putting count = 64 of sendtype MPI_DOUBLE receive type MPI_DOUBLE
Putting count = 64 of sendtype MPI_FLOAT_INT receive type MPI_FLOAT_INT
Putting count = 64 of sendtype dup of MPI_INT receive type dup of MPI_INT
Putting count = 64 of sendtype int-vector receive type MPI_INT
Putting count = 64 of sendtype int-indexed(4-int) receive type MPI_INT
Putting count = 64 of sendtype int-indexed(2 blocks) receive type MPI_INT
Putting count = 64 of sendtype MPI_INT receive type recv-int-indexed(4-int)
Putting count = 64 of sendtype MPI_SHORT receive type MPI_SHORT
Putting count = 64 of sendtype MPI_LONG receive type MPI_LONG
Putting count = 64 of sendtype MPI_CHAR receive type MPI_CHAR
Putting count = 64 of sendtype MPI_UINT64_T receive type MPI_UINT64_T
Putting count = 64 of sendtype MPI_FLOAT receive type MPI_FLOAT
Putting count = 64 of sendtype MPI_INT receive type MPI_BYTE
Putting count = 128 of sendtype MPI_INT receive type MPI_INT
Putting count = 128 of sendtype MPI_DOUBLE receive type MPI_DOUBLE
Putting count = 128 of sendtype MPI_FLOAT_INT receive type MPI_FLOAT_INT
Putting count = 128 of sendtype dup of MPI_INT receive type dup of MPI_INT
Putting count = 128 of sendtype int-vector receive type MPI_INT
Putting count = 128 of sendtype int-indexed(4-int) receive type MPI_INT
Putting count = 128 of sendtype int-indexed(2 blocks) receive type MPI_INT
Putting count = 128 of sendtype MPI_INT receive type recv-int-indexed(4-int)
Putting count = 128 of sendtype MPI_SHORT receive type MPI_SHORT
Putting count = 128 of sendtype MPI_LONG receive type MPI_LONG
Putting count = 128 of sendtype MPI_CHAR receive type MPI_CHAR
Putting count = 128 of sendtype MPI_UINT64_T receive type MPI_UINT64_T
Putting count = 128 of sendtype MPI_FLOAT receive type MPI_FLOAT
Putting count = 128 of sendtype MPI_INT receive type MPI_BYTE
Putting count = 256 of sendtype MPI_INT receive type MPI_INT
Putting count = 256 of sendtype MPI_DOUBLE receive type MPI_DOUBLE
Putting count = 256 of sendtype MPI_FLOAT_INT receive type MPI_FLOAT_INT
Putting count = 256 of sendtype dup of MPI_INT receive type dup of MPI_INT
Putting count = 256 of sendtype int-vector receive type MPI_INT
Putting count = 256 of sendtype int-indexed(4-int) receive type MPI_INT
Putting count = 256 of sendtype int-indexed(2 blocks) receive type MPI_INT
Putting count = 256 of sendtype MPI_INT receive type recv-int-indexed(4-int)
Putting count = 256 of sendtype MPI_SHORT receive type MPI_SHORT
Putting count = 256 of sendtype MPI_LONG receive type MPI_LONG
Putting count = 256 of sendtype MPI_CHAR receive type MPI_CHAR
Putting count = 256 of sendtype MPI_UINT64_T receive type MPI_UINT64_T
Putting count = 256 of sendtype MPI_FLOAT receive type MPI_FLOAT
Putting count = 256 of sendtype MPI_INT receive type MPI_BYTE
Putting count = 512 of sendtype MPI_INT receive type MPI_INT
Putting count = 512 of sendtype MPI_DOUBLE receive type MPI_DOUBLE
Putting count = 512 of sendtype MPI_FLOAT_INT receive type MPI_FLOAT_INT
Putting count = 512 of sendtype dup of MPI_INT receive type dup of MPI_INT
Putting count = 512 of sendtype int-vector receive type MPI_INT
Putting count = 512 of sendtype int-indexed(4-int) receive type MPI_INT
Putting count = 512 of sendtype int-indexed(2 blocks) receive type MPI_INT
Putting count = 512 of sendtype MPI_INT receive type recv-int-indexed(4-int)
Putting count = 512 of sendtype MPI_SHORT receive type MPI_SHORT
Putting count = 512 of sendtype MPI_LONG receive type MPI_LONG
Putting count = 512 of sendtype MPI_CHAR receive type MPI_CHAR
Putting count = 512 of sendtype MPI_UINT64_T receive type MPI_UINT64_T
Putting count = 512 of sendtype MPI_FLOAT receive type MPI_FLOAT
Putting count = 512 of sendtype MPI_INT receive type MPI_BYTE
Putting count = 1024 of sendtype MPI_INT receive type MPI_INT
Putting count = 1024 of sendtype MPI_DOUBLE receive type MPI_DOUBLE
Putting count = 1024 of sendtype MPI_FLOAT_INT receive type MPI_FLOAT_INT
Putting count = 1024 of sendtype dup of MPI_INT receive type dup of MPI_INT
Putting count = 1024 of sendtype int-vector receive type MPI_INT
Putting count = 1024 of sendtype int-indexed(4-int) receive type MPI_INT
Putting count = 1024 of sendtype int-indexed(2 blocks) receive type MPI_INT
Putting count = 1024 of sendtype MPI_INT receive type recv-int-indexed(4-int)
Putting count = 1024 of sendtype MPI_SHORT receive type MPI_SHORT
Putting count = 1024 of sendtype MPI_LONG receive type MPI_LONG
Putting count = 1024 of sendtype MPI_CHAR receive type MPI_CHAR
Putting count = 1024 of sendtype MPI_UINT64_T receive type MPI_UINT64_T
Putting count = 1024 of sendtype MPI_FLOAT receive type MPI_FLOAT
Putting count = 1024 of sendtype MPI_INT receive type MPI_BYTE
Putting count = 2048 of sendtype MPI_INT receive type MPI_INT
Putting count = 2048 of sendtype MPI_DOUBLE receive type MPI_DOUBLE
Putting count = 2048 of sendtype MPI_FLOAT_INT receive type MPI_FLOAT_INT
Putting count = 2048 of sendtype dup of MPI_INT receive type dup of MPI_INT
Putting count = 2048 of sendtype int-vector receive type MPI_INT
Putting count = 2048 of sendtype int-indexed(4-int) receive type MPI_INT
Putting count = 2048 of sendtype int-indexed(2 blocks) receive type MPI_INT
Putting count = 2048 of sendtype MPI_INT receive type recv-int-indexed(4-int)
Putting count = 2048 of sendtype MPI_SHORT receive type MPI_SHORT
Putting count = 2048 of sendtype MPI_LONG receive type MPI_LONG
Putting count = 2048 of sendtype MPI_CHAR receive type MPI_CHAR
Putting count = 2048 of sendtype MPI_UINT64_T receive type MPI_UINT64_T
Putting count = 2048 of sendtype MPI_FLOAT receive type MPI_FLOAT
Putting count = 2048 of sendtype MPI_INT receive type MPI_BYTE
Putting count = 4096 of sendtype MPI_INT receive type MPI_INT
Putting count = 4096 of sendtype MPI_DOUBLE receive type MPI_DOUBLE
Putting count = 4096 of sendtype MPI_FLOAT_INT receive type MPI_FLOAT_INT
Putting count = 4096 of sendtype dup of MPI_INT receive type dup of MPI_INT
Putting count = 4096 of sendtype int-vector receive type MPI_INT
Putting count = 4096 of sendtype int-indexed(4-int) receive type MPI_INT
Putting count = 4096 of sendtype int-indexed(2 blocks) receive type MPI_INT
Putting count = 4096 of sendtype MPI_INT receive type recv-int-indexed(4-int)
Putting count = 4096 of sendtype MPI_SHORT receive type MPI_SHORT
Putting count = 4096 of sendtype MPI_LONG receive type MPI_LONG
Putting count = 4096 of sendtype MPI_CHAR receive type MPI_CHAR
Putting count = 4096 of sendtype MPI_UINT64_T receive type MPI_UINT64_T
Putting count = 4096 of sendtype MPI_FLOAT receive type MPI_FLOAT
Putting count = 4096 of sendtype MPI_INT receive type MPI_BYTE
Putting count = 8192 of sendtype MPI_INT receive type MPI_INT
Putting count = 8192 of sendtype MPI_DOUBLE receive type MPI_DOUBLE
Putting count = 8192 of sendtype MPI_FLOAT_INT receive type MPI_FLOAT_INT
Putting count = 8192 of sendtype dup of MPI_INT receive type dup of MPI_INT
Putting count = 8192 of sendtype int-vector receive type MPI_INT
Putting count = 8192 of sendtype int-indexed(4-int) receive type MPI_INT
Putting count = 8192 of sendtype int-indexed(2 blocks) receive type MPI_INT
Putting count = 8192 of sendtype MPI_INT receive type recv-int-indexed(4-int)
Putting count = 8192 of sendtype MPI_SHORT receive type MPI_SHORT
Putting count = 8192 of sendtype MPI_LONG receive type MPI_LONG
Putting count = 8192 of sendtype MPI_CHAR receive type MPI_CHAR
Putting count = 8192 of sendtype MPI_UINT64_T receive type MPI_UINT64_T
Putting count = 8192 of sendtype MPI_FLOAT receive type MPI_FLOAT
Putting count = 8192 of sendtype MPI_INT receive type MPI_BYTE
Putting count = 16384 of sendtype MPI_INT receive type MPI_INT
Putting count = 16384 of sendtype MPI_DOUBLE receive type MPI_DOUBLE
Putting count = 16384 of sendtype MPI_FLOAT_INT receive type MPI_FLOAT_INT
Putting count = 16384 of sendtype dup of MPI_INT receive type dup of MPI_INT
Putting count = 16384 of sendtype int-vector receive type MPI_INT
Putting count = 16384 of sendtype int-indexed(4-int) receive type MPI_INT
Putting count = 16384 of sendtype int-indexed(2 blocks) receive type MPI_INT
Putting count = 16384 of sendtype MPI_INT receive type recv-int-indexed(4-int)
Putting count = 16384 of sendtype MPI_SHORT receive type MPI_SHORT
Putting count = 16384 of sendtype MPI_LONG receive type MPI_LONG
Putting count = 16384 of sendtype MPI_CHAR receive type MPI_CHAR
Putting count = 16384 of sendtype MPI_UINT64_T receive type MPI_UINT64_T
Putting count = 16384 of sendtype MPI_FLOAT receive type MPI_FLOAT
Putting count = 16384 of sendtype MPI_INT receive type MPI_BYTE
Putting count = 32768 of sendtype MPI_INT receive type MPI_INT
Putting count = 32768 of sendtype MPI_DOUBLE receive type MPI_DOUBLE
Putting count = 32768 of sendtype MPI_FLOAT_INT receive type MPI_FLOAT_INT
Putting count = 32768 of sendtype dup of MPI_INT receive type dup of MPI_INT
Putting count = 32768 of sendtype int-vector receive type MPI_INT
Putting count = 32768 of sendtype int-indexed(4-int) receive type MPI_INT
Putting count = 32768 of sendtype int-indexed(2 blocks) receive type MPI_INT
Putting count = 32768 of sendtype MPI_INT receive type recv-int-indexed(4-int)
Putting count = 32768 of sendtype MPI_SHORT receive type MPI_SHORT
Putting count = 32768 of sendtype MPI_LONG receive type MPI_LONG
Putting count = 32768 of sendtype MPI_CHAR receive type MPI_CHAR
Putting count = 32768 of sendtype MPI_UINT64_T receive type MPI_UINT64_T
Putting count = 32768 of sendtype MPI_FLOAT receive type MPI_FLOAT
Putting count = 32768 of sendtype MPI_INT receive type MPI_BYTE
Putting count = 1 of sendtype MPI_INT receive type MPI_INT
Putting count = 1 of sendtype MPI_DOUBLE receive type MPI_DOUBLE
Putting count = 1 of sendtype MPI_FLOAT_INT receive type MPI_FLOAT_INT
Putting count = 1 of sendtype dup of MPI_INT receive type dup of MPI_INT
Putting count = 1 of sendtype int-vector receive type MPI_INT
Putting count = 1 of sendtype int-indexed(4-int) receive type MPI_INT
Putting count = 1 of sendtype int-indexed(2 blocks) receive type MPI_INT
Putting count = 1 of sendtype MPI_INT receive type recv-int-indexed(4-int)
Putting count = 1 of sendtype MPI_SHORT receive type MPI_SHORT
Putting count = 1 of sendtype MPI_LONG receive type MPI_LONG
Putting count = 1 of sendtype MPI_CHAR receive type MPI_CHAR
Putting count = 1 of sendtype MPI_UINT64_T receive type MPI_UINT64_T
Putting count = 1 of sendtype MPI_FLOAT receive type MPI_FLOAT
Putting count = 1 of sendtype MPI_INT receive type MPI_BYTE
Putting count = 2 of sendtype MPI_INT receive type MPI_INT
Putting count = 2 of sendtype MPI_DOUBLE receive type MPI_DOUBLE
Putting count = 2 of sendtype MPI_FLOAT_INT receive type MPI_FLOAT_INT
Putting count = 2 of sendtype dup of MPI_INT receive type dup of MPI_INT
Putting count = 2 of sendtype int-vector receive type MPI_INT
Putting count = 2 of sendtype int-indexed(4-int) receive type MPI_INT
Putting count = 2 of sendtype int-indexed(2 blocks) receive type MPI_INT
Putting count = 2 of sendtype MPI_INT receive type recv-int-indexed(4-int)
Putting count = 2 of sendtype MPI_SHORT receive type MPI_SHORT
Putting count = 2 of sendtype MPI_LONG receive type MPI_LONG
Putting count = 2 of sendtype MPI_CHAR receive type MPI_CHAR
Putting count = 2 of sendtype MPI_UINT64_T receive type MPI_UINT64_T
Putting count = 2 of sendtype MPI_FLOAT receive type MPI_FLOAT
Putting count = 2 of sendtype MPI_INT receive type MPI_BYTE
Putting count = 4 of sendtype MPI_INT receive type MPI_INT
Putting count = 4 of sendtype MPI_DOUBLE receive type MPI_DOUBLE
Putting count = 4 of sendtype MPI_FLOAT_INT receive type MPI_FLOAT_INT
Putting count = 4 of sendtype dup of MPI_INT receive type dup of MPI_INT
Putting count = 4 of sendtype int-vector receive type MPI_INT
Putting count = 4 of sendtype int-indexed(4-int) receive type MPI_INT
Putting count = 4 of sendtype int-indexed(2 blocks) receive type MPI_INT
Putting count = 4 of sendtype MPI_INT receive type recv-int-indexed(4-int)
Putting count = 4 of sendtype MPI_SHORT receive type MPI_SHORT
Putting count = 4 of sendtype MPI_LONG receive type MPI_LONG
Putting count = 4 of sendtype MPI_CHAR receive type MPI_CHAR
Putting count = 4 of sendtype MPI_UINT64_T receive type MPI_UINT64_T
Putting count = 4 of sendtype MPI_FLOAT receive type MPI_FLOAT
Putting count = 4 of sendtype MPI_INT receive type MPI_BYTE
Putting count = 8 of sendtype MPI_INT receive type MPI_INT
Putting count = 8 of sendtype MPI_DOUBLE receive type MPI_DOUBLE
Putting count = 8 of sendtype MPI_FLOAT_INT receive type MPI_FLOAT_INT
Putting count = 8 of sendtype dup of MPI_INT receive type dup of MPI_INT
Putting count = 8 of sendtype int-vector receive type MPI_INT
Putting count = 8 of sendtype int-indexed(4-int) receive type MPI_INT
Putting count = 8 of sendtype int-indexed(2 blocks) receive type MPI_INT
Putting count = 8 of sendtype MPI_INT receive type recv-int-indexed(4-int)
Putting count = 8 of sendtype MPI_SHORT receive type MPI_SHORT
Putting count = 8 of sendtype MPI_LONG receive type MPI_LONG
Putting count = 8 of sendtype MPI_CHAR receive type MPI_CHAR
Putting count = 8 of sendtype MPI_UINT64_T receive type MPI_UINT64_T
Putting count = 8 of sendtype MPI_FLOAT receive type MPI_FLOAT
Putting count = 8 of sendtype MPI_INT receive type MPI_BYTE
Putting count = 16 of sendtype MPI_INT receive type MPI_INT
Putting count = 16 of sendtype MPI_DOUBLE receive type MPI_DOUBLE
Putting count = 16 of sendtype MPI_FLOAT_INT receive type MPI_FLOAT_INT
Putting count = 16 of sendtype dup of MPI_INT receive type dup of MPI_INT
Putting count = 16 of sendtype int-vector receive type MPI_INT
Putting count = 16 of sendtype int-indexed(4-int) receive type MPI_INT
Putting count = 16 of sendtype int-indexed(2 blocks) receive type MPI_INT
Putting count = 16 of sendtype MPI_INT receive type recv-int-indexed(4-int)
Putting count = 16 of sendtype MPI_SHORT receive type MPI_SHORT
Putting count = 16 of sendtype MPI_LONG receive type MPI_LONG
Putting count = 16 of sendtype MPI_CHAR receive type MPI_CHAR
Putting count = 16 of sendtype MPI_UINT64_T receive type MPI_UINT64_T
Putting count = 16 of sendtype MPI_FLOAT receive type MPI_FLOAT
Putting count = 16 of sendtype MPI_INT receive type MPI_BYTE
Putting count = 32 of sendtype MPI_INT receive type MPI_INT
Putting count = 32 of sendtype MPI_DOUBLE receive type MPI_DOUBLE
Putting count = 32 of sendtype MPI_FLOAT_INT receive type MPI_FLOAT_INT
Putting count = 32 of sendtype dup of MPI_INT receive type dup of MPI_INT
Putting count = 32 of sendtype int-vector receive type MPI_INT
Putting count = 32 of sendtype int-indexed(4-int) receive type MPI_INT
Putting count = 32 of sendtype int-indexed(2 blocks) receive type MPI_INT
Putting count = 32 of sendtype MPI_INT receive type recv-int-indexed(4-int)
Putting count = 32 of sendtype MPI_SHORT receive type MPI_SHORT
Putting count = 32 of sendtype MPI_LONG receive type MPI_LONG
Putting count = 32 of sendtype MPI_CHAR receive type MPI_CHAR
Putting count = 32 of sendtype MPI_UINT64_T receive type MPI_UINT64_T
Putting count = 32 of sendtype MPI_FLOAT receive type MPI_FLOAT
Putting count = 32 of sendtype MPI_INT receive type MPI_BYTE
Putting count = 64 of sendtype MPI_INT receive type MPI_INT
Putting count = 64 of sendtype MPI_DOUBLE receive type MPI_DOUBLE
Putting count = 64 of sendtype MPI_FLOAT_INT receive type MPI_FLOAT_INT
Putting count = 64 of sendtype dup of MPI_INT receive type dup of MPI_INT
Putting count = 64 of sendtype int-vector receive type MPI_INT
Putting count = 64 of sendtype int-indexed(4-int) receive type MPI_INT
Putting count = 64 of sendtype int-indexed(2 blocks) receive type MPI_INT
Putting count = 64 of sendtype MPI_INT receive type recv-int-indexed(4-int)
Putting count = 64 of sendtype MPI_SHORT receive type MPI_SHORT
Putting count = 64 of sendtype MPI_LONG receive type MPI_LONG
Putting count = 64 of sendtype MPI_CHAR receive type MPI_CHAR
Putting count = 64 of sendtype MPI_UINT64_T receive type MPI_UINT64_T
Putting count = 64 of sendtype MPI_FLOAT receive type MPI_FLOAT
Putting count = 64 of sendtype MPI_INT receive type MPI_BYTE
Putting count = 128 of sendtype MPI_INT receive type MPI_INT
Putting count = 128 of sendtype MPI_DOUBLE receive type MPI_DOUBLE
Putting count = 128 of sendtype MPI_FLOAT_INT receive type MPI_FLOAT_INT
Putting count = 128 of sendtype dup of MPI_INT receive type dup of MPI_INT
Putting count = 128 of sendtype int-vector receive type MPI_INT
Putting count = 128 of sendtype int-indexed(4-int) receive type MPI_INT
Putting count = 128 of sendtype int-indexed(2 blocks) receive type MPI_INT
Putting count = 128 of sendtype MPI_INT receive type recv-int-indexed(4-int)
Putting count = 128 of sendtype MPI_SHORT receive type MPI_SHORT
Putting count = 128 of sendtype MPI_LONG receive type MPI_LONG
Putting count = 128 of sendtype MPI_CHAR receive type MPI_CHAR
Putting count = 128 of sendtype MPI_UINT64_T receive type MPI_UINT64_T
Putting count = 128 of sendtype MPI_FLOAT receive type MPI_FLOAT
Putting count = 128 of sendtype MPI_INT receive type MPI_BYTE
Putting count = 256 of sendtype MPI_INT receive type MPI_INT
Putting count = 256 of sendtype MPI_DOUBLE receive type MPI_DOUBLE
Putting count = 256 of sendtype MPI_FLOAT_INT receive type MPI_FLOAT_INT
Putting count = 256 of sendtype dup of MPI_INT receive type dup of MPI_INT
Putting count = 256 of sendtype int-vector receive type MPI_INT
Putting count = 256 of sendtype int-indexed(4-int) receive type MPI_INT
Putting count = 256 of sendtype int-indexed(2 blocks) receive type MPI_INT
Putting count = 256 of sendtype MPI_INT receive type recv-int-indexed(4-int)
Putting count = 256 of sendtype MPI_SHORT receive type MPI_SHORT
Putting count = 256 of sendtype MPI_LONG receive type MPI_LONG
Putting count = 256 of sendtype MPI_CHAR receive type MPI_CHAR
Putting count = 256 of sendtype MPI_UINT64_T receive type MPI_UINT64_T
Putting count = 256 of sendtype MPI_FLOAT receive type MPI_FLOAT
Putting count = 256 of sendtype MPI_INT receive type MPI_BYTE
Putting count = 512 of sendtype MPI_INT receive type MPI_INT
Putting count = 512 of sendtype MPI_DOUBLE receive type MPI_DOUBLE
Putting count = 512 of sendtype MPI_FLOAT_INT receive type MPI_FLOAT_INT
Putting count = 512 of sendtype dup of MPI_INT receive type dup of MPI_INT
Putting count = 512 of sendtype int-vector receive type MPI_INT
Putting count = 512 of sendtype int-indexed(4-int) receive type MPI_INT
Putting count = 512 of sendtype int-indexed(2 blocks) receive type MPI_INT
Putting count = 512 of sendtype MPI_INT receive type recv-int-indexed(4-int)
Putting count = 512 of sendtype MPI_SHORT receive type MPI_SHORT
Putting count = 512 of sendtype MPI_LONG receive type MPI_LONG
Putting count = 512 of sendtype MPI_CHAR receive type MPI_CHAR
Putting count = 512 of sendtype MPI_UINT64_T receive type MPI_UINT64_T
Putting count = 512 of sendtype MPI_FLOAT receive type MPI_FLOAT
Putting count = 512 of sendtype MPI_INT receive type MPI_BYTE
Putting count = 1024 of sendtype MPI_INT receive type MPI_INT
Putting count = 1024 of sendtype MPI_DOUBLE receive type MPI_DOUBLE
Putting count = 1024 of sendtype MPI_FLOAT_INT receive type MPI_FLOAT_INT
Putting count = 1024 of sendtype dup of MPI_INT receive type dup of MPI_INT
Putting count = 1024 of sendtype int-vector receive type MPI_INT
Putting count = 1024 of sendtype int-indexed(4-int) receive type MPI_INT
Putting count = 1024 of sendtype int-indexed(2 blocks) receive type MPI_INT
Putting count = 1024 of sendtype MPI_INT receive type recv-int-indexed(4-int)
Putting count = 1024 of sendtype MPI_SHORT receive type MPI_SHORT
Putting count = 1024 of sendtype MPI_LONG receive type MPI_LONG
Putting count = 1024 of sendtype MPI_CHAR receive type MPI_CHAR
Putting count = 1024 of sendtype MPI_UINT64_T receive type MPI_UINT64_T
Putting count = 1024 of sendtype MPI_FLOAT receive type MPI_FLOAT
Putting count = 1024 of sendtype MPI_INT receive type MPI_BYTE
Putting count = 2048 of sendtype MPI_INT receive type MPI_INT
Putting count = 2048 of sendtype MPI_DOUBLE receive type MPI_DOUBLE
Putting count = 2048 of sendtype MPI_FLOAT_INT receive type MPI_FLOAT_INT
Putting count = 2048 of sendtype dup of MPI_INT receive type dup of MPI_INT
Putting count = 2048 of sendtype int-vector receive type MPI_INT
Putting count = 2048 of sendtype int-indexed(4-int) receive type MPI_INT
Putting count = 2048 of sendtype int-indexed(2 blocks) receive type MPI_INT
Putting count = 2048 of sendtype MPI_INT receive type recv-int-indexed(4-int)
Putting count = 2048 of sendtype MPI_SHORT receive type MPI_SHORT
Putting count = 2048 of sendtype MPI_LONG receive type MPI_LONG
Putting count = 2048 of sendtype MPI_CHAR receive type MPI_CHAR
Putting count = 2048 of sendtype MPI_UINT64_T receive type MPI_UINT64_T
Putting count = 2048 of sendtype MPI_FLOAT receive type MPI_FLOAT
Putting count = 2048 of sendtype MPI_INT receive type MPI_BYTE
Putting count = 4096 of sendtype MPI_INT receive type MPI_INT
Putting count = 4096 of sendtype MPI_DOUBLE receive type MPI_DOUBLE
Putting count = 4096 of sendtype MPI_FLOAT_INT receive type MPI_FLOAT_INT
Putting count = 4096 of sendtype dup of MPI_INT receive type dup of MPI_INT
Putting count = 4096 of sendtype int-vector receive type MPI_INT
Putting count = 4096 of sendtype int-indexed(4-int) receive type MPI_INT
Putting count = 4096 of sendtype int-indexed(2 blocks) receive type MPI_INT
Putting count = 4096 of sendtype MPI_INT receive type recv-int-indexed(4-int)
Putting count = 4096 of sendtype MPI_SHORT receive type MPI_SHORT
Putting count = 4096 of sendtype MPI_LONG receive type MPI_LONG
Putting count = 4096 of sendtype MPI_CHAR receive type MPI_CHAR
Putting count = 4096 of sendtype MPI_UINT64_T receive type MPI_UINT64_T
Putting count = 4096 of sendtype MPI_FLOAT receive type MPI_FLOAT
Putting count = 4096 of sendtype MPI_INT receive type MPI_BYTE
Putting count = 8192 of sendtype MPI_INT receive type MPI_INT
Putting count = 8192 of sendtype MPI_DOUBLE receive type MPI_DOUBLE
Putting count = 8192 of sendtype MPI_FLOAT_INT receive type MPI_FLOAT_INT
Putting count = 8192 of sendtype dup of MPI_INT receive type dup of MPI_INT
Putting count = 8192 of sendtype int-vector receive type MPI_INT
Putting count = 8192 of sendtype int-indexed(4-int) receive type MPI_INT
Putting count = 8192 of sendtype int-indexed(2 blocks) receive type MPI_INT
Putting count = 8192 of sendtype MPI_INT receive type recv-int-indexed(4-int)
Putting count = 8192 of sendtype MPI_SHORT receive type MPI_SHORT
Putting count = 8192 of sendtype MPI_LONG receive type MPI_LONG
Putting count = 8192 of sendtype MPI_CHAR receive type MPI_CHAR
Putting count = 8192 of sendtype MPI_UINT64_T receive type MPI_UINT64_T
Putting count = 8192 of sendtype MPI_FLOAT receive type MPI_FLOAT
Putting count = 8192 of sendtype MPI_INT receive type MPI_BYTE
Putting count = 16384 of sendtype MPI_INT receive type MPI_INT
Putting count = 16384 of sendtype MPI_DOUBLE receive type MPI_DOUBLE
Putting count = 16384 of sendtype MPI_FLOAT_INT receive type MPI_FLOAT_INT
Putting count = 16384 of sendtype dup of MPI_INT receive type dup of MPI_INT
Putting count = 16384 of sendtype int-vector receive type MPI_INT
Putting count = 16384 of sendtype int-indexed(4-int) receive type MPI_INT
Putting count = 16384 of sendtype int-indexed(2 blocks) receive type MPI_INT
Putting count = 16384 of sendtype MPI_INT receive type recv-int-indexed(4-int)
Putting count = 16384 of sendtype MPI_SHORT receive type MPI_SHORT
Putting count = 16384 of sendtype MPI_LONG receive type MPI_LONG
Putting count = 16384 of sendtype MPI_CHAR receive type MPI_CHAR
Putting count = 16384 of sendtype MPI_UINT64_T receive type MPI_UINT64_T
Putting count = 16384 of sendtype MPI_FLOAT receive type MPI_FLOAT
Putting count = 16384 of sendtype MPI_INT receive type MPI_BYTE
Putting count = 32768 of sendtype MPI_INT receive type MPI_INT
Putting count = 32768 of sendtype MPI_DOUBLE receive type MPI_DOUBLE
Putting count = 32768 of sendtype MPI_FLOAT_INT receive type MPI_FLOAT_INT
Putting count = 32768 of sendtype dup of MPI_INT receive type dup of MPI_INT
Putting count = 32768 of sendtype int-vector receive type MPI_INT
Putting count = 32768 of sendtype int-indexed(4-int) receive type MPI_INT
Putting count = 32768 of sendtype int-indexed(2 blocks) receive type MPI_INT
Putting count = 32768 of sendtype MPI_INT receive type recv-int-indexed(4-int)
Putting count = 32768 of sendtype MPI_SHORT receive type MPI_SHORT
Putting count = 32768 of sendtype MPI_LONG receive type MPI_LONG
Putting count = 32768 of sendtype MPI_CHAR receive type MPI_CHAR
Putting count = 32768 of sendtype MPI_UINT64_T receive type MPI_UINT64_T
Putting count = 32768 of sendtype MPI_FLOAT receive type MPI_FLOAT
Putting count = 32768 of sendtype MPI_INT receive type MPI_BYTE
Putting count = 1 of sendtype MPI_INT receive type MPI_INT
Putting count = 1 of sendtype MPI_DOUBLE receive type MPI_DOUBLE
Putting count = 1 of sendtype MPI_FLOAT_INT receive type MPI_FLOAT_INT
Putting count = 1 of sendtype dup of MPI_INT receive type dup of MPI_INT
Putting count = 1 of sendtype int-vector receive type MPI_INT
Putting count = 1 of sendtype int-indexed(4-int) receive type MPI_INT
Putting count = 1 of sendtype int-indexed(2 blocks) receive type MPI_INT
Putting count = 1 of sendtype MPI_INT receive type recv-int-indexed(4-int)
Putting count = 1 of sendtype MPI_SHORT receive type MPI_SHORT
Putting count = 1 of sendtype MPI_LONG receive type MPI_LONG
Putting count = 1 of sendtype MPI_CHAR receive type MPI_CHAR
Putting count = 1 of sendtype MPI_UINT64_T receive type MPI_UINT64_T
Putting count = 1 of sendtype MPI_FLOAT receive type MPI_FLOAT
Putting count = 1 of sendtype MPI_INT receive type MPI_BYTE
Putting count = 2 of sendtype MPI_INT receive type MPI_INT
Putting count = 2 of sendtype MPI_DOUBLE receive type MPI_DOUBLE
Putting count = 2 of sendtype MPI_FLOAT_INT receive type MPI_FLOAT_INT
Putting count = 2 of sendtype dup of MPI_INT receive type dup of MPI_INT
Putting count = 2 of sendtype int-vector receive type MPI_INT
Putting count = 2 of sendtype int-indexed(4-int) receive type MPI_INT
Putting count = 2 of sendtype int-indexed(2 blocks) receive type MPI_INT
Putting count = 2 of sendtype MPI_INT receive type recv-int-indexed(4-int)
Putting count = 2 of sendtype MPI_SHORT receive type MPI_SHORT
Putting count = 2 of sendtype MPI_LONG receive type MPI_LONG
Putting count = 2 of sendtype MPI_CHAR receive type MPI_CHAR
Putting count = 2 of sendtype MPI_UINT64_T receive type MPI_UINT64_T
Putting count = 2 of sendtype MPI_FLOAT receive type MPI_FLOAT
Putting count = 2 of sendtype MPI_INT receive type MPI_BYTE
Putting count = 4 of sendtype MPI_INT receive type MPI_INT
Putting count = 4 of sendtype MPI_DOUBLE receive type MPI_DOUBLE
Putting count = 4 of sendtype MPI_FLOAT_INT receive type MPI_FLOAT_INT
Putting count = 4 of sendtype dup of MPI_INT receive type dup of MPI_INT
Putting count = 4 of sendtype int-vector receive type MPI_INT
Putting count = 4 of sendtype int-indexed(4-int) receive type MPI_INT
Putting count = 4 of sendtype int-indexed(2 blocks) receive type MPI_INT
Putting count = 4 of sendtype MPI_INT receive type recv-int-indexed(4-int)
Putting count = 4 of sendtype MPI_SHORT receive type MPI_SHORT
Putting count = 4 of sendtype MPI_LONG receive type MPI_LONG
Putting count = 4 of sendtype MPI_CHAR receive type MPI_CHAR
Putting count = 4 of sendtype MPI_UINT64_T receive type MPI_UINT64_T
Putting count = 4 of sendtype MPI_FLOAT receive type MPI_FLOAT
Putting count = 4 of sendtype MPI_INT receive type MPI_BYTE
Putting count = 8 of sendtype MPI_INT receive type MPI_INT
Putting count = 8 of sendtype MPI_DOUBLE receive type MPI_DOUBLE
Putting count = 8 of sendtype MPI_FLOAT_INT receive type MPI_FLOAT_INT
Putting count = 8 of sendtype dup of MPI_INT receive type dup of MPI_INT
Putting count = 8 of sendtype int-vector receive type MPI_INT
Putting count = 8 of sendtype int-indexed(4-int) receive type MPI_INT
Putting count = 8 of sendtype int-indexed(2 blocks) receive type MPI_INT
Putting count = 8 of sendtype MPI_INT receive type recv-int-indexed(4-int)
Putting count = 8 of sendtype MPI_SHORT receive type MPI_SHORT
Putting count = 8 of sendtype MPI_LONG receive type MPI_LONG
Putting count = 8 of sendtype MPI_CHAR receive type MPI_CHAR
Putting count = 8 of sendtype MPI_UINT64_T receive type MPI_UINT64_T
Putting count = 8 of sendtype MPI_FLOAT receive type MPI_FLOAT
Putting count = 8 of sendtype MPI_INT receive type MPI_BYTE
Putting count = 16 of sendtype MPI_INT receive type MPI_INT
Putting count = 16 of sendtype MPI_DOUBLE receive type MPI_DOUBLE
Putting count = 16 of sendtype MPI_FLOAT_INT receive type MPI_FLOAT_INT
Putting count = 16 of sendtype dup of MPI_INT receive type dup of MPI_INT
Putting count = 16 of sendtype int-vector receive type MPI_INT
Putting count = 16 of sendtype int-indexed(4-int) receive type MPI_INT
Putting count = 16 of sendtype int-indexed(2 blocks) receive type MPI_INT
Putting count = 16 of sendtype MPI_INT receive type recv-int-indexed(4-int)
Putting count = 16 of sendtype MPI_SHORT receive type MPI_SHORT
Putting count = 16 of sendtype MPI_LONG receive type MPI_LONG
Putting count = 16 of sendtype MPI_CHAR receive type MPI_CHAR
Putting count = 16 of sendtype MPI_UINT64_T receive type MPI_UINT64_T
Putting count = 16 of sendtype MPI_FLOAT receive type MPI_FLOAT
Putting count = 16 of sendtype MPI_INT receive type MPI_BYTE
Putting count = 32 of sendtype MPI_INT receive type MPI_INT
Putting count = 32 of sendtype MPI_DOUBLE receive type MPI_DOUBLE
Putting count = 32 of sendtype MPI_FLOAT_INT receive type MPI_FLOAT_INT
Putting count = 32 of sendtype dup of MPI_INT receive type dup of MPI_INT
Putting count = 32 of sendtype int-vector receive type MPI_INT
Putting count = 32 of sendtype int-indexed(4-int) receive type MPI_INT
Putting count = 32 of sendtype int-indexed(2 blocks) receive type MPI_INT
Putting count = 32 of sendtype MPI_INT receive type recv-int-indexed(4-int)
Putting count = 32 of sendtype MPI_SHORT receive type MPI_SHORT
Putting count = 32 of sendtype MPI_LONG receive type MPI_LONG
Putting count = 32 of sendtype MPI_CHAR receive type MPI_CHAR
Putting count = 32 of sendtype MPI_UINT64_T receive type MPI_UINT64_T
Putting count = 32 of sendtype MPI_FLOAT receive type MPI_FLOAT
Putting count = 32 of sendtype MPI_INT receive type MPI_BYTE
Putting count = 64 of sendtype MPI_INT receive type MPI_INT
Putting count = 64 of sendtype MPI_DOUBLE receive type MPI_DOUBLE
Putting count = 64 of sendtype MPI_FLOAT_INT receive type MPI_FLOAT_INT
Putting count = 64 of sendtype dup of MPI_INT receive type dup of MPI_INT
Putting count = 64 of sendtype int-vector receive type MPI_INT
Putting count = 64 of sendtype int-indexed(4-int) receive type MPI_INT
Putting count = 64 of sendtype int-indexed(2 blocks) receive type MPI_INT
Putting count = 64 of sendtype MPI_INT receive type recv-int-indexed(4-int)
Putting count = 64 of sendtype MPI_SHORT receive type MPI_SHORT
Putting count = 64 of sendtype MPI_LONG receive type MPI_LONG
Putting count = 64 of sendtype MPI_CHAR receive type MPI_CHAR
Putting count = 64 of sendtype MPI_UINT64_T receive type MPI_UINT64_T
Putting count = 64 of sendtype MPI_FLOAT receive type MPI_FLOAT
Putting count = 64 of sendtype MPI_INT receive type MPI_BYTE
Putting count = 128 of sendtype MPI_INT receive type MPI_INT
Putting count = 128 of sendtype MPI_DOUBLE receive type MPI_DOUBLE
Putting count = 128 of sendtype MPI_FLOAT_INT receive type MPI_FLOAT_INT
Putting count = 128 of sendtype dup of MPI_INT receive type dup of MPI_INT
Putting count = 128 of sendtype int-vector receive type MPI_INT
Putting count = 128 of sendtype int-indexed(4-int) receive type MPI_INT
Putting count = 128 of sendtype int-indexed(2 blocks) receive type MPI_INT
Putting count = 128 of sendtype MPI_INT receive type recv-int-indexed(4-int)
Putting count = 128 of sendtype MPI_SHORT receive type MPI_SHORT
Putting count = 128 of sendtype MPI_LONG receive type MPI_LONG
Putting count = 128 of sendtype MPI_CHAR receive type MPI_CHAR
Putting count = 128 of sendtype MPI_UINT64_T receive type MPI_UINT64_T
Putting count = 128 of sendtype MPI_FLOAT receive type MPI_FLOAT
Putting count = 128 of sendtype MPI_INT receive type MPI_BYTE
Putting count = 256 of sendtype MPI_INT receive type MPI_INT
Putting count = 256 of sendtype MPI_DOUBLE receive type MPI_DOUBLE
Putting count = 256 of sendtype MPI_FLOAT_INT receive type MPI_FLOAT_INT
Putting count = 256 of sendtype dup of MPI_INT receive type dup of MPI_INT
Putting count = 256 of sendtype int-vector receive type MPI_INT
Putting count = 256 of sendtype int-indexed(4-int) receive type MPI_INT
Putting count = 256 of sendtype int-indexed(2 blocks) receive type MPI_INT
Putting count = 256 of sendtype MPI_INT receive type recv-int-indexed(4-int)
Putting count = 256 of sendtype MPI_SHORT receive type MPI_SHORT
Putting count = 256 of sendtype MPI_LONG receive type MPI_LONG
Putting count = 256 of sendtype MPI_CHAR receive type MPI_CHAR
Putting count = 256 of sendtype MPI_UINT64_T receive type MPI_UINT64_T
Putting count = 256 of sendtype MPI_FLOAT receive type MPI_FLOAT
Putting count = 256 of sendtype MPI_INT receive type MPI_BYTE
Putting count = 512 of sendtype MPI_INT receive type MPI_INT
Putting count = 512 of sendtype MPI_DOUBLE receive type MPI_DOUBLE
Putting count = 512 of sendtype MPI_FLOAT_INT receive type MPI_FLOAT_INT
Putting count = 512 of sendtype dup of MPI_INT receive type dup of MPI_INT
Putting count = 512 of sendtype int-vector receive type MPI_INT
Putting count = 512 of sendtype int-indexed(4-int) receive type MPI_INT
Putting count = 512 of sendtype int-indexed(2 blocks) receive type MPI_INT
Putting count = 512 of sendtype MPI_INT receive type recv-int-indexed(4-int)
Putting count = 512 of sendtype MPI_SHORT receive type MPI_SHORT
Putting count = 512 of sendtype MPI_LONG receive type MPI_LONG
Putting count = 512 of sendtype MPI_CHAR receive type MPI_CHAR
Putting count = 512 of sendtype MPI_UINT64_T receive type MPI_UINT64_T
Putting count = 512 of sendtype MPI_FLOAT receive type MPI_FLOAT
Putting count = 512 of sendtype MPI_INT receive type MPI_BYTE
Putting count = 1024 of sendtype MPI_INT receive type MPI_INT
Putting count = 1024 of sendtype MPI_DOUBLE receive type MPI_DOUBLE
Putting count = 1024 of sendtype MPI_FLOAT_INT receive type MPI_FLOAT_INT
Putting count = 1024 of sendtype dup of MPI_INT receive type dup of MPI_INT
Putting count = 1024 of sendtype int-vector receive type MPI_INT
Putting count = 1024 of sendtype int-indexed(4-int) receive type MPI_INT
Putting count = 1024 of sendtype int-indexed(2 blocks) receive type MPI_INT
Putting count = 1024 of sendtype MPI_INT receive type recv-int-indexed(4-int)
Putting count = 1024 of sendtype MPI_SHORT receive type MPI_SHORT
Putting count = 1024 of sendtype MPI_LONG receive type MPI_LONG
Putting count = 1024 of sendtype MPI_CHAR receive type MPI_CHAR
Putting count = 1024 of sendtype MPI_UINT64_T receive type MPI_UINT64_T
Putting count = 1024 of sendtype MPI_FLOAT receive type MPI_FLOAT
Putting count = 1024 of sendtype MPI_INT receive type MPI_BYTE
Putting count = 2048 of sendtype MPI_INT receive type MPI_INT
Putting count = 2048 of sendtype MPI_DOUBLE receive type MPI_DOUBLE
Putting count = 2048 of sendtype MPI_FLOAT_INT receive type MPI_FLOAT_INT
Putting count = 2048 of sendtype dup of MPI_INT receive type dup of MPI_INT
Putting count = 2048 of sendtype int-vector receive type MPI_INT
Putting count = 2048 of sendtype int-indexed(4-int) receive type MPI_INT
Putting count = 2048 of sendtype int-indexed(2 blocks) receive type MPI_INT
Putting count = 2048 of sendtype MPI_INT receive type recv-int-indexed(4-int)
Putting count = 2048 of sendtype MPI_SHORT receive type MPI_SHORT
Putting count = 2048 of sendtype MPI_LONG receive type MPI_LONG
Putting count = 2048 of sendtype MPI_CHAR receive type MPI_CHAR
Putting count = 2048 of sendtype MPI_UINT64_T receive type MPI_UINT64_T
Putting count = 2048 of sendtype MPI_FLOAT receive type MPI_FLOAT
Putting count = 2048 of sendtype MPI_INT receive type MPI_BYTE
Putting count = 4096 of sendtype MPI_INT receive type MPI_INT
Putting count = 4096 of sendtype MPI_DOUBLE receive type MPI_DOUBLE
Putting count = 4096 of sendtype MPI_FLOAT_INT receive type MPI_FLOAT_INT
Putting count = 4096 of sendtype dup of MPI_INT receive type dup of MPI_INT
Putting count = 4096 of sendtype int-vector receive type MPI_INT
Putting count = 4096 of sendtype int-indexed(4-int) receive type MPI_INT
Putting count = 4096 of sendtype int-indexed(2 blocks) receive type MPI_INT
Putting count = 4096 of sendtype MPI_INT receive type recv-int-indexed(4-int)
Putting count = 4096 of sendtype MPI_SHORT receive type MPI_SHORT
Putting count = 4096 of sendtype MPI_LONG receive type MPI_LONG
Putting count = 4096 of sendtype MPI_CHAR receive type MPI_CHAR
Putting count = 4096 of sendtype MPI_UINT64_T receive type MPI_UINT64_T
Putting count = 4096 of sendtype MPI_FLOAT receive type MPI_FLOAT
Putting count = 4096 of sendtype MPI_INT receive type MPI_BYTE
Putting count = 8192 of sendtype MPI_INT receive type MPI_INT
Putting count = 8192 of sendtype MPI_DOUBLE receive type MPI_DOUBLE
Putting count = 8192 of sendtype MPI_FLOAT_INT receive type MPI_FLOAT_INT
Putting count = 8192 of sendtype dup of MPI_INT receive type dup of MPI_INT
Putting count = 8192 of sendtype int-vector receive type MPI_INT
Putting count = 8192 of sendtype int-indexed(4-int) receive type MPI_INT
Putting count = 8192 of sendtype int-indexed(2 blocks) receive type MPI_INT
Putting count = 8192 of sendtype MPI_INT receive type recv-int-indexed(4-int)
Putting count = 8192 of sendtype MPI_SHORT receive type MPI_SHORT
Putting count = 8192 of sendtype MPI_LONG receive type MPI_LONG
Putting count = 8192 of sendtype MPI_CHAR receive type MPI_CHAR
Putting count = 8192 of sendtype MPI_UINT64_T receive type MPI_UINT64_T
Putting count = 8192 of sendtype MPI_FLOAT receive type MPI_FLOAT
Putting count = 8192 of sendtype MPI_INT receive type MPI_BYTE
Putting count = 16384 of sendtype MPI_INT receive type MPI_INT
Putting count = 16384 of sendtype MPI_DOUBLE receive type MPI_DOUBLE
Putting count = 16384 of sendtype MPI_FLOAT_INT receive type MPI_FLOAT_INT
Putting count = 16384 of sendtype dup of MPI_INT receive type dup of MPI_INT
Putting count = 16384 of sendtype int-vector receive type MPI_INT
Putting count = 16384 of sendtype int-indexed(4-int) receive type MPI_INT
Putting count = 16384 of sendtype int-indexed(2 blocks) receive type MPI_INT
Putting count = 16384 of sendtype MPI_INT receive type recv-int-indexed(4-int)
Putting count = 16384 of sendtype MPI_SHORT receive type MPI_SHORT
Putting count = 16384 of sendtype MPI_LONG receive type MPI_LONG
Putting count = 16384 of sendtype MPI_CHAR receive type MPI_CHAR
Putting count = 16384 of sendtype MPI_UINT64_T receive type MPI_UINT64_T
Putting count = 16384 of sendtype MPI_FLOAT receive type MPI_FLOAT
Putting count = 16384 of sendtype MPI_INT receive type MPI_BYTE
Putting count = 32768 of sendtype MPI_INT receive type MPI_INT
Putting count = 32768 of sendtype MPI_DOUBLE receive type MPI_DOUBLE
Putting count = 32768 of sendtype MPI_FLOAT_INT receive type MPI_FLOAT_INT
Putting count = 32768 of sendtype dup of MPI_INT receive type dup of MPI_INT
Putting count = 32768 of sendtype int-vector receive type MPI_INT
Putting count = 32768 of sendtype int-indexed(4-int) receive type MPI_INT
Putting count = 32768 of sendtype int-indexed(2 blocks) receive type MPI_INT
Putting count = 32768 of sendtype MPI_INT receive type recv-int-indexed(4-int)
Putting count = 32768 of sendtype MPI_SHORT receive type MPI_SHORT
Putting count = 32768 of sendtype MPI_LONG receive type MPI_LONG
Putting count = 32768 of sendtype MPI_CHAR receive type MPI_CHAR
Putting count = 32768 of sendtype MPI_UINT64_T receive type MPI_UINT64_T
Putting count = 32768 of sendtype MPI_FLOAT receive type MPI_FLOAT
Putting count = 32768 of sendtype MPI_INT receive type MPI_BYTE
Putting count = 1 of sendtype MPI_INT receive type MPI_INT
Putting count = 1 of sendtype MPI_DOUBLE receive type MPI_DOUBLE
Putting count = 1 of sendtype MPI_FLOAT_INT receive type MPI_FLOAT_INT
Putting count = 1 of sendtype dup of MPI_INT receive type dup of MPI_INT
Putting count = 1 of sendtype int-vector receive type MPI_INT
Putting count = 1 of sendtype int-indexed(4-int) receive type MPI_INT
Putting count = 1 of sendtype int-indexed(2 blocks) receive type MPI_INT
Putting count = 1 of sendtype MPI_INT receive type recv-int-indexed(4-int)
Putting count = 1 of sendtype MPI_SHORT receive type MPI_SHORT
Putting count = 1 of sendtype MPI_LONG receive type MPI_LONG
Putting count = 1 of sendtype MPI_CHAR receive type MPI_CHAR
Putting count = 1 of sendtype MPI_UINT64_T receive type MPI_UINT64_T
Putting count = 1 of sendtype MPI_FLOAT receive type MPI_FLOAT
Putting count = 1 of sendtype MPI_INT receive type MPI_BYTE
Putting count = 2 of sendtype MPI_INT receive type MPI_INT
Putting count = 2 of sendtype MPI_DOUBLE receive type MPI_DOUBLE
Putting count = 2 of sendtype MPI_FLOAT_INT receive type MPI_FLOAT_INT
Putting count = 2 of sendtype dup of MPI_INT receive type dup of MPI_INT
Putting count = 2 of sendtype int-vector receive type MPI_INT
Putting count = 2 of sendtype int-indexed(4-int) receive type MPI_INT
Putting count = 2 of sendtype int-indexed(2 blocks) receive type MPI_INT
Putting count = 2 of sendtype MPI_INT receive type recv-int-indexed(4-int)
Putting count = 2 of sendtype MPI_SHORT receive type MPI_SHORT
Putting count = 2 of sendtype MPI_LONG receive type MPI_LONG
Putting count = 2 of sendtype MPI_CHAR receive type MPI_CHAR
Putting count = 2 of sendtype MPI_UINT64_T receive type MPI_UINT64_T
Putting count = 2 of sendtype MPI_FLOAT receive type MPI_FLOAT
Putting count = 2 of sendtype MPI_INT receive type MPI_BYTE
Putting count = 4 of sendtype MPI_INT receive type MPI_INT
Putting count = 4 of sendtype MPI_DOUBLE receive type MPI_DOUBLE
Putting count = 4 of sendtype MPI_FLOAT_INT receive type MPI_FLOAT_INT
Putting count = 4 of sendtype dup of MPI_INT receive type dup of MPI_INT
Putting count = 4 of sendtype int-vector receive type MPI_INT
Putting count = 4 of sendtype int-indexed(4-int) receive type MPI_INT
Putting count = 4 of sendtype int-indexed(2 blocks) receive type MPI_INT
Putting count = 4 of sendtype MPI_INT receive type recv-int-indexed(4-int)
Putting count = 4 of sendtype MPI_SHORT receive type MPI_SHORT
Putting count = 4 of sendtype MPI_LONG receive type MPI_LONG
Putting count = 4 of sendtype MPI_CHAR receive type MPI_CHAR
Putting count = 4 of sendtype MPI_UINT64_T receive type MPI_UINT64_T
Putting count = 4 of sendtype MPI_FLOAT receive type MPI_FLOAT
Putting count = 4 of sendtype MPI_INT receive type MPI_BYTE
Putting count = 8 of sendtype MPI_INT receive type MPI_INT
Putting count = 8 of sendtype MPI_DOUBLE receive type MPI_DOUBLE
Putting count = 8 of sendtype MPI_FLOAT_INT receive type MPI_FLOAT_INT
Putting count = 8 of sendtype dup of MPI_INT receive type dup of MPI_INT
Putting count = 8 of sendtype int-vector receive type MPI_INT
Putting count = 8 of sendtype int-indexed(4-int) receive type MPI_INT
Putting count = 8 of sendtype int-indexed(2 blocks) receive type MPI_INT
Putting count = 8 of sendtype MPI_INT receive type recv-int-indexed(4-int)
Putting count = 8 of sendtype MPI_SHORT receive type MPI_SHORT
Putting count = 8 of sendtype MPI_LONG receive type MPI_LONG
Putting count = 8 of sendtype MPI_CHAR receive type MPI_CHAR
Putting count = 8 of sendtype MPI_UINT64_T receive type MPI_UINT64_T
Putting count = 8 of sendtype MPI_FLOAT receive type MPI_FLOAT
Putting count = 8 of sendtype MPI_INT receive type MPI_BYTE
Putting count = 16 of sendtype MPI_INT receive type MPI_INT
Putting count = 16 of sendtype MPI_DOUBLE receive type MPI_DOUBLE
Putting count = 16 of sendtype MPI_FLOAT_INT receive type MPI_FLOAT_INT
Putting count = 16 of sendtype dup of MPI_INT receive type dup of MPI_INT
Putting count = 16 of sendtype int-vector receive type MPI_INT
Putting count = 16 of sendtype int-indexed(4-int) receive type MPI_INT
Putting count = 16 of sendtype int-indexed(2 blocks) receive type MPI_INT
Putting count = 16 of sendtype MPI_INT receive type recv-int-indexed(4-int)
Putting count = 16 of sendtype MPI_SHORT receive type MPI_SHORT
Putting count = 16 of sendtype MPI_LONG receive type MPI_LONG
Putting count = 16 of sendtype MPI_CHAR receive type MPI_CHAR
Putting count = 16 of sendtype MPI_UINT64_T receive type MPI_UINT64_T
Putting count = 16 of sendtype MPI_FLOAT receive type MPI_FLOAT
Putting count = 16 of sendtype MPI_INT receive type MPI_BYTE
Putting count = 32 of sendtype MPI_INT receive type MPI_INT
Putting count = 32 of sendtype MPI_DOUBLE receive type MPI_DOUBLE
Putting count = 32 of sendtype MPI_FLOAT_INT receive type MPI_FLOAT_INT
Putting count = 32 of sendtype dup of MPI_INT receive type dup of MPI_INT
Putting count = 32 of sendtype int-vector receive type MPI_INT
Putting count = 32 of sendtype int-indexed(4-int) receive type MPI_INT
Putting count = 32 of sendtype int-indexed(2 blocks) receive type MPI_INT
Putting count = 32 of sendtype MPI_INT receive type recv-int-indexed(4-int)
Putting count = 32 of sendtype MPI_SHORT receive type MPI_SHORT
Putting count = 32 of sendtype MPI_LONG receive type MPI_LONG
Putting count = 32 of sendtype MPI_CHAR receive type MPI_CHAR
Putting count = 32 of sendtype MPI_UINT64_T receive type MPI_UINT64_T
Putting count = 32 of sendtype MPI_FLOAT receive type MPI_FLOAT
Putting count = 32 of sendtype MPI_INT receive type MPI_BYTE
Putting count = 64 of sendtype MPI_INT receive type MPI_INT
Putting count = 64 of sendtype MPI_DOUBLE receive type MPI_DOUBLE
Putting count = 64 of sendtype MPI_FLOAT_INT receive type MPI_FLOAT_INT
Putting count = 64 of sendtype dup of MPI_INT receive type dup of MPI_INT
Putting count = 64 of sendtype int-vector receive type MPI_INT
Putting count = 64 of sendtype int-indexed(4-int) receive type MPI_INT
Putting count = 64 of sendtype int-indexed(2 blocks) receive type MPI_INT
Putting count = 64 of sendtype MPI_INT receive type recv-int-indexed(4-int)
Putting count = 64 of sendtype MPI_SHORT receive type MPI_SHORT
Putting count = 64 of sendtype MPI_LONG receive type MPI_LONG
Putting count = 64 of sendtype MPI_CHAR receive type MPI_CHAR
Putting count = 64 of sendtype MPI_UINT64_T receive type MPI_UINT64_T
Putting count = 64 of sendtype MPI_FLOAT receive type MPI_FLOAT
Putting count = 64 of sendtype MPI_INT receive type MPI_BYTE
Putting count = 128 of sendtype MPI_INT receive type MPI_INT
Putting count = 128 of sendtype MPI_DOUBLE receive type MPI_DOUBLE
Putting count = 128 of sendtype MPI_FLOAT_INT receive type MPI_FLOAT_INT
Putting count = 128 of sendtype dup of MPI_INT receive type dup of MPI_INT
Putting count = 128 of sendtype int-vector receive type MPI_INT
Putting count = 128 of sendtype int-indexed(4-int) receive type MPI_INT
Putting count = 128 of sendtype int-indexed(2 blocks) receive type MPI_INT
Putting count = 128 of sendtype MPI_INT receive type recv-int-indexed(4-int)
Putting count = 128 of sendtype MPI_SHORT receive type MPI_SHORT
Putting count = 128 of sendtype MPI_LONG receive type MPI_LONG
Putting count = 128 of sendtype MPI_CHAR receive type MPI_CHAR
Putting count = 128 of sendtype MPI_UINT64_T receive type MPI_UINT64_T
Putting count = 128 of sendtype MPI_FLOAT receive type MPI_FLOAT
Putting count = 128 of sendtype MPI_INT receive type MPI_BYTE
Putting count = 256 of sendtype MPI_INT receive type MPI_INT
Putting count = 256 of sendtype MPI_DOUBLE receive type MPI_DOUBLE
Putting count = 256 of sendtype MPI_FLOAT_INT receive type MPI_FLOAT_INT
Putting count = 256 of sendtype dup of MPI_INT receive type dup of MPI_INT
Putting count = 256 of sendtype int-vector receive type MPI_INT
Putting count = 256 of sendtype int-indexed(4-int) receive type MPI_INT
Putting count = 256 of sendtype int-indexed(2 blocks) receive type MPI_INT
Putting count = 256 of sendtype MPI_INT receive type recv-int-indexed(4-int)
Putting count = 256 of sendtype MPI_SHORT receive type MPI_SHORT
Putting count = 256 of sendtype MPI_LONG receive type MPI_LONG
Putting count = 256 of sendtype MPI_CHAR receive type MPI_CHAR
Putting count = 256 of sendtype MPI_UINT64_T receive type MPI_UINT64_T
Putting count = 256 of sendtype MPI_FLOAT receive type MPI_FLOAT
Putting count = 256 of sendtype MPI_INT receive type MPI_BYTE
Putting count = 512 of sendtype MPI_INT receive type MPI_INT
Putting count = 512 of sendtype MPI_DOUBLE receive type MPI_DOUBLE
Putting count = 512 of sendtype MPI_FLOAT_INT receive type MPI_FLOAT_INT
Putting count = 512 of sendtype dup of MPI_INT receive type dup of MPI_INT
Putting count = 512 of sendtype int-vector receive type MPI_INT
Putting count = 512 of sendtype int-indexed(4-int) receive type MPI_INT
Putting count = 512 of sendtype int-indexed(2 blocks) receive type MPI_INT
Putting count = 512 of sendtype MPI_INT receive type recv-int-indexed(4-int)
Putting count = 512 of sendtype MPI_SHORT receive type MPI_SHORT
Putting count = 512 of sendtype MPI_LONG receive type MPI_LONG
Putting count = 512 of sendtype MPI_CHAR receive type MPI_CHAR
Putting count = 512 of sendtype MPI_UINT64_T receive type MPI_UINT64_T
Putting count = 512 of sendtype MPI_FLOAT receive type MPI_FLOAT
Putting count = 512 of sendtype MPI_INT receive type MPI_BYTE
Putting count = 1024 of sendtype MPI_INT receive type MPI_INT
Putting count = 1024 of sendtype MPI_DOUBLE receive type MPI_DOUBLE
Putting count = 1024 of sendtype MPI_FLOAT_INT receive type MPI_FLOAT_INT
Putting count = 1024 of sendtype dup of MPI_INT receive type dup of MPI_INT
Putting count = 1024 of sendtype int-vector receive type MPI_INT
Putting count = 1024 of sendtype int-indexed(4-int) receive type MPI_INT
Putting count = 1024 of sendtype int-indexed(2 blocks) receive type MPI_INT
Putting count = 1024 of sendtype MPI_INT receive type recv-int-indexed(4-int)
Putting count = 1024 of sendtype MPI_SHORT receive type MPI_SHORT
Putting count = 1024 of sendtype MPI_LONG receive type MPI_LONG
Putting count = 1024 of sendtype MPI_CHAR receive type MPI_CHAR
Putting count = 1024 of sendtype MPI_UINT64_T receive type MPI_UINT64_T
Putting count = 1024 of sendtype MPI_FLOAT receive type MPI_FLOAT
Putting count = 1024 of sendtype MPI_INT receive type MPI_BYTE
Putting count = 2048 of sendtype MPI_INT receive type MPI_INT
Putting count = 2048 of sendtype MPI_DOUBLE receive type MPI_DOUBLE
Putting count = 2048 of sendtype MPI_FLOAT_INT receive type MPI_FLOAT_INT
Putting count = 2048 of sendtype dup of MPI_INT receive type dup of MPI_INT
Putting count = 2048 of sendtype int-vector receive type MPI_INT
Putting count = 2048 of sendtype int-indexed(4-int) receive type MPI_INT
Putting count = 2048 of sendtype int-indexed(2 blocks) receive type MPI_INT
Putting count = 2048 of sendtype MPI_INT receive type recv-int-indexed(4-int)
Putting count = 2048 of sendtype MPI_SHORT receive type MPI_SHORT
Putting count = 2048 of sendtype MPI_LONG receive type MPI_LONG
Putting count = 2048 of sendtype MPI_CHAR receive type MPI_CHAR
Putting count = 2048 of sendtype MPI_UINT64_T receive type MPI_UINT64_T
Putting count = 2048 of sendtype MPI_FLOAT receive type MPI_FLOAT
Putting count = 2048 of sendtype MPI_INT receive type MPI_BYTE
Putting count = 4096 of sendtype MPI_INT receive type MPI_INT
Putting count = 4096 of sendtype MPI_DOUBLE receive type MPI_DOUBLE
Putting count = 4096 of sendtype MPI_FLOAT_INT receive type MPI_FLOAT_INT
Putting count = 4096 of sendtype dup of MPI_INT receive type dup of MPI_INT
Putting count = 4096 of sendtype int-vector receive type MPI_INT
Putting count = 4096 of sendtype int-indexed(4-int) receive type MPI_INT
Putting count = 4096 of sendtype int-indexed(2 blocks) receive type MPI_INT
Putting count = 4096 of sendtype MPI_INT receive type recv-int-indexed(4-int)
Putting count = 4096 of sendtype MPI_SHORT receive type MPI_SHORT
Putting count = 4096 of sendtype MPI_LONG receive type MPI_LONG
Putting count = 4096 of sendtype MPI_CHAR receive type MPI_CHAR
Putting count = 4096 of sendtype MPI_UINT64_T receive type MPI_UINT64_T
Putting count = 4096 of sendtype MPI_FLOAT receive type MPI_FLOAT
Putting count = 4096 of sendtype MPI_INT receive type MPI_BYTE
Putting count = 8192 of sendtype MPI_INT receive type MPI_INT
Putting count = 8192 of sendtype MPI_DOUBLE receive type MPI_DOUBLE
Putting count = 8192 of sendtype MPI_FLOAT_INT receive type MPI_FLOAT_INT
Putting count = 8192 of sendtype dup of MPI_INT receive type dup of MPI_INT
Putting count = 8192 of sendtype int-vector receive type MPI_INT
Putting count = 8192 of sendtype int-indexed(4-int) receive type MPI_INT
Putting count = 8192 of sendtype int-indexed(2 blocks) receive type MPI_INT
Putting count = 8192 of sendtype MPI_INT receive type recv-int-indexed(4-int)
Putting count = 8192 of sendtype MPI_SHORT receive type MPI_SHORT
Putting count = 8192 of sendtype MPI_LONG receive type MPI_LONG
Putting count = 8192 of sendtype MPI_CHAR receive type MPI_CHAR
Putting count = 8192 of sendtype MPI_UINT64_T receive type MPI_UINT64_T
Putting count = 8192 of sendtype MPI_FLOAT receive type MPI_FLOAT
Putting count = 8192 of sendtype MPI_INT receive type MPI_BYTE
Putting count = 16384 of sendtype MPI_INT receive type MPI_INT
Putting count = 16384 of sendtype MPI_DOUBLE receive type MPI_DOUBLE
Putting count = 16384 of sendtype MPI_FLOAT_INT receive type MPI_FLOAT_INT
Putting count = 16384 of sendtype dup of MPI_INT receive type dup of MPI_INT
Putting count = 16384 of sendtype int-vector receive type MPI_INT
Putting count = 16384 of sendtype int-indexed(4-int) receive type MPI_INT
Putting count = 16384 of sendtype int-indexed(2 blocks) receive type MPI_INT
Putting count = 16384 of sendtype MPI_INT receive type recv-int-indexed(4-int)
Putting count = 16384 of sendtype MPI_SHORT receive type MPI_SHORT
Putting count = 16384 of sendtype MPI_LONG receive type MPI_LONG
Putting count = 16384 of sendtype MPI_CHAR receive type MPI_CHAR
Putting count = 16384 of sendtype MPI_UINT64_T receive type MPI_UINT64_T
Putting count = 16384 of sendtype MPI_FLOAT receive type MPI_FLOAT
Putting count = 16384 of sendtype MPI_INT receive type MPI_BYTE
Putting count = 32768 of sendtype MPI_INT receive type MPI_INT
Putting count = 32768 of sendtype MPI_DOUBLE receive type MPI_DOUBLE
Putting count = 32768 of sendtype MPI_FLOAT_INT receive type MPI_FLOAT_INT
Putting count = 32768 of sendtype dup of MPI_INT receive type dup of MPI_INT
Putting count = 32768 of sendtype int-vector receive type MPI_INT
Putting count = 32768 of sendtype int-indexed(4-int) receive type MPI_INT
Putting count = 32768 of sendtype int-indexed(2 blocks) receive type MPI_INT
Putting count = 32768 of sendtype MPI_INT receive type recv-int-indexed(4-int)
Putting count = 32768 of sendtype MPI_SHORT receive type MPI_SHORT
Putting count = 32768 of sendtype MPI_LONG receive type MPI_LONG
Putting count = 32768 of sendtype MPI_CHAR receive type MPI_CHAR
Putting count = 32768 of sendtype MPI_UINT64_T receive type MPI_UINT64_T
Putting count = 32768 of sendtype MPI_FLOAT receive type MPI_FLOAT
Putting count = 32768 of sendtype MPI_INT receive type MPI_BYTE
Putting count = 1 of sendtype MPI_INT receive type MPI_INT
Putting count = 1 of sendtype MPI_DOUBLE receive type MPI_DOUBLE
Putting count = 1 of sendtype MPI_FLOAT_INT receive type MPI_FLOAT_INT
Putting count = 1 of sendtype dup of MPI_INT receive type dup of MPI_INT
Putting count = 1 of sendtype int-vector receive type MPI_INT
Putting count = 1 of sendtype int-indexed(4-int) receive type MPI_INT
Putting count = 1 of sendtype int-indexed(2 blocks) receive type MPI_INT
Putting count = 1 of sendtype MPI_INT receive type recv-int-indexed(4-int)
Putting count = 1 of sendtype MPI_SHORT receive type MPI_SHORT
Putting count = 1 of sendtype MPI_LONG receive type MPI_LONG
Putting count = 1 of sendtype MPI_CHAR receive type MPI_CHAR
Putting count = 1 of sendtype MPI_UINT64_T receive type MPI_UINT64_T
Putting count = 1 of sendtype MPI_FLOAT receive type MPI_FLOAT
Putting count = 1 of sendtype MPI_INT receive type MPI_BYTE
Putting count = 2 of sendtype MPI_INT receive type MPI_INT
Putting count = 2 of sendtype MPI_DOUBLE receive type MPI_DOUBLE
Putting count = 2 of sendtype MPI_FLOAT_INT receive type MPI_FLOAT_INT
Putting count = 2 of sendtype dup of MPI_INT receive type dup of MPI_INT
Putting count = 2 of sendtype int-vector receive type MPI_INT
Putting count = 2 of sendtype int-indexed(4-int) receive type MPI_INT
Putting count = 2 of sendtype int-indexed(2 blocks) receive type MPI_INT
Putting count = 2 of sendtype MPI_INT receive type recv-int-indexed(4-int)
Putting count = 2 of sendtype MPI_SHORT receive type MPI_SHORT
Putting count = 2 of sendtype MPI_LONG receive type MPI_LONG
Putting count = 2 of sendtype MPI_CHAR receive type MPI_CHAR
Putting count = 2 of sendtype MPI_UINT64_T receive type MPI_UINT64_T
Putting count = 2 of sendtype MPI_FLOAT receive type MPI_FLOAT
Putting count = 2 of sendtype MPI_INT receive type MPI_BYTE
Putting count = 4 of sendtype MPI_INT receive type MPI_INT
Putting count = 4 of sendtype MPI_DOUBLE receive type MPI_DOUBLE
Putting count = 4 of sendtype MPI_FLOAT_INT receive type MPI_FLOAT_INT
Putting count = 4 of sendtype dup of MPI_INT receive type dup of MPI_INT
Putting count = 4 of sendtype int-vector receive type MPI_INT
Putting count = 4 of sendtype int-indexed(4-int) receive type MPI_INT
Putting count = 4 of sendtype int-indexed(2 blocks) receive type MPI_INT
Putting count = 4 of sendtype MPI_INT receive type recv-int-indexed(4-int)
Putting count = 4 of sendtype MPI_SHORT receive type MPI_SHORT
Putting count = 4 of sendtype MPI_LONG receive type MPI_LONG
Putting count = 4 of sendtype MPI_CHAR receive type MPI_CHAR
Putting count = 4 of sendtype MPI_UINT64_T receive type MPI_UINT64_T
Putting count = 4 of sendtype MPI_FLOAT receive type MPI_FLOAT
Putting count = 4 of sendtype MPI_INT receive type MPI_BYTE
Putting count = 8 of sendtype MPI_INT receive type MPI_INT
Putting count = 8 of sendtype MPI_DOUBLE receive type MPI_DOUBLE
Putting count = 8 of sendtype MPI_FLOAT_INT receive type MPI_FLOAT_INT
Putting count = 8 of sendtype dup of MPI_INT receive type dup of MPI_INT
Putting count = 8 of sendtype int-vector receive type MPI_INT
Putting count = 8 of sendtype int-indexed(4-int) receive type MPI_INT
Putting count = 8 of sendtype int-indexed(2 blocks) receive type MPI_INT
Putting count = 8 of sendtype MPI_INT receive type recv-int-indexed(4-int)
Putting count = 8 of sendtype MPI_SHORT receive type MPI_SHORT
Putting count = 8 of sendtype MPI_LONG receive type MPI_LONG
Putting count = 8 of sendtype MPI_CHAR receive type MPI_CHAR
Putting count = 8 of sendtype MPI_UINT64_T receive type MPI_UINT64_T
Putting count = 8 of sendtype MPI_FLOAT receive type MPI_FLOAT
Putting count = 8 of sendtype MPI_INT receive type MPI_BYTE
Putting count = 16 of sendtype MPI_INT receive type MPI_INT
Putting count = 16 of sendtype MPI_DOUBLE receive type MPI_DOUBLE
Putting count = 16 of sendtype MPI_FLOAT_INT receive type MPI_FLOAT_INT
Putting count = 16 of sendtype dup of MPI_INT receive type dup of MPI_INT
Putting count = 16 of sendtype int-vector receive type MPI_INT
Putting count = 16 of sendtype int-indexed(4-int) receive type MPI_INT
Putting count = 16 of sendtype int-indexed(2 blocks) receive type MPI_INT
Putting count = 16 of sendtype MPI_INT receive type recv-int-indexed(4-int)
Putting count = 16 of sendtype MPI_SHORT receive type MPI_SHORT
Putting count = 16 of sendtype MPI_LONG receive type MPI_LONG
Putting count = 16 of sendtype MPI_CHAR receive type MPI_CHAR
Putting count = 16 of sendtype MPI_UINT64_T receive type MPI_UINT64_T
Putting count = 16 of sendtype MPI_FLOAT receive type MPI_FLOAT
Putting count = 16 of sendtype MPI_INT receive type MPI_BYTE
Putting count = 32 of sendtype MPI_INT receive type MPI_INT
Putting count = 32 of sendtype MPI_DOUBLE receive type MPI_DOUBLE
Putting count = 32 of sendtype MPI_FLOAT_INT receive type MPI_FLOAT_INT
Putting count = 32 of sendtype dup of MPI_INT receive type dup of MPI_INT
Putting count = 32 of sendtype int-vector receive type MPI_INT
Putting count = 32 of sendtype int-indexed(4-int) receive type MPI_INT
Putting count = 32 of sendtype int-indexed(2 blocks) receive type MPI_INT
Putting count = 32 of sendtype MPI_INT receive type recv-int-indexed(4-int)
Putting count = 32 of sendtype MPI_SHORT receive type MPI_SHORT
Putting count = 32 of sendtype MPI_LONG receive type MPI_LONG
Putting count = 32 of sendtype MPI_CHAR receive type MPI_CHAR
Putting count = 32 of sendtype MPI_UINT64_T receive type MPI_UINT64_T
Putting count = 32 of sendtype MPI_FLOAT receive type MPI_FLOAT
Putting count = 32 of sendtype MPI_INT receive type MPI_BYTE
Putting count = 64 of sendtype MPI_INT receive type MPI_INT
Putting count = 64 of sendtype MPI_DOUBLE receive type MPI_DOUBLE
Putting count = 64 of sendtype MPI_FLOAT_INT receive type MPI_FLOAT_INT
Putting count = 64 of sendtype dup of MPI_INT receive type dup of MPI_INT
Putting count = 64 of sendtype int-vector receive type MPI_INT
Putting count = 64 of sendtype int-indexed(4-int) receive type MPI_INT
Putting count = 64 of sendtype int-indexed(2 blocks) receive type MPI_INT
Putting count = 64 of sendtype MPI_INT receive type recv-int-indexed(4-int)
Putting count = 64 of sendtype MPI_SHORT receive type MPI_SHORT
Putting count = 64 of sendtype MPI_LONG receive type MPI_LONG
Putting count = 64 of sendtype MPI_CHAR receive type MPI_CHAR
Putting count = 64 of sendtype MPI_UINT64_T receive type MPI_UINT64_T
Putting count = 64 of sendtype MPI_FLOAT receive type MPI_FLOAT
Putting count = 64 of sendtype MPI_INT receive type MPI_BYTE
Putting count = 128 of sendtype MPI_INT receive type MPI_INT
Putting count = 128 of sendtype MPI_DOUBLE receive type MPI_DOUBLE
Putting count = 128 of sendtype MPI_FLOAT_INT receive type MPI_FLOAT_INT
Putting count = 128 of sendtype dup of MPI_INT receive type dup of MPI_INT
Putting count = 128 of sendtype int-vector receive type MPI_INT
Putting count = 128 of sendtype int-indexed(4-int) receive type MPI_INT
Putting count = 128 of sendtype int-indexed(2 blocks) receive type MPI_INT
Putting count = 128 of sendtype MPI_INT receive type recv-int-indexed(4-int)
Putting count = 128 of sendtype MPI_SHORT receive type MPI_SHORT
Putting count = 128 of sendtype MPI_LONG receive type MPI_LONG
Putting count = 128 of sendtype MPI_CHAR receive type MPI_CHAR
Putting count = 128 of sendtype MPI_UINT64_T receive type MPI_UINT64_T
Putting count = 128 of sendtype MPI_FLOAT receive type MPI_FLOAT
Putting count = 128 of sendtype MPI_INT receive type MPI_BYTE
Putting count = 256 of sendtype MPI_INT receive type MPI_INT
Putting count = 256 of sendtype MPI_DOUBLE receive type MPI_DOUBLE
Putting count = 256 of sendtype MPI_FLOAT_INT receive type MPI_FLOAT_INT
Putting count = 256 of sendtype dup of MPI_INT receive type dup of MPI_INT
Putting count = 256 of sendtype int-vector receive type MPI_INT
Putting count = 256 of sendtype int-indexed(4-int) receive type MPI_INT
Putting count = 256 of sendtype int-indexed(2 blocks) receive type MPI_INT
Putting count = 256 of sendtype MPI_INT receive type recv-int-indexed(4-int)
Putting count = 256 of sendtype MPI_SHORT receive type MPI_SHORT
Putting count = 256 of sendtype MPI_LONG receive type MPI_LONG
Putting count = 256 of sendtype MPI_CHAR receive type MPI_CHAR
Putting count = 256 of sendtype MPI_UINT64_T receive type MPI_UINT64_T
Putting count = 256 of sendtype MPI_FLOAT receive type MPI_FLOAT
Putting count = 256 of sendtype MPI_INT receive type MPI_BYTE
Putting count = 512 of sendtype MPI_INT receive type MPI_INT
Putting count = 512 of sendtype MPI_DOUBLE receive type MPI_DOUBLE
Putting count = 512 of sendtype MPI_FLOAT_INT receive type MPI_FLOAT_INT
Putting count = 512 of sendtype dup of MPI_INT receive type dup of MPI_INT
Putting count = 512 of sendtype int-vector receive type MPI_INT
Putting count = 512 of sendtype int-indexed(4-int) receive type MPI_INT
Putting count = 512 of sendtype int-indexed(2 blocks) receive type MPI_INT
Putting count = 512 of sendtype MPI_INT receive type recv-int-indexed(4-int)
Putting count = 512 of sendtype MPI_SHORT receive type MPI_SHORT
Putting count = 512 of sendtype MPI_LONG receive type MPI_LONG
Putting count = 512 of sendtype MPI_CHAR receive type MPI_CHAR
Putting count = 512 of sendtype MPI_UINT64_T receive type MPI_UINT64_T
Putting count = 512 of sendtype MPI_FLOAT receive type MPI_FLOAT
Putting count = 512 of sendtype MPI_INT receive type MPI_BYTE
Putting count = 1024 of sendtype MPI_INT receive type MPI_INT
Putting count = 1024 of sendtype MPI_DOUBLE receive type MPI_DOUBLE
Putting count = 1024 of sendtype MPI_FLOAT_INT receive type MPI_FLOAT_INT
Putting count = 1024 of sendtype dup of MPI_INT receive type dup of MPI_INT
Putting count = 1024 of sendtype int-vector receive type MPI_INT
Putting count = 1024 of sendtype int-indexed(4-int) receive type MPI_INT
Putting count = 1024 of sendtype int-indexed(2 blocks) receive type MPI_INT
Putting count = 1024 of sendtype MPI_INT receive type recv-int-indexed(4-int)
Putting count = 1024 of sendtype MPI_SHORT receive type MPI_SHORT
Putting count = 1024 of sendtype MPI_LONG receive type MPI_LONG
Putting count = 1024 of sendtype MPI_CHAR receive type MPI_CHAR
Putting count = 1024 of sendtype MPI_UINT64_T receive type MPI_UINT64_T
Putting count = 1024 of sendtype MPI_FLOAT receive type MPI_FLOAT
Putting count = 1024 of sendtype MPI_INT receive type MPI_BYTE
Putting count = 2048 of sendtype MPI_INT receive type MPI_INT
Putting count = 2048 of sendtype MPI_DOUBLE receive type MPI_DOUBLE
Putting count = 2048 of sendtype MPI_FLOAT_INT receive type MPI_FLOAT_INT
Putting count = 2048 of sendtype dup of MPI_INT receive type dup of MPI_INT
Putting count = 2048 of sendtype int-vector receive type MPI_INT
Putting count = 2048 of sendtype int-indexed(4-int) receive type MPI_INT
Putting count = 2048 of sendtype int-indexed(2 blocks) receive type MPI_INT
Putting count = 2048 of sendtype MPI_INT receive type recv-int-indexed(4-int)
Putting count = 2048 of sendtype MPI_SHORT receive type MPI_SHORT
Putting count = 2048 of sendtype MPI_LONG receive type MPI_LONG
Putting count = 2048 of sendtype MPI_CHAR receive type MPI_CHAR
Putting count = 2048 of sendtype MPI_UINT64_T receive type MPI_UINT64_T
Putting count = 2048 of sendtype MPI_FLOAT receive type MPI_FLOAT
Putting count = 2048 of sendtype MPI_INT receive type MPI_BYTE
Putting count = 4096 of sendtype MPI_INT receive type MPI_INT
Putting count = 4096 of sendtype MPI_DOUBLE receive type MPI_DOUBLE
Putting count = 4096 of sendtype MPI_FLOAT_INT receive type MPI_FLOAT_INT
Putting count = 4096 of sendtype dup of MPI_INT receive type dup of MPI_INT
Putting count = 4096 of sendtype int-vector receive type MPI_INT
Putting count = 4096 of sendtype int-indexed(4-int) receive type MPI_INT
Putting count = 4096 of sendtype int-indexed(2 blocks) receive type MPI_INT
Putting count = 4096 of sendtype MPI_INT receive type recv-int-indexed(4-int)
Putting count = 4096 of sendtype MPI_SHORT receive type MPI_SHORT
Putting count = 4096 of sendtype MPI_LONG receive type MPI_LONG
Putting count = 4096 of sendtype MPI_CHAR receive type MPI_CHAR
Putting count = 4096 of sendtype MPI_UINT64_T receive type MPI_UINT64_T
Putting count = 4096 of sendtype MPI_FLOAT receive type MPI_FLOAT
Putting count = 4096 of sendtype MPI_INT receive type MPI_BYTE
Putting count = 8192 of sendtype MPI_INT receive type MPI_INT
Putting count = 8192 of sendtype MPI_DOUBLE receive type MPI_DOUBLE
Putting count = 8192 of sendtype MPI_FLOAT_INT receive type MPI_FLOAT_INT
Putting count = 8192 of sendtype dup of MPI_INT receive type dup of MPI_INT
Putting count = 8192 of sendtype int-vector receive type MPI_INT
Putting count = 8192 of sendtype int-indexed(4-int) receive type MPI_INT
Putting count = 8192 of sendtype int-indexed(2 blocks) receive type MPI_INT
Putting count = 8192 of sendtype MPI_INT receive type recv-int-indexed(4-int)
Putting count = 8192 of sendtype MPI_SHORT receive type MPI_SHORT
Putting count = 8192 of sendtype MPI_LONG receive type MPI_LONG
Putting count = 8192 of sendtype MPI_CHAR receive type MPI_CHAR
Putting count = 8192 of sendtype MPI_UINT64_T receive type MPI_UINT64_T
Putting count = 8192 of sendtype MPI_FLOAT receive type MPI_FLOAT
Putting count = 8192 of sendtype MPI_INT receive type MPI_BYTE
Putting count = 16384 of sendtype MPI_INT receive type MPI_INT
Putting count = 16384 of sendtype MPI_DOUBLE receive type MPI_DOUBLE
Putting count = 16384 of sendtype MPI_FLOAT_INT receive type MPI_FLOAT_INT
Putting count = 16384 of sendtype dup of MPI_INT receive type dup of MPI_INT
Putting count = 16384 of sendtype int-vector receive type MPI_INT
Putting count = 16384 of sendtype int-indexed(4-int) receive type MPI_INT
Putting count = 16384 of sendtype int-indexed(2 blocks) receive type MPI_INT
Putting count = 16384 of sendtype MPI_INT receive type recv-int-indexed(4-int)
Putting count = 16384 of sendtype MPI_SHORT receive type MPI_SHORT
Putting count = 16384 of sendtype MPI_LONG receive type MPI_LONG
Putting count = 16384 of sendtype MPI_CHAR receive type MPI_CHAR
Putting count = 16384 of sendtype MPI_UINT64_T receive type MPI_UINT64_T
Putting count = 16384 of sendtype MPI_FLOAT receive type MPI_FLOAT
Putting count = 16384 of sendtype MPI_INT receive type MPI_BYTE
Putting count = 32768 of sendtype MPI_INT receive type MPI_INT
Putting count = 32768 of sendtype MPI_DOUBLE receive type MPI_DOUBLE
Putting count = 32768 of sendtype MPI_FLOAT_INT receive type MPI_FLOAT_INT
Putting count = 32768 of sendtype dup of MPI_INT receive type dup of MPI_INT
Putting count = 32768 of sendtype int-vector receive type MPI_INT
Putting count = 32768 of sendtype int-indexed(4-int) receive type MPI_INT
Putting count = 32768 of sendtype int-indexed(2 blocks) receive type MPI_INT
Putting count = 32768 of sendtype MPI_INT receive type recv-int-indexed(4-int)
Putting count = 32768 of sendtype MPI_SHORT receive type MPI_SHORT
Putting count = 32768 of sendtype MPI_LONG receive type MPI_LONG
Putting count = 32768 of sendtype MPI_CHAR receive type MPI_CHAR
Putting count = 32768 of sendtype MPI_UINT64_T receive type MPI_UINT64_T
Putting count = 32768 of sendtype MPI_FLOAT receive type MPI_FLOAT
Putting count = 32768 of sendtype MPI_INT receive type MPI_BYTE
Putting count = 1 of sendtype MPI_INT receive type MPI_INT
Putting count = 1 of sendtype MPI_DOUBLE receive type MPI_DOUBLE
Putting count = 1 of sendtype MPI_FLOAT_INT receive type MPI_FLOAT_INT
Putting count = 1 of sendtype dup of MPI_INT receive type dup of MPI_INT
Putting count = 1 of sendtype int-vector receive type MPI_INT
Putting count = 1 of sendtype int-indexed(4-int) receive type MPI_INT
Putting count = 1 of sendtype int-indexed(2 blocks) receive type MPI_INT
Putting count = 1 of sendtype MPI_INT receive type recv-int-indexed(4-int)
Putting count = 1 of sendtype MPI_SHORT receive type MPI_SHORT
Putting count = 1 of sendtype MPI_LONG receive type MPI_LONG
Putting count = 1 of sendtype MPI_CHAR receive type MPI_CHAR
Putting count = 1 of sendtype MPI_UINT64_T receive type MPI_UINT64_T
Putting count = 1 of sendtype MPI_FLOAT receive type MPI_FLOAT
Putting count = 1 of sendtype MPI_INT receive type MPI_BYTE
Putting count = 2 of sendtype MPI_INT receive type MPI_INT
Putting count = 2 of sendtype MPI_DOUBLE receive type MPI_DOUBLE
Putting count = 2 of sendtype MPI_FLOAT_INT receive type MPI_FLOAT_INT
Putting count = 2 of sendtype dup of MPI_INT receive type dup of MPI_INT
Putting count = 2 of sendtype int-vector receive type MPI_INT
Putting count = 2 of sendtype int-indexed(4-int) receive type MPI_INT
Putting count = 2 of sendtype int-indexed(2 blocks) receive type MPI_INT
Putting count = 2 of sendtype MPI_INT receive type recv-int-indexed(4-int)
Putting count = 2 of sendtype MPI_SHORT receive type MPI_SHORT
Putting count = 2 of sendtype MPI_LONG receive type MPI_LONG
Putting count = 2 of sendtype MPI_CHAR receive type MPI_CHAR
Putting count = 2 of sendtype MPI_UINT64_T receive type MPI_UINT64_T
Putting count = 2 of sendtype MPI_FLOAT receive type MPI_FLOAT
Putting count = 2 of sendtype MPI_INT receive type MPI_BYTE
Putting count = 4 of sendtype MPI_INT receive type MPI_INT
Putting count = 4 of sendtype MPI_DOUBLE receive type MPI_DOUBLE
Putting count = 4 of sendtype MPI_FLOAT_INT receive type MPI_FLOAT_INT
Putting count = 4 of sendtype dup of MPI_INT receive type dup of MPI_INT
Putting count = 4 of sendtype int-vector receive type MPI_INT
Putting count = 4 of sendtype int-indexed(4-int) receive type MPI_INT
Putting count = 4 of sendtype int-indexed(2 blocks) receive type MPI_INT
Putting count = 4 of sendtype MPI_INT receive type recv-int-indexed(4-int)
Putting count = 4 of sendtype MPI_SHORT receive type MPI_SHORT
Putting count = 4 of sendtype MPI_LONG receive type MPI_LONG
Putting count = 4 of sendtype MPI_CHAR receive type MPI_CHAR
Putting count = 4 of sendtype MPI_UINT64_T receive type MPI_UINT64_T
Putting count = 4 of sendtype MPI_FLOAT receive type MPI_FLOAT
Putting count = 4 of sendtype MPI_INT receive type MPI_BYTE
Putting count = 8 of sendtype MPI_INT receive type MPI_INT
Putting count = 8 of sendtype MPI_DOUBLE receive type MPI_DOUBLE
Putting count = 8 of sendtype MPI_FLOAT_INT receive type MPI_FLOAT_INT
Putting count = 8 of sendtype dup of MPI_INT receive type dup of MPI_INT
Putting count = 8 of sendtype int-vector receive type MPI_INT
Putting count = 8 of sendtype int-indexed(4-int) receive type MPI_INT
Putting count = 8 of sendtype int-indexed(2 blocks) receive type MPI_INT
Putting count = 8 of sendtype MPI_INT receive type recv-int-indexed(4-int)
Putting count = 8 of sendtype MPI_SHORT receive type MPI_SHORT
Putting count = 8 of sendtype MPI_LONG receive type MPI_LONG
Putting count = 8 of sendtype MPI_CHAR receive type MPI_CHAR
Putting count = 8 of sendtype MPI_UINT64_T receive type MPI_UINT64_T
Putting count = 8 of sendtype MPI_FLOAT receive type MPI_FLOAT
Putting count = 8 of sendtype MPI_INT receive type MPI_BYTE
Putting count = 16 of sendtype MPI_INT receive type MPI_INT
Putting count = 16 of sendtype MPI_DOUBLE receive type MPI_DOUBLE
Putting count = 16 of sendtype MPI_FLOAT_INT receive type MPI_FLOAT_INT
Putting count = 16 of sendtype dup of MPI_INT receive type dup of MPI_INT
Putting count = 16 of sendtype int-vector receive type MPI_INT
Putting count = 16 of sendtype int-indexed(4-int) receive type MPI_INT
Putting count = 16 of sendtype int-indexed(2 blocks) receive type MPI_INT
Putting count = 16 of sendtype MPI_INT receive type recv-int-indexed(4-int)
Putting count = 16 of sendtype MPI_SHORT receive type MPI_SHORT
Putting count = 16 of sendtype MPI_LONG receive type MPI_LONG
Putting count = 16 of sendtype MPI_CHAR receive type MPI_CHAR
Putting count = 16 of sendtype MPI_UINT64_T receive type MPI_UINT64_T
Putting count = 16 of sendtype MPI_FLOAT receive type MPI_FLOAT
Putting count = 16 of sendtype MPI_INT receive type MPI_BYTE
Putting count = 32 of sendtype MPI_INT receive type MPI_INT
Putting count = 32 of sendtype MPI_DOUBLE receive type MPI_DOUBLE
Putting count = 32 of sendtype MPI_FLOAT_INT receive type MPI_FLOAT_INT
Putting count = 32 of sendtype dup of MPI_INT receive type dup of MPI_INT
Putting count = 32 of sendtype int-vector receive type MPI_INT
Putting count = 32 of sendtype int-indexed(4-int) receive type MPI_INT
Putting count = 32 of sendtype int-indexed(2 blocks) receive type MPI_INT
Putting count = 32 of sendtype MPI_INT receive type recv-int-indexed(4-int)
Putting count = 32 of sendtype MPI_SHORT receive type MPI_SHORT
Putting count = 32 of sendtype MPI_LONG receive type MPI_LONG
Putting count = 32 of sendtype MPI_CHAR receive type MPI_CHAR
Putting count = 32 of sendtype MPI_UINT64_T receive type MPI_UINT64_T
Putting count = 32 of sendtype MPI_FLOAT receive type MPI_FLOAT
Putting count = 32 of sendtype MPI_INT receive type MPI_BYTE
Putting count = 64 of sendtype MPI_INT receive type MPI_INT
Putting count = 64 of sendtype MPI_DOUBLE receive type MPI_DOUBLE
Putting count = 64 of sendtype MPI_FLOAT_INT receive type MPI_FLOAT_INT
Putting count = 64 of sendtype dup of MPI_INT receive type dup of MPI_INT
Putting count = 64 of sendtype int-vector receive type MPI_INT
Putting count = 64 of sendtype int-indexed(4-int) receive type MPI_INT
Putting count = 64 of sendtype int-indexed(2 blocks) receive type MPI_INT
Putting count = 64 of sendtype MPI_INT receive type recv-int-indexed(4-int)
Putting count = 64 of sendtype MPI_SHORT receive type MPI_SHORT
Putting count = 64 of sendtype MPI_LONG receive type MPI_LONG
Putting count = 64 of sendtype MPI_CHAR receive type MPI_CHAR
Putting count = 64 of sendtype MPI_UINT64_T receive type MPI_UINT64_T
Putting count = 64 of sendtype MPI_FLOAT receive type MPI_FLOAT
Putting count = 64 of sendtype MPI_INT receive type MPI_BYTE
Putting count = 128 of sendtype MPI_INT receive type MPI_INT
Putting count = 128 of sendtype MPI_DOUBLE receive type MPI_DOUBLE
Putting count = 128 of sendtype MPI_FLOAT_INT receive type MPI_FLOAT_INT
Putting count = 128 of sendtype dup of MPI_INT receive type dup of MPI_INT
Putting count = 128 of sendtype int-vector receive type MPI_INT
Putting count = 128 of sendtype int-indexed(4-int) receive type MPI_INT
Putting count = 128 of sendtype int-indexed(2 blocks) receive type MPI_INT
Putting count = 128 of sendtype MPI_INT receive type recv-int-indexed(4-int)
Putting count = 128 of sendtype MPI_SHORT receive type MPI_SHORT
Putting count = 128 of sendtype MPI_LONG receive type MPI_LONG
Putting count = 128 of sendtype MPI_CHAR receive type MPI_CHAR
Putting count = 128 of sendtype MPI_UINT64_T receive type MPI_UINT64_T
Putting count = 128 of sendtype MPI_FLOAT receive type MPI_FLOAT
Putting count = 128 of sendtype MPI_INT receive type MPI_BYTE
Putting count = 256 of sendtype MPI_INT receive type MPI_INT
Putting count = 256 of sendtype MPI_DOUBLE receive type MPI_DOUBLE
Putting count = 256 of sendtype MPI_FLOAT_INT receive type MPI_FLOAT_INT
Putting count = 256 of sendtype dup of MPI_INT receive type dup of MPI_INT
Putting count = 256 of sendtype int-vector receive type MPI_INT
Putting count = 256 of sendtype int-indexed(4-int) receive type MPI_INT
Putting count = 256 of sendtype int-indexed(2 blocks) receive type MPI_INT
Putting count = 256 of sendtype MPI_INT receive type recv-int-indexed(4-int)
Putting count = 256 of sendtype MPI_SHORT receive type MPI_SHORT
Putting count = 256 of sendtype MPI_LONG receive type MPI_LONG
Putting count = 256 of sendtype MPI_CHAR receive type MPI_CHAR
Putting count = 256 of sendtype MPI_UINT64_T receive type MPI_UINT64_T
Putting count = 256 of sendtype MPI_FLOAT receive type MPI_FLOAT
Putting count = 256 of sendtype MPI_INT receive type MPI_BYTE
Putting count = 512 of sendtype MPI_INT receive type MPI_INT
Putting count = 512 of sendtype MPI_DOUBLE receive type MPI_DOUBLE
Putting count = 512 of sendtype MPI_FLOAT_INT receive type MPI_FLOAT_INT
Putting count = 512 of sendtype dup of MPI_INT receive type dup of MPI_INT
Putting count = 512 of sendtype int-vector receive type MPI_INT
Putting count = 512 of sendtype int-indexed(4-int) receive type MPI_INT
Putting count = 512 of sendtype int-indexed(2 blocks) receive type MPI_INT
Putting count = 512 of sendtype MPI_INT receive type recv-int-indexed(4-int)
Putting count = 512 of sendtype MPI_SHORT receive type MPI_SHORT
Putting count = 512 of sendtype MPI_LONG receive type MPI_LONG
Putting count = 512 of sendtype MPI_CHAR receive type MPI_CHAR
Putting count = 512 of sendtype MPI_UINT64_T receive type MPI_UINT64_T
Putting count = 512 of sendtype MPI_FLOAT receive type MPI_FLOAT
Putting count = 512 of sendtype MPI_INT receive type MPI_BYTE
Putting count = 1024 of sendtype MPI_INT receive type MPI_INT
Putting count = 1024 of sendtype MPI_DOUBLE receive type MPI_DOUBLE
Putting count = 1024 of sendtype MPI_FLOAT_INT receive type MPI_FLOAT_INT
Putting count = 1024 of sendtype dup of MPI_INT receive type dup of MPI_INT
Putting count = 1024 of sendtype int-vector receive type MPI_INT
Putting count = 1024 of sendtype int-indexed(4-int) receive type MPI_INT
Putting count = 1024 of sendtype int-indexed(2 blocks) receive type MPI_INT
Putting count = 1024 of sendtype MPI_INT receive type recv-int-indexed(4-int)
Putting count = 1024 of sendtype MPI_SHORT receive type MPI_SHORT
Putting count = 1024 of sendtype MPI_LONG receive type MPI_LONG
Putting count = 1024 of sendtype MPI_CHAR receive type MPI_CHAR
Putting count = 1024 of sendtype MPI_UINT64_T receive type MPI_UINT64_T
Putting count = 1024 of sendtype MPI_FLOAT receive type MPI_FLOAT
Putting count = 1024 of sendtype MPI_INT receive type MPI_BYTE
Putting count = 2048 of sendtype MPI_INT receive type MPI_INT
Putting count = 2048 of sendtype MPI_DOUBLE receive type MPI_DOUBLE
Putting count = 2048 of sendtype MPI_FLOAT_INT receive type MPI_FLOAT_INT
Putting count = 2048 of sendtype dup of MPI_INT receive type dup of MPI_INT
Putting count = 2048 of sendtype int-vector receive type MPI_INT
Putting count = 2048 of sendtype int-indexed(4-int) receive type MPI_INT
Putting count = 2048 of sendtype int-indexed(2 blocks) receive type MPI_INT
Putting count = 2048 of sendtype MPI_INT receive type recv-int-indexed(4-int)
Putting count = 2048 of sendtype MPI_SHORT receive type MPI_SHORT
Putting count = 2048 of sendtype MPI_LONG receive type MPI_LONG
Putting count = 2048 of sendtype MPI_CHAR receive type MPI_CHAR
Putting count = 2048 of sendtype MPI_UINT64_T receive type MPI_UINT64_T
Putting count = 2048 of sendtype MPI_FLOAT receive type MPI_FLOAT
Putting count = 2048 of sendtype MPI_INT receive type MPI_BYTE
Putting count = 4096 of sendtype MPI_INT receive type MPI_INT
Putting count = 4096 of sendtype MPI_DOUBLE receive type MPI_DOUBLE
Putting count = 4096 of sendtype MPI_FLOAT_INT receive type MPI_FLOAT_INT
Putting count = 4096 of sendtype dup of MPI_INT receive type dup of MPI_INT
Putting count = 4096 of sendtype int-vector receive type MPI_INT
Putting count = 4096 of sendtype int-indexed(4-int) receive type MPI_INT
Putting count = 4096 of sendtype int-indexed(2 blocks) receive type MPI_INT
Putting count = 4096 of sendtype MPI_INT receive type recv-int-indexed(4-int)
Putting count = 4096 of sendtype MPI_SHORT receive type MPI_SHORT
Putting count = 4096 of sendtype MPI_LONG receive type MPI_LONG
Putting count = 4096 of sendtype MPI_CHAR receive type MPI_CHAR
Putting count = 4096 of sendtype MPI_UINT64_T receive type MPI_UINT64_T
Putting count = 4096 of sendtype MPI_FLOAT receive type MPI_FLOAT
Putting count = 4096 of sendtype MPI_INT receive type MPI_BYTE
Putting count = 8192 of sendtype MPI_INT receive type MPI_INT
Putting count = 8192 of sendtype MPI_DOUBLE receive type MPI_DOUBLE
Putting count = 8192 of sendtype MPI_FLOAT_INT receive type MPI_FLOAT_INT
Putting count = 8192 of sendtype dup of MPI_INT receive type dup of MPI_INT
Putting count = 8192 of sendtype int-vector receive type MPI_INT
Putting count = 8192 of sendtype int-indexed(4-int) receive type MPI_INT
Putting count = 8192 of sendtype int-indexed(2 blocks) receive type MPI_INT
Putting count = 8192 of sendtype MPI_INT receive type recv-int-indexed(4-int)
Putting count = 8192 of sendtype MPI_SHORT receive type MPI_SHORT
Putting count = 8192 of sendtype MPI_LONG receive type MPI_LONG
Putting count = 8192 of sendtype MPI_CHAR receive type MPI_CHAR
Putting count = 8192 of sendtype MPI_UINT64_T receive type MPI_UINT64_T
Putting count = 8192 of sendtype MPI_FLOAT receive type MPI_FLOAT
Putting count = 8192 of sendtype MPI_INT receive type MPI_BYTE
Putting count = 16384 of sendtype MPI_INT receive type MPI_INT
Putting count = 16384 of sendtype MPI_DOUBLE receive type MPI_DOUBLE
Putting count = 16384 of sendtype MPI_FLOAT_INT receive type MPI_FLOAT_INT
Putting count = 16384 of sendtype dup of MPI_INT receive type dup of MPI_INT
Putting count = 16384 of sendtype int-vector receive type MPI_INT
Putting count = 16384 of sendtype int-indexed(4-int) receive type MPI_INT
Putting count = 16384 of sendtype int-indexed(2 blocks) receive type MPI_INT
Putting count = 16384 of sendtype MPI_INT receive type recv-int-indexed(4-int)
Putting count = 16384 of sendtype MPI_SHORT receive type MPI_SHORT
Putting count = 16384 of sendtype MPI_LONG receive type MPI_LONG
Putting count = 16384 of sendtype MPI_CHAR receive type MPI_CHAR
Putting count = 16384 of sendtype MPI_UINT64_T receive type MPI_UINT64_T
Putting count = 16384 of sendtype MPI_FLOAT receive type MPI_FLOAT
Putting count = 16384 of sendtype MPI_INT receive type MPI_BYTE
Putting count = 32768 of sendtype MPI_INT receive type MPI_INT
Putting count = 32768 of sendtype MPI_DOUBLE receive type MPI_DOUBLE
Putting count = 32768 of sendtype MPI_FLOAT_INT receive type MPI_FLOAT_INT
Putting count = 32768 of sendtype dup of MPI_INT receive type dup of MPI_INT
Putting count = 32768 of sendtype int-vector receive type MPI_INT
Putting count = 32768 of sendtype int-indexed(4-int) receive type MPI_INT
Putting count = 32768 of sendtype int-indexed(2 blocks) receive type MPI_INT
Putting count = 32768 of sendtype MPI_INT receive type recv-int-indexed(4-int)
Putting count = 32768 of sendtype MPI_SHORT receive type MPI_SHORT
Putting count = 32768 of sendtype MPI_LONG receive type MPI_LONG
Putting count = 32768 of sendtype MPI_CHAR receive type MPI_CHAR
Putting count = 32768 of sendtype MPI_UINT64_T receive type MPI_UINT64_T
Putting count = 32768 of sendtype MPI_FLOAT receive type MPI_FLOAT
Putting count = 32768 of sendtype MPI_INT receive type MPI_BYTE
No errors

Passed Put-Get-Accum PSCW - test2

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 2

Test Description:

Tests put and get with post/start/complete/wait on 2 processes.

No errors
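
The following is a minimal sketch (not the suite's source) of the post/start/complete/wait pattern this test exercises, assuming exactly 2 ranks; the buffer size and the value written are illustrative:

    /* Illustrative PSCW put sketch; assumes 2 MPI processes. */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int rank, buf[4] = {0, 0, 0, 0};
        MPI_Win win;
        MPI_Group world_grp, peer_grp;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_group(MPI_COMM_WORLD, &world_grp);

        int peer = 1 - rank;                     /* the other rank */
        MPI_Group_incl(world_grp, 1, &peer, &peer_grp);
        MPI_Win_create(buf, sizeof(buf), sizeof(int),
                       MPI_INFO_NULL, MPI_COMM_WORLD, &win);

        if (rank == 0) {
            int val = 42;
            MPI_Win_start(peer_grp, 0, win);     /* access epoch at rank 1 */
            MPI_Put(&val, 1, MPI_INT, 1, 0, 1, MPI_INT, win);
            MPI_Win_complete(win);               /* put complete at the target */
        } else {
            MPI_Win_post(peer_grp, 0, win);      /* expose window to rank 0 */
            MPI_Win_wait(win);                   /* rank 0's epoch has ended */
            printf("rank 1 received %d\n", buf[0]);
        }

        MPI_Group_free(&peer_grp);
        MPI_Group_free(&world_grp);
        MPI_Win_free(&win);
        MPI_Finalize();
        return 0;
    }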

Passed Put-Get-Accum PSCW allocmem - test2_am

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 2

Test Description:

Tests put and get with post/start/complete/wait on 2 processes. Same as the "Put-Get-Accum PSCW" test (rma/test2) but allocates the window memory with MPI_Alloc_mem.

No errors
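
A minimal sketch of the one difference from the previous test: the window is backed by MPI_Alloc_mem rather than ordinary program memory. The buffer size of 100 ints is an arbitrary illustration:

    /* Sketch: RMA window backed by MPI_Alloc_mem. */
    #include <mpi.h>

    int main(int argc, char **argv)
    {
        int *buf;
        MPI_Win win;

        MPI_Init(&argc, &argv);
        MPI_Alloc_mem(100 * sizeof(int), MPI_INFO_NULL, &buf);
        MPI_Win_create(buf, 100 * sizeof(int), sizeof(int),
                       MPI_INFO_NULL, MPI_COMM_WORLD, &win);

        /* ... the same post/start/complete/wait traffic as above ... */

        MPI_Win_free(&win);
        MPI_Free_mem(buf);
        MPI_Finalize();
        return 0;
    }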

Passed Put-Get-Accum fence - test1

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 2

Test Description:

Tests a series of puts, gets, and accumulates on 2 processes using fence synchronization.

No errors
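
A minimal sketch, assuming 2 ranks, of put/accumulate/get separated by fences as this test does; values and buffer sizes are illustrative:

    /* Sketch: active-target RMA with MPI_Win_fence. */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int rank, win_buf[1] = {0}, val = 1, result = 0;
        MPI_Win win;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Win_create(win_buf, sizeof(win_buf), sizeof(int),
                       MPI_INFO_NULL, MPI_COMM_WORLD, &win);

        MPI_Win_fence(0, win);
        if (rank == 0)
            MPI_Put(&val, 1, MPI_INT, 1, 0, 1, MPI_INT, win);                  /* write 1 to rank 1 */
        MPI_Win_fence(0, win);
        if (rank == 0)
            MPI_Accumulate(&val, 1, MPI_INT, 1, 0, 1, MPI_INT, MPI_SUM, win);  /* add 1 */
        MPI_Win_fence(0, win);
        if (rank == 0)
            MPI_Get(&result, 1, MPI_INT, 1, 0, 1, MPI_INT, win);               /* read it back */
        MPI_Win_fence(0, win);

        if (rank == 0)
            printf("value at rank 1 = %d\n", result);                          /* expect 2 */

        MPI_Win_free(&win);
        MPI_Finalize();
        return 0;
    }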

Passed Put-Get-Accum fence allocmem - test1_am

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 2

Test Description:

Tests a series of puts, gets, and accumulates on 2 processes using fence synchronization. This test is the same as the "Put-Get-Accum fence" test (rma/test1) but allocates the window memory with MPI_Alloc_mem.

No errors

Passed Put-Get-Accum fence derived - test1_dt

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 2

Test Description:

Tests a series of puts, gets, and accumulates on 2 processes using fence synchronization. Same as the "Put-Get-Accum fence" test (rma/test1) but uses derived datatypes to receive the data.

No errors
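
A minimal sketch of the derived-datatype aspect: a contiguous origin buffer is put into a strided (vector) layout at the target. The vector geometry here is an assumption for illustration:

    /* Sketch: MPI_Put with a derived (vector) target datatype under fence. */
    #include <mpi.h>

    int main(int argc, char **argv)
    {
        int rank, src[4] = {1, 2, 3, 4}, dst[8] = {0};
        MPI_Datatype vec;
        MPI_Win win;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        MPI_Type_vector(4, 1, 2, MPI_INT, &vec);   /* 4 ints, every other slot */
        MPI_Type_commit(&vec);

        MPI_Win_create(dst, sizeof(dst), sizeof(int),
                       MPI_INFO_NULL, MPI_COMM_WORLD, &win);

        MPI_Win_fence(0, win);
        if (rank == 0)
            MPI_Put(src, 4, MPI_INT, 1, 0, 1, vec, win);  /* scatters into dst[0,2,4,6] on rank 1 */
        MPI_Win_fence(0, win);

        MPI_Type_free(&vec);
        MPI_Win_free(&win);
        MPI_Finalize();
        return 0;
    }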

Passed Put-Get-Accum lock opt - test4

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 2

Test Description:

Tests passive target RMA on 2 processes using a lock-single_op-unlock optimization.

No errors
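
A minimal sketch, assuming 2 ranks, of the lock-single_op-unlock pattern being targeted: one RMA call bracketed by a passive-target lock/unlock, which an implementation may collapse into a single network operation:

    /* Sketch: passive-target lock / single put / unlock. */
    #include <mpi.h>

    int main(int argc, char **argv)
    {
        int rank, target_buf[1] = {0}, val = 7;
        MPI_Win win;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Win_create(target_buf, sizeof(target_buf), sizeof(int),
                       MPI_INFO_NULL, MPI_COMM_WORLD, &win);

        if (rank == 0) {
            MPI_Win_lock(MPI_LOCK_EXCLUSIVE, 1, 0, win);
            MPI_Put(&val, 1, MPI_INT, 1, 0, 1, MPI_INT, win);   /* the single operation */
            MPI_Win_unlock(1, win);                             /* put is visible at rank 1 */
        }

        MPI_Win_free(&win);    /* collective, so rank 1 waits here */
        MPI_Finalize();
        return 0;
    }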

Passed Put-Get-Accum lock opt allocmem - test4_am

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 2

Test Description:

Tests passive target RMA on 2 processes, exercising the lock-single_op-unlock optimization. Same as the "Put-Get-Accum lock opt" test (rma/test4) but allocates the window memory with MPI_Alloc_mem.

No errors

Passed Put-Get-Accum true one-sided - test3

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 2

Test Description:

Tests the example in Fig. 6.8, pg. 142 of the MPI-2 standard. Process 1 has a blocking MPI_Recv between the Post and Wait. Therefore, this example will not run if the one-sided operations are simply layered on top of MPI_Isend and MPI_Irecv; they must either be implemented inside the progress engine or use threads with MPI_Isend and MPI_Irecv. In MPICH, they are implemented in the progress engine.

No errors
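
A minimal sketch, assuming 2 ranks, of the Fig. 6.8 pattern described above: the target blocks in MPI_Recv between Post and Wait, so the Put can only complete if the library makes one-sided progress on its own:

    /* Sketch of the MPI-2 Fig. 6.8 example (illustrative, not the test source). */
    #include <mpi.h>

    int main(int argc, char **argv)
    {
        int rank, buf[1] = {0}, val = 99, token = 0;
        MPI_Win win;
        MPI_Group world_grp, peer_grp;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_group(MPI_COMM_WORLD, &world_grp);
        int peer = 1 - rank;
        MPI_Group_incl(world_grp, 1, &peer, &peer_grp);
        MPI_Win_create(buf, sizeof(buf), sizeof(int),
                       MPI_INFO_NULL, MPI_COMM_WORLD, &win);

        if (rank == 0) {
            MPI_Win_start(peer_grp, 0, win);
            MPI_Put(&val, 1, MPI_INT, 1, 0, 1, MPI_INT, win);
            MPI_Win_complete(win);
            MPI_Send(&token, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);   /* unblocks rank 1 only now */
        } else {
            MPI_Win_post(peer_grp, 0, win);
            MPI_Recv(&token, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            MPI_Win_wait(win);
        }

        MPI_Group_free(&peer_grp);
        MPI_Group_free(&world_grp);
        MPI_Win_free(&win);
        MPI_Finalize();
        return 0;
    }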

Passed Put-Get-Accum true-1 allocmem - test3_am

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 2

Test Description:

Tests the example in Fig. 6.8, pg. 142 of the MPI-2 standard. Process 1 has a blocking MPI_Recv between the Post and Wait. Therefore, this example will not run if the one-sided operations are simply layered on top of MPI_Isend and MPI_Irecv; they must either be implemented inside the progress engine or use threads with MPI_Isend and MPI_Irecv. In MPICH, they are implemented in the progress engine. This test is the same as the "Put-Get-Accum true one-sided" test (rma/test3) but allocates the window memory with MPI_Alloc_mem.

No errors

Passed RMA MPI_PROC_NULL target - rmanull

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 2

Test Description:

Tests MPI_PROC_NULL as a valid target for many RMA operations using active target synchronization, passive target synchronization, and request-based passive target synchronization.

No errors
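
A minimal sketch of the property being checked: RMA calls that target MPI_PROC_NULL must be accepted and behave as no-ops. The fence epoch shown here is illustrative; the passive-target and request-based cases follow the same idea:

    /* Sketch: MPI_PROC_NULL as a no-op RMA target. */
    #include <mpi.h>

    int main(int argc, char **argv)
    {
        int buf[1] = {0}, val = 5;
        MPI_Win win;

        MPI_Init(&argc, &argv);
        MPI_Win_create(buf, sizeof(buf), sizeof(int),
                       MPI_INFO_NULL, MPI_COMM_WORLD, &win);

        MPI_Win_fence(0, win);
        MPI_Put(&val, 1, MPI_INT, MPI_PROC_NULL, 0, 1, MPI_INT, win);   /* no-op */
        MPI_Get(&val, 1, MPI_INT, MPI_PROC_NULL, 0, 1, MPI_INT, win);   /* no-op */
        MPI_Accumulate(&val, 1, MPI_INT, MPI_PROC_NULL, 0, 1, MPI_INT, MPI_SUM, win);
        MPI_Win_fence(0, win);

        MPI_Win_free(&win);
        MPI_Finalize();
        return 0;
    }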

Passed RMA Shared Memory - fence_shm

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 2

Test Description:

This simple RMA shared memory test uses MPI_Win_allocate_shared() with MPI_Win_fence() and MPI_Put() calls, with and without the MPI_MODE_NOPRECEDE assertion.

No errors
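
A minimal sketch of the same ingredients, assuming both ranks share a node; the MPI_Comm_split_type step and the single-int window are illustrative assumptions rather than the test's actual structure:

    /* Sketch: shared-memory window with fence/put and MPI_MODE_NOPRECEDE. */
    #include <mpi.h>

    int main(int argc, char **argv)
    {
        int rank, nprocs, *base, val = 3;
        MPI_Comm shm_comm;
        MPI_Win win;

        MPI_Init(&argc, &argv);
        /* Restrict to ranks that can actually share memory. */
        MPI_Comm_split_type(MPI_COMM_WORLD, MPI_COMM_TYPE_SHARED, 0,
                            MPI_INFO_NULL, &shm_comm);
        MPI_Comm_rank(shm_comm, &rank);
        MPI_Comm_size(shm_comm, &nprocs);

        MPI_Win_allocate_shared(sizeof(int), sizeof(int), MPI_INFO_NULL,
                                shm_comm, &base, &win);

        MPI_Win_fence(MPI_MODE_NOPRECEDE, win);   /* assert: no prior RMA to flush */
        if (rank == 0 && nprocs > 1)
            MPI_Put(&val, 1, MPI_INT, 1, 0, 1, MPI_INT, win);
        MPI_Win_fence(0, win);                    /* same call without the assertion */

        MPI_Win_free(&win);
        MPI_Comm_free(&shm_comm);
        MPI_Finalize();
        return 0;
    }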

Passed RMA contiguous calls - rma-contig

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 2

Test Description:

This test exercises the contiguous one-sided MPI calls by timing repeated RMA operations, covering several lock modes and assert types.
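
A minimal sketch, assuming 2 ranks, of the timing pattern behind the tables below: repeat one contiguous RMA call inside a passive-target epoch and average the elapsed time. The iteration count, message size, and lock mode here are illustrative, not the test's actual parameters:

    /* Sketch: timing repeated lock/put/unlock to a remote rank. */
    #include <mpi.h>
    #include <stdio.h>

    #define ITERS 1000

    int main(int argc, char **argv)
    {
        int rank;
        char *win_buf, origin[8] = {0};
        MPI_Win win;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Win_allocate(1 << 16, 1, MPI_INFO_NULL, MPI_COMM_WORLD, &win_buf, &win);

        if (rank == 0) {
            int target = 1;                      /* target rank 0 would measure self-RMA */
            double t0 = MPI_Wtime();
            for (int i = 0; i < ITERS; i++) {
                MPI_Win_lock(MPI_LOCK_SHARED, target, MPI_MODE_NOCHECK, win);
                MPI_Put(origin, (int)sizeof(origin), MPI_BYTE, target, 0,
                        (int)sizeof(origin), MPI_BYTE, win);
                MPI_Win_unlock(target, win);
            }
            printf("put latency ~ %.3f usec\n",
                   (MPI_Wtime() - t0) * 1e6 / ITERS);
        }

        MPI_Win_free(&win);
        MPI_Finalize();
        return 0;
    }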

Starting one-sided contiguous performance test with 2 processes
Synchronization mode: Exclusive lock
   Trg. Rank    Xfer Size   Get (usec)   Put (usec)   Acc (usec)  Get (MiB/s)  Put (MiB/s)  Acc (MiB/s)
           0            8        1.031        0.999        1.397        7.398        7.636        5.460
           0           16        1.008        1.003        1.418       15.144       15.219       10.763
           0           32        1.009        0.995        1.391       30.254       30.669       21.946
           0           64        0.996        1.004        1.597       61.300       60.770       38.223
           0          128        1.010        0.999        1.414      120.875      122.167       86.324
           0          256        1.012        1.013        1.457      241.247      240.979      167.590
           0          512        1.008        1.003        1.490      484.373      486.632      327.686
           0         1024        1.021        0.999        1.523      956.640      977.528      641.136
           0         2048        1.022        1.036        1.815     1910.900     1885.757     1076.365
           0         4096        1.024        1.029        2.133     3815.511     3795.680     1830.968
           0         8192        1.060        1.067        2.777     7368.948     7321.070     2813.606
           0        16384        1.372        1.349        4.219    11387.477    11580.438     3703.160
           0        32768        1.714        1.688        6.977    18230.355    18509.940     4479.225
           0        65536        2.336        2.339       12.234    26759.187    26723.119     5108.616
           0       131072        3.578        3.575       24.495    34933.100    34968.329     5103.057
           0       262144        7.438        7.425       46.698    33612.964    33672.187     5353.503
           0       524288       15.539       14.744       88.538    32176.834    33911.790     5647.277
           0      1048576       28.318       27.851      173.686    35313.003    35905.265     5757.512
           0      2097152       54.584       53.981      345.279    36640.659    37049.982     5792.412
           1            8       11.662       13.925       12.605        0.654        0.548        0.605
           1           16       11.645       14.149       12.932        1.310        1.078        1.180
           1           32       11.760       14.176       13.022        2.595        2.153        2.344
           1           64       11.786       14.162       12.869        5.179        4.310        4.743
           1          128       11.805       14.109       13.314       10.340        8.652        9.169
           1          256       11.811       14.814       13.615       20.671       16.480       17.932
           1          512       11.811       14.845       13.673       41.340       32.891       35.711
           1         1024       11.980       15.139       14.211       81.513       64.507       68.720
           1         2048       12.276       15.311       15.163      159.103      127.563      128.809
           1         4096       12.922       15.906       15.911      302.300      245.580      245.513
           1         8192       13.432       16.619       16.999      581.635      470.105      459.597
           1        16384       14.702       17.598       22.014     1062.763      887.887      709.764
           1        32768       16.851       19.812       26.321     1854.516     1577.338     1187.283
           1        65536       20.276       23.235       33.014     3082.482     2689.938     1893.162
           1       131072       29.134       32.144       50.473     4290.461     3888.710     2476.564
           1       262144       40.717       44.713       81.311     6139.944     5591.232     3074.633
           1       524288       62.532       67.423      131.806     7995.944     7415.863     3793.450
           1      1048576      105.031      110.536      228.507     9521.005     9046.803     4376.239
           1      2097152      190.846      196.711      441.293    10479.635    10167.195     4532.139
Starting one-sided contiguous performance test with 2 processes
Synchronization mode: Exclusive lock, MPI_MODE_NOCHECK
   Trg. Rank    Xfer Size   Get (usec)   Put (usec)   Acc (usec)  Get (MiB/s)  Put (MiB/s)  Acc (MiB/s)
           0            8        0.516        0.511        0.810       14.779       14.918        9.421
           0           16        0.526        0.533        0.822       28.996       28.651       18.560
           0           32        0.524        0.521        0.827       58.246       58.562       36.898
           0           64        0.521        0.528        0.826      117.198      115.650       73.864
           0          128        0.522        0.524        0.846      233.928      233.109      144.336
           0          256        0.523        0.531        0.913      466.570      459.972      267.458
           0          512        0.528        0.530        0.888      925.124      920.478      549.736
           0         1024        0.526        0.538        0.937     1856.634     1816.702     1042.562
           0         2048        0.541        0.546        1.169     3611.074     3575.471     1671.256
           0         4096        0.562        0.563        1.476     6952.491     6935.098     2647.318
           0         8192        0.589        0.589        2.095    13263.399    13268.763     3729.284
           0        16384        0.784        0.781        3.492    19926.979    20000.799     4474.253
           0        32768        1.150        1.151        6.095    27168.115    27144.300     5127.042
           0        65536        1.781        1.780       11.270    35097.648    35120.028     5545.820
           0       131072        3.029        3.033       23.080    41273.241    41208.885     5415.916
           0       262144        6.520        6.520       45.627    38343.003    38342.473     5479.211
           0       524288       14.935       14.010       87.793    33478.318    35688.670     5695.199
           0      1048576       27.697       27.105      173.561    36104.700    36893.852     5761.671
           0      2097152       53.821       53.270      350.047    37160.551    37544.575     5713.520
           1            8        3.638        5.936        4.609        2.097        1.285        1.655
           1           16        3.462        5.874        4.828        4.408        2.598        3.161
           1           32        3.581        5.893        4.547        8.522        5.179        6.711
           1           64        3.496        5.866        4.578       17.461       10.405       13.333
           1          128        3.562        5.947        5.157       34.270       20.528       23.671
           1          256        3.641        6.576        5.223       67.061       37.124       46.748
           1          512        3.642        6.485        5.343      134.079       75.296       91.391
           1         1024        3.791        6.688        5.616      257.575      146.023      173.905
           1         2048        4.101        6.959        6.240      476.280      280.681      312.987
           1         4096        4.711        7.526        7.443      829.091      519.006      524.793
           1         8192        5.315        8.177        8.900     1470.019      955.367      877.858
           1        16384        6.481        9.394       14.283     2410.872     1663.347     1093.965
           1        32768        8.500       11.599       18.218     3676.439     2694.199     1715.342
           1        65536       11.612       14.857       25.370     5382.167     4206.755     2463.580
           1       131072       20.995       23.972       41.155     5953.885     5214.455     3037.294
           1       262144       31.974       35.578       67.067     7818.804     7026.810     3727.616
           1       524288       53.504       57.391      120.600     9345.125     8712.133     4145.948
           1      1048576       96.504      101.028      222.482    10362.290     9898.200     4494.749
           1      2097152      182.776      191.630      461.901    10942.353    10436.762     4329.928
Starting one-sided contiguous performance test with 2 processes
Synchronization mode: Shared lock
   Trg. Rank    Xfer Size   Get (usec)   Put (usec)   Acc (usec)  Get (MiB/s)  Put (MiB/s)  Acc (MiB/s)
           0            8        1.004        0.989        1.352        7.600        7.717        5.644
           0           16        1.005        1.003        1.372       15.189       15.221       11.122
           0           32        0.996        1.003        1.367       30.633       30.416       22.325
           0           64        0.992        0.994        1.384       61.509       61.413       44.095
           0          128        0.990        0.992        1.393      123.320      123.004       87.605
           0          256        0.996        1.003        1.457      245.114      243.357      167.561
           0          512        1.004        0.999        1.557      486.141      488.912      313.538
           0         1024        1.002        1.015        1.754      975.080      962.361      556.779
           0         2048        1.010        1.005        1.930     1933.731     1944.097     1011.914
           0         4096        1.025        1.029        2.067     3809.553     3794.545     1889.574
           0         8192        1.078        1.057        2.714     7247.089     7393.911     2878.247
           0        16384        1.374        1.370        4.156    11375.388    11407.971     3759.843
           0        32768        1.718        1.712        6.690    18190.434    18250.869     4670.908
           0        65536        2.323        2.310       11.883    26901.630    27060.739     5259.716
           0       131072        3.589        3.579       23.728    34833.345    34925.592     5268.046
           0       262144        7.515        7.533       46.589    33265.204    33186.147     5366.095
           0       524288       15.865       14.823       88.657    31515.220    33731.543     5639.695
           0      1048576       28.506       27.859      174.464    35080.662    35895.226     5731.836
           0      2097152       54.409       54.062      342.898    36758.600    36994.435     5832.644
           1            8       12.254       14.823       13.795        0.623        0.515        0.553
           1           16       12.298       14.676       13.552        1.241        1.040        1.126
           1           32       12.256       14.730       13.572        2.490        2.072        2.249
           1           64       12.277       14.735       13.629        4.972        4.142        4.478
           1          128       12.568       15.014       14.246        9.713        8.130        8.569
           1          256       12.685       15.822       14.516       19.246       15.431       16.819
           1          512       12.623       15.463       14.784       38.681       31.577       33.029
           1         1024       12.820       15.599       14.791       76.173       62.605       66.025
           1         2048       12.883       15.835       15.662      151.610      123.342      124.704
           1         4096       13.518       16.392       17.305      288.960      238.298      225.724
           1         8192       14.536       17.007       18.838      537.442      459.373      414.714
           1        16384       15.582       18.230       25.344     1002.748      857.102      616.522
           1        32768       17.581       20.333       28.672     1777.443     1536.942     1089.899
           1        65536       20.450       23.663       36.114     3056.274     2641.285     1730.614
           1       131072       29.753       33.002       55.602     4201.252     3787.647     2248.110
           1       262144       41.853       45.047       90.685     5973.296     5549.740     2756.810
           1       524288       63.212       65.944      145.319     7909.877     7582.196     3440.701
           1      1048576      106.385      109.011      232.707     9399.861     9173.362     4297.250
           1      2097152      193.919      195.340      509.239    10313.590    10238.571     3927.429
Starting one-sided contiguous performance test with 2 processes
Synchronization mode: Shared lock, MPI_MODE_NOCHECK
   Trg. Rank    Xfer Size   Get (usec)   Put (usec)   Acc (usec)  Get (MiB/s)  Put (MiB/s)  Acc (MiB/s)
           0            8        0.521        0.528        0.816       14.655       14.444        9.345
           0           16        0.525        0.531        0.819       29.075       28.732       18.640
           0           32        0.523        0.523        0.819       58.341       58.314       37.250
           0           64        0.520        0.525        0.820      117.340      116.323       74.396
           0          128        0.523        0.525        0.832      233.449      232.387      146.649
           0          256        0.522        0.531        0.855      467.672      460.202      285.475
           0          512        0.526        0.528        0.875      928.359      924.102      557.763
           0         1024        0.533        0.530        0.947     1833.014     1840.917     1031.063
           0         2048        0.540        0.544        1.163     3617.375     3593.455     1679.325
           0         4096        0.554        0.563        1.463     7053.382     6935.080     2669.217
           0         8192        0.594        0.597        2.095    13161.396    13080.109     3728.436
           0        16384        0.790        0.787        3.504    19783.918    19861.857     4458.924
           0        32768        1.153        1.143        6.083    27092.493    27346.993     5137.112
           0        65536        1.789        1.784       11.282    34944.417    35042.955     5539.730
           0       131072        3.030        3.035       23.090    41250.531    41189.942     5413.703
           0       262144        6.573        6.569       45.765    38033.925    38056.881     5462.718
           0       524288       15.001       14.019       87.560    33332.163    35666.891     5710.398
           0      1048576       27.582       27.084      173.006    36254.931    36922.551     5780.135
           0      2097152       53.807       53.291      343.865    37169.850    37529.607     5816.230
           1            8        3.769        6.292        4.891        2.024        1.213        1.560
           1           16        3.737        6.077        4.905        4.083        2.511        3.111
           1           32        3.537        6.045        4.905        8.628        5.049        6.222
           1           64        3.896        6.235        4.944       15.668        9.790       12.345
           1          128        3.618        6.138        5.506       33.743       19.888       22.169
           1          256        3.958        6.929        5.687       61.690       35.235       42.929
           1          512        4.116        6.822        5.928      118.639       71.569       82.374
           1         1024        4.216        6.963        6.285      231.646      140.250      155.380
           1         2048        4.391        7.242        7.095      444.815      269.703      275.263
           1         4096        4.852        7.829        8.609      805.016      498.952      453.758
           1         8192        5.577        8.627       10.154     1400.842      905.583      769.424
           1        16384        6.834        9.957       16.850     2286.490     1569.282      927.310
           1        32768        8.732       11.929       21.072     3578.924     2619.604     1483.020
           1        65536       12.090       15.244       29.300     5169.474     4100.101     2133.121
           1       131072       21.689       24.161       48.150     5763.298     5173.595     2596.051
           1       262144       33.235       35.213       76.014     7522.139     7099.596     3288.861
           1       524288       55.241       56.952      136.191     9051.291     8779.376     3671.304
           1      1048576       98.336      100.723      221.579    10169.220     9928.253     4513.068
           1      2097152      192.570      187.652      426.680    10385.840    10658.041     4687.349
No errors

Passed RMA fence PSCW ordering - pscw_ordering

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 4

Test Description:

This post/start/complete/wait operation test checks an oddball case for generalized active target synchronization where the start occurs before the post. Since start can block until the corresponding post, the group passed to start must be disjoint from the group passed to post for processes to avoid a circular wait. Here, odd/even groups are used to accomplish this and the even group reverses its start/post calls.

No errors
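
A minimal sketch of the ordering described above, with parity-based groups (the test runs with 4 ranks; the empty access epoch is an illustrative simplification):

    /* Sketch: even ranks call start before post; odd ranks post first,
       so a blocking start never creates a circular wait. */
    #include <mpi.h>
    #include <stdlib.h>

    int main(int argc, char **argv)
    {
        int rank, size, buf[1] = {0};
        MPI_Win win;
        MPI_Group world_grp, other_grp;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);
        MPI_Comm_group(MPI_COMM_WORLD, &world_grp);

        /* Group of all ranks with the opposite parity. */
        int n = 0, *others = malloc(size * sizeof(int));
        for (int r = 0; r < size; r++)
            if ((r % 2) != (rank % 2))
                others[n++] = r;
        MPI_Group_incl(world_grp, n, others, &other_grp);

        MPI_Win_create(buf, sizeof(buf), sizeof(int),
                       MPI_INFO_NULL, MPI_COMM_WORLD, &win);

        if (rank % 2 == 0) {              /* even: access epoch first */
            MPI_Win_start(other_grp, 0, win);
            MPI_Win_post(other_grp, 0, win);
        } else {                          /* odd: exposure epoch first */
            MPI_Win_post(other_grp, 0, win);
            MPI_Win_start(other_grp, 0, win);
        }
        /* ... puts toward the opposite parity group would go here ... */
        MPI_Win_complete(win);
        MPI_Win_wait(win);

        MPI_Group_free(&other_grp);
        MPI_Group_free(&world_grp);
        MPI_Win_free(&win);
        free(others);
        MPI_Finalize();
        return 0;
    }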

Passed RMA fence null - nullpscw

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 7

Test Description:

This simple test creates a window over a null base pointer, then performs a post/start/complete/wait operation.

No errors
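
A minimal sketch of the case being checked: a zero-size window created over a null base pointer is legal, and post/start/complete/wait still synchronizes normally; passing the full world group to both calls is an illustrative simplification:

    /* Sketch: PSCW over a zero-size window with a NULL base. */
    #include <mpi.h>
    #include <stddef.h>

    int main(int argc, char **argv)
    {
        MPI_Win win;
        MPI_Group world_grp;

        MPI_Init(&argc, &argv);
        MPI_Comm_group(MPI_COMM_WORLD, &world_grp);
        MPI_Win_create(NULL, 0, 1, MPI_INFO_NULL, MPI_COMM_WORLD, &win);

        MPI_Win_post(world_grp, 0, win);    /* expose the (empty) window */
        MPI_Win_start(world_grp, 0, win);   /* access epoch; nothing to put or get */
        MPI_Win_complete(win);
        MPI_Win_wait(win);

        MPI_Group_free(&world_grp);
        MPI_Win_free(&win);
        MPI_Finalize();
        return 0;
    }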

Passed RMA fence put - putfence1

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 4

Test Description:

Tests MPI_Put and MPI_Win_fence with a selection of communicators and datatypes.

Putting count = 1 of sendtype MPI_INT receive type MPI_INT
Putting count = 1 of sendtype MPI_DOUBLE receive type MPI_DOUBLE
Putting count = 1 of sendtype MPI_FLOAT_INT receive type MPI_FLOAT_INT
Putting count = 1 of sendtype dup of MPI_INT receive type dup of MPI_INT
Putting count = 1 of sendtype int-vector receive type MPI_INT
Putting count = 1 of sendtype int-indexed(4-int) receive type MPI_INT
Putting count = 1 of sendtype int-indexed(2 blocks) receive type MPI_INT
Putting count = 1 of sendtype MPI_INT receive type recv-int-indexed(4-int)
Putting count = 1 of sendtype MPI_SHORT receive type MPI_SHORT
Putting count = 1 of sendtype MPI_LONG receive type MPI_LONG
Putting count = 1 of sendtype MPI_CHAR receive type MPI_CHAR
Putting count = 1 of sendtype MPI_UINT64_T receive type MPI_UINT64_T
Putting count = 1 of sendtype MPI_FLOAT receive type MPI_FLOAT
Putting count = 1 of sendtype MPI_INT receive type MPI_BYTE
Putting count = 2 of sendtype MPI_INT receive type MPI_INT
Putting count = 2 of sendtype MPI_DOUBLE receive type MPI_DOUBLE
Putting count = 2 of sendtype MPI_FLOAT_INT receive type MPI_FLOAT_INT
Putting count = 2 of sendtype dup of MPI_INT receive type dup of MPI_INT
Putting count = 2 of sendtype int-vector receive type MPI_INT
Putting count = 2 of sendtype int-indexed(4-int) receive type MPI_INT
Putting count = 2 of sendtype int-indexed(2 blocks) receive type MPI_INT
Putting count = 2 of sendtype MPI_INT receive type recv-int-indexed(4-int)
Putting count = 2 of sendtype MPI_SHORT receive type MPI_SHORT
Putting count = 2 of sendtype MPI_LONG receive type MPI_LONG
Putting count = 2 of sendtype MPI_CHAR receive type MPI_CHAR
Putting count = 2 of sendtype MPI_UINT64_T receive type MPI_UINT64_T
Putting count = 2 of sendtype MPI_FLOAT receive type MPI_FLOAT
Putting count = 2 of sendtype MPI_INT receive type MPI_BYTE
Putting count = 4 of sendtype MPI_INT receive type MPI_INT
Putting count = 4 of sendtype MPI_DOUBLE receive type MPI_DOUBLE
Putting count = 4 of sendtype MPI_FLOAT_INT receive type MPI_FLOAT_INT
Putting count = 4 of sendtype dup of MPI_INT receive type dup of MPI_INT
Putting count = 4 of sendtype int-vector receive type MPI_INT
Putting count = 4 of sendtype int-indexed(4-int) receive type MPI_INT
Putting count = 4 of sendtype int-indexed(2 blocks) receive type MPI_INT
Putting count = 4 of sendtype MPI_INT receive type recv-int-indexed(4-int)
Putting count = 4 of sendtype MPI_SHORT receive type MPI_SHORT
Putting count = 4 of sendtype MPI_LONG receive type MPI_LONG
Putting count = 4 of sendtype MPI_CHAR receive type MPI_CHAR
Putting count = 4 of sendtype MPI_UINT64_T receive type MPI_UINT64_T
Putting count = 4 of sendtype MPI_FLOAT receive type MPI_FLOAT
Putting count = 4 of sendtype MPI_INT receive type MPI_BYTE
Putting count = 8 of sendtype MPI_INT receive type MPI_INT
Putting count = 8 of sendtype MPI_DOUBLE receive type MPI_DOUBLE
Putting count = 8 of sendtype MPI_FLOAT_INT receive type MPI_FLOAT_INT
Putting count = 8 of sendtype dup of MPI_INT receive type dup of MPI_INT
Putting count = 8 of sendtype int-vector receive type MPI_INT
Putting count = 8 of sendtype int-indexed(4-int) receive type MPI_INT
Putting count = 8 of sendtype int-indexed(2 blocks) receive type MPI_INT
Putting count = 8 of sendtype MPI_INT receive type recv-int-indexed(4-int)
Putting count = 8 of sendtype MPI_SHORT receive type MPI_SHORT
Putting count = 8 of sendtype MPI_LONG receive type MPI_LONG
Putting count = 8 of sendtype MPI_CHAR receive type MPI_CHAR
Putting count = 8 of sendtype MPI_UINT64_T receive type MPI_UINT64_T
Putting count = 8 of sendtype MPI_FLOAT receive type MPI_FLOAT
Putting count = 8 of sendtype MPI_INT receive type MPI_BYTE
Putting count = 16 of sendtype MPI_INT receive type MPI_INT
Putting count = 16 of sendtype MPI_DOUBLE receive type MPI_DOUBLE
Putting count = 16 of sendtype MPI_FLOAT_INT receive type MPI_FLOAT_INT
Putting count = 16 of sendtype dup of MPI_INT receive type dup of MPI_INT
Putting count = 16 of sendtype int-vector receive type MPI_INT
Putting count = 16 of sendtype int-indexed(4-int) receive type MPI_INT
Putting count = 16 of sendtype int-indexed(2 blocks) receive type MPI_INT
Putting count = 16 of sendtype MPI_INT receive type recv-int-indexed(4-int)
Putting count = 16 of sendtype MPI_SHORT receive type MPI_SHORT
Putting count = 16 of sendtype MPI_LONG receive type MPI_LONG
Putting count = 16 of sendtype MPI_CHAR receive type MPI_CHAR
Putting count = 16 of sendtype MPI_UINT64_T receive type MPI_UINT64_T
Putting count = 16 of sendtype MPI_FLOAT receive type MPI_FLOAT
Putting count = 16 of sendtype MPI_INT receive type MPI_BYTE
Putting count = 32 of sendtype MPI_INT receive type MPI_INT
Putting count = 32 of sendtype MPI_DOUBLE receive type MPI_DOUBLE
Putting count = 32 of sendtype MPI_FLOAT_INT receive type MPI_FLOAT_INT
Putting count = 32 of sendtype dup of MPI_INT receive type dup of MPI_INT
Putting count = 32 of sendtype int-vector receive type MPI_INT
Putting count = 32 of sendtype int-indexed(4-int) receive type MPI_INT
Putting count = 32 of sendtype int-indexed(2 blocks) receive type MPI_INT
Putting count = 32 of sendtype MPI_INT receive type recv-int-indexed(4-int)
Putting count = 32 of sendtype MPI_SHORT receive type MPI_SHORT
Putting count = 32 of sendtype MPI_LONG receive type MPI_LONG
Putting count = 32 of sendtype MPI_CHAR receive type MPI_CHAR
Putting count = 32 of sendtype MPI_UINT64_T receive type MPI_UINT64_T
Putting count = 32 of sendtype MPI_FLOAT receive type MPI_FLOAT
Putting count = 32 of sendtype MPI_INT receive type MPI_BYTE
Putting count = 64 of sendtype MPI_INT receive type MPI_INT
Putting count = 64 of sendtype MPI_DOUBLE receive type MPI_DOUBLE
Putting count = 64 of sendtype MPI_FLOAT_INT receive type MPI_FLOAT_INT
Putting count = 64 of sendtype dup of MPI_INT receive type dup of MPI_INT
Putting count = 64 of sendtype int-vector receive type MPI_INT
Putting count = 64 of sendtype int-indexed(4-int) receive type MPI_INT
Putting count = 64 of sendtype int-indexed(2 blocks) receive type MPI_INT
Putting count = 64 of sendtype MPI_INT receive type recv-int-indexed(4-int)
Putting count = 64 of sendtype MPI_SHORT receive type MPI_SHORT
Putting count = 64 of sendtype MPI_LONG receive type MPI_LONG
Putting count = 64 of sendtype MPI_CHAR receive type MPI_CHAR
Putting count = 64 of sendtype MPI_UINT64_T receive type MPI_UINT64_T
Putting count = 64 of sendtype MPI_FLOAT receive type MPI_FLOAT
Putting count = 64 of sendtype MPI_INT receive type MPI_BYTE
Putting count = 128 of sendtype MPI_INT receive type MPI_INT
Putting count = 128 of sendtype MPI_DOUBLE receive type MPI_DOUBLE
Putting count = 128 of sendtype MPI_FLOAT_INT receive type MPI_FLOAT_INT
Putting count = 128 of sendtype dup of MPI_INT receive type dup of MPI_INT
Putting count = 128 of sendtype int-vector receive type MPI_INT
Putting count = 128 of sendtype int-indexed(4-int) receive type MPI_INT
Putting count = 128 of sendtype int-indexed(2 blocks) receive type MPI_INT
Putting count = 128 of sendtype MPI_INT receive type recv-int-indexed(4-int)
Putting count = 128 of sendtype MPI_SHORT receive type MPI_SHORT
Putting count = 128 of sendtype MPI_LONG receive type MPI_LONG
Putting count = 128 of sendtype MPI_CHAR receive type MPI_CHAR
Putting count = 128 of sendtype MPI_UINT64_T receive type MPI_UINT64_T
Putting count = 128 of sendtype MPI_FLOAT receive type MPI_FLOAT
Putting count = 128 of sendtype MPI_INT receive type MPI_BYTE
Putting count = 256 of sendtype MPI_INT receive type MPI_INT
Putting count = 256 of sendtype MPI_DOUBLE receive type MPI_DOUBLE
Putting count = 256 of sendtype MPI_FLOAT_INT receive type MPI_FLOAT_INT
Putting count = 256 of sendtype dup of MPI_INT receive type dup of MPI_INT
Putting count = 256 of sendtype int-vector receive type MPI_INT
Putting count = 256 of sendtype int-indexed(4-int) receive type MPI_INT
Putting count = 256 of sendtype int-indexed(2 blocks) receive type MPI_INT
Putting count = 256 of sendtype MPI_INT receive type recv-int-indexed(4-int)
Putting count = 256 of sendtype MPI_SHORT receive type MPI_SHORT
Putting count = 256 of sendtype MPI_LONG receive type MPI_LONG
Putting count = 256 of sendtype MPI_CHAR receive type MPI_CHAR
Putting count = 256 of sendtype MPI_UINT64_T receive type MPI_UINT64_T
Putting count = 256 of sendtype MPI_FLOAT receive type MPI_FLOAT
Putting count = 256 of sendtype MPI_INT receive type MPI_BYTE
Putting count = 512 of sendtype MPI_INT receive type MPI_INT
Putting count = 512 of sendtype MPI_DOUBLE receive type MPI_DOUBLE
Putting count = 512 of sendtype MPI_FLOAT_INT receive type MPI_FLOAT_INT
Putting count = 512 of sendtype dup of MPI_INT receive type dup of MPI_INT
Putting count = 512 of sendtype int-vector receive type MPI_INT
Putting count = 512 of sendtype int-indexed(4-int) receive type MPI_INT
Putting count = 512 of sendtype int-indexed(2 blocks) receive type MPI_INT
Putting count = 512 of sendtype MPI_INT receive type recv-int-indexed(4-int)
Putting count = 512 of sendtype MPI_SHORT receive type MPI_SHORT
Putting count = 512 of sendtype MPI_LONG receive type MPI_LONG
Putting count = 512 of sendtype MPI_CHAR receive type MPI_CHAR
Putting count = 512 of sendtype MPI_UINT64_T receive type MPI_UINT64_T
Putting count = 512 of sendtype MPI_FLOAT receive type MPI_FLOAT
Putting count = 512 of sendtype MPI_INT receive type MPI_BYTE
Putting count = 1024 of sendtype MPI_INT receive type MPI_INT
Putting count = 1024 of sendtype MPI_DOUBLE receive type MPI_DOUBLE
Putting count = 1024 of sendtype MPI_FLOAT_INT receive type MPI_FLOAT_INT
Putting count = 1024 of sendtype dup of MPI_INT receive type dup of MPI_INT
Putting count = 1024 of sendtype int-vector receive type MPI_INT
Putting count = 1024 of sendtype int-indexed(4-int) receive type MPI_INT
Putting count = 1024 of sendtype int-indexed(2 blocks) receive type MPI_INT
Putting count = 1024 of sendtype MPI_INT receive type recv-int-indexed(4-int)
Putting count = 1024 of sendtype MPI_SHORT receive type MPI_SHORT
Putting count = 1024 of sendtype MPI_LONG receive type MPI_LONG
Putting count = 1024 of sendtype MPI_CHAR receive type MPI_CHAR
Putting count = 1024 of sendtype MPI_UINT64_T receive type MPI_UINT64_T
Putting count = 1024 of sendtype MPI_FLOAT receive type MPI_FLOAT
Putting count = 1024 of sendtype MPI_INT receive type MPI_BYTE
Putting count = 2048 of sendtype MPI_INT receive type MPI_INT
Putting count = 2048 of sendtype MPI_DOUBLE receive type MPI_DOUBLE
Putting count = 2048 of sendtype MPI_FLOAT_INT receive type MPI_FLOAT_INT
Putting count = 2048 of sendtype dup of MPI_INT receive type dup of MPI_INT
Putting count = 2048 of sendtype int-vector receive type MPI_INT
Putting count = 2048 of sendtype int-indexed(4-int) receive type MPI_INT
Putting count = 2048 of sendtype int-indexed(2 blocks) receive type MPI_INT
Putting count = 2048 of sendtype MPI_INT receive type recv-int-indexed(4-int)
Putting count = 2048 of sendtype MPI_SHORT receive type MPI_SHORT
Putting count = 2048 of sendtype MPI_LONG receive type MPI_LONG
Putting count = 2048 of sendtype MPI_CHAR receive type MPI_CHAR
Putting count = 2048 of sendtype MPI_UINT64_T receive type MPI_UINT64_T
Putting count = 2048 of sendtype MPI_FLOAT receive type MPI_FLOAT
Putting count = 2048 of sendtype MPI_INT receive type MPI_BYTE
Putting count = 4096 of sendtype MPI_INT receive type MPI_INT
Putting count = 4096 of sendtype MPI_DOUBLE receive type MPI_DOUBLE
Putting count = 4096 of sendtype MPI_FLOAT_INT receive type MPI_FLOAT_INT
Putting count = 4096 of sendtype dup of MPI_INT receive type dup of MPI_INT
Putting count = 4096 of sendtype int-vector receive type MPI_INT
Putting count = 4096 of sendtype int-indexed(4-int) receive type MPI_INT
Putting count = 4096 of sendtype int-indexed(2 blocks) receive type MPI_INT
Putting count = 4096 of sendtype MPI_INT receive type recv-int-indexed(4-int)
Putting count = 4096 of sendtype MPI_SHORT receive type MPI_SHORT
Putting count = 4096 of sendtype MPI_LONG receive type MPI_LONG
Putting count = 4096 of sendtype MPI_CHAR receive type MPI_CHAR
Putting count = 4096 of sendtype MPI_UINT64_T receive type MPI_UINT64_T
Putting count = 4096 of sendtype MPI_FLOAT receive type MPI_FLOAT
Putting count = 4096 of sendtype MPI_INT receive type MPI_BYTE
Putting count = 8192 of sendtype MPI_INT receive type MPI_INT
Putting count = 8192 of sendtype MPI_DOUBLE receive type MPI_DOUBLE
Putting count = 8192 of sendtype MPI_FLOAT_INT receive type MPI_FLOAT_INT
Putting count = 8192 of sendtype dup of MPI_INT receive type dup of MPI_INT
Putting count = 8192 of sendtype int-vector receive type MPI_INT
Putting count = 8192 of sendtype int-indexed(4-int) receive type MPI_INT
Putting count = 8192 of sendtype int-indexed(2 blocks) receive type MPI_INT
Putting count = 8192 of sendtype MPI_INT receive type recv-int-indexed(4-int)
Putting count = 8192 of sendtype MPI_SHORT receive type MPI_SHORT
Putting count = 8192 of sendtype MPI_LONG receive type MPI_LONG
Putting count = 8192 of sendtype MPI_CHAR receive type MPI_CHAR
Putting count = 8192 of sendtype MPI_UINT64_T receive type MPI_UINT64_T
Putting count = 8192 of sendtype MPI_FLOAT receive type MPI_FLOAT
Putting count = 8192 of sendtype MPI_INT receive type MPI_BYTE
Putting count = 16384 of sendtype MPI_INT receive type MPI_INT
Putting count = 16384 of sendtype MPI_DOUBLE receive type MPI_DOUBLE
Putting count = 16384 of sendtype MPI_FLOAT_INT receive type MPI_FLOAT_INT
Putting count = 16384 of sendtype dup of MPI_INT receive type dup of MPI_INT
Putting count = 16384 of sendtype int-vector receive type MPI_INT
Putting count = 16384 of sendtype int-indexed(4-int) receive type MPI_INT
Putting count = 16384 of sendtype int-indexed(2 blocks) receive type MPI_INT
Putting count = 16384 of sendtype MPI_INT receive type recv-int-indexed(4-int)
Putting count = 16384 of sendtype MPI_SHORT receive type MPI_SHORT
Putting count = 16384 of sendtype MPI_LONG receive type MPI_LONG
Putting count = 16384 of sendtype MPI_CHAR receive type MPI_CHAR
Putting count = 16384 of sendtype MPI_UINT64_T receive type MPI_UINT64_T
Putting count = 16384 of sendtype MPI_FLOAT receive type MPI_FLOAT
Putting count = 16384 of sendtype MPI_INT receive type MPI_BYTE
Putting count = 32768 of sendtype MPI_INT receive type MPI_INT
Putting count = 32768 of sendtype MPI_DOUBLE receive type MPI_DOUBLE
Putting count = 32768 of sendtype MPI_FLOAT_INT receive type MPI_FLOAT_INT
Putting count = 32768 of sendtype dup of MPI_INT receive type dup of MPI_INT
Putting count = 32768 of sendtype int-vector receive type MPI_INT
Putting count = 32768 of sendtype int-indexed(4-int) receive type MPI_INT
Putting count = 32768 of sendtype int-indexed(2 blocks) receive type MPI_INT
Putting count = 32768 of sendtype MPI_INT receive type recv-int-indexed(4-int)
Putting count = 32768 of sendtype MPI_SHORT receive type MPI_SHORT
Putting count = 32768 of sendtype MPI_LONG receive type MPI_LONG
Putting count = 32768 of sendtype MPI_CHAR receive type MPI_CHAR
Putting count = 32768 of sendtype MPI_UINT64_T receive type MPI_UINT64_T
Putting count = 32768 of sendtype MPI_FLOAT receive type MPI_FLOAT
Putting count = 32768 of sendtype MPI_INT receive type MPI_BYTE
Putting count = 1 of sendtype MPI_INT receive type MPI_INT
Putting count = 1 of sendtype MPI_DOUBLE receive type MPI_DOUBLE
Putting count = 1 of sendtype MPI_FLOAT_INT receive type MPI_FLOAT_INT
Putting count = 1 of sendtype dup of MPI_INT receive type dup of MPI_INT
Putting count = 1 of sendtype int-vector receive type MPI_INT
Putting count = 1 of sendtype int-indexed(4-int) receive type MPI_INT
Putting count = 1 of sendtype int-indexed(2 blocks) receive type MPI_INT
Putting count = 1 of sendtype MPI_INT receive type recv-int-indexed(4-int)
Putting count = 1 of sendtype MPI_SHORT receive type MPI_SHORT
Putting count = 1 of sendtype MPI_LONG receive type MPI_LONG
Putting count = 1 of sendtype MPI_CHAR receive type MPI_CHAR
Putting count = 1 of sendtype MPI_UINT64_T receive type MPI_UINT64_T
Putting count = 1 of sendtype MPI_FLOAT receive type MPI_FLOAT
Putting count = 1 of sendtype MPI_INT receive type MPI_BYTE
Putting count = 2 of sendtype MPI_INT receive type MPI_INT
Putting count = 2 of sendtype MPI_DOUBLE receive type MPI_DOUBLE
Putting count = 2 of sendtype MPI_FLOAT_INT receive type MPI_FLOAT_INT
Putting count = 2 of sendtype dup of MPI_INT receive type dup of MPI_INT
Putting count = 2 of sendtype int-vector receive type MPI_INT
Putting count = 2 of sendtype int-indexed(4-int) receive type MPI_INT
Putting count = 2 of sendtype int-indexed(2 blocks) receive type MPI_INT
Putting count = 2 of sendtype MPI_INT receive type recv-int-indexed(4-int)
Putting count = 2 of sendtype MPI_SHORT receive type MPI_SHORT
Putting count = 2 of sendtype MPI_LONG receive type MPI_LONG
Putting count = 2 of sendtype MPI_CHAR receive type MPI_CHAR
Putting count = 2 of sendtype MPI_UINT64_T receive type MPI_UINT64_T
Putting count = 2 of sendtype MPI_FLOAT receive type MPI_FLOAT
Putting count = 2 of sendtype MPI_INT receive type MPI_BYTE
Putting count = 4 of sendtype MPI_INT receive type MPI_INT
Putting count = 4 of sendtype MPI_DOUBLE receive type MPI_DOUBLE
Putting count = 4 of sendtype MPI_FLOAT_INT receive type MPI_FLOAT_INT
Putting count = 4 of sendtype dup of MPI_INT receive type dup of MPI_INT
Putting count = 4 of sendtype int-vector receive type MPI_INT
Putting count = 4 of sendtype int-indexed(4-int) receive type MPI_INT
Putting count = 4 of sendtype int-indexed(2 blocks) receive type MPI_INT
Putting count = 4 of sendtype MPI_INT receive type recv-int-indexed(4-int)
Putting count = 4 of sendtype MPI_SHORT receive type MPI_SHORT
Putting count = 4 of sendtype MPI_LONG receive type MPI_LONG
Putting count = 4 of sendtype MPI_CHAR receive type MPI_CHAR
Putting count = 4 of sendtype MPI_UINT64_T receive type MPI_UINT64_T
Putting count = 4 of sendtype MPI_FLOAT receive type MPI_FLOAT
Putting count = 4 of sendtype MPI_INT receive type MPI_BYTE
Putting count = 8 of sendtype MPI_INT receive type MPI_INT
Putting count = 8 of sendtype MPI_DOUBLE receive type MPI_DOUBLE
Putting count = 8 of sendtype MPI_FLOAT_INT receive type MPI_FLOAT_INT
Putting count = 8 of sendtype dup of MPI_INT receive type dup of MPI_INT
Putting count = 8 of sendtype int-vector receive type MPI_INT
Putting count = 8 of sendtype int-indexed(4-int) receive type MPI_INT
Putting count = 8 of sendtype int-indexed(2 blocks) receive type MPI_INT
Putting count = 8 of sendtype MPI_INT receive type recv-int-indexed(4-int)
Putting count = 8 of sendtype MPI_SHORT receive type MPI_SHORT
Putting count = 8 of sendtype MPI_LONG receive type MPI_LONG
Putting count = 8 of sendtype MPI_CHAR receive type MPI_CHAR
Putting count = 8 of sendtype MPI_UINT64_T receive type MPI_UINT64_T
Putting count = 8 of sendtype MPI_FLOAT receive type MPI_FLOAT
Putting count = 8 of sendtype MPI_INT receive type MPI_BYTE
Putting count = 16 of sendtype MPI_INT receive type MPI_INT
Putting count = 16 of sendtype MPI_DOUBLE receive type MPI_DOUBLE
Putting count = 16 of sendtype MPI_FLOAT_INT receive type MPI_FLOAT_INT
Putting count = 16 of sendtype dup of MPI_INT receive type dup of MPI_INT
Putting count = 16 of sendtype int-vector receive type MPI_INT
Putting count = 16 of sendtype int-indexed(4-int) receive type MPI_INT
Putting count = 16 of sendtype int-indexed(2 blocks) receive type MPI_INT
Putting count = 16 of sendtype MPI_INT receive type recv-int-indexed(4-int)
Putting count = 16 of sendtype MPI_SHORT receive type MPI_SHORT
Putting count = 16 of sendtype MPI_LONG receive type MPI_LONG
Putting count = 16 of sendtype MPI_CHAR receive type MPI_CHAR
Putting count = 16 of sendtype MPI_UINT64_T receive type MPI_UINT64_T
Putting count = 16 of sendtype MPI_FLOAT receive type MPI_FLOAT
Putting count = 16 of sendtype MPI_INT receive type MPI_BYTE
Putting count = 32 of sendtype MPI_INT receive type MPI_INT
Putting count = 32 of sendtype MPI_DOUBLE receive type MPI_DOUBLE
Putting count = 32 of sendtype MPI_FLOAT_INT receive type MPI_FLOAT_INT
Putting count = 32 of sendtype dup of MPI_INT receive type dup of MPI_INT
Putting count = 32 of sendtype int-vector receive type MPI_INT
Putting count = 32 of sendtype int-indexed(4-int) receive type MPI_INT
Putting count = 32 of sendtype int-indexed(2 blocks) receive type MPI_INT
Putting count = 32 of sendtype MPI_INT receive type recv-int-indexed(4-int)
Putting count = 32 of sendtype MPI_SHORT receive type MPI_SHORT
Putting count = 32 of sendtype MPI_LONG receive type MPI_LONG
Putting count = 32 of sendtype MPI_CHAR receive type MPI_CHAR
Putting count = 32 of sendtype MPI_UINT64_T receive type MPI_UINT64_T
Putting count = 32 of sendtype MPI_FLOAT receive type MPI_FLOAT
Putting count = 32 of sendtype MPI_INT receive type MPI_BYTE
Putting count = 64 of sendtype MPI_INT receive type MPI_INT
Putting count = 64 of sendtype MPI_DOUBLE receive type MPI_DOUBLE
Putting count = 64 of sendtype MPI_FLOAT_INT receive type MPI_FLOAT_INT
Putting count = 64 of sendtype dup of MPI_INT receive type dup of MPI_INT
Putting count = 64 of sendtype int-vector receive type MPI_INT
Putting count = 64 of sendtype int-indexed(4-int) receive type MPI_INT
Putting count = 64 of sendtype int-indexed(2 blocks) receive type MPI_INT
Putting count = 64 of sendtype MPI_INT receive type recv-int-indexed(4-int)
Putting count = 64 of sendtype MPI_SHORT receive type MPI_SHORT
Putting count = 64 of sendtype MPI_LONG receive type MPI_LONG
Putting count = 64 of sendtype MPI_CHAR receive type MPI_CHAR
Putting count = 64 of sendtype MPI_UINT64_T receive type MPI_UINT64_T
Putting count = 64 of sendtype MPI_FLOAT receive type MPI_FLOAT
Putting count = 64 of sendtype MPI_INT receive type MPI_BYTE
Putting count = 128 of sendtype MPI_INT receive type MPI_INT
Putting count = 128 of sendtype MPI_DOUBLE receive type MPI_DOUBLE
Putting count = 128 of sendtype MPI_FLOAT_INT receive type MPI_FLOAT_INT
Putting count = 128 of sendtype dup of MPI_INT receive type dup of MPI_INT
Putting count = 128 of sendtype int-vector receive type MPI_INT
Putting count = 128 of sendtype int-indexed(4-int) receive type MPI_INT
Putting count = 128 of sendtype int-indexed(2 blocks) receive type MPI_INT
Putting count = 128 of sendtype MPI_INT receive type recv-int-indexed(4-int)
Putting count = 128 of sendtype MPI_SHORT receive type MPI_SHORT
Putting count = 128 of sendtype MPI_LONG receive type MPI_LONG
Putting count = 128 of sendtype MPI_CHAR receive type MPI_CHAR
Putting count = 128 of sendtype MPI_UINT64_T receive type MPI_UINT64_T
Putting count = 128 of sendtype MPI_FLOAT receive type MPI_FLOAT
Putting count = 128 of sendtype MPI_INT receive type MPI_BYTE
Putting count = 256 of sendtype MPI_INT receive type MPI_INT
Putting count = 256 of sendtype MPI_DOUBLE receive type MPI_DOUBLE
Putting count = 256 of sendtype MPI_FLOAT_INT receive type MPI_FLOAT_INT
Putting count = 256 of sendtype dup of MPI_INT receive type dup of MPI_INT
Putting count = 256 of sendtype int-vector receive type MPI_INT
Putting count = 256 of sendtype int-indexed(4-int) receive type MPI_INT
Putting count = 256 of sendtype int-indexed(2 blocks) receive type MPI_INT
Putting count = 256 of sendtype MPI_INT receive type recv-int-indexed(4-int)
Putting count = 256 of sendtype MPI_SHORT receive type MPI_SHORT
Putting count = 256 of sendtype MPI_LONG receive type MPI_LONG
Putting count = 256 of sendtype MPI_CHAR receive type MPI_CHAR
Putting count = 256 of sendtype MPI_UINT64_T receive type MPI_UINT64_T
Putting count = 256 of sendtype MPI_FLOAT receive type MPI_FLOAT
Putting count = 256 of sendtype MPI_INT receive type MPI_BYTE
Putting count = 512 of sendtype MPI_INT receive type MPI_INT
Putting count = 512 of sendtype MPI_DOUBLE receive type MPI_DOUBLE
Putting count = 512 of sendtype MPI_FLOAT_INT receive type MPI_FLOAT_INT
Putting count = 512 of sendtype dup of MPI_INT receive type dup of MPI_INT
Putting count = 512 of sendtype int-vector receive type MPI_INT
Putting count = 512 of sendtype int-indexed(4-int) receive type MPI_INT
Putting count = 512 of sendtype int-indexed(2 blocks) receive type MPI_INT
Putting count = 512 of sendtype MPI_INT receive type recv-int-indexed(4-int)
Putting count = 512 of sendtype MPI_SHORT receive type MPI_SHORT
Putting count = 512 of sendtype MPI_LONG receive type MPI_LONG
Putting count = 512 of sendtype MPI_CHAR receive type MPI_CHAR
Putting count = 512 of sendtype MPI_UINT64_T receive type MPI_UINT64_T
Putting count = 512 of sendtype MPI_FLOAT receive type MPI_FLOAT
Putting count = 512 of sendtype MPI_INT receive type MPI_BYTE
Putting count = 1024 of sendtype MPI_INT receive type MPI_INT
Putting count = 1024 of sendtype MPI_DOUBLE receive type MPI_DOUBLE
Putting count = 1024 of sendtype MPI_FLOAT_INT receive type MPI_FLOAT_INT
Putting count = 1024 of sendtype dup of MPI_INT receive type dup of MPI_INT
Putting count = 1024 of sendtype int-vector receive type MPI_INT
Putting count = 1024 of sendtype int-indexed(4-int) receive type MPI_INT
Putting count = 1024 of sendtype int-indexed(2 blocks) receive type MPI_INT
Putting count = 1024 of sendtype MPI_INT receive type recv-int-indexed(4-int)
Putting count = 1024 of sendtype MPI_SHORT receive type MPI_SHORT
Putting count = 1024 of sendtype MPI_LONG receive type MPI_LONG
Putting count = 1024 of sendtype MPI_CHAR receive type MPI_CHAR
Putting count = 1024 of sendtype MPI_UINT64_T receive type MPI_UINT64_T
Putting count = 1024 of sendtype MPI_FLOAT receive type MPI_FLOAT
Putting count = 1024 of sendtype MPI_INT receive type MPI_BYTE
Putting count = 2048 of sendtype MPI_INT receive type MPI_INT
Putting count = 2048 of sendtype MPI_DOUBLE receive type MPI_DOUBLE
Putting count = 2048 of sendtype MPI_FLOAT_INT receive type MPI_FLOAT_INT
Putting count = 2048 of sendtype dup of MPI_INT receive type dup of MPI_INT
Putting count = 2048 of sendtype int-vector receive type MPI_INT
Putting count = 2048 of sendtype int-indexed(4-int) receive type MPI_INT
Putting count = 2048 of sendtype int-indexed(2 blocks) receive type MPI_INT
Putting count = 2048 of sendtype MPI_INT receive type recv-int-indexed(4-int)
Putting count = 2048 of sendtype MPI_SHORT receive type MPI_SHORT
Putting count = 2048 of sendtype MPI_LONG receive type MPI_LONG
Putting count = 2048 of sendtype MPI_CHAR receive type MPI_CHAR
Putting count = 2048 of sendtype MPI_UINT64_T receive type MPI_UINT64_T
Putting count = 2048 of sendtype MPI_FLOAT receive type MPI_FLOAT
Putting count = 2048 of sendtype MPI_INT receive type MPI_BYTE
Putting count = 4096 of sendtype MPI_INT receive type MPI_INT
Putting count = 4096 of sendtype MPI_DOUBLE receive type MPI_DOUBLE
Putting count = 4096 of sendtype MPI_FLOAT_INT receive type MPI_FLOAT_INT
Putting count = 4096 of sendtype dup of MPI_INT receive type dup of MPI_INT
Putting count = 4096 of sendtype int-vector receive type MPI_INT
Putting count = 4096 of sendtype int-indexed(4-int) receive type MPI_INT
Putting count = 4096 of sendtype int-indexed(2 blocks) receive type MPI_INT
Putting count = 4096 of sendtype MPI_INT receive type recv-int-indexed(4-int)
Putting count = 4096 of sendtype MPI_SHORT receive type MPI_SHORT
Putting count = 4096 of sendtype MPI_LONG receive type MPI_LONG
Putting count = 4096 of sendtype MPI_CHAR receive type MPI_CHAR
Putting count = 4096 of sendtype MPI_UINT64_T receive type MPI_UINT64_T
Putting count = 4096 of sendtype MPI_FLOAT receive type MPI_FLOAT
Putting count = 4096 of sendtype MPI_INT receive type MPI_BYTE
Putting count = 8192 of sendtype MPI_INT receive type MPI_INT
Putting count = 8192 of sendtype MPI_DOUBLE receive type MPI_DOUBLE
Putting count = 8192 of sendtype MPI_FLOAT_INT receive type MPI_FLOAT_INT
Putting count = 8192 of sendtype dup of MPI_INT receive type dup of MPI_INT
Putting count = 8192 of sendtype int-vector receive type MPI_INT
Putting count = 8192 of sendtype int-indexed(4-int) receive type MPI_INT
Putting count = 8192 of sendtype int-indexed(2 blocks) receive type MPI_INT
Putting count = 8192 of sendtype MPI_INT receive type recv-int-indexed(4-int)
Putting count = 8192 of sendtype MPI_SHORT receive type MPI_SHORT
Putting count = 8192 of sendtype MPI_LONG receive type MPI_LONG
Putting count = 8192 of sendtype MPI_CHAR receive type MPI_CHAR
Putting count = 8192 of sendtype MPI_UINT64_T receive type MPI_UINT64_T
Putting count = 8192 of sendtype MPI_FLOAT receive type MPI_FLOAT
Putting count = 8192 of sendtype MPI_INT receive type MPI_BYTE
Putting count = 16384 of sendtype MPI_INT receive type MPI_INT
Putting count = 16384 of sendtype MPI_DOUBLE receive type MPI_DOUBLE
Putting count = 16384 of sendtype MPI_FLOAT_INT receive type MPI_FLOAT_INT
Putting count = 16384 of sendtype dup of MPI_INT receive type dup of MPI_INT
Putting count = 16384 of sendtype int-vector receive type MPI_INT
Putting count = 16384 of sendtype int-indexed(4-int) receive type MPI_INT
Putting count = 16384 of sendtype int-indexed(2 blocks) receive type MPI_INT
Putting count = 16384 of sendtype MPI_INT receive type recv-int-indexed(4-int)
Putting count = 16384 of sendtype MPI_SHORT receive type MPI_SHORT
Putting count = 16384 of sendtype MPI_LONG receive type MPI_LONG
Putting count = 16384 of sendtype MPI_CHAR receive type MPI_CHAR
Putting count = 16384 of sendtype MPI_UINT64_T receive type MPI_UINT64_T
Putting count = 16384 of sendtype MPI_FLOAT receive type MPI_FLOAT
Putting count = 16384 of sendtype MPI_INT receive type MPI_BYTE
Putting count = 32768 of sendtype MPI_INT receive type MPI_INT
Putting count = 32768 of sendtype MPI_DOUBLE receive type MPI_DOUBLE
Putting count = 32768 of sendtype MPI_FLOAT_INT receive type MPI_FLOAT_INT
Putting count = 32768 of sendtype dup of MPI_INT receive type dup of MPI_INT
Putting count = 32768 of sendtype int-vector receive type MPI_INT
Putting count = 32768 of sendtype int-indexed(4-int) receive type MPI_INT
Putting count = 32768 of sendtype int-indexed(2 blocks) receive type MPI_INT
Putting count = 32768 of sendtype MPI_INT receive type recv-int-indexed(4-int)
Putting count = 32768 of sendtype MPI_SHORT receive type MPI_SHORT
Putting count = 32768 of sendtype MPI_LONG receive type MPI_LONG
Putting count = 32768 of sendtype MPI_CHAR receive type MPI_CHAR
Putting count = 32768 of sendtype MPI_UINT64_T receive type MPI_UINT64_T
Putting count = 32768 of sendtype MPI_FLOAT receive type MPI_FLOAT
Putting count = 32768 of sendtype MPI_INT receive type MPI_BYTE
Putting count = 1 of sendtype MPI_INT receive type MPI_INT
Putting count = 1 of sendtype MPI_DOUBLE receive type MPI_DOUBLE
Putting count = 1 of sendtype MPI_FLOAT_INT receive type MPI_FLOAT_INT
Putting count = 1 of sendtype dup of MPI_INT receive type dup of MPI_INT
Putting count = 1 of sendtype int-vector receive type MPI_INT
Putting count = 1 of sendtype int-indexed(4-int) receive type MPI_INT
Putting count = 1 of sendtype int-indexed(2 blocks) receive type MPI_INT
Putting count = 1 of sendtype MPI_INT receive type recv-int-indexed(4-int)
Putting count = 1 of sendtype MPI_SHORT receive type MPI_SHORT
Putting count = 1 of sendtype MPI_LONG receive type MPI_LONG
Putting count = 1 of sendtype MPI_CHAR receive type MPI_CHAR
Putting count = 1 of sendtype MPI_UINT64_T receive type MPI_UINT64_T
Putting count = 1 of sendtype MPI_FLOAT receive type MPI_FLOAT
Putting count = 1 of sendtype MPI_INT receive type MPI_BYTE
Putting count = 2 of sendtype MPI_INT receive type MPI_INT
Putting count = 2 of sendtype MPI_DOUBLE receive type MPI_DOUBLE
Putting count = 2 of sendtype MPI_FLOAT_INT receive type MPI_FLOAT_INT
Putting count = 2 of sendtype dup of MPI_INT receive type dup of MPI_INT
Putting count = 2 of sendtype int-vector receive type MPI_INT
Putting count = 2 of sendtype int-indexed(4-int) receive type MPI_INT
Putting count = 2 of sendtype int-indexed(2 blocks) receive type MPI_INT
Putting count = 2 of sendtype MPI_INT receive type recv-int-indexed(4-int)
Putting count = 2 of sendtype MPI_SHORT receive type MPI_SHORT
Putting count = 2 of sendtype MPI_LONG receive type MPI_LONG
Putting count = 2 of sendtype MPI_CHAR receive type MPI_CHAR
Putting count = 2 of sendtype MPI_UINT64_T receive type MPI_UINT64_T
Putting count = 2 of sendtype MPI_FLOAT receive type MPI_FLOAT
Putting count = 2 of sendtype MPI_INT receive type MPI_BYTE
Putting count = 4 of sendtype MPI_INT receive type MPI_INT
Putting count = 4 of sendtype MPI_DOUBLE receive type MPI_DOUBLE
Putting count = 4 of sendtype MPI_FLOAT_INT receive type MPI_FLOAT_INT
Putting count = 4 of sendtype dup of MPI_INT receive type dup of MPI_INT
Putting count = 4 of sendtype int-vector receive type MPI_INT
Putting count = 4 of sendtype int-indexed(4-int) receive type MPI_INT
Putting count = 4 of sendtype int-indexed(2 blocks) receive type MPI_INT
Putting count = 4 of sendtype MPI_INT receive type recv-int-indexed(4-int)
Putting count = 4 of sendtype MPI_SHORT receive type MPI_SHORT
Putting count = 4 of sendtype MPI_LONG receive type MPI_LONG
Putting count = 4 of sendtype MPI_CHAR receive type MPI_CHAR
Putting count = 4 of sendtype MPI_UINT64_T receive type MPI_UINT64_T
Putting count = 4 of sendtype MPI_FLOAT receive type MPI_FLOAT
Putting count = 4 of sendtype MPI_INT receive type MPI_BYTE
Putting count = 8 of sendtype MPI_INT receive type MPI_INT
Putting count = 8 of sendtype MPI_DOUBLE receive type MPI_DOUBLE
Putting count = 8 of sendtype MPI_FLOAT_INT receive type MPI_FLOAT_INT
Putting count = 8 of sendtype dup of MPI_INT receive type dup of MPI_INT
Putting count = 8 of sendtype int-vector receive type MPI_INT
Putting count = 8 of sendtype int-indexed(4-int) receive type MPI_INT
Putting count = 8 of sendtype int-indexed(2 blocks) receive type MPI_INT
Putting count = 8 of sendtype MPI_INT receive type recv-int-indexed(4-int)
Putting count = 8 of sendtype MPI_SHORT receive type MPI_SHORT
Putting count = 8 of sendtype MPI_LONG receive type MPI_LONG
Putting count = 8 of sendtype MPI_CHAR receive type MPI_CHAR
Putting count = 8 of sendtype MPI_UINT64_T receive type MPI_UINT64_T
Putting count = 8 of sendtype MPI_FLOAT receive type MPI_FLOAT
Putting count = 8 of sendtype MPI_INT receive type MPI_BYTE
Putting count = 16 of sendtype MPI_INT receive type MPI_INT
Putting count = 16 of sendtype MPI_DOUBLE receive type MPI_DOUBLE
Putting count = 16 of sendtype MPI_FLOAT_INT receive type MPI_FLOAT_INT
Putting count = 16 of sendtype dup of MPI_INT receive type dup of MPI_INT
Putting count = 16 of sendtype int-vector receive type MPI_INT
Putting count = 16 of sendtype int-indexed(4-int) receive type MPI_INT
Putting count = 16 of sendtype int-indexed(2 blocks) receive type MPI_INT
Putting count = 16 of sendtype MPI_INT receive type recv-int-indexed(4-int)
Putting count = 16 of sendtype MPI_SHORT receive type MPI_SHORT
Putting count = 16 of sendtype MPI_LONG receive type MPI_LONG
Putting count = 16 of sendtype MPI_CHAR receive type MPI_CHAR
Putting count = 16 of sendtype MPI_UINT64_T receive type MPI_UINT64_T
Putting count = 16 of sendtype MPI_FLOAT receive type MPI_FLOAT
Putting count = 16 of sendtype MPI_INT receive type MPI_BYTE
Putting count = 32 of sendtype MPI_INT receive type MPI_INT
Putting count = 32 of sendtype MPI_DOUBLE receive type MPI_DOUBLE
Putting count = 32 of sendtype MPI_FLOAT_INT receive type MPI_FLOAT_INT
Putting count = 32 of sendtype dup of MPI_INT receive type dup of MPI_INT
Putting count = 32 of sendtype int-vector receive type MPI_INT
Putting count = 32 of sendtype int-indexed(4-int) receive type MPI_INT
Putting count = 32 of sendtype int-indexed(2 blocks) receive type MPI_INT
Putting count = 32 of sendtype MPI_INT receive type recv-int-indexed(4-int)
Putting count = 32 of sendtype MPI_SHORT receive type MPI_SHORT
Putting count = 32 of sendtype MPI_LONG receive type MPI_LONG
Putting count = 32 of sendtype MPI_CHAR receive type MPI_CHAR
Putting count = 32 of sendtype MPI_UINT64_T receive type MPI_UINT64_T
Putting count = 32 of sendtype MPI_FLOAT receive type MPI_FLOAT
Putting count = 32 of sendtype MPI_INT receive type MPI_BYTE
Putting count = 64 of sendtype MPI_INT receive type MPI_INT
Putting count = 64 of sendtype MPI_DOUBLE receive type MPI_DOUBLE
Putting count = 64 of sendtype MPI_FLOAT_INT receive type MPI_FLOAT_INT
Putting count = 64 of sendtype dup of MPI_INT receive type dup of MPI_INT
Putting count = 64 of sendtype int-vector receive type MPI_INT
Putting count = 64 of sendtype int-indexed(4-int) receive type MPI_INT
Putting count = 64 of sendtype int-indexed(2 blocks) receive type MPI_INT
Putting count = 64 of sendtype MPI_INT receive type recv-int-indexed(4-int)
Putting count = 64 of sendtype MPI_SHORT receive type MPI_SHORT
Putting count = 64 of sendtype MPI_LONG receive type MPI_LONG
Putting count = 64 of sendtype MPI_CHAR receive type MPI_CHAR
Putting count = 64 of sendtype MPI_UINT64_T receive type MPI_UINT64_T
Putting count = 64 of sendtype MPI_FLOAT receive type MPI_FLOAT
Putting count = 64 of sendtype MPI_INT receive type MPI_BYTE
Putting count = 128 of sendtype MPI_INT receive type MPI_INT
Putting count = 128 of sendtype MPI_DOUBLE receive type MPI_DOUBLE
Putting count = 128 of sendtype MPI_FLOAT_INT receive type MPI_FLOAT_INT
Putting count = 128 of sendtype dup of MPI_INT receive type dup of MPI_INT
Putting count = 128 of sendtype int-vector receive type MPI_INT
Putting count = 128 of sendtype int-indexed(4-int) receive type MPI_INT
Putting count = 128 of sendtype int-indexed(2 blocks) receive type MPI_INT
Putting count = 128 of sendtype MPI_INT receive type recv-int-indexed(4-int)
Putting count = 128 of sendtype MPI_SHORT receive type MPI_SHORT
Putting count = 128 of sendtype MPI_LONG receive type MPI_LONG
Putting count = 128 of sendtype MPI_CHAR receive type MPI_CHAR
Putting count = 128 of sendtype MPI_UINT64_T receive type MPI_UINT64_T
Putting count = 128 of sendtype MPI_FLOAT receive type MPI_FLOAT
Putting count = 128 of sendtype MPI_INT receive type MPI_BYTE
Putting count = 256 of sendtype MPI_INT receive type MPI_INT
Putting count = 256 of sendtype MPI_DOUBLE receive type MPI_DOUBLE
Putting count = 256 of sendtype MPI_FLOAT_INT receive type MPI_FLOAT_INT
Putting count = 256 of sendtype dup of MPI_INT receive type dup of MPI_INT
Putting count = 256 of sendtype int-vector receive type MPI_INT
Putting count = 256 of sendtype int-indexed(4-int) receive type MPI_INT
Putting count = 256 of sendtype int-indexed(2 blocks) receive type MPI_INT
Putting count = 256 of sendtype MPI_INT receive type recv-int-indexed(4-int)
Putting count = 256 of sendtype MPI_SHORT receive type MPI_SHORT
Putting count = 256 of sendtype MPI_LONG receive type MPI_LONG
Putting count = 256 of sendtype MPI_CHAR receive type MPI_CHAR
Putting count = 256 of sendtype MPI_UINT64_T receive type MPI_UINT64_T
Putting count = 256 of sendtype MPI_FLOAT receive type MPI_FLOAT
Putting count = 256 of sendtype MPI_INT receive type MPI_BYTE
Putting count = 512 of sendtype MPI_INT receive type MPI_INT
Putting count = 512 of sendtype MPI_DOUBLE receive type MPI_DOUBLE
Putting count = 512 of sendtype MPI_FLOAT_INT receive type MPI_FLOAT_INT
Putting count = 512 of sendtype dup of MPI_INT receive type dup of MPI_INT
Putting count = 512 of sendtype int-vector receive type MPI_INT
Putting count = 512 of sendtype int-indexed(4-int) receive type MPI_INT
Putting count = 512 of sendtype int-indexed(2 blocks) receive type MPI_INT
Putting count = 512 of sendtype MPI_INT receive type recv-int-indexed(4-int)
Putting count = 512 of sendtype MPI_SHORT receive type MPI_SHORT
Putting count = 512 of sendtype MPI_LONG receive type MPI_LONG
Putting count = 512 of sendtype MPI_CHAR receive type MPI_CHAR
Putting count = 512 of sendtype MPI_UINT64_T receive type MPI_UINT64_T
Putting count = 512 of sendtype MPI_FLOAT receive type MPI_FLOAT
Putting count = 512 of sendtype MPI_INT receive type MPI_BYTE
Putting count = 1024 of sendtype MPI_INT receive type MPI_INT
Putting count = 1024 of sendtype MPI_DOUBLE receive type MPI_DOUBLE
Putting count = 1024 of sendtype MPI_FLOAT_INT receive type MPI_FLOAT_INT
Putting count = 1024 of sendtype dup of MPI_INT receive type dup of MPI_INT
Putting count = 1024 of sendtype int-vector receive type MPI_INT
Putting count = 1024 of sendtype int-indexed(4-int) receive type MPI_INT
Putting count = 1024 of sendtype int-indexed(2 blocks) receive type MPI_INT
Putting count = 1024 of sendtype MPI_INT receive type recv-int-indexed(4-int)
Putting count = 1024 of sendtype MPI_SHORT receive type MPI_SHORT
Putting count = 1024 of sendtype MPI_LONG receive type MPI_LONG
Putting count = 1024 of sendtype MPI_CHAR receive type MPI_CHAR
Putting count = 1024 of sendtype MPI_UINT64_T receive type MPI_UINT64_T
Putting count = 1024 of sendtype MPI_FLOAT receive type MPI_FLOAT
Putting count = 1024 of sendtype MPI_INT receive type MPI_BYTE
Putting count = 2048 of sendtype MPI_INT receive type MPI_INT
Putting count = 2048 of sendtype MPI_DOUBLE receive type MPI_DOUBLE
Putting count = 2048 of sendtype MPI_FLOAT_INT receive type MPI_FLOAT_INT
Putting count = 2048 of sendtype dup of MPI_INT receive type dup of MPI_INT
Putting count = 2048 of sendtype int-vector receive type MPI_INT
Putting count = 2048 of sendtype int-indexed(4-int) receive type MPI_INT
Putting count = 2048 of sendtype int-indexed(2 blocks) receive type MPI_INT
Putting count = 2048 of sendtype MPI_INT receive type recv-int-indexed(4-int)
Putting count = 2048 of sendtype MPI_SHORT receive type MPI_SHORT
Putting count = 2048 of sendtype MPI_LONG receive type MPI_LONG
Putting count = 2048 of sendtype MPI_CHAR receive type MPI_CHAR
Putting count = 2048 of sendtype MPI_UINT64_T receive type MPI_UINT64_T
Putting count = 2048 of sendtype MPI_FLOAT receive type MPI_FLOAT
Putting count = 2048 of sendtype MPI_INT receive type MPI_BYTE
Putting count = 4096 of sendtype MPI_INT receive type MPI_INT
Putting count = 4096 of sendtype MPI_DOUBLE receive type MPI_DOUBLE
Putting count = 4096 of sendtype MPI_FLOAT_INT receive type MPI_FLOAT_INT
Putting count = 4096 of sendtype dup of MPI_INT receive type dup of MPI_INT
Putting count = 4096 of sendtype int-vector receive type MPI_INT
Putting count = 4096 of sendtype int-indexed(4-int) receive type MPI_INT
Putting count = 4096 of sendtype int-indexed(2 blocks) receive type MPI_INT
Putting count = 4096 of sendtype MPI_INT receive type recv-int-indexed(4-int)
Putting count = 4096 of sendtype MPI_SHORT receive type MPI_SHORT
Putting count = 4096 of sendtype MPI_LONG receive type MPI_LONG
Putting count = 4096 of sendtype MPI_CHAR receive type MPI_CHAR
Putting count = 4096 of sendtype MPI_UINT64_T receive type MPI_UINT64_T
Putting count = 4096 of sendtype MPI_FLOAT receive type MPI_FLOAT
Putting count = 4096 of sendtype MPI_INT receive type MPI_BYTE
Putting count = 8192 of sendtype MPI_INT receive type MPI_INT
Putting count = 8192 of sendtype MPI_DOUBLE receive type MPI_DOUBLE
Putting count = 8192 of sendtype MPI_FLOAT_INT receive type MPI_FLOAT_INT
Putting count = 8192 of sendtype dup of MPI_INT receive type dup of MPI_INT
Putting count = 8192 of sendtype int-vector receive type MPI_INT
Putting count = 8192 of sendtype int-indexed(4-int) receive type MPI_INT
Putting count = 8192 of sendtype int-indexed(2 blocks) receive type MPI_INT
Putting count = 8192 of sendtype MPI_INT receive type recv-int-indexed(4-int)
Putting count = 8192 of sendtype MPI_SHORT receive type MPI_SHORT
Putting count = 8192 of sendtype MPI_LONG receive type MPI_LONG
Putting count = 8192 of sendtype MPI_CHAR receive type MPI_CHAR
Putting count = 8192 of sendtype MPI_UINT64_T receive type MPI_UINT64_T
Putting count = 8192 of sendtype MPI_FLOAT receive type MPI_FLOAT
Putting count = 8192 of sendtype MPI_INT receive type MPI_BYTE
Putting count = 16384 of sendtype MPI_INT receive type MPI_INT
Putting count = 16384 of sendtype MPI_DOUBLE receive type MPI_DOUBLE
Putting count = 16384 of sendtype MPI_FLOAT_INT receive type MPI_FLOAT_INT
Putting count = 16384 of sendtype dup of MPI_INT receive type dup of MPI_INT
Putting count = 16384 of sendtype int-vector receive type MPI_INT
Putting count = 16384 of sendtype int-indexed(4-int) receive type MPI_INT
Putting count = 16384 of sendtype int-indexed(2 blocks) receive type MPI_INT
Putting count = 16384 of sendtype MPI_INT receive type recv-int-indexed(4-int)
Putting count = 16384 of sendtype MPI_SHORT receive type MPI_SHORT
Putting count = 16384 of sendtype MPI_LONG receive type MPI_LONG
Putting count = 16384 of sendtype MPI_CHAR receive type MPI_CHAR
Putting count = 16384 of sendtype MPI_UINT64_T receive type MPI_UINT64_T
Putting count = 16384 of sendtype MPI_FLOAT receive type MPI_FLOAT
Putting count = 16384 of sendtype MPI_INT receive type MPI_BYTE
Putting count = 32768 of sendtype MPI_INT receive type MPI_INT
Putting count = 32768 of sendtype MPI_DOUBLE receive type MPI_DOUBLE
Putting count = 32768 of sendtype MPI_FLOAT_INT receive type MPI_FLOAT_INT
Putting count = 32768 of sendtype dup of MPI_INT receive type dup of MPI_INT
Putting count = 32768 of sendtype int-vector receive type MPI_INT
Putting count = 32768 of sendtype int-indexed(4-int) receive type MPI_INT
Putting count = 32768 of sendtype int-indexed(2 blocks) receive type MPI_INT
Putting count = 32768 of sendtype MPI_INT receive type recv-int-indexed(4-int)
Putting count = 32768 of sendtype MPI_SHORT receive type MPI_SHORT
Putting count = 32768 of sendtype MPI_LONG receive type MPI_LONG
Putting count = 32768 of sendtype MPI_CHAR receive type MPI_CHAR
Putting count = 32768 of sendtype MPI_UINT64_T receive type MPI_UINT64_T
Putting count = 32768 of sendtype MPI_FLOAT receive type MPI_FLOAT
Putting count = 32768 of sendtype MPI_INT receive type MPI_BYTE
Putting count = 1 of sendtype MPI_INT receive type MPI_INT
Putting count = 1 of sendtype MPI_DOUBLE receive type MPI_DOUBLE
Putting count = 1 of sendtype MPI_FLOAT_INT receive type MPI_FLOAT_INT
Putting count = 1 of sendtype dup of MPI_INT receive type dup of MPI_INT
Putting count = 1 of sendtype int-vector receive type MPI_INT
Putting count = 1 of sendtype int-indexed(4-int) receive type MPI_INT
Putting count = 1 of sendtype int-indexed(2 blocks) receive type MPI_INT
Putting count = 1 of sendtype MPI_INT receive type recv-int-indexed(4-int)
Putting count = 1 of sendtype MPI_SHORT receive type MPI_SHORT
Putting count = 1 of sendtype MPI_LONG receive type MPI_LONG
Putting count = 1 of sendtype MPI_CHAR receive type MPI_CHAR
Putting count = 1 of sendtype MPI_UINT64_T receive type MPI_UINT64_T
Putting count = 1 of sendtype MPI_FLOAT receive type MPI_FLOAT
Putting count = 1 of sendtype MPI_INT receive type MPI_BYTE
Putting count = 2 of sendtype MPI_INT receive type MPI_INT
Putting count = 2 of sendtype MPI_DOUBLE receive type MPI_DOUBLE
Putting count = 2 of sendtype MPI_FLOAT_INT receive type MPI_FLOAT_INT
Putting count = 2 of sendtype dup of MPI_INT receive type dup of MPI_INT
Putting count = 2 of sendtype int-vector receive type MPI_INT
Putting count = 2 of sendtype int-indexed(4-int) receive type MPI_INT
Putting count = 2 of sendtype int-indexed(2 blocks) receive type MPI_INT
Putting count = 2 of sendtype MPI_INT receive type recv-int-indexed(4-int)
Putting count = 2 of sendtype MPI_SHORT receive type MPI_SHORT
Putting count = 2 of sendtype MPI_LONG receive type MPI_LONG
Putting count = 2 of sendtype MPI_CHAR receive type MPI_CHAR
Putting count = 2 of sendtype MPI_UINT64_T receive type MPI_UINT64_T
Putting count = 2 of sendtype MPI_FLOAT receive type MPI_FLOAT
Putting count = 2 of sendtype MPI_INT receive type MPI_BYTE
Putting count = 4 of sendtype MPI_INT receive type MPI_INT
Putting count = 4 of sendtype MPI_DOUBLE receive type MPI_DOUBLE
Putting count = 4 of sendtype MPI_FLOAT_INT receive type MPI_FLOAT_INT
Putting count = 4 of sendtype dup of MPI_INT receive type dup of MPI_INT
Putting count = 4 of sendtype int-vector receive type MPI_INT
Putting count = 4 of sendtype int-indexed(4-int) receive type MPI_INT
Putting count = 4 of sendtype int-indexed(2 blocks) receive type MPI_INT
Putting count = 4 of sendtype MPI_INT receive type recv-int-indexed(4-int)
Putting count = 4 of sendtype MPI_SHORT receive type MPI_SHORT
Putting count = 4 of sendtype MPI_LONG receive type MPI_LONG
Putting count = 4 of sendtype MPI_CHAR receive type MPI_CHAR
Putting count = 4 of sendtype MPI_UINT64_T receive type MPI_UINT64_T
Putting count = 4 of sendtype MPI_FLOAT receive type MPI_FLOAT
Putting count = 4 of sendtype MPI_INT receive type MPI_BYTE
Putting count = 8 of sendtype MPI_INT receive type MPI_INT
Putting count = 8 of sendtype MPI_DOUBLE receive type MPI_DOUBLE
Putting count = 8 of sendtype MPI_FLOAT_INT receive type MPI_FLOAT_INT
Putting count = 8 of sendtype dup of MPI_INT receive type dup of MPI_INT
Putting count = 8 of sendtype int-vector receive type MPI_INT
Putting count = 8 of sendtype int-indexed(4-int) receive type MPI_INT
Putting count = 8 of sendtype int-indexed(2 blocks) receive type MPI_INT
Putting count = 8 of sendtype MPI_INT receive type recv-int-indexed(4-int)
Putting count = 8 of sendtype MPI_SHORT receive type MPI_SHORT
Putting count = 8 of sendtype MPI_LONG receive type MPI_LONG
Putting count = 8 of sendtype MPI_CHAR receive type MPI_CHAR
Putting count = 8 of sendtype MPI_UINT64_T receive type MPI_UINT64_T
Putting count = 8 of sendtype MPI_FLOAT receive type MPI_FLOAT
Putting count = 8 of sendtype MPI_INT receive type MPI_BYTE
Putting count = 16 of sendtype MPI_INT receive type MPI_INT
Putting count = 16 of sendtype MPI_DOUBLE receive type MPI_DOUBLE
Putting count = 16 of sendtype MPI_FLOAT_INT receive type MPI_FLOAT_INT
Putting count = 16 of sendtype dup of MPI_INT receive type dup of MPI_INT
Putting count = 16 of sendtype int-vector receive type MPI_INT
Putting count = 16 of sendtype int-indexed(4-int) receive type MPI_INT
Putting count = 16 of sendtype int-indexed(2 blocks) receive type MPI_INT
Putting count = 16 of sendtype MPI_INT receive type recv-int-indexed(4-int)
Putting count = 16 of sendtype MPI_SHORT receive type MPI_SHORT
Putting count = 16 of sendtype MPI_LONG receive type MPI_LONG
Putting count = 16 of sendtype MPI_CHAR receive type MPI_CHAR
Putting count = 16 of sendtype MPI_UINT64_T receive type MPI_UINT64_T
Putting count = 16 of sendtype MPI_FLOAT receive type MPI_FLOAT
Putting count = 16 of sendtype MPI_INT receive type MPI_BYTE
Putting count = 32 of sendtype MPI_INT receive type MPI_INT
Putting count = 32 of sendtype MPI_DOUBLE receive type MPI_DOUBLE
Putting count = 32 of sendtype MPI_FLOAT_INT receive type MPI_FLOAT_INT
Putting count = 32 of sendtype dup of MPI_INT receive type dup of MPI_INT
Putting count = 32 of sendtype int-vector receive type MPI_INT
Putting count = 32 of sendtype int-indexed(4-int) receive type MPI_INT
Putting count = 32 of sendtype int-indexed(2 blocks) receive type MPI_INT
Putting count = 32 of sendtype MPI_INT receive type recv-int-indexed(4-int)
Putting count = 32 of sendtype MPI_SHORT receive type MPI_SHORT
Putting count = 32 of sendtype MPI_LONG receive type MPI_LONG
Putting count = 32 of sendtype MPI_CHAR receive type MPI_CHAR
Putting count = 32 of sendtype MPI_UINT64_T receive type MPI_UINT64_T
Putting count = 32 of sendtype MPI_FLOAT receive type MPI_FLOAT
Putting count = 32 of sendtype MPI_INT receive type MPI_BYTE
Putting count = 64 of sendtype MPI_INT receive type MPI_INT
Putting count = 64 of sendtype MPI_DOUBLE receive type MPI_DOUBLE
Putting count = 64 of sendtype MPI_FLOAT_INT receive type MPI_FLOAT_INT
Putting count = 64 of sendtype dup of MPI_INT receive type dup of MPI_INT
Putting count = 64 of sendtype int-vector receive type MPI_INT
Putting count = 64 of sendtype int-indexed(4-int) receive type MPI_INT
Putting count = 64 of sendtype int-indexed(2 blocks) receive type MPI_INT
Putting count = 64 of sendtype MPI_INT receive type recv-int-indexed(4-int)
Putting count = 64 of sendtype MPI_SHORT receive type MPI_SHORT
Putting count = 64 of sendtype MPI_LONG receive type MPI_LONG
Putting count = 64 of sendtype MPI_CHAR receive type MPI_CHAR
Putting count = 64 of sendtype MPI_UINT64_T receive type MPI_UINT64_T
Putting count = 64 of sendtype MPI_FLOAT receive type MPI_FLOAT
Putting count = 64 of sendtype MPI_INT receive type MPI_BYTE
Putting count = 128 of sendtype MPI_INT receive type MPI_INT
Putting count = 128 of sendtype MPI_DOUBLE receive type MPI_DOUBLE
Putting count = 128 of sendtype MPI_FLOAT_INT receive type MPI_FLOAT_INT
Putting count = 128 of sendtype dup of MPI_INT receive type dup of MPI_INT
Putting count = 128 of sendtype int-vector receive type MPI_INT
Putting count = 128 of sendtype int-indexed(4-int) receive type MPI_INT
Putting count = 128 of sendtype int-indexed(2 blocks) receive type MPI_INT
Putting count = 128 of sendtype MPI_INT receive type recv-int-indexed(4-int)
Putting count = 128 of sendtype MPI_SHORT receive type MPI_SHORT
Putting count = 128 of sendtype MPI_LONG receive type MPI_LONG
Putting count = 128 of sendtype MPI_CHAR receive type MPI_CHAR
Putting count = 128 of sendtype MPI_UINT64_T receive type MPI_UINT64_T
Putting count = 128 of sendtype MPI_FLOAT receive type MPI_FLOAT
Putting count = 128 of sendtype MPI_INT receive type MPI_BYTE
Putting count = 256 of sendtype MPI_INT receive type MPI_INT
Putting count = 256 of sendtype MPI_DOUBLE receive type MPI_DOUBLE
Putting count = 256 of sendtype MPI_FLOAT_INT receive type MPI_FLOAT_INT
Putting count = 256 of sendtype dup of MPI_INT receive type dup of MPI_INT
Putting count = 256 of sendtype int-vector receive type MPI_INT
Putting count = 256 of sendtype int-indexed(4-int) receive type MPI_INT
Putting count = 256 of sendtype int-indexed(2 blocks) receive type MPI_INT
Putting count = 256 of sendtype MPI_INT receive type recv-int-indexed(4-int)
Putting count = 256 of sendtype MPI_SHORT receive type MPI_SHORT
Putting count = 256 of sendtype MPI_LONG receive type MPI_LONG
Putting count = 256 of sendtype MPI_CHAR receive type MPI_CHAR
Putting count = 256 of sendtype MPI_UINT64_T receive type MPI_UINT64_T
Putting count = 256 of sendtype MPI_FLOAT receive type MPI_FLOAT
Putting count = 256 of sendtype MPI_INT receive type MPI_BYTE
Putting count = 512 of sendtype MPI_INT receive type MPI_INT
Putting count = 512 of sendtype MPI_DOUBLE receive type MPI_DOUBLE
Putting count = 512 of sendtype MPI_FLOAT_INT receive type MPI_FLOAT_INT
Putting count = 512 of sendtype dup of MPI_INT receive type dup of MPI_INT
Putting count = 512 of sendtype int-vector receive type MPI_INT
Putting count = 512 of sendtype int-indexed(4-int) receive type MPI_INT
Putting count = 512 of sendtype int-indexed(2 blocks) receive type MPI_INT
Putting count = 512 of sendtype MPI_INT receive type recv-int-indexed(4-int)
Putting count = 512 of sendtype MPI_SHORT receive type MPI_SHORT
Putting count = 512 of sendtype MPI_LONG receive type MPI_LONG
Putting count = 512 of sendtype MPI_CHAR receive type MPI_CHAR
Putting count = 512 of sendtype MPI_UINT64_T receive type MPI_UINT64_T
Putting count = 512 of sendtype MPI_FLOAT receive type MPI_FLOAT
Putting count = 512 of sendtype MPI_INT receive type MPI_BYTE
Putting count = 1024 of sendtype MPI_INT receive type MPI_INT
Putting count = 1024 of sendtype MPI_DOUBLE receive type MPI_DOUBLE
Putting count = 1024 of sendtype MPI_FLOAT_INT receive type MPI_FLOAT_INT
Putting count = 1024 of sendtype dup of MPI_INT receive type dup of MPI_INT
Putting count = 1024 of sendtype int-vector receive type MPI_INT
Putting count = 1024 of sendtype int-indexed(4-int) receive type MPI_INT
Putting count = 1024 of sendtype int-indexed(2 blocks) receive type MPI_INT
Putting count = 1024 of sendtype MPI_INT receive type recv-int-indexed(4-int)
Putting count = 1024 of sendtype MPI_SHORT receive type MPI_SHORT
Putting count = 1024 of sendtype MPI_LONG receive type MPI_LONG
Putting count = 1024 of sendtype MPI_CHAR receive type MPI_CHAR
Putting count = 1024 of sendtype MPI_UINT64_T receive type MPI_UINT64_T
Putting count = 1024 of sendtype MPI_FLOAT receive type MPI_FLOAT
Putting count = 1024 of sendtype MPI_INT receive type MPI_BYTE
Putting count = 2048 of sendtype MPI_INT receive type MPI_INT
Putting count = 2048 of sendtype MPI_DOUBLE receive type MPI_DOUBLE
Putting count = 2048 of sendtype MPI_FLOAT_INT receive type MPI_FLOAT_INT
Putting count = 2048 of sendtype dup of MPI_INT receive type dup of MPI_INT
Putting count = 2048 of sendtype int-vector receive type MPI_INT
Putting count = 2048 of sendtype int-indexed(4-int) receive type MPI_INT
Putting count = 2048 of sendtype int-indexed(2 blocks) receive type MPI_INT
Putting count = 2048 of sendtype MPI_INT receive type recv-int-indexed(4-int)
Putting count = 2048 of sendtype MPI_SHORT receive type MPI_SHORT
Putting count = 2048 of sendtype MPI_LONG receive type MPI_LONG
Putting count = 2048 of sendtype MPI_CHAR receive type MPI_CHAR
Putting count = 2048 of sendtype MPI_UINT64_T receive type MPI_UINT64_T
Putting count = 2048 of sendtype MPI_FLOAT receive type MPI_FLOAT
Putting count = 2048 of sendtype MPI_INT receive type MPI_BYTE
Putting count = 4096 of sendtype MPI_INT receive type MPI_INT
Putting count = 4096 of sendtype MPI_DOUBLE receive type MPI_DOUBLE
Putting count = 4096 of sendtype MPI_FLOAT_INT receive type MPI_FLOAT_INT
Putting count = 4096 of sendtype dup of MPI_INT receive type dup of MPI_INT
Putting count = 4096 of sendtype int-vector receive type MPI_INT
Putting count = 4096 of sendtype int-indexed(4-int) receive type MPI_INT
Putting count = 4096 of sendtype int-indexed(2 blocks) receive type MPI_INT
Putting count = 4096 of sendtype MPI_INT receive type recv-int-indexed(4-int)
Putting count = 4096 of sendtype MPI_SHORT receive type MPI_SHORT
Putting count = 4096 of sendtype MPI_LONG receive type MPI_LONG
Putting count = 4096 of sendtype MPI_CHAR receive type MPI_CHAR
Putting count = 4096 of sendtype MPI_UINT64_T receive type MPI_UINT64_T
Putting count = 4096 of sendtype MPI_FLOAT receive type MPI_FLOAT
Putting count = 4096 of sendtype MPI_INT receive type MPI_BYTE
Putting count = 8192 of sendtype MPI_INT receive type MPI_INT
Putting count = 8192 of sendtype MPI_DOUBLE receive type MPI_DOUBLE
Putting count = 8192 of sendtype MPI_FLOAT_INT receive type MPI_FLOAT_INT
Putting count = 8192 of sendtype dup of MPI_INT receive type dup of MPI_INT
Putting count = 8192 of sendtype int-vector receive type MPI_INT
Putting count = 8192 of sendtype int-indexed(4-int) receive type MPI_INT
Putting count = 8192 of sendtype int-indexed(2 blocks) receive type MPI_INT
Putting count = 8192 of sendtype MPI_INT receive type recv-int-indexed(4-int)
Putting count = 8192 of sendtype MPI_SHORT receive type MPI_SHORT
Putting count = 8192 of sendtype MPI_LONG receive type MPI_LONG
Putting count = 8192 of sendtype MPI_CHAR receive type MPI_CHAR
Putting count = 8192 of sendtype MPI_UINT64_T receive type MPI_UINT64_T
Putting count = 8192 of sendtype MPI_FLOAT receive type MPI_FLOAT
Putting count = 8192 of sendtype MPI_INT receive type MPI_BYTE
Putting count = 16384 of sendtype MPI_INT receive type MPI_INT
Putting count = 16384 of sendtype MPI_DOUBLE receive type MPI_DOUBLE
Putting count = 16384 of sendtype MPI_FLOAT_INT receive type MPI_FLOAT_INT
Putting count = 16384 of sendtype dup of MPI_INT receive type dup of MPI_INT
Putting count = 16384 of sendtype int-vector receive type MPI_INT
Putting count = 16384 of sendtype int-indexed(4-int) receive type MPI_INT
Putting count = 16384 of sendtype int-indexed(2 blocks) receive type MPI_INT
Putting count = 16384 of sendtype MPI_INT receive type recv-int-indexed(4-int)
Putting count = 16384 of sendtype MPI_SHORT receive type MPI_SHORT
Putting count = 16384 of sendtype MPI_LONG receive type MPI_LONG
Putting count = 16384 of sendtype MPI_CHAR receive type MPI_CHAR
Putting count = 16384 of sendtype MPI_UINT64_T receive type MPI_UINT64_T
Putting count = 16384 of sendtype MPI_FLOAT receive type MPI_FLOAT
Putting count = 16384 of sendtype MPI_INT receive type MPI_BYTE
Putting count = 32768 of sendtype MPI_INT receive type MPI_INT
Putting count = 32768 of sendtype MPI_DOUBLE receive type MPI_DOUBLE
Putting count = 32768 of sendtype MPI_FLOAT_INT receive type MPI_FLOAT_INT
Putting count = 32768 of sendtype dup of MPI_INT receive type dup of MPI_INT
Putting count = 32768 of sendtype int-vector receive type MPI_INT
Putting count = 32768 of sendtype int-indexed(4-int) receive type MPI_INT
Putting count = 32768 of sendtype int-indexed(2 blocks) receive type MPI_INT
Putting count = 32768 of sendtype MPI_INT receive type recv-int-indexed(4-int)
Putting count = 32768 of sendtype MPI_SHORT receive type MPI_SHORT
Putting count = 32768 of sendtype MPI_LONG receive type MPI_LONG
Putting count = 32768 of sendtype MPI_CHAR receive type MPI_CHAR
Putting count = 32768 of sendtype MPI_UINT64_T receive type MPI_UINT64_T
Putting count = 32768 of sendtype MPI_FLOAT receive type MPI_FLOAT
Putting count = 32768 of sendtype MPI_INT receive type MPI_BYTE
Putting count = 1 of sendtype MPI_INT receive type MPI_INT
Putting count = 1 of sendtype MPI_DOUBLE receive type MPI_DOUBLE
Putting count = 1 of sendtype MPI_FLOAT_INT receive type MPI_FLOAT_INT
Putting count = 1 of sendtype dup of MPI_INT receive type dup of MPI_INT
Putting count = 1 of sendtype int-vector receive type MPI_INT
Putting count = 1 of sendtype int-indexed(4-int) receive type MPI_INT
Putting count = 1 of sendtype int-indexed(2 blocks) receive type MPI_INT
Putting count = 1 of sendtype MPI_INT receive type recv-int-indexed(4-int)
Putting count = 1 of sendtype MPI_SHORT receive type MPI_SHORT
Putting count = 1 of sendtype MPI_LONG receive type MPI_LONG
Putting count = 1 of sendtype MPI_CHAR receive type MPI_CHAR
Putting count = 1 of sendtype MPI_UINT64_T receive type MPI_UINT64_T
Putting count = 1 of sendtype MPI_FLOAT receive type MPI_FLOAT
Putting count = 1 of sendtype MPI_INT receive type MPI_BYTE
Putting count = 2 of sendtype MPI_INT receive type MPI_INT
Putting count = 2 of sendtype MPI_DOUBLE receive type MPI_DOUBLE
Putting count = 2 of sendtype MPI_FLOAT_INT receive type MPI_FLOAT_INT
Putting count = 2 of sendtype dup of MPI_INT receive type dup of MPI_INT
Putting count = 2 of sendtype int-vector receive type MPI_INT
Putting count = 2 of sendtype int-indexed(4-int) receive type MPI_INT
Putting count = 2 of sendtype int-indexed(2 blocks) receive type MPI_INT
Putting count = 2 of sendtype MPI_INT receive type recv-int-indexed(4-int)
Putting count = 2 of sendtype MPI_SHORT receive type MPI_SHORT
Putting count = 2 of sendtype MPI_LONG receive type MPI_LONG
Putting count = 2 of sendtype MPI_CHAR receive type MPI_CHAR
Putting count = 2 of sendtype MPI_UINT64_T receive type MPI_UINT64_T
Putting count = 2 of sendtype MPI_FLOAT receive type MPI_FLOAT
Putting count = 2 of sendtype MPI_INT receive type MPI_BYTE
Putting count = 4 of sendtype MPI_INT receive type MPI_INT
Putting count = 4 of sendtype MPI_DOUBLE receive type MPI_DOUBLE
Putting count = 4 of sendtype MPI_FLOAT_INT receive type MPI_FLOAT_INT
Putting count = 4 of sendtype dup of MPI_INT receive type dup of MPI_INT
Putting count = 4 of sendtype int-vector receive type MPI_INT
Putting count = 4 of sendtype int-indexed(4-int) receive type MPI_INT
Putting count = 4 of sendtype int-indexed(2 blocks) receive type MPI_INT
Putting count = 4 of sendtype MPI_INT receive type recv-int-indexed(4-int)
Putting count = 4 of sendtype MPI_SHORT receive type MPI_SHORT
Putting count = 4 of sendtype MPI_LONG receive type MPI_LONG
Putting count = 4 of sendtype MPI_CHAR receive type MPI_CHAR
Putting count = 4 of sendtype MPI_UINT64_T receive type MPI_UINT64_T
Putting count = 4 of sendtype MPI_FLOAT receive type MPI_FLOAT
Putting count = 4 of sendtype MPI_INT receive type MPI_BYTE
Putting count = 8 of sendtype MPI_INT receive type MPI_INT
Putting count = 8 of sendtype MPI_DOUBLE receive type MPI_DOUBLE
Putting count = 8 of sendtype MPI_FLOAT_INT receive type MPI_FLOAT_INT
Putting count = 8 of sendtype dup of MPI_INT receive type dup of MPI_INT
Putting count = 8 of sendtype int-vector receive type MPI_INT
Putting count = 8 of sendtype int-indexed(4-int) receive type MPI_INT
Putting count = 8 of sendtype int-indexed(2 blocks) receive type MPI_INT
Putting count = 8 of sendtype MPI_INT receive type recv-int-indexed(4-int)
Putting count = 8 of sendtype MPI_SHORT receive type MPI_SHORT
Putting count = 8 of sendtype MPI_LONG receive type MPI_LONG
Putting count = 8 of sendtype MPI_CHAR receive type MPI_CHAR
Putting count = 8 of sendtype MPI_UINT64_T receive type MPI_UINT64_T
Putting count = 8 of sendtype MPI_FLOAT receive type MPI_FLOAT
Putting count = 8 of sendtype MPI_INT receive type MPI_BYTE
Putting count = 16 of sendtype MPI_INT receive type MPI_INT
Putting count = 16 of sendtype MPI_DOUBLE receive type MPI_DOUBLE
Putting count = 16 of sendtype MPI_FLOAT_INT receive type MPI_FLOAT_INT
Putting count = 16 of sendtype dup of MPI_INT receive type dup of MPI_INT
Putting count = 16 of sendtype int-vector receive type MPI_INT
Putting count = 16 of sendtype int-indexed(4-int) receive type MPI_INT
Putting count = 16 of sendtype int-indexed(2 blocks) receive type MPI_INT
Putting count = 16 of sendtype MPI_INT receive type recv-int-indexed(4-int)
Putting count = 16 of sendtype MPI_SHORT receive type MPI_SHORT
Putting count = 16 of sendtype MPI_LONG receive type MPI_LONG
Putting count = 16 of sendtype MPI_CHAR receive type MPI_CHAR
Putting count = 16 of sendtype MPI_UINT64_T receive type MPI_UINT64_T
Putting count = 16 of sendtype MPI_FLOAT receive type MPI_FLOAT
Putting count = 16 of sendtype MPI_INT receive type MPI_BYTE
Putting count = 32 of sendtype MPI_INT receive type MPI_INT
Putting count = 32 of sendtype MPI_DOUBLE receive type MPI_DOUBLE
Putting count = 32 of sendtype MPI_FLOAT_INT receive type MPI_FLOAT_INT
Putting count = 32 of sendtype dup of MPI_INT receive type dup of MPI_INT
Putting count = 32 of sendtype int-vector receive type MPI_INT
Putting count = 32 of sendtype int-indexed(4-int) receive type MPI_INT
Putting count = 32 of sendtype int-indexed(2 blocks) receive type MPI_INT
Putting count = 32 of sendtype MPI_INT receive type recv-int-indexed(4-int)
Putting count = 32 of sendtype MPI_SHORT receive type MPI_SHORT
Putting count = 32 of sendtype MPI_LONG receive type MPI_LONG
Putting count = 32 of sendtype MPI_CHAR receive type MPI_CHAR
Putting count = 32 of sendtype MPI_UINT64_T receive type MPI_UINT64_T
Putting count = 32 of sendtype MPI_FLOAT receive type MPI_FLOAT
Putting count = 32 of sendtype MPI_INT receive type MPI_BYTE
Putting count = 64 of sendtype MPI_INT receive type MPI_INT
Putting count = 64 of sendtype MPI_DOUBLE receive type MPI_DOUBLE
Putting count = 64 of sendtype MPI_FLOAT_INT receive type MPI_FLOAT_INT
Putting count = 64 of sendtype dup of MPI_INT receive type dup of MPI_INT
Putting count = 64 of sendtype int-vector receive type MPI_INT
Putting count = 64 of sendtype int-indexed(4-int) receive type MPI_INT
Putting count = 64 of sendtype int-indexed(2 blocks) receive type MPI_INT
Putting count = 64 of sendtype MPI_INT receive type recv-int-indexed(4-int)
Putting count = 64 of sendtype MPI_SHORT receive type MPI_SHORT
Putting count = 64 of sendtype MPI_LONG receive type MPI_LONG
Putting count = 64 of sendtype MPI_CHAR receive type MPI_CHAR
Putting count = 64 of sendtype MPI_UINT64_T receive type MPI_UINT64_T
Putting count = 64 of sendtype MPI_FLOAT receive type MPI_FLOAT
Putting count = 64 of sendtype MPI_INT receive type MPI_BYTE
Putting count = 128 of sendtype MPI_INT receive type MPI_INT
Putting count = 128 of sendtype MPI_DOUBLE receive type MPI_DOUBLE
Putting count = 128 of sendtype MPI_FLOAT_INT receive type MPI_FLOAT_INT
Putting count = 128 of sendtype dup of MPI_INT receive type dup of MPI_INT
Putting count = 128 of sendtype int-vector receive type MPI_INT
Putting count = 128 of sendtype int-indexed(4-int) receive type MPI_INT
Putting count = 128 of sendtype int-indexed(2 blocks) receive type MPI_INT
Putting count = 128 of sendtype MPI_INT receive type recv-int-indexed(4-int)
Putting count = 128 of sendtype MPI_SHORT receive type MPI_SHORT
Putting count = 128 of sendtype MPI_LONG receive type MPI_LONG
Putting count = 128 of sendtype MPI_CHAR receive type MPI_CHAR
Putting count = 128 of sendtype MPI_UINT64_T receive type MPI_UINT64_T
Putting count = 128 of sendtype MPI_FLOAT receive type MPI_FLOAT
Putting count = 128 of sendtype MPI_INT receive type MPI_BYTE
Putting count = 256 of sendtype MPI_INT receive type MPI_INT
Putting count = 256 of sendtype MPI_DOUBLE receive type MPI_DOUBLE
Putting count = 256 of sendtype MPI_FLOAT_INT receive type MPI_FLOAT_INT
Putting count = 256 of sendtype dup of MPI_INT receive type dup of MPI_INT
Putting count = 256 of sendtype int-vector receive type MPI_INT
Putting count = 256 of sendtype int-indexed(4-int) receive type MPI_INT
Putting count = 256 of sendtype int-indexed(2 blocks) receive type MPI_INT
Putting count = 256 of sendtype MPI_INT receive type recv-int-indexed(4-int)
Putting count = 256 of sendtype MPI_SHORT receive type MPI_SHORT
Putting count = 256 of sendtype MPI_LONG receive type MPI_LONG
Putting count = 256 of sendtype MPI_CHAR receive type MPI_CHAR
Putting count = 256 of sendtype MPI_UINT64_T receive type MPI_UINT64_T
Putting count = 256 of sendtype MPI_FLOAT receive type MPI_FLOAT
Putting count = 256 of sendtype MPI_INT receive type MPI_BYTE
Putting count = 512 of sendtype MPI_INT receive type MPI_INT
Putting count = 512 of sendtype MPI_DOUBLE receive type MPI_DOUBLE
Putting count = 512 of sendtype MPI_FLOAT_INT receive type MPI_FLOAT_INT
Putting count = 512 of sendtype dup of MPI_INT receive type dup of MPI_INT
Putting count = 512 of sendtype int-vector receive type MPI_INT
Putting count = 512 of sendtype int-indexed(4-int) receive type MPI_INT
Putting count = 512 of sendtype int-indexed(2 blocks) receive type MPI_INT
Putting count = 512 of sendtype MPI_INT receive type recv-int-indexed(4-int)
Putting count = 512 of sendtype MPI_SHORT receive type MPI_SHORT
Putting count = 512 of sendtype MPI_LONG receive type MPI_LONG
Putting count = 512 of sendtype MPI_CHAR receive type MPI_CHAR
Putting count = 512 of sendtype MPI_UINT64_T receive type MPI_UINT64_T
Putting count = 512 of sendtype MPI_FLOAT receive type MPI_FLOAT
Putting count = 512 of sendtype MPI_INT receive type MPI_BYTE
Putting count = 1024 of sendtype MPI_INT receive type MPI_INT
Putting count = 1024 of sendtype MPI_DOUBLE receive type MPI_DOUBLE
Putting count = 1024 of sendtype MPI_FLOAT_INT receive type MPI_FLOAT_INT
Putting count = 1024 of sendtype dup of MPI_INT receive type dup of MPI_INT
Putting count = 1024 of sendtype int-vector receive type MPI_INT
Putting count = 1024 of sendtype int-indexed(4-int) receive type MPI_INT
Putting count = 1024 of sendtype int-indexed(2 blocks) receive type MPI_INT
Putting count = 1024 of sendtype MPI_INT receive type recv-int-indexed(4-int)
Putting count = 1024 of sendtype MPI_SHORT receive type MPI_SHORT
Putting count = 1024 of sendtype MPI_LONG receive type MPI_LONG
Putting count = 1024 of sendtype MPI_CHAR receive type MPI_CHAR
Putting count = 1024 of sendtype MPI_UINT64_T receive type MPI_UINT64_T
Putting count = 1024 of sendtype MPI_FLOAT receive type MPI_FLOAT
Putting count = 1024 of sendtype MPI_INT receive type MPI_BYTE
Putting count = 2048 of sendtype MPI_INT receive type MPI_INT
Putting count = 2048 of sendtype MPI_DOUBLE receive type MPI_DOUBLE
Putting count = 2048 of sendtype MPI_FLOAT_INT receive type MPI_FLOAT_INT
Putting count = 2048 of sendtype dup of MPI_INT receive type dup of MPI_INT
Putting count = 2048 of sendtype int-vector receive type MPI_INT
Putting count = 2048 of sendtype int-indexed(4-int) receive type MPI_INT
Putting count = 2048 of sendtype int-indexed(2 blocks) receive type MPI_INT
Putting count = 2048 of sendtype MPI_INT receive type recv-int-indexed(4-int)
Putting count = 2048 of sendtype MPI_SHORT receive type MPI_SHORT
Putting count = 2048 of sendtype MPI_LONG receive type MPI_LONG
Putting count = 2048 of sendtype MPI_CHAR receive type MPI_CHAR
Putting count = 2048 of sendtype MPI_UINT64_T receive type MPI_UINT64_T
Putting count = 2048 of sendtype MPI_FLOAT receive type MPI_FLOAT
Putting count = 2048 of sendtype MPI_INT receive type MPI_BYTE
Putting count = 4096 of sendtype MPI_INT receive type MPI_INT
Putting count = 4096 of sendtype MPI_DOUBLE receive type MPI_DOUBLE
Putting count = 4096 of sendtype MPI_FLOAT_INT receive type MPI_FLOAT_INT
Putting count = 4096 of sendtype dup of MPI_INT receive type dup of MPI_INT
Putting count = 4096 of sendtype int-vector receive type MPI_INT
Putting count = 4096 of sendtype int-indexed(4-int) receive type MPI_INT
Putting count = 4096 of sendtype int-indexed(2 blocks) receive type MPI_INT
Putting count = 4096 of sendtype MPI_INT receive type recv-int-indexed(4-int)
Putting count = 4096 of sendtype MPI_SHORT receive type MPI_SHORT
Putting count = 4096 of sendtype MPI_LONG receive type MPI_LONG
Putting count = 4096 of sendtype MPI_CHAR receive type MPI_CHAR
Putting count = 4096 of sendtype MPI_UINT64_T receive type MPI_UINT64_T
Putting count = 4096 of sendtype MPI_FLOAT receive type MPI_FLOAT
Putting count = 4096 of sendtype MPI_INT receive type MPI_BYTE
Putting count = 8192 of sendtype MPI_INT receive type MPI_INT
Putting count = 8192 of sendtype MPI_DOUBLE receive type MPI_DOUBLE
Putting count = 8192 of sendtype MPI_FLOAT_INT receive type MPI_FLOAT_INT
Putting count = 8192 of sendtype dup of MPI_INT receive type dup of MPI_INT
Putting count = 8192 of sendtype int-vector receive type MPI_INT
Putting count = 8192 of sendtype int-indexed(4-int) receive type MPI_INT
Putting count = 8192 of sendtype int-indexed(2 blocks) receive type MPI_INT
Putting count = 8192 of sendtype MPI_INT receive type recv-int-indexed(4-int)
Putting count = 8192 of sendtype MPI_SHORT receive type MPI_SHORT
Putting count = 8192 of sendtype MPI_LONG receive type MPI_LONG
Putting count = 8192 of sendtype MPI_CHAR receive type MPI_CHAR
Putting count = 8192 of sendtype MPI_UINT64_T receive type MPI_UINT64_T
Putting count = 8192 of sendtype MPI_FLOAT receive type MPI_FLOAT
Putting count = 8192 of sendtype MPI_INT receive type MPI_BYTE
Putting count = 16384 of sendtype MPI_INT receive type MPI_INT
Putting count = 16384 of sendtype MPI_DOUBLE receive type MPI_DOUBLE
Putting count = 16384 of sendtype MPI_FLOAT_INT receive type MPI_FLOAT_INT
Putting count = 16384 of sendtype dup of MPI_INT receive type dup of MPI_INT
Putting count = 16384 of sendtype int-vector receive type MPI_INT
Putting count = 16384 of sendtype int-indexed(4-int) receive type MPI_INT
Putting count = 16384 of sendtype int-indexed(2 blocks) receive type MPI_INT
Putting count = 16384 of sendtype MPI_INT receive type recv-int-indexed(4-int)
Putting count = 16384 of sendtype MPI_SHORT receive type MPI_SHORT
Putting count = 16384 of sendtype MPI_LONG receive type MPI_LONG
Putting count = 16384 of sendtype MPI_CHAR receive type MPI_CHAR
Putting count = 16384 of sendtype MPI_UINT64_T receive type MPI_UINT64_T
Putting count = 16384 of sendtype MPI_FLOAT receive type MPI_FLOAT
Putting count = 16384 of sendtype MPI_INT receive type MPI_BYTE
Putting count = 32768 of sendtype MPI_INT receive type MPI_INT
Putting count = 32768 of sendtype MPI_DOUBLE receive type MPI_DOUBLE
Putting count = 32768 of sendtype MPI_FLOAT_INT receive type MPI_FLOAT_INT
Putting count = 32768 of sendtype dup of MPI_INT receive type dup of MPI_INT
Putting count = 32768 of sendtype int-vector receive type MPI_INT
Putting count = 32768 of sendtype int-indexed(4-int) receive type MPI_INT
Putting count = 32768 of sendtype int-indexed(2 blocks) receive type MPI_INT
Putting count = 32768 of sendtype MPI_INT receive type recv-int-indexed(4-int)
Putting count = 32768 of sendtype MPI_SHORT receive type MPI_SHORT
Putting count = 32768 of sendtype MPI_LONG receive type MPI_LONG
Putting count = 32768 of sendtype MPI_CHAR receive type MPI_CHAR
Putting count = 32768 of sendtype MPI_UINT64_T receive type MPI_UINT64_T
Putting count = 32768 of sendtype MPI_FLOAT receive type MPI_FLOAT
Putting count = 32768 of sendtype MPI_INT receive type MPI_BYTE
Putting count = 1 of sendtype MPI_INT receive type MPI_INT
Putting count = 1 of sendtype MPI_DOUBLE receive type MPI_DOUBLE
Putting count = 1 of sendtype MPI_FLOAT_INT receive type MPI_FLOAT_INT
Putting count = 1 of sendtype dup of MPI_INT receive type dup of MPI_INT
Putting count = 1 of sendtype int-vector receive type MPI_INT
Putting count = 1 of sendtype int-indexed(4-int) receive type MPI_INT
Putting count = 1 of sendtype int-indexed(2 blocks) receive type MPI_INT
Putting count = 1 of sendtype MPI_INT receive type recv-int-indexed(4-int)
Putting count = 1 of sendtype MPI_SHORT receive type MPI_SHORT
Putting count = 1 of sendtype MPI_LONG receive type MPI_LONG
Putting count = 1 of sendtype MPI_CHAR receive type MPI_CHAR
Putting count = 1 of sendtype MPI_UINT64_T receive type MPI_UINT64_T
Putting count = 1 of sendtype MPI_FLOAT receive type MPI_FLOAT
Putting count = 1 of sendtype MPI_INT receive type MPI_BYTE
Putting count = 2 of sendtype MPI_INT receive type MPI_INT
Putting count = 2 of sendtype MPI_DOUBLE receive type MPI_DOUBLE
Putting count = 2 of sendtype MPI_FLOAT_INT receive type MPI_FLOAT_INT
Putting count = 2 of sendtype dup of MPI_INT receive type dup of MPI_INT
Putting count = 2 of sendtype int-vector receive type MPI_INT
Putting count = 2 of sendtype int-indexed(4-int) receive type MPI_INT
Putting count = 2 of sendtype int-indexed(2 blocks) receive type MPI_INT
Putting count = 2 of sendtype MPI_INT receive type recv-int-indexed(4-int)
Putting count = 2 of sendtype MPI_SHORT receive type MPI_SHORT
Putting count = 2 of sendtype MPI_LONG receive type MPI_LONG
Putting count = 2 of sendtype MPI_CHAR receive type MPI_CHAR
Putting count = 2 of sendtype MPI_UINT64_T receive type MPI_UINT64_T
Putting count = 2 of sendtype MPI_FLOAT receive type MPI_FLOAT
Putting count = 2 of sendtype MPI_INT receive type MPI_BYTE
Putting count = 4 of sendtype MPI_INT receive type MPI_INT
Putting count = 4 of sendtype MPI_DOUBLE receive type MPI_DOUBLE
Putting count = 4 of sendtype MPI_FLOAT_INT receive type MPI_FLOAT_INT
Putting count = 4 of sendtype dup of MPI_INT receive type dup of MPI_INT
Putting count = 4 of sendtype int-vector receive type MPI_INT
Putting count = 4 of sendtype int-indexed(4-int) receive type MPI_INT
Putting count = 4 of sendtype int-indexed(2 blocks) receive type MPI_INT
Putting count = 4 of sendtype MPI_INT receive type recv-int-indexed(4-int)
Putting count = 4 of sendtype MPI_SHORT receive type MPI_SHORT
Putting count = 4 of sendtype MPI_LONG receive type MPI_LONG
Putting count = 4 of sendtype MPI_CHAR receive type MPI_CHAR
Putting count = 4 of sendtype MPI_UINT64_T receive type MPI_UINT64_T
Putting count = 4 of sendtype MPI_FLOAT receive type MPI_FLOAT
Putting count = 4 of sendtype MPI_INT receive type MPI_BYTE
Putting count = 8 of sendtype MPI_INT receive type MPI_INT
Putting count = 8 of sendtype MPI_DOUBLE receive type MPI_DOUBLE
Putting count = 8 of sendtype MPI_FLOAT_INT receive type MPI_FLOAT_INT
Putting count = 8 of sendtype dup of MPI_INT receive type dup of MPI_INT
Putting count = 8 of sendtype int-vector receive type MPI_INT
Putting count = 8 of sendtype int-indexed(4-int) receive type MPI_INT
Putting count = 8 of sendtype int-indexed(2 blocks) receive type MPI_INT
Putting count = 8 of sendtype MPI_INT receive type recv-int-indexed(4-int)
Putting count = 8 of sendtype MPI_SHORT receive type MPI_SHORT
Putting count = 8 of sendtype MPI_LONG receive type MPI_LONG
Putting count = 8 of sendtype MPI_CHAR receive type MPI_CHAR
Putting count = 8 of sendtype MPI_UINT64_T receive type MPI_UINT64_T
Putting count = 8 of sendtype MPI_FLOAT receive type MPI_FLOAT
Putting count = 8 of sendtype MPI_INT receive type MPI_BYTE
Putting count = 16 of sendtype MPI_INT receive type MPI_INT
Putting count = 16 of sendtype MPI_DOUBLE receive type MPI_DOUBLE
Putting count = 16 of sendtype MPI_FLOAT_INT receive type MPI_FLOAT_INT
Putting count = 16 of sendtype dup of MPI_INT receive type dup of MPI_INT
Putting count = 16 of sendtype int-vector receive type MPI_INT
Putting count = 16 of sendtype int-indexed(4-int) receive type MPI_INT
Putting count = 16 of sendtype int-indexed(2 blocks) receive type MPI_INT
Putting count = 16 of sendtype MPI_INT receive type recv-int-indexed(4-int)
Putting count = 16 of sendtype MPI_SHORT receive type MPI_SHORT
Putting count = 16 of sendtype MPI_LONG receive type MPI_LONG
Putting count = 16 of sendtype MPI_CHAR receive type MPI_CHAR
Putting count = 16 of sendtype MPI_UINT64_T receive type MPI_UINT64_T
Putting count = 16 of sendtype MPI_FLOAT receive type MPI_FLOAT
Putting count = 16 of sendtype MPI_INT receive type MPI_BYTE
Putting count = 32 of sendtype MPI_INT receive type MPI_INT
Putting count = 32 of sendtype MPI_DOUBLE receive type MPI_DOUBLE
Putting count = 32 of sendtype MPI_FLOAT_INT receive type MPI_FLOAT_INT
Putting count = 32 of sendtype dup of MPI_INT receive type dup of MPI_INT
Putting count = 32 of sendtype int-vector receive type MPI_INT
Putting count = 32 of sendtype int-indexed(4-int) receive type MPI_INT
Putting count = 32 of sendtype int-indexed(2 blocks) receive type MPI_INT
Putting count = 32 of sendtype MPI_INT receive type recv-int-indexed(4-int)
Putting count = 32 of sendtype MPI_SHORT receive type MPI_SHORT
Putting count = 32 of sendtype MPI_LONG receive type MPI_LONG
Putting count = 32 of sendtype MPI_CHAR receive type MPI_CHAR
Putting count = 32 of sendtype MPI_UINT64_T receive type MPI_UINT64_T
Putting count = 32 of sendtype MPI_FLOAT receive type MPI_FLOAT
Putting count = 32 of sendtype MPI_INT receive type MPI_BYTE
Putting count = 64 of sendtype MPI_INT receive type MPI_INT
Putting count = 64 of sendtype MPI_DOUBLE receive type MPI_DOUBLE
Putting count = 64 of sendtype MPI_FLOAT_INT receive type MPI_FLOAT_INT
Putting count = 64 of sendtype dup of MPI_INT receive type dup of MPI_INT
Putting count = 64 of sendtype int-vector receive type MPI_INT
Putting count = 64 of sendtype int-indexed(4-int) receive type MPI_INT
Putting count = 64 of sendtype int-indexed(2 blocks) receive type MPI_INT
Putting count = 64 of sendtype MPI_INT receive type recv-int-indexed(4-int)
Putting count = 64 of sendtype MPI_SHORT receive type MPI_SHORT
Putting count = 64 of sendtype MPI_LONG receive type MPI_LONG
Putting count = 64 of sendtype MPI_CHAR receive type MPI_CHAR
Putting count = 64 of sendtype MPI_UINT64_T receive type MPI_UINT64_T
Putting count = 64 of sendtype MPI_FLOAT receive type MPI_FLOAT
Putting count = 64 of sendtype MPI_INT receive type MPI_BYTE
Putting count = 128 of sendtype MPI_INT receive type MPI_INT
Putting count = 128 of sendtype MPI_DOUBLE receive type MPI_DOUBLE
Putting count = 128 of sendtype MPI_FLOAT_INT receive type MPI_FLOAT_INT
Putting count = 128 of sendtype dup of MPI_INT receive type dup of MPI_INT
Putting count = 128 of sendtype int-vector receive type MPI_INT
Putting count = 128 of sendtype int-indexed(4-int) receive type MPI_INT
Putting count = 128 of sendtype int-indexed(2 blocks) receive type MPI_INT
Putting count = 128 of sendtype MPI_INT receive type recv-int-indexed(4-int)
Putting count = 128 of sendtype MPI_SHORT receive type MPI_SHORT
Putting count = 128 of sendtype MPI_LONG receive type MPI_LONG
Putting count = 128 of sendtype MPI_CHAR receive type MPI_CHAR
Putting count = 128 of sendtype MPI_UINT64_T receive type MPI_UINT64_T
Putting count = 128 of sendtype MPI_FLOAT receive type MPI_FLOAT
Putting count = 128 of sendtype MPI_INT receive type MPI_BYTE
Putting count = 256 of sendtype MPI_INT receive type MPI_INT
Putting count = 256 of sendtype MPI_DOUBLE receive type MPI_DOUBLE
Putting count = 256 of sendtype MPI_FLOAT_INT receive type MPI_FLOAT_INT
Putting count = 256 of sendtype dup of MPI_INT receive type dup of MPI_INT
Putting count = 256 of sendtype int-vector receive type MPI_INT
Putting count = 256 of sendtype int-indexed(4-int) receive type MPI_INT
Putting count = 256 of sendtype int-indexed(2 blocks) receive type MPI_INT
Putting count = 256 of sendtype MPI_INT receive type recv-int-indexed(4-int)
Putting count = 256 of sendtype MPI_SHORT receive type MPI_SHORT
Putting count = 256 of sendtype MPI_LONG receive type MPI_LONG
Putting count = 256 of sendtype MPI_CHAR receive type MPI_CHAR
Putting count = 256 of sendtype MPI_UINT64_T receive type MPI_UINT64_T
Putting count = 256 of sendtype MPI_FLOAT receive type MPI_FLOAT
Putting count = 256 of sendtype MPI_INT receive type MPI_BYTE
Putting count = 512 of sendtype MPI_INT receive type MPI_INT
Putting count = 512 of sendtype MPI_DOUBLE receive type MPI_DOUBLE
Putting count = 512 of sendtype MPI_FLOAT_INT receive type MPI_FLOAT_INT
Putting count = 512 of sendtype dup of MPI_INT receive type dup of MPI_INT
Putting count = 512 of sendtype int-vector receive type MPI_INT
Putting count = 512 of sendtype int-indexed(4-int) receive type MPI_INT
Putting count = 512 of sendtype int-indexed(2 blocks) receive type MPI_INT
Putting count = 512 of sendtype MPI_INT receive type recv-int-indexed(4-int)
Putting count = 512 of sendtype MPI_SHORT receive type MPI_SHORT
Putting count = 512 of sendtype MPI_LONG receive type MPI_LONG
Putting count = 512 of sendtype MPI_CHAR receive type MPI_CHAR
Putting count = 512 of sendtype MPI_UINT64_T receive type MPI_UINT64_T
Putting count = 512 of sendtype MPI_FLOAT receive type MPI_FLOAT
Putting count = 512 of sendtype MPI_INT receive type MPI_BYTE
Putting count = 1024 of sendtype MPI_INT receive type MPI_INT
Putting count = 1024 of sendtype MPI_DOUBLE receive type MPI_DOUBLE
Putting count = 1024 of sendtype MPI_FLOAT_INT receive type MPI_FLOAT_INT
Putting count = 1024 of sendtype dup of MPI_INT receive type dup of MPI_INT
Putting count = 1024 of sendtype int-vector receive type MPI_INT
Putting count = 1024 of sendtype int-indexed(4-int) receive type MPI_INT
Putting count = 1024 of sendtype int-indexed(2 blocks) receive type MPI_INT
Putting count = 1024 of sendtype MPI_INT receive type recv-int-indexed(4-int)
Putting count = 1024 of sendtype MPI_SHORT receive type MPI_SHORT
Putting count = 1024 of sendtype MPI_LONG receive type MPI_LONG
Putting count = 1024 of sendtype MPI_CHAR receive type MPI_CHAR
Putting count = 1024 of sendtype MPI_UINT64_T receive type MPI_UINT64_T
Putting count = 1024 of sendtype MPI_FLOAT receive type MPI_FLOAT
Putting count = 1024 of sendtype MPI_INT receive type MPI_BYTE
Putting count = 2048 of sendtype MPI_INT receive type MPI_INT
Putting count = 2048 of sendtype MPI_DOUBLE receive type MPI_DOUBLE
Putting count = 2048 of sendtype MPI_FLOAT_INT receive type MPI_FLOAT_INT
Putting count = 2048 of sendtype dup of MPI_INT receive type dup of MPI_INT
Putting count = 2048 of sendtype int-vector receive type MPI_INT
Putting count = 2048 of sendtype int-indexed(4-int) receive type MPI_INT
Putting count = 2048 of sendtype int-indexed(2 blocks) receive type MPI_INT
Putting count = 2048 of sendtype MPI_INT receive type recv-int-indexed(4-int)
Putting count = 2048 of sendtype MPI_SHORT receive type MPI_SHORT
Putting count = 2048 of sendtype MPI_LONG receive type MPI_LONG
Putting count = 2048 of sendtype MPI_CHAR receive type MPI_CHAR
Putting count = 2048 of sendtype MPI_UINT64_T receive type MPI_UINT64_T
Putting count = 2048 of sendtype MPI_FLOAT receive type MPI_FLOAT
Putting count = 2048 of sendtype MPI_INT receive type MPI_BYTE
Putting count = 4096 of sendtype MPI_INT receive type MPI_INT
Putting count = 4096 of sendtype MPI_DOUBLE receive type MPI_DOUBLE
Putting count = 4096 of sendtype MPI_FLOAT_INT receive type MPI_FLOAT_INT
Putting count = 4096 of sendtype dup of MPI_INT receive type dup of MPI_INT
Putting count = 4096 of sendtype int-vector receive type MPI_INT
Putting count = 4096 of sendtype int-indexed(4-int) receive type MPI_INT
Putting count = 4096 of sendtype int-indexed(2 blocks) receive type MPI_INT
Putting count = 4096 of sendtype MPI_INT receive type recv-int-indexed(4-int)
Putting count = 4096 of sendtype MPI_SHORT receive type MPI_SHORT
Putting count = 4096 of sendtype MPI_LONG receive type MPI_LONG
Putting count = 4096 of sendtype MPI_CHAR receive type MPI_CHAR
Putting count = 4096 of sendtype MPI_UINT64_T receive type MPI_UINT64_T
Putting count = 4096 of sendtype MPI_FLOAT receive type MPI_FLOAT
Putting count = 4096 of sendtype MPI_INT receive type MPI_BYTE
Putting count = 8192 of sendtype MPI_INT receive type MPI_INT
Putting count = 8192 of sendtype MPI_DOUBLE receive type MPI_DOUBLE
Putting count = 8192 of sendtype MPI_FLOAT_INT receive type MPI_FLOAT_INT
Putting count = 8192 of sendtype dup of MPI_INT receive type dup of MPI_INT
Putting count = 8192 of sendtype int-vector receive type MPI_INT
Putting count = 8192 of sendtype int-indexed(4-int) receive type MPI_INT
Putting count = 8192 of sendtype int-indexed(2 blocks) receive type MPI_INT
Putting count = 8192 of sendtype MPI_INT receive type recv-int-indexed(4-int)
Putting count = 8192 of sendtype MPI_SHORT receive type MPI_SHORT
Putting count = 8192 of sendtype MPI_LONG receive type MPI_LONG
Putting count = 8192 of sendtype MPI_CHAR receive type MPI_CHAR
Putting count = 8192 of sendtype MPI_UINT64_T receive type MPI_UINT64_T
Putting count = 8192 of sendtype MPI_FLOAT receive type MPI_FLOAT
Putting count = 8192 of sendtype MPI_INT receive type MPI_BYTE
Putting count = 16384 of sendtype MPI_INT receive type MPI_INT
Putting count = 16384 of sendtype MPI_DOUBLE receive type MPI_DOUBLE
Putting count = 16384 of sendtype MPI_FLOAT_INT receive type MPI_FLOAT_INT
Putting count = 16384 of sendtype dup of MPI_INT receive type dup of MPI_INT
Putting count = 16384 of sendtype int-vector receive type MPI_INT
Putting count = 16384 of sendtype int-indexed(4-int) receive type MPI_INT
Putting count = 16384 of sendtype int-indexed(2 blocks) receive type MPI_INT
Putting count = 16384 of sendtype MPI_INT receive type recv-int-indexed(4-int)
Putting count = 16384 of sendtype MPI_SHORT receive type MPI_SHORT
Putting count = 16384 of sendtype MPI_LONG receive type MPI_LONG
Putting count = 16384 of sendtype MPI_CHAR receive type MPI_CHAR
Putting count = 16384 of sendtype MPI_UINT64_T receive type MPI_UINT64_T
Putting count = 16384 of sendtype MPI_FLOAT receive type MPI_FLOAT
Putting count = 16384 of sendtype MPI_INT receive type MPI_BYTE
Putting count = 32768 of sendtype MPI_INT receive type MPI_INT
Putting count = 32768 of sendtype MPI_DOUBLE receive type MPI_DOUBLE
Putting count = 32768 of sendtype MPI_FLOAT_INT receive type MPI_FLOAT_INT
Putting count = 32768 of sendtype dup of MPI_INT receive type dup of MPI_INT
Putting count = 32768 of sendtype int-vector receive type MPI_INT
Putting count = 32768 of sendtype int-indexed(4-int) receive type MPI_INT
Putting count = 32768 of sendtype int-indexed(2 blocks) receive type MPI_INT
Putting count = 32768 of sendtype MPI_INT receive type recv-int-indexed(4-int)
Putting count = 32768 of sendtype MPI_SHORT receive type MPI_SHORT
Putting count = 32768 of sendtype MPI_LONG receive type MPI_LONG
Putting count = 32768 of sendtype MPI_CHAR receive type MPI_CHAR
Putting count = 32768 of sendtype MPI_UINT64_T receive type MPI_UINT64_T
Putting count = 32768 of sendtype MPI_FLOAT receive type MPI_FLOAT
Putting count = 32768 of sendtype MPI_INT receive type MPI_BYTE
No errors

Passed RMA fence put PSCW - putpscw1

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 4

Test Description:

Put with Post/Start/Complete/Wait using a selection of communicators and datatypes.

No errors
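
As a point of reference, a minimal sketch of the Post/Start/Complete/Wait put pattern this test exercises (the two-rank layout, buffer names, and values are illustrative assumptions, not taken from the suite; run with at least two processes):

#include <mpi.h>

int main(int argc, char **argv)
{
    int rank, buf = 0;
    MPI_Group world_group, peer_group;
    MPI_Win win;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_group(MPI_COMM_WORLD, &world_group);

    /* Expose one int per process as the window memory. */
    MPI_Win_create(&buf, sizeof(int), sizeof(int), MPI_INFO_NULL,
                   MPI_COMM_WORLD, &win);

    if (rank == 0) {                       /* origin */
        int one = 1, val = 42;
        MPI_Group_incl(world_group, 1, &one, &peer_group);
        MPI_Win_start(peer_group, 0, win); /* start access epoch to rank 1 */
        MPI_Put(&val, 1, MPI_INT, 1, 0, 1, MPI_INT, win);
        MPI_Win_complete(win);             /* end access epoch */
    } else if (rank == 1) {                /* target */
        int zero = 0;
        MPI_Group_incl(world_group, 1, &zero, &peer_group);
        MPI_Win_post(peer_group, 0, win);  /* expose window to rank 0 */
        MPI_Win_wait(win);                 /* wait for rank 0 to complete */
    }

    if (rank < 2)
        MPI_Group_free(&peer_group);
    MPI_Group_free(&world_group);
    MPI_Win_free(&win);
    MPI_Finalize();
    return 0;
}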

Passed RMA fence put base - put_base

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 2

Test Description:

This code performs N strided put operations into a 2d patch of a shared array. The array has dimensions [X, Y] and the subarray has dimensions [SUB_X, SUB_Y] and begins at index [0, 0]. The input and output buffers are specified using an MPI datatype. This test generates a datatype that is relative to an arbitrary base address in memory and tests the RMA implementation's ability to perform the correct transfer.

No errors
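
A minimal sketch of a fenced, strided put of the kind described above, using an MPI vector datatype for the 2D patch (the dimensions and names are illustrative assumptions, not the test's actual values; run with at least two processes):

#include <mpi.h>

#define X 8
#define Y 8
#define SUB_X 4
#define SUB_Y 4

int main(int argc, char **argv)
{
    int rank;
    double local[SUB_X][SUB_Y] = {{0}};
    double shared[X][Y] = {{0}};
    MPI_Datatype patch;
    MPI_Win win;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* SUB_X blocks of SUB_Y doubles, strided by a full row of Y doubles. */
    MPI_Type_vector(SUB_X, SUB_Y, Y, MPI_DOUBLE, &patch);
    MPI_Type_commit(&patch);

    MPI_Win_create(shared, sizeof(shared), sizeof(double), MPI_INFO_NULL,
                   MPI_COMM_WORLD, &win);

    MPI_Win_fence(0, win);
    if (rank == 0)
        /* Put the contiguous local buffer into the strided patch on rank 1. */
        MPI_Put(local, SUB_X * SUB_Y, MPI_DOUBLE, 1, 0, 1, patch, win);
    MPI_Win_fence(0, win);

    MPI_Win_free(&win);
    MPI_Type_free(&patch);
    MPI_Finalize();
    return 0;
}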

Passed RMA fence put bottom - put_bottom

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 2

Test Description:

One-Sided MPI 2-D Strided Put Test. This code performs N strided put operations into a 2d patch of a shared array. The array has dimensions [X, Y] and the subarray has dimensions [SUB_X, SUB_Y] and begins at index [0, 0]. The input and output buffers are specified using an MPI datatype. This test generates a datatype that is relative to MPI_BOTTOM and tests the RMA implementation's ability to perform the correct transfer.

No errors

Passed RMA fence put indexed - putfidx

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 4

Test Description:

Put with Fence for an indexed datatype. One MPI Implementation fails this test with sufficiently large values of blksize. It appears to convert this type to an incorrect contiguous move.

No errors

Passed RMA get attributes - baseattrwin

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 1

Test Description:

This test creates a window, then extracts its attributes through a series of MPI_Win_get_attr calls.

No errors
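
For illustration, a small sketch of querying the predefined window attributes this test extracts (the buffer and output format are hypothetical, not the test's own code):

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int buf[16], flag;
    void *base;
    MPI_Aint *size;
    int *disp_unit;
    MPI_Win win;

    MPI_Init(&argc, &argv);
    MPI_Win_create(buf, sizeof(buf), sizeof(int), MPI_INFO_NULL,
                   MPI_COMM_WORLD, &win);

    /* The predefined attributes give back the window base, size, and disp unit. */
    MPI_Win_get_attr(win, MPI_WIN_BASE, &base, &flag);
    MPI_Win_get_attr(win, MPI_WIN_SIZE, &size, &flag);
    MPI_Win_get_attr(win, MPI_WIN_DISP_UNIT, &disp_unit, &flag);
    if (flag)
        printf("base=%p size=%ld disp_unit=%d\n",
               base, (long)*size, *disp_unit);

    MPI_Win_free(&win);
    MPI_Finalize();
    return 0;
}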

Passed RMA lock contention accumulate - lockcontention

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 3

Test Description:

This is a modified version of Put, Gets, Accumulate test 9 (rma/test4). Tests passive target RMA on 3 processes. Tests the lock-single_op-unlock optimization.

No errors
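
A minimal sketch of the lock-single_op-unlock pattern referred to above (an exclusive lock is used here to keep the concurrent puts well defined; names and values are illustrative, not from the test):

#include <mpi.h>

int main(int argc, char **argv)
{
    int rank, winbuf = 0, target = 0;
    MPI_Win win;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    MPI_Win_create(&winbuf, sizeof(int), sizeof(int), MPI_INFO_NULL,
                   MPI_COMM_WORLD, &win);

    if (rank != target) {
        /* Lock, issue a single operation, unlock: the short epoch the
         * lock-single_op-unlock optimization is aimed at. */
        MPI_Win_lock(MPI_LOCK_EXCLUSIVE, target, 0, win);
        MPI_Put(&rank, 1, MPI_INT, target, 0, 1, MPI_INT, win);
        MPI_Win_unlock(target, win);
    }

    MPI_Win_free(&win);
    MPI_Finalize();
    return 0;
}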

Passed RMA lock contention basic - lockcontention2

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 8

Test Description:

Multiple tests for lock contention, including special cases within the MPI implementation; in this case, our coverage analysis showed the lockcontention test was not covering all cases and revealed a bug in the code. In all of these tests, each process writes (or accesses) the values rank + i*size_of_world for NELM times. This test strives to avoid operations not strictly permitted by MPI RMA, for example, it doesn't target the same locations with multiple put/get calls in the same access epoch.

No errors

Passed RMA lock contention optimized - lockcontention3

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 8

Test Description:

Multiple additional tests for lock contention. These are designed to exercise some of the optimizations within MPICH, but all are valid MPI programs. The test structure interleaves the following steps on the two processes:

Origin process:
    Lock local window (this must happen at this point, since the application can use load/store after the lock)
    Send message to partner
    Receive ack
    Provide a delay so that the partner will see the conflict (see below)
    Unlock
    Receive from partner
    Check for correct data

Partner process:
    Receive message
    Send ack
    Lock (note: this may block RMA operations)
    Unlock
    Send back to the origin

The delay may be implemented as a ring of message communication; this is likely to automatically scale the time to what is needed.

No errors

Passed RMA many ops basic - manyrma3

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 2

Test Description:

Many RMA operations. This simple test creates an RMA window, locks it, and performs many accumulate operations on it.

No errors

Passed RMA many ops sync - manyrma2

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 2

Test Description:

Tests for correct handling of the case where many RMA operations occur between synchronization events. Includes options for multiple different RMA operations, and is currently run for accumulate with fence. This is one of the ways that RMA may be used, and is used in the reference implementation of the graph500 benchmark.

No errors
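
A minimal sketch of issuing many accumulate operations inside a single fence epoch, the usage pattern this test measures (the operation count and names are illustrative assumptions):

#include <mpi.h>

#define NOPS 1000

int main(int argc, char **argv)
{
    int i, one = 1, counter = 0;
    MPI_Win win;

    MPI_Init(&argc, &argv);

    MPI_Win_create(&counter, sizeof(int), sizeof(int), MPI_INFO_NULL,
                   MPI_COMM_WORLD, &win);

    /* Many accumulates to rank 0 issued inside a single fence epoch. */
    MPI_Win_fence(0, win);
    for (i = 0; i < NOPS; i++)
        MPI_Accumulate(&one, 1, MPI_INT, 0, 0, 1, MPI_INT, MPI_SUM, win);
    MPI_Win_fence(0, win);

    MPI_Win_free(&win);
    MPI_Finalize();
    return 0;
}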

Passed RMA post/start/complete test - wintest

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 2

Test Description:

Tests put and get with post/start/complete/test on 2 processes. Same as "Put-Get-Accum PSCW" test (rma/test2), but uses win_test instead of win_wait.

No errors

Passed RMA post/start/complete/wait - accpscw1

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 4

Test Description:

Accumulate Post-Start-Complete-Wait. This test uses accumulate/replace with post/start/complete/wait for source and destination processes on a selection of communicators and datatypes.

No errors

Passed RMA rank 0 - selfrma

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 1

Test Description:

Test RMA calls to self using multiple RMA operations and checking the accuracy of the result.

No errors

Passed RMA zero-byte transfers - rmazero

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 2

Test Description:

Tests zero-byte transfers for a selection of communicators for many RMA operations using active target synchronization and request-based passive target synchronization.

No errors

Passed RMA zero-size compliance - badrma

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 2

Test Description:

The test uses various combinations of either zero size datatypes or zero size counts for Put, Get, Accumulate, and Get_Accumulate. All tests should pass to be compliant with the MPI-3.0 specification.

No errors

Passed Request-based operations - req_example

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 4

Test Description:

Example 11.21 from the MPI 3.0 spec. The following example shows how RMA request-based operations can be used to overlap communication with computation. Each process fetches, processes, and writes the result for NSTEPS chunks of data. Instead of a single buffer, M local buffers are used to allow up to M communication operations to overlap with computation.

No errors
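
A minimal sketch of a request-based RMA operation in the spirit of that example (a single MPI_Rput with a hypothetical buffer; the real example pipelines M buffers across NSTEPS chunks):

#include <mpi.h>

int main(int argc, char **argv)
{
    int rank, nprocs;
    double local = 3.14, winbuf = 0.0;
    MPI_Request req;
    MPI_Win win;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

    MPI_Win_create(&winbuf, sizeof(double), sizeof(double), MPI_INFO_NULL,
                   MPI_COMM_WORLD, &win);
    MPI_Win_lock_all(0, win);

    /* Request-based put to the next rank; independent computation can be
     * overlapped before waiting for local completion of the transfer. */
    MPI_Rput(&local, 1, MPI_DOUBLE, (rank + 1) % nprocs, 0, 1, MPI_DOUBLE,
             win, &req);
    /* ... computation that does not modify 'local' could run here ... */
    MPI_Wait(&req, MPI_STATUS_IGNORE);

    MPI_Win_unlock_all(win);
    MPI_Win_free(&win);
    MPI_Finalize();
    return 0;
}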

Passed Thread/RMA interaction - multirma

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 2

Test Description:

This is a simple test of threads in MPI.

No errors

Passed Win_allocate_shared zero - win_zero

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 4

Test Description:

Test MPI_Win_allocate_shared when size of the shared memory region is 0 and when the size is 0 on every other process and 1 on the others.

No errors

Passed Win_create_dynamic - win_dynamic_acc

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 4

Test Description:

This test exercises dynamic RMA windows using the MPI_Win_create_dynamic() and MPI_Accumulate() operations.

No errors
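
A minimal sketch of the dynamic-window pattern this test exercises: create the window, attach local memory, publish its address, and accumulate into it (buffer sizes and names are illustrative assumptions):

#include <mpi.h>

#define N 16

int main(int argc, char **argv)
{
    int rank, i, one = 1, buf[N];
    MPI_Aint disp;
    MPI_Win win;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    for (i = 0; i < N; i++) buf[i] = 0;

    /* Create an (initially empty) dynamic window, then attach local memory. */
    MPI_Win_create_dynamic(MPI_INFO_NULL, MPI_COMM_WORLD, &win);
    MPI_Win_attach(win, buf, N * sizeof(int));

    /* Dynamic windows address target memory by absolute address (disp_unit 1),
     * so rank 0's attached address must be published to the other ranks. */
    MPI_Get_address(buf, &disp);
    MPI_Bcast(&disp, 1, MPI_AINT, 0, MPI_COMM_WORLD);

    MPI_Win_fence(0, win);
    if (rank != 0)
        MPI_Accumulate(&one, 1, MPI_INT, 0, disp, 1, MPI_INT, MPI_SUM, win);
    MPI_Win_fence(0, win);

    MPI_Win_detach(win, buf);
    MPI_Win_free(&win);
    MPI_Finalize();
    return 0;
}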

Passed Win_create_errhandler - window_creation

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 2

Test Description:

This test creates 1000 RMA windows using MPI_Alloc_mem(), then frees the dynamic memory and the RMA windows that were created.

No errors

Passed Win_errhandler - wincall

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 2

Test Description:

This test creates and frees MPI error handlers in a loop (1000 iterations) to test the internal MPI RMA memory allocation routines.

No errors

Passed Win_flush basic - flush

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 4

Test Description:

Window Flush. This simple test flushes a shared window using MPI_Win_flush() and MPI_Win_flush_all().

No errors
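
A minimal sketch of flushing inside a passive-target epoch, as this test does (names and the ring-style target choice are illustrative assumptions):

#include <mpi.h>

int main(int argc, char **argv)
{
    int rank, nprocs, val, winbuf = 0;
    MPI_Win win;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

    MPI_Win_create(&winbuf, sizeof(int), sizeof(int), MPI_INFO_NULL,
                   MPI_COMM_WORLD, &win);

    /* One long passive-target epoch covering every target. */
    MPI_Win_lock_all(0, win);

    val = rank;
    MPI_Put(&val, 1, MPI_INT, (rank + 1) % nprocs, 0, 1, MPI_INT, win);
    MPI_Win_flush((rank + 1) % nprocs, win); /* complete ops to one target */
    MPI_Win_flush_all(win);                  /* complete ops to all targets */

    MPI_Win_unlock_all(win);
    MPI_Win_free(&win);
    MPI_Finalize();
    return 0;
}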

Passed Win_flush_local basic - flush_local

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 4

Test Description:

Window Flush. This simple test flushes a shared window using MPI_Win_flush_local() and MPI_Win_flush_local_all().

No errors

Passed Win_get_attr - win_flavors

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 4

Test Description:

This test determines which "flavor" of RMA is created by creating windows and using MPI_Win_get_attr to access the attributes of each window.

No errors

Passed Win_get_group basic - getgroup

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 4

Test Description:

This is a simple test of MPI_Win_get_group() for a selection of communicators.

No errors

Passed Win_info - win_info

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 4

Test Description:

This test creates an RMA info object, sets key/value pairs on the object, then duplicates the info object and confirms that the key/value pairs are in the same order as the original.

No errors

Passed Win_shared_query basic - win_shared

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 4

Test Description:

This simple test exercises MPI_Win_shared_query() by querying a shared window and verifying that it produced the correct results.

1 -- size = 40000 baseptr = 0x150e693c4000 my_baseptr = 0x150e693cdc40
0 -- size = 40000 baseptr = 0x14d5133c3000 my_baseptr = 0x14d5133c3000
No errors
0 -- size = 40000 baseptr = 0x14c529bfa000 my_baseptr = 0x14c529bfa000
1 -- size = 40000 baseptr = 0x14589a996000 my_baseptr = 0x14589a99fc40
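
A minimal sketch of allocating a shared window and querying another rank's segment with MPI_Win_shared_query() (sizes and names are illustrative assumptions; the MPI_Comm_split_type step restricts the window to ranks on one node):

#include <mpi.h>
#include <stdio.h>

#define N 10000

int main(int argc, char **argv)
{
    int rank, disp_unit;
    MPI_Aint size;
    double *mybase, *base0;
    MPI_Comm shmcomm;
    MPI_Win win;

    MPI_Init(&argc, &argv);

    /* Shared-memory windows require a communicator of ranks on the same node. */
    MPI_Comm_split_type(MPI_COMM_WORLD, MPI_COMM_TYPE_SHARED, 0,
                        MPI_INFO_NULL, &shmcomm);
    MPI_Comm_rank(shmcomm, &rank);

    MPI_Win_allocate_shared(N * sizeof(double), sizeof(double), MPI_INFO_NULL,
                            shmcomm, &mybase, &win);

    /* Query rank 0's segment; every segment is directly load/store accessible. */
    MPI_Win_shared_query(win, 0, &size, &disp_unit, &base0);
    printf("rank %d: my base %p, rank 0 base %p, size %ld\n",
           rank, (void *)mybase, (void *)base0, (long)size);

    MPI_Win_free(&win);
    MPI_Comm_free(&shmcomm);
    MPI_Finalize();
    return 0;
}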

Passed Win_shared_query non-contig put - win_shared_noncontig_put

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 4

Test Description:

MPI_Put test with noncontiguous datatypes using MPI_Win_shared_query() to query windows on different ranks and verify they produced the correct results.

No errors

Passed Win_shared_query non-contiguous - win_shared_noncontig

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 4

Test Description:

This test exercises MPI_Win_shared_query() by querying windows on different ranks and verifying they produced the correct results.

No errors

Passed Window attributes order - attrorderwin

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 1

Test Description:

Test creating, inserting, and deleting attributes in different orders using MPI_Win_set_attr and MPI_Win_delete_attr to ensure the list management code handles all cases.

No errors

Passed Window same_disp_unit - win_same_disp_unit

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 2

Test Description:

Test the acceptance of the MPI 3.1 standard same_disp_unit info key for window creation.

No errors

Passed {Get,set}_name - winname

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 2

Test Description:

This simple test exercises MPI_Win_set_name() and MPI_Win_get_name() using a selection of different windows.

No errors

Attributes Tests - Score: 100% Passed

This group features tests that involve attributes objects.

Passed At_Exit attribute order - attrend2

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 5

Test Description:

The MPI-2.2 specification makes it clear that attribute delete callbacks are invoked on MPI_COMM_WORLD and MPI_COMM_SELF at the very beginning of MPI_Finalize, in LIFO order with respect to the order in which they were set. This is useful for tools that want to perform the MPI equivalent of an "at_exit" action.

This test uses 20 attributes to ensure that the hash-table based MPI implementations do not accidentally pass the test except by being extremely "lucky". There are (20!) possible permutations providing about a 1 in 2.43e18 chance of getting LIFO ordering out of a hash table assuming a decent hash function is used.

No errors

Passed At_Exit function - attrend

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 4

Test Description:

This test demonstrates how to attach an "at-exit()" function to MPI_Finalize().

No errors
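
A minimal sketch of the technique: attach an attribute with a delete callback to MPI_COMM_SELF, so the callback runs at the start of MPI_Finalize (the callback body and names are illustrative assumptions):

#include <mpi.h>
#include <stdio.h>

/* Delete callback: runs at the start of MPI_Finalize because the attribute
 * is attached to MPI_COMM_SELF. */
static int at_exit_cb(MPI_Comm comm, int keyval, void *attr_val, void *extra)
{
    printf("about to finalize\n");
    return MPI_SUCCESS;
}

int main(int argc, char **argv)
{
    int keyval;

    MPI_Init(&argc, &argv);

    MPI_Comm_create_keyval(MPI_COMM_NULL_COPY_FN, at_exit_cb, &keyval, NULL);
    MPI_Comm_set_attr(MPI_COMM_SELF, keyval, NULL);
    MPI_Comm_free_keyval(&keyval);  /* the attribute itself stays attached */

    MPI_Finalize();                 /* deletes MPI_COMM_SELF attributes first */
    return 0;
}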

Passed Attribute callback error - attrerr

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 1

Test Description:

This test exercises attribute routines. It checks for correct behavior of the copy and delete functions on an attribute, particularly the correct behavior when the routine returns a failure.

MPI 1.2 Clarification: Clarification of Error Behavior of Attribute Callback Functions. Any return value other than MPI_SUCCESS is erroneous. The specific value returned to the user is undefined (other than it can't be MPI_SUCCESS). Proposals to specify particular values (e.g., user's value) failed.

No errors

Passed Attribute comm callback error - attrerrcomm

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 1

Test Description:

This test exercises attribute routines. It checks for correct behavior of the copy and delete functions on an attribute, particularly the correct behavior when the routine returns failure.

MPI 1.2 Clarification: Clarification of Error Behavior of Attribute Callback Functions. Any return value other than MPI_SUCCESS is erroneous. The specific value returned to the user is undefined (other than it can't be MPI_SUCCESS). Proposals to specify particular values (e.g., user's value) failed. This test is similar in function to attrerr but uses communicators.

No errors

Passed Attribute delete/get - attrdeleteget

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 1

Test Description:

This program illustrates the use of MPI_Comm_create_keyval() that creates a new attribute key.

No errors

Passed Attribute order - attrorder

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 1

Test Description:

This test creates and inserts attributes in different orders to ensure that the list management code handles all cases properly.

No errors

Passed Attribute type callback error - attrerrtype

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 1

Test Description:

This test checks for correct behavior of the copy and delete functions on an attribute, particularly the correct behavior when the routine returns failure.

Any return value other than MPI_SUCCESS is erroneous. The specific value returned to the user is undefined (other than it can't be MPI_SUCCESS). Proposals to specify particular values (e.g., user's value) have not been successful. This test is similar in function to attrerr but uses types.

No errors

Passed Attribute/Datatype - attr2type

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 1

Test Description:

This program creates a contiguous datatype from type MPI_INT, attaches an attribute to the type, duplicates it, then deletes both the original and duplicate type.

No errors

Passed Basic Attributes - baseattrcomm

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 1

Test Description:

This test accesses many attributes such as MPI_TAG_UB, MPI_HOST, MPI_IO, MPI_WTIME_IS_GLOBAL, and many others and reports any errors.

No errors
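
For illustration, a short sketch of querying two of these predefined attributes (the output format is illustrative):

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int flag, *tag_ub, *wtime_global;

    MPI_Init(&argc, &argv);

    /* Predefined attributes return a pointer to the value; 'flag' says
     * whether the attribute is set. */
    MPI_Comm_get_attr(MPI_COMM_WORLD, MPI_TAG_UB, &tag_ub, &flag);
    if (flag)
        printf("MPI_TAG_UB = %d\n", *tag_ub);

    MPI_Comm_get_attr(MPI_COMM_WORLD, MPI_WTIME_IS_GLOBAL, &wtime_global, &flag);
    if (flag)
        printf("MPI_WTIME_IS_GLOBAL = %d\n", *wtime_global);

    MPI_Finalize();
    return 0;
}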

Passed Basic MPI-3 attribute - baseattr2

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 1

Test Description:

This program tests the integrity of the MPI-3.0 base attributes. The attribute keys tested are: MPI_TAG_UB, MPI_HOST, MPI_IO, MPI_WTIME_IS_GLOBAL, MPI_APPNUM, MPI_UNIVERSE_SIZE, MPI_LASTUSEDCODE

No errors

Passed Communicator Attribute Order - attrordercomm

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 1

Test Description:

This test creates and inserts communicator attributes in different orders to ensure that the list management code handles all cases properly.

No errors

Passed Communicator attributes - attributes

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 1

Test Description:

Reports any communicator attributes that are not supported. The test is run as a single-process MPI job and fails if any attributes are not supported.

No errors

Passed Function keyval - fkeyval

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 1

Test Description:

This test illustrates the use of the copy and delete functions used in the manipulation of keyvals. It also tests to confirm that attributes are copied when communicators are duplicated.

No errors

Passed Intercommunicators - attric

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 4

Test Description:

This test exercises communicator attribute routines for intercommunicators.

start while loop, isLeft=TRUE
Keyval_create key=0xa4400000 value=9
Keyval_create key=0xa4400001 value=7
Comm_dup
start while loop, isLeft=TRUE
Keyval_create key=0xa4400000 value=9
Keyval_create key=0xa4400001 value=7
Comm_dup
Keyval_free key=0xa4400000
Keyval_free key=0xa4400001
Comm_free comm
Comm_free dup_comm
Keyval_free key=0xa4400000
Keyval_free key=0xa4400001
Comm_free comm
Comm_free dup_comm
start while loop, isLeft=FALSE
start while loop, isLeft=FALSE
Keyval_create key=0xa4400000 value=9
Keyval_create key=0xa4400001 value=7
Comm_dup
Keyval_free key=0xa4400000
Keyval_free key=0xa4400001
Comm_free comm
Comm_free dup_comm
Keyval_create key=0xa4400000 value=9
Keyval_create key=0xa4400001 value=7
Comm_dup
Keyval_free key=0xa4400000
Keyval_free key=0xa4400001
Comm_free comm
Comm_free dup_comm
start while loop, isLeft=TRUE
Keyval_create key=0xa4400000 value=9
Keyval_create key=0xa4400001 value=7
Comm_dup
start while loop, isLeft=FALSE
Keyval_create key=0xa4400000 value=9
Keyval_create key=0xa4400001 value=7
Comm_dup
start while loop, isLeft=FALSE
Keyval_create key=0xa4400000 value=9
Keyval_create key=0xa4400001 value=7
Comm_dup
Keyval_free key=0xa4400000
Keyval_free key=0xa4400001
Comm_free comm
Comm_free dup_comm
Keyval_free key=0xa4400000
Keyval_free key=0xa4400001
Comm_free comm
Comm_free dup_comm
Keyval_free key=0xa4400000
Keyval_free key=0xa4400001
Comm_free comm
Comm_free dup_comm
start while loop, isLeft=FALSE
Keyval_create key=0xa4400000 value=9
Keyval_create key=0xa4400001 value=7
Comm_dup
Keyval_free key=0xa4400000
Keyval_free key=0xa4400001
Comm_free comm
Comm_free dup_comm
start while loop, isLeft=TRUE
Keyval_create key=0xa4400000 value=9
Keyval_create key=0xa4400001 value=7
Comm_dup
Keyval_free key=0xa4400000
Keyval_free key=0xa4400001
Comm_free comm
Comm_free dup_comm
start while loop, isLeft=FALSE
Keyval_create key=0xa4400000 value=9
Keyval_create key=0xa4400001 value=7
Comm_dup
start while loop, isLeft=TRUE
Keyval_create key=0xa4400000 value=9
Keyval_create key=0xa4400001 value=7
Comm_dup
Keyval_free key=0xa4400000
Keyval_free key=0xa4400001
Comm_free comm
Comm_free dup_comm
start while loop, isLeft=FALSE
Keyval_create key=0xa4400000 value=9
Keyval_create key=0xa4400001 value=7
Comm_dup
Keyval_free key=0xa4400000
Keyval_free key=0xa4400001
Comm_free comm
Comm_free dup_comm
start while loop, isLeft=TRUE
Keyval_create key=0xa4400000 value=9
Keyval_create key=0xa4400001 value=7
Comm_dup
Keyval_free key=0xa4400000
Keyval_free key=0xa4400001
Comm_free comm
Comm_free dup_comm
Keyval_free key=0xa4400000
Keyval_free key=0xa4400001
Comm_free comm
Comm_free dup_comm
start while loop, isLeft=FALSE
Keyval_create key=0xa4400000 value=9
Keyval_create key=0xa4400001 value=7
Comm_dup
Keyval_free key=0xa4400000
Keyval_free key=0xa4400001
Comm_free comm
Comm_free dup_comm
start while loop, isLeft=TRUE
Keyval_create key=0xa4400000 value=9
Keyval_create key=0xa4400001 value=7
Comm_dup
Keyval_free key=0xa4400000
Keyval_free key=0xa4400001
Comm_free comm
Comm_free dup_comm
start while loop, isLeft=FALSE
Keyval_create key=0xa4400000 value=9
Keyval_create key=0xa4400001 value=7
Comm_dup
Keyval_free key=0xa4400000
Keyval_free key=0xa4400001
Comm_free comm
Comm_free dup_comm
start while loop, isLeft=TRUE
Keyval_create key=0xa4400000 value=9
Keyval_create key=0xa4400001 value=7
Comm_dup
start while loop, isLeft=FALSE
Keyval_create key=0xa4400000 value=9
Keyval_create key=0xa4400001 value=7
Comm_dup
Keyval_free key=0xa4400000
Keyval_free key=0xa4400001
Comm_free comm
Comm_free dup_comm
Keyval_free key=0xa4400000
Keyval_free key=0xa4400001
Comm_free comm
Comm_free dup_comm
start while loop, isLeft=TRUE
Keyval_create key=0xa4400000 value=9
Keyval_create key=0xa4400001 value=7
Comm_dup
Keyval_free key=0xa4400000
Keyval_free key=0xa4400001
Comm_free comm
Comm_free dup_comm
start while loop, isLeft=FALSE
Keyval_create key=0xa4400000 value=9
Keyval_create key=0xa4400001 value=7
Comm_dup
Keyval_free key=0xa4400000
Keyval_free key=0xa4400001
Comm_free comm
Comm_free dup_comm
start while loop, isLeft=TRUE
got COMM_NULL, skipping
start while loop, isLeft=FALSE
start while loop, isLeft=TRUE
Keyval_create key=0xa4400000 value=9
Keyval_create key=0xa4400001 value=7
Comm_dup
Keyval_free key=0xa4400000
Keyval_free key=0xa4400001
Comm_free comm
Comm_free dup_comm
start while loop, isLeft=FALSE
Keyval_create key=0xa4400000 value=9
Keyval_create key=0xa4400001 value=7
Comm_dup
Keyval_free key=0xa4400000
Keyval_free key=0xa4400001
Comm_free comm
Comm_free dup_comm
start while loop, isLeft=TRUE
Keyval_create key=0xa4400000 value=9
Keyval_create key=0xa4400001 value=7
Comm_dup
Keyval_free key=0xa4400000
Keyval_free key=0xa4400001
Comm_free comm
Comm_free dup_comm
got COMM_NULL, skipping
No errors
start while loop, isLeft=FALSE
Keyval_create key=0xa4400000 value=9
Keyval_create key=0xa4400001 value=7
Comm_dup
start while loop, isLeft=TRUE
Keyval_create key=0xa4400000 value=9
Keyval_create key=0xa4400001 value=7
Comm_dup
Keyval_free key=0xa4400000
Keyval_free key=0xa4400001
Comm_free comm
Comm_free dup_comm
start while loop, isLeft=FALSE
got COMM_NULL, skipping
Keyval_free key=0xa4400000
Keyval_free key=0xa4400001
Comm_free comm
Comm_free dup_comm

Passed Keyval communicators - fkeyvalcomm

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 1

Test Description:

This test frees keyvals while they are still attached to a communicator, then checks that the keyval delete and copy functions are executed properly.

No errors

Passed Keyval test with types - fkeyvaltype

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 1

Test Description:

This test illustrates the use of keyvals associated with datatypes.

No errors

Passed Multiple keyval_free - keyval_double_free

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 1

Test Description:

This tests multiple invocations of keyval_free on the same keyval.

No errors

Passed RMA get attributes - baseattrwin

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 1

Test Description:

This test creates a window, then extracts its attributes through a series of MPI_Win_get_attr calls.

No errors

Passed Type Attribute Order - attrordertype

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 1

Test Description:

This test creates and inserts type attributes in different orders to ensure that the list management code handles all cases properly.

No errors

Passed Varying communicator orders/types - attrt

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 2

Test Description:

This test is similar to attr/attrordertype (creates/inserts attributes) but uses a different strategy, mixing attribute orders and types across different kinds of communicators.

No errors

Performance - Score: 82% Passed

This group features tests that involve realtime latency performance analysis of MPI applications. Although performance testing is not an established goal of this test suite, these few tests were included because there has been discussion of including performance testing in future versions of the test suite. Such tests might be useful to aid users in determining what MPI features should be used for their particular application. These tests are exemplary of what future tests could provide.

Passed Datatype creation - twovec

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 1

Test Description:

Make sure datatype creation is independent of data size. Note, however, that there is no guarantee or expectation that the time will be constant. In particular, some optimizations might take more time than others.

The real goal of this is to ensure that the time to create a datatype doesn't increase strongly with the number of elements within the datatype, particularly for these datatypes that are quite simple patterns.

No errors

Passed Group creation - commcreatep

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 32

Test Description:

This is a performance test, indexed by group number, that looks at how communicator creation scales with group size. The cost should be linear or at worst ts*log(ts), where ts <= number of communicators.

size	time
1	4.546190e-05
2	2.980715e-05
4	3.949235e-05
8	4.826580e-05
16	5.133520e-05
32	4.585665e-05
No errors

Passed MPI-Tracing package - allredtrace

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 32

Test Description:

This code is intended to test the trace overhead when using an MPI tracing package. The test is currently run in verbose mode with the number of processes set to 32 to run on the greatest number of HPC systems.

For delay count 14963, time is 1.090641e-02
No errors.

Passed MPI_Group_Translate_ranks perf - gtranksperf

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 20

Test Description:

Measure and compare the relative performance of MPI_Group_translate_ranks with small and large group2 sizes but a constant number of ranks. This serves as a performance sanity check for the Scalasca use case where we translate to MPI_COMM_WORLD ranks. The performance should only depend on the number of ranks passed, not the size of either group (especially group2). This test is probably only meaningful for large-ish process counts.

No errors
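
A minimal sketch of the MPI_Group_translate_ranks usage being timed here: build a subgroup and map its ranks back to MPI_COMM_WORLD ranks (the even-rank subgroup and the size cap are illustrative assumptions):

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int wrank, nprocs, i, nsub;
    int even[32], sub[32], world[32];   /* assumes nprocs <= 64 for brevity */
    MPI_Group world_group, even_group;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &wrank);
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);
    MPI_Comm_group(MPI_COMM_WORLD, &world_group);

    /* Build a subgroup containing the even world ranks. */
    nsub = (nprocs + 1) / 2;
    for (i = 0; i < nsub; i++) { even[i] = 2 * i; sub[i] = i; }
    MPI_Group_incl(world_group, nsub, even, &even_group);

    /* Map subgroup ranks 0..nsub-1 back to their MPI_COMM_WORLD ranks. */
    MPI_Group_translate_ranks(even_group, nsub, sub, world_group, world);
    if (wrank == 0)
        for (i = 0; i < nsub; i++)
            printf("group rank %d -> world rank %d\n", sub[i], world[i]);

    MPI_Group_free(&even_group);
    MPI_Group_free(&world_group);
    MPI_Finalize();
    return 0;
}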

Failed MPI_{pack,unpack} perf - dtpack

Build: Passed

Execution: Failed

Exit Status: Application_ended_with_error(s)

MPI Processes: 1

Test Description:

This code may be used to test the performance of some of the noncontiguous datatype operations, including vector and indexed pack and unpack operations. To simplify the use of this code for tuning an MPI implementation, it uses no communication, just the MPI_Pack and MPI_Unpack routines. In addition, the individual tests are in separate routines, making it easier to compare the compiler-generated code for the user (manual) pack/unpack with the code used by the MPI implementation. Further, to be fair to the MPI implementation, the routines are passed the source and destination buffers; this ensures that the compiler can't optimize for statically allocated buffers.

TestVecPackDouble (USER): 0.012 0.012 0.012 0.012 0.012 0.012 0.012 0.012 0.012 0.012 [0.000]
TestVecPackDouble (MPI): 0.015 0.015 0.015 0.015 0.015 0.015 0.015 0.015 0.015 0.015 [0.000]
VecPackDouble                 :	1.48006e-05	1.17806e-05	(20.4047%)
VecPackDouble:	MPI Pack code is too slow: MPI 1.48006e-05	 User 1.17806e-05
TestVecUnPackDouble (USER): 0.018 0.018 0.018 0.018 0.018 0.018 0.018 0.018 0.018 0.018 [0.000]
TestVecUnPackDouble (MPI): 0.020 0.020 0.020 0.020 0.020 0.020 0.020 0.020 0.020 0.020 [0.000]
VecUnPackDouble               :	2.03727e-05	1.75822e-05	(13.6971%)
VecUnPackDouble:	MPI Unpack code is too slow: MPI 2.03727e-05	 User 1.75822e-05
TestIndexPackDouble (USER): 0.017 0.017 0.017 0.017 0.017 0.017 0.017 0.017 0.017 0.017 [0.000]
TestIndexPackDouble (MPI): 0.016 0.016 0.016 0.016 0.016 0.016 0.016 0.016 0.016 0.016 [0.000]
VecIndexDouble                :	1.6424e-05	1.71785e-05	(4.39215%)
TestVecPack2Double (USER): 0.023 0.023 0.023 0.023 0.023 0.023 0.023 0.023 0.023 0.023 [0.000]
TestVecPack2Double (MPI): 0.023 0.023 0.023 0.023 0.023 0.023 0.023 0.023 0.023 0.023 [0.000]
VecPack2Double                :	2.3074e-05	2.27856e-05	(1.24975%)
 Found 2 performance problems
x1001c5s5b0n0.hsn.warhawk.afrl.hpc.mil: rank 0 exited with code 1
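
For illustration, a small sketch of the two code paths being compared: MPI_Pack with a vector datatype versus a user-level copy loop (sizes, stride, and names are illustrative assumptions, not the test's actual parameters):

#include <mpi.h>
#include <stdlib.h>

#define N 1000
#define STRIDE 4

int main(int argc, char **argv)
{
    double src[N * STRIDE], manual[N];
    int i, pos = 0, packsize;
    char *packed;
    MPI_Datatype vec;

    MPI_Init(&argc, &argv);
    for (i = 0; i < N * STRIDE; i++) src[i] = (double)i;

    /* MPI route: describe the strided data once, then pack it into a buffer. */
    MPI_Type_vector(N, 1, STRIDE, MPI_DOUBLE, &vec);
    MPI_Type_commit(&vec);
    MPI_Pack_size(1, vec, MPI_COMM_WORLD, &packsize);
    packed = malloc(packsize);
    MPI_Pack(src, 1, vec, packed, packsize, &pos, MPI_COMM_WORLD);

    /* Manual route: the user-level copy loop the MPI version is compared with. */
    for (i = 0; i < N; i++)
        manual[i] = src[i * STRIDE];

    free(packed);
    MPI_Type_free(&vec);
    MPI_Finalize();
    return 0;
}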

Passed Network performance - netmpi

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 2

Test Description:

This test calculates bulk transfer rates and latency as a function of message buffer size.

0: x1001c5s5b0n0
1: x1001c5s7b0n0
Latency: 0.000001952
Sync Time: 0.000005830
Now starting main loop
  0:       997 bytes 64155 times -->  2307.19 Mbps in 0.000003297 sec
  1:      1000 bytes 37876 times -->  2385.56 Mbps in 0.000003198 sec
  2:      1003 bytes 39163 times -->  2453.99 Mbps in 0.000003118 sec
  3:      1497 bytes 40285 times -->  3414.76 Mbps in 0.000003345 sec
  4:      1500 bytes 49830 times -->  3417.52 Mbps in 0.000003349 sec
  5:      1503 bytes 49821 times -->  3409.99 Mbps in 0.000003363 sec
  6:      1997 bytes 49661 times -->  4274.13 Mbps in 0.000003565 sec
  7:      2000 bytes 52608 times -->  4295.00 Mbps in 0.000003553 sec
  8:      2003 bytes 52812 times -->  4283.75 Mbps in 0.000003567 sec
  9:      2497 bytes 35127 times -->  4609.26 Mbps in 0.000004133 sec
 10:      2500 bytes 36287 times -->  4590.85 Mbps in 0.000004155 sec
 11:      2503 bytes 36127 times -->  4728.44 Mbps in 0.000004039 sec
 12:      3497 bytes 37195 times -->  5926.85 Mbps in 0.000004502 sec
 13:      3500 bytes 39671 times -->  5940.40 Mbps in 0.000004495 sec
 14:      3503 bytes 39741 times -->  5944.98 Mbps in 0.000004496 sec
 15:      4497 bytes 23876 times -->  6956.93 Mbps in 0.000004932 sec
 16:      4500 bytes 28158 times -->  6930.46 Mbps in 0.000004954 sec
 17:      4503 bytes 28047 times -->  6991.76 Mbps in 0.000004914 sec
 18:      6497 bytes 28292 times -->  9165.35 Mbps in 0.000005408 sec
 19:      6500 bytes 32003 times -->  9167.28 Mbps in 0.000005410 sec
 20:      6503 bytes 32001 times -->  9016.92 Mbps in 0.000005502 sec
 21:      8497 bytes 17495 times -->  10690.16 Mbps in 0.000006064 sec
 22:      8500 bytes 21823 times -->  10695.18 Mbps in 0.000006063 sec
 23:      8503 bytes 21832 times -->  10790.54 Mbps in 0.000006012 sec
 24:     12497 bytes 22026 times -->  13168.78 Mbps in 0.000007240 sec
 25:     12500 bytes 23480 times -->  12660.26 Mbps in 0.000007533 sec
 26:     12503 bytes 22570 times -->  12682.48 Mbps in 0.000007521 sec
 27:     16497 bytes 11973 times -->  14275.57 Mbps in 0.000008817 sec
 28:     16500 bytes 14606 times -->  14309.88 Mbps in 0.000008797 sec
 29:     16503 bytes 14641 times -->  14325.94 Mbps in 0.000008789 sec
No errors.

Passed Send/Receive basic perf - sendrecvperf

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 2

Test Description:

This program provides a simple test of send-receive performance between two (or more) processes. This test is sometimes called a head-to-head or ping-ping test, as both processes send at the same time.

Irecv-send
len	time    	rate
1	2.41326	0.414377
2	2.13943	0.934829
4	2.02204	1.9782
8	1.95443	4.09327
16	1.95596	8.18014
32	1.98427	16.1269
64	1.97245	32.4469
128	2.06679	61.9318
256	75.0687	3.41021
512	3.34671	152.986
1024	3.53258	289.874
2048	3.83305	534.3
4096	4.53455	903.286
8192	5.52689	1482.21
16384	7.62556	2148.56
32768	87.3322	375.211
65536	17.9854	3643.85
131072	27.8666	4703.56
262144	40.7281	6436.44
524288	73.3909	7143.78
Sendrecv
len	time (usec)	rate (MB/s)
1	2.08062	0.480626
2	2.03006	0.985192
4	1.98362	2.01651
8	1.96764	4.06579
16	2.00104	7.99583
32	2.27931	14.0393
64	2.02232	31.6468
128	2.0656	61.9673
256	3.23888	79.0396
512	3.26412	156.857
1024	3.39005	302.06
2048	3.77885	541.964
4096	4.43358	923.859
8192	5.56535	1471.96
16384	7.45525	2197.65
32768	10.3431	3168.1
65536	15.4431	4243.71
131072	27.5287	4761.29
262144	44.1106	5942.88
524288	58.4866	8964.24
Pingpong
len	time (usec)	rate (MB/s)
1	3.95404	0.252906
2	3.79653	0.526797
4	3.67187	1.08936
8	3.64832	2.19279
16	3.71	4.31267
32	3.73678	8.56353
64	3.76886	16.9812
128	3.84977	33.2488
256	5.14971	49.7115
512	5.29707	96.6572
1024	5.73757	178.473
2048	6.60585	310.028
4096	8.51355	481.115
8192	10.2512	799.127
16384	14.5041	1129.61
32768	18.7045	1751.88
65536	27.2076	2408.74
131072	41.6766	3144.98
262144	78.2105	3351.77
524288	122.863	4267.27
1	        2.41	        2.08	        3.95
2	        2.14	        2.03	        3.80
4	        2.02	        1.98	        3.67
8	        1.95	        1.97	        3.65
16	        1.96	        2.00	        3.71
32	        1.98	        2.28	        3.74
64	        1.97	        2.02	        3.77
128	        2.07	        2.07	        3.85
256	       75.07	        3.24	        5.15
512	        3.35	        3.26	        5.30
1024	        3.53	        3.39	        5.74
2048	        3.83	        3.78	        6.61
4096	        4.53	        4.43	        8.51
8192	        5.53	        5.57	       10.25
16384	        7.63	        7.46	       14.50
32768	       87.33	       10.34	       18.70
Irecv-Send too long:	32768	        7.63	       87.33
65536	       17.99	       15.44	       27.21
131072	       27.87	       27.53	       41.68
262144	       40.73	       44.11	       78.21
524288	       73.39	       58.49	      122.86
No errors

Passed Synchronization basic perf - non_zero_root

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 4

Test Description:

This test compares the time taken by a synchronization step between rank 0 and rank 1. If the difference is greater than 10 percent, it is considered an error.

No errors

Passed Timer sanity - timer

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 1

Test Description:

Check that the timer produces monotone nondecreasing times and that the tick is reasonable.

No errors

Failed Transposition type - transp-datatype

Build: Passed

Execution: Failed

Exit Status: Application_ended_with_error(s)

MPI Processes: 2

Test Description:

This test transposes a (100x100) two-dimensional array using two options: (1) send and transpose manually, and (2) send using an automatic hvector type. It fails if (2) is significantly slower than (1).

Transpose time with datatypes is more than twice time without datatypes
Found 1 errors
0.000124	0.000014	0.000014
x1001c5s7b0n0.hsn.warhawk.afrl.hpc.mil: rank 1 exited with code 1
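
A minimal sketch of option (2), the hvector-based transpose datatype (the self-Sendrecv and names are illustrative assumptions; the test's actual code differs):

#include <mpi.h>

#define N 100

int main(int argc, char **argv)
{
    static double a[N][N], b[N][N];
    int rank, i, j;
    MPI_Datatype column, xpose;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    for (i = 0; i < N; i++)
        for (j = 0; j < N; j++)
            a[i][j] = i * N + j;

    /* One column of the row-major matrix: N doubles strided by a full row. */
    MPI_Type_vector(N, 1, N, MPI_DOUBLE, &column);
    /* N columns, each starting one double (in bytes) after the previous one. */
    MPI_Type_create_hvector(N, 1, sizeof(double), column, &xpose);
    MPI_Type_commit(&xpose);

    /* Send the matrix column by column, receive it contiguously: b = a^T. */
    MPI_Sendrecv(a, 1, xpose, rank, 0, b, N * N, MPI_DOUBLE, rank, 0,
                 MPI_COMM_WORLD, MPI_STATUS_IGNORE);

    MPI_Type_free(&xpose);
    MPI_Type_free(&column);
    MPI_Finalize();
    return 0;
}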

Passed Variable message length - adapt

Build: Passed

Execution: Passed

Exit Status: Execution_completed_no_errors

MPI Processes: 3

Test Description:

This test measures the latency involved in sending/receiving messages of varying size.

0: x1001c5s5b0n0
1: x1001c5s5b0n0
2: x1001c5s7b0n0
To determine 2 <-> 0       latency, using 65536 reps.
To determine       0 <-> 1 latency, using 131072 reps.
To determine 2 <-- 0 --> 1 latency, using 32768 reps
  0:        72 bytes 27754 times --> Latency20_ : 0.000001874
Latency_01 : 0.000000972
Latency201 : 0.000002853
Now starting main loop
  0:        72 bytes 53492 times -->  281.55 Mbps in 0.000001951 sec
 580.12 Mbps in 0.000000947 sec
  0:        72 bytes 18225 times -->  0.000001921 0.000002071 0.000002511 0.000002680 0.000002806 0.000002889 0.000002888 0.000002882 0.000002998 0.000002936 0.000002937 0.000002939 0.000002958 0.000002867 0.000002908
  1:        75 bytes 25626 times -->   1:        75 bytes 52803 times -->  293.89 Mbps in 0.000001947 sec
 591.37 Mbps in 0.000000968 sec
  1:        75 bytes 17194 times -->  0.000001950 0.000002118 0.000002482 0.000002709 0.000002842 0.000002879 0.000002959 0.000002962 0.000002937 0.000002910 0.000002935 0.000002923 0.000002915 0.000002926 0.000002945
  2:        78 bytes 26707 times -->   2:        78 bytes 53741 times -->  297.09 Mbps in 0.000002003 sec
 628.31 Mbps in 0.000000947 sec
  2:        78 bytes 17654 times -->  0.000002000 0.000002163 0.000002566 0.000002743 0.000002852 0.000002907 0.000002947 0.000002964 0.000002951 0.000003106 0.000003101 0.000003117 0.000003108 0.000003066 0.000003055
No errors.