[openib-general] mvapich2 pmi scalability problems

Don.Dhondt at Bull.com Don.Dhondt at Bull.com
Fri Jul 21 16:03:57 PDT 2006


$ grep MPI_CFLAGS ./opt/mpi/mvapich2/bin/mpicc
#    MPI_CFLAGS         - Any special flags needed to compile
MPI_CFLAGS="-D_IA64_ -DUSE_INLINE -DRDMA_FAST_PATH -D_SMP_ 
-DUSE_HEADER_CACHING -DLAZY_MEM_UNREGISTER -DONE_SIDED 
-D_MLX_PCI_EX_DDR_ -DMPID_USE_SEQUENCE_NUMBERS 
-D_MEDIUM_CLUSTER -DUSE_MPD_RING
-I/usr/local/ofed/include -O2"
CFLAGS="$CFLAGS $MPI_CFLAGS"

Matthew Koop <koop at cse.ohio-state.edu> 
07/21/2006 03:26 PM

To: Don.Dhondt at Bull.com
cc: openib-general at openib.org
Subject: Re: [openib-general] mvapich2 pmi scalability problems

> Since we are compiling for ia64, our assumption is that it was compiled
> with HAVE_MPD_RING="-DUSE_MPD_RING". Is this correct?
> Also, we are not using mpd to start the jobs. Since we are
> using slurm as the resource manager, the jobs are started with
> srun. Does MPD_RING only apply if using MPD?

It should be using the USE_MPD_RING flag in that case. Just to make sure,
can you verify the compile flags that were used by grep'ing mpicc:

grep MPI_CFLAGS mpicc

It should print out -DUSE_MPD_RING. Even though you are using SLURM, this
option should still work since MPD itself is not actually used -- the ring
startup makes use of the PMI interface only.
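To make that concrete, here is a minimal sketch of what a PMI-1 key-value
exchange of startup info looks like (the key names, value encoding, and
LID/QPN numbers below are placeholders for illustration, not what MVAPICH2
actually puts in the KVS; return-code checks are omitted):

/* Sketch only (not MVAPICH2 code): each rank publishes its QP info
 * through the PMI-1 key-value space and fetches its peers' entries. */
#include <stdio.h>
#include "pmi.h"

int main(void)
{
    int spawned, rank, size, name_max, key_max, val_max;

    PMI_Init(&spawned);
    PMI_Get_rank(&rank);
    PMI_Get_size(&size);
    PMI_KVS_Get_name_length_max(&name_max);
    PMI_KVS_Get_key_length_max(&key_max);
    PMI_KVS_Get_value_length_max(&val_max);

    char kvsname[name_max], key[key_max], val[val_max];
    PMI_KVS_Get_my_name(kvsname, name_max);

    /* Publish this rank's (placeholder) IB address info. */
    snprintf(key, sizeof key, "qpinfo-%d", rank);
    snprintf(val, sizeof val, "lid:%d,qpn:%d", 42, 1000 + rank);
    PMI_KVS_Put(kvsname, key, val);
    PMI_KVS_Commit(kvsname);
    PMI_Barrier();

    /* N-1 gets per rank: this is where an all-PMI exchange costs O(N^2). */
    for (int peer = 0; peer < size; peer++) {
        if (peer == rank)
            continue;
        snprintf(key, sizeof key, "qpinfo-%d", peer);
        PMI_KVS_Get(kvsname, key, val, val_max);
    }

    PMI_Finalize();
    return 0;
}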

The message counts you posted earlier are consistent with what would be
expected from the IB QP information that is exchanged at startup. The
preferred setup, which should show superior scalability, is -DUSE_MPD_RING.
Even in the all-PMI case we should be able to add some further
optimizations, but the ring startup is really what should be used.
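
One way to read the scaling difference, as a rough back-of-envelope
(assuming the ring bootstrap only needs on the order of N PMI operations,
which is the point of -DUSE_MPD_RING):

/* Back-of-envelope comparison of startup KVS traffic. */
#include <stdio.h>

int main(void)
{
    int n = 128;                       /* example job size */
    long all_pmi = (long)n * (n - 1);  /* all-PMI: every rank gets N-1 peer keys */
    long ring    = 2L * n;             /* assumed O(N) ops to bootstrap a ring over IB */
    printf("N=%d: ~%ld PMI gets (all-PMI) vs ~%ld PMI ops (ring)\n",
           n, all_pmi, ring);
    return 0;
}

For 128 ranks that is roughly 16,000 KVS gets versus a few hundred PMI
operations, which is why the ring startup should scale better.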

Matt


