[libfabric-users] libfabric/intel mpi with mlx5 and > 300 cores/ranks

Walter, Eric J ejwalt at wm.edu
Tue Aug 6 10:47:46 PDT 2019


Hi, we are currently standing up a new cluster with Mellanox ConnectX-5 adapters. I have found that with Open MPI, MVAPICH2, and Intel MPI 2018 we can run MPI jobs on all 960 cores in the cluster; with Intel MPI 2019, however, we can't get beyond ~300 MPI ranks. If we go beyond that, we get the following error for every rank:

Abort(273768207) on node 650 (rank 650 in comm 0): Fatal error in PMPI_Comm_split: Other MPI error, error stack:
PMPI_Comm_split(507)...................: MPI_Comm_split(MPI_COMM_WORLD, color=0, key=650, new_comm=0x7911e8) failed
PMPI_Comm_split(489)...................:
MPIR_Comm_split_impl(167)..............:
MPIR_Allgather_intra_auto(145).........: Failure during collective
MPIR_Allgather_intra_auto(141).........:
MPIR_Allgather_intra_brucks(115).......:
MPIC_Sendrecv(344).....................:
MPID_Isend(662)........................:
MPID_isend_unsafe(282).................:
MPIDI_OFI_send_lightweight_request(106):
(unknown)(): Other MPI error
----------------------------------------------------------------------------------------------------------
This is with the default FI_PROVIDER, ofi_rxm. If we switch to "verbs", we can run on all 960 cores, but tests show an order-of-magnitude increase in latency and much longer run times.
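
(For reference, the failing call seems to reduce to something like the minimal reproducer below. This is my own sketch of the pattern shown in the error stack, not code taken from the failing application, and the file name and build/run lines are just illustrative; built with the Intel MPI wrappers and launched across all 960 ranks, it should exercise the same PMPI_Comm_split path.)

/* comm_split_repro.c -- minimal sketch of the call in the error stack above.
 * Every rank calls MPI_Comm_split with the same color and its own rank as
 * the key, matching MPI_Comm_split(MPI_COMM_WORLD, color=0, key=650, ...)
 * in the trace.
 *
 * Build/run (assumed Intel MPI wrappers):
 *   mpiicc comm_split_repro.c -o comm_split_repro
 *   FI_PROVIDER=verbs mpirun -n 960 ./comm_split_repro   # or leave FI_PROVIDER unset for the default
 */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, size;
    MPI_Comm newcomm;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* Same pattern as the failing call: one color, keyed by rank. */
    MPI_Comm_split(MPI_COMM_WORLD, 0, rank, &newcomm);

    if (rank == 0)
        printf("MPI_Comm_split completed across %d ranks\n", size);

    MPI_Comm_free(&newcomm);
    MPI_Finalize();
    return 0;
}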

We have tried installing our own libfabric (built from the git repo; we also verified with verbose debugging that the job is actually using this libfabric), and the behavior does not change.
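
(In case it helps anyone reproduce the check: to confirm which libfabric build and providers a process actually sees, something like the small fi_getinfo program below should work. This is a sketch against the public libfabric API, equivalent to what the fi_info utility reports, and the file name is just illustrative.)

/* list_providers.c -- sketch: enumerate available libfabric providers.
 * Build (assuming libfabric headers/libs are on the search path):
 *   gcc list_providers.c -o list_providers -lfabric
 */
#include <rdma/fabric.h>
#include <stdio.h>

int main(void)
{
    struct fi_info *info = NULL, *cur;

    /* NULL hints: ask for every provider/fabric the library can offer. */
    int ret = fi_getinfo(FI_VERSION(1, 8), NULL, NULL, 0, NULL, &info);
    if (ret) {
        fprintf(stderr, "fi_getinfo failed: %s\n", fi_strerror(-ret));
        return 1;
    }

    for (cur = info; cur; cur = cur->next)
        printf("provider: %s  fabric: %s\n",
               cur->fabric_attr->prov_name, cur->fabric_attr->name);

    fi_freeinfo(info);
    return 0;
}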

Is there anything I can change to allow running on all 960 cores with the default ofi_rxm provider? Or is there a way to improve performance with the verbs provider?

For completeness:
Using MLNX_OFED_LINUX-4.6-1.0.1.1-rhel7.6-x86_64
CentOS 7.6.1810 (kernel = 3.10.0-957.21.3.el7.x86_64)
Intel Parallel Studio version 19.0.4.243
Infiniband controller: Mellanox Technologies MT27800 Family [ConnectX-5]


Thanks!

Eric
--
Eric J. Walter

College of William and Mary
IT/High Performance Computing Group
ISC 1271
P.O. Box 8795
Williamsburg, VA  23187-8795
email:    ejwalt at wm.edu
phone:  (757) 221-1886
fax:        (757) 221-1321
