[openfabrics-ewg] Open MPI in OFED
Aviram Gutman
aviram at mellanox.co.il
Thu May 4 05:43:01 PDT 2006
Hi Scott,
If you use the latest FW with OFED, you should see better results. We
have seen latency of 2.6 usec here.
Arbel MemFree - FW 5.1.400
Arbel in Tavor mode - FW 4.7.600
Sinai - FW 1.0.800
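To check which firmware an HCA is actually running (assuming the
libibverbs utilities from the OFED install are present), something
along these lines should print the fw_ver field for each HCA:

    ibv_devinfo | grep fw_ver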
Regards,
Aviram
-----Original Message-----
From: Scott Weitzenkamp (sweitzen) [mailto:sweitzen at cisco.com]
Sent: Wednesday, May 03, 2006 1:47 AM
To: Aviram Gutman; Jeff Squyres (jsquyres); openfabrics-ewg at openib.org
Cc: Gil Bloch
Subject: RE: [openfabrics-ewg] Open MPI in OFED
Here's some performance data Cisco SQA has gathered using the OSU
benchmarks osu_latency.c, osu_bw.c, and osu_bibw.c. Red cells have
unacceptable performance (in my opinion), yellow are marginal, and
green are better than what we've seen historically.
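For anyone who wants a quick sanity check without building the OSU
suite, here is a minimal ping-pong sketch of the kind of measurement
osu_latency makes (this is not the OSU code; the iteration count and
message size are placeholders, and real benchmarks also do warmup
iterations, which this sketch skips):

    /*
     * Minimal MPI ping-pong latency sketch (NOT the OSU benchmark
     * itself; iteration count and message size are placeholders).
     * Rank 0 sends a 1-byte message to rank 1 and waits for the
     * echo; half the average round trip approximates one-way latency.
     */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int rank, i, iters = 10000;
        char buf[1] = { 'x' };
        double t0, t1;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        t0 = MPI_Wtime();
        for (i = 0; i < iters; i++) {
            if (rank == 0) {
                MPI_Send(buf, 1, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
                MPI_Recv(buf, 1, MPI_CHAR, 1, 0, MPI_COMM_WORLD,
                         MPI_STATUS_IGNORE);
            } else if (rank == 1) {
                MPI_Recv(buf, 1, MPI_CHAR, 0, 0, MPI_COMM_WORLD,
                         MPI_STATUS_IGNORE);
                MPI_Send(buf, 1, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
            }
        }
        t1 = MPI_Wtime();

        if (rank == 0)
            printf("avg one-way latency: %.2f usec\n",
                   (t1 - t0) * 1e6 / (2.0 * iters));

        MPI_Finalize();
        return 0;
    }

Build it with mpicc and launch it across two nodes the same way as the
OSU binaries.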
OpenMPI 1.1a2 has load balancing enabled by default. I have not used
load balancing with MVAPICH; I'm guessing the OFED install.sh script
does not compile in multirail support?
OpenMPI gets good throughput in these benchmarks when you use --mca
mpi_leave_pinned 1. OpenMPI 1.1a2 has better latency on these
benchmarks than 1.0.2.
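For reference, a run with that parameter set looks something like this
(host names are placeholders):

    mpirun -np 2 --host node1,node2 --mca btl openib,self \
        --mca mpi_leave_pinned 1 ./osu_bw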
I have higher-than-expected latency for Intel MPI (using uDAPL); I
guess I am missing some tuning params (waiting for a reply from Arlin
at Intel).
We will be adding LionCub data soon.
Scott Weitzenkamp
SQA and Release Manager
Server Virtualization Business Unit
Cisco Systems
> -----Original Message-----
> From: openfabrics-ewg-bounces at openib.org
> [mailto:openfabrics-ewg-bounces at openib.org] On Behalf Of Aviram Gutman
> Sent: Wednesday, April 26, 2006 5:53 AM
> To: Jeff Squyres (jsquyres); openfabrics-ewg at openib.org
> Cc: Gil Bloch
> Subject: RE: [openfabrics-ewg] Open MPI in OFED
>
>
> I think it is the right move. But please make sure to test it on the
> matrix.
>
> Can you also send a performance report for this version?
>
>
> Regards,
> Aviram
>
> -----Original Message-----
> From: openfabrics-ewg-bounces at openib.org
> [mailto:openfabrics-ewg-bounces at openib.org] On Behalf Of Jeff Squyres
> (jsquyres)
> Sent: Wednesday, April 26, 2006 3:05 PM
> To: openfabrics-ewg at openib.org
> Subject: [openfabrics-ewg] Open MPI in OFED
>
> All --
>
> Gleb and I (the OpenIB maintainers for Open MPI) would like to propose
> that we change the version of Open MPI in OFED. The current version
> in OFED is v1.0.2. Gleb and I would like to change it to v1.1a2.
> Rationale:
>
> - The 1.1 series has better performance (especially on OpenIB) than
>   the 1.0 series.
> - Although it is an "alpha" version (meaning that the OMPI team has
>   branched for v1.1 but has not yet completed all of its testing),
>   the 1.1 series has received significantly more testing at both
>   Cisco and Voltaire than the 1.0 series.
> - Open MPI in OFED 1.0 is a "technology preview" (meaning:
>   unsupported), so if there are problems, it's not a huge deal.
>
> I initially proposed that we put v1.0.2 in OFED, but have been
> convinced that putting in 1.1a2 is a better idea (mainly because of
> better performance and a lack of problems seen in internal testing).
>
> We would like to commit this in time for rc4 so that others can test
> it.
>
> How does the group feel about this?
>
> --
> Jeff Squyres
> Server Virtualization Business Unit
> Cisco Systems
> _______________________________________________
> openfabrics-ewg mailing list
> openfabrics-ewg at openib.org
> http://openib.org/mailman/listinfo/openfabrics-ewg