[openfabrics-ewg] Open MPI in OFED

Scott Weitzenkamp (sweitzen) sweitzen at cisco.com
Tue May 2 15:46:49 PDT 2006


Here's some performance data Cisco SQA has gathered using the OSU benchmarks
osu_latency.c, osu_bw.c, and osu_bibw.c.  Red cells have unacceptable
performance (in my opinion), yellow are marginal, and green are better
than what we've seen historically.

Open MPI 1.1a2 has load balancing enabled by default.  I have not used
load balancing with MVAPICH; I'm guessing the OFED install.sh script
does not compile in multirail support?

Open MPI gets good throughput in these benchmarks when you use --mca
mpi_leave_pinned 1.  Open MPI 1.1a2 has better latency on these
benchmarks than 1.0.2.
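For context, that setting is passed as an MCA parameter on the mpirun
command line. A minimal sketch of such an invocation (the hostnames,
process count, and benchmark path here are illustrative placeholders,
not taken from the original message):

```shell
# Run the OSU bandwidth benchmark between two nodes with Open MPI's
# registered-memory ("leave pinned") cache enabled.  mpi_leave_pinned
# keeps user buffers registered with the HCA across calls, avoiding
# repeated pin/unpin overhead in bandwidth tests.
# node1/node2 and ./osu_bw are placeholder names.
mpirun --mca mpi_leave_pinned 1 \
       -np 2 --host node1,node2 \
       ./osu_bw
```

The same parameter can alternatively be set via the environment
(OMPI_MCA_mpi_leave_pinned=1) or an MCA parameter file, which is
convenient when running a whole benchmark suite.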

I see higher-than-expected latency for Intel MPI (using uDAPL); I guess
I am missing some tuning parameters (I'm waiting for a reply from Arlin
at Intel).

We will be adding LionCub data soon.

Scott Weitzenkamp
SQA and Release Manager
Server Virtualization Business Unit
Cisco Systems
 

> -----Original Message-----
> From: openfabrics-ewg-bounces at openib.org 
> [mailto:openfabrics-ewg-bounces at openib.org] On Behalf Of Aviram Gutman
> Sent: Wednesday, April 26, 2006 5:53 AM
> To: Jeff Squyres (jsquyres); openfabrics-ewg at openib.org
> Cc: Gil Bloch
> Subject: RE: [openfabrics-ewg] Open MPI in OFED
> 
> 
> I think it is the right move. But please make sure to test it on the
> matrix.
> 
> Can you also send a performance report for this version? 
> 
> 
> Regards,
>    Aviram
> 
> -----Original Message-----
> From: openfabrics-ewg-bounces at openib.org
> [mailto:openfabrics-ewg-bounces at openib.org] On Behalf Of Jeff Squyres
> (jsquyres)
> Sent: Wednesday, April 26, 2006 3:05 PM
> To: openfabrics-ewg at openib.org
> Subject: [openfabrics-ewg] Open MPI in OFED
> 
> All --
> 
> Gleb and I (the A and B maintainers of Open MPI) would like to propose
> that we change the version of Open MPI in OFED.  The current 
> version in
> OFED is v1.0.2.  Gleb and I would like to change it to v1.1a2.
> Rationale:
> 
> - The 1.1 series has better performance (especially on 
> OpenIB) than the
> 1.0 series
> - Although it is an "alpha" version (meaning that the OMPI team has
> branched for v1.1 but has not yet completed all of its testing), the 1.1
> series has received significantly more testing at both Cisco and
> Voltaire than the 1.0 series
> - Open MPI in OFED 1.0 is a "technology preview" (meaning: 
> unsupported)
> so that if there are problems, it's not a huge deal
> 
> I initially proposed that we put in v1.0.2 in OFED, but have been
> convinced that putting in 1.1a2 is a better idea (mainly because of
> better performance and a lack of problems seen on internal testing).
> 
> We would like to commit this in time for rc4 so that others 
> can test it.
> 
> What does the group feel about this?
> 
> --
> Jeff Squyres
> Server Virtualization Business Unit
> Cisco Systems
> _______________________________________________
> openfabrics-ewg mailing list
> openfabrics-ewg at openib.org
> http://openib.org/mailman/listinfo/openfabrics-ewg
> 
-------------- next part --------------
A non-text attachment was scrubbed...
Name: mpi_perf.xls
Type: application/vnd.ms-excel
Size: 22528 bytes
Desc: mpi_perf.xls
URL: <http://lists.openfabrics.org/pipermail/ewg/attachments/20060502/ee0c9095/attachment.xls>

