[openfabrics-ewg] Open MPI in OFED
Scott Weitzenkamp (sweitzen)
sweitzen at cisco.com
Wed May 3 12:18:41 PDT 2006
Forget what I wrote about Intel MPI: I was running all processes on the
same machine, and thus not exercising any IB traffic. I'll fix this in
the next version of the spreadsheet I send out.
Scott
> -----Original Message-----
> From: openfabrics-ewg-bounces at openib.org
> [mailto:openfabrics-ewg-bounces at openib.org] On Behalf Of
> Scott Weitzenkamp (sweitzen)
> Sent: Tuesday, May 02, 2006 3:47 PM
> To: Aviram Gutman; Jeff Squyres (jsquyres); openfabrics-ewg at openib.org
> Cc: Gil Bloch
> Subject: RE: [openfabrics-ewg] Open MPI in OFED
>
> Here's some performance data Cisco SQA has gathered using the OSU
> benchmarks osu_latency.c, osu_bw.c, and osu_bibw.c. Red cells have
> unacceptable performance (in my opinion), yellow cells are marginal,
> and green cells are better than what we've seen historically.
>
> Open MPI 1.1a2 has load balancing enabled by default. I have not used
> load balancing with MVAPICH; I'm guessing the OFED install.sh script
> does not compile in multirail support?
>
> Open MPI gets good throughput in these benchmarks when you use --mca
> mpi_leave_pinned 1. Open MPI 1.1a2 has better latency on these
> benchmarks than 1.0.2 does.
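> For reference, here is a sketch of how that flag is passed on the
> mpirun command line (the host names and benchmark path below are
> hypothetical, not our actual test setup):

```shell
# Launch the OSU bandwidth test across two nodes with Open MPI,
# enabling the registered-memory cache ("leave pinned") so large
# messages reuse pinned buffers instead of re-registering them.
# Assumes mpirun and a built osu_bw binary are on the PATH.
mpirun --host node1,node2 -np 2 \
    --mca btl openib,self \
    --mca mpi_leave_pinned 1 \
    ./osu_bw
```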
>
> I see higher-than-expected latency for Intel MPI (using uDAPL); I
> guess I am missing some tuning params (I'm waiting for a reply from
> Arlin at Intel).
>
> We will be adding LionCub data soon.
>
> Scott Weitzenkamp
> SQA and Release Manager
> Server Virtualization Business Unit
> Cisco Systems
>
>
> > -----Original Message-----
> > From: openfabrics-ewg-bounces at openib.org
> > [mailto:openfabrics-ewg-bounces at openib.org] On Behalf Of Aviram
> > Gutman
> > Sent: Wednesday, April 26, 2006 5:53 AM
> > To: Jeff Squyres (jsquyres); openfabrics-ewg at openib.org
> > Cc: Gil Bloch
> > Subject: RE: [openfabrics-ewg] Open MPI in OFED
> >
> >
> > I think it is the right move. But please make sure to test it on the
> > matrix.
> >
> > Can you also send a performance report for this version?
> >
> >
> > Regards,
> > Aviram
> >
> > -----Original Message-----
> > From: openfabrics-ewg-bounces at openib.org
> > [mailto:openfabrics-ewg-bounces at openib.org] On Behalf Of Jeff
> > Squyres (jsquyres)
> > Sent: Wednesday, April 26, 2006 3:05 PM
> > To: openfabrics-ewg at openib.org
> > Subject: [openfabrics-ewg] Open MPI in OFED
> >
> > All --
> >
> > Gleb and I (the A and B maintainers of Open MPI) would like to
> > propose that we change the version of Open MPI in OFED. The current
> > version in OFED is v1.0.2. Gleb and I would like to change it to
> > v1.1a2. Rationale:
> >
> > - The 1.1 series has better performance (especially on OpenIB) than
> > the 1.0 series
> > - Although it is an "alpha" version (meaning that the OMPI team has
> > branched for v1.1 but has not yet completed all of its testing), the
> > 1.1 series has received significantly more testing at both Cisco and
> > Voltaire than the 1.0 series
> > - Open MPI in OFED 1.0 is a "technology preview" (meaning:
> > unsupported), so if there are problems, it's not a huge deal
> >
> > I initially proposed that we put v1.0.2 in OFED, but I have been
> > convinced that putting in 1.1a2 is a better idea (mainly because of
> > its better performance and the lack of problems seen in internal
> > testing).
> >
> > We would like to commit this in time for rc4 so that others
> > can test it.
> >
> > What does the group feel about this?
> >
> > --
> > Jeff Squyres
> > Server Virtualization Business Unit
> > Cisco Systems
> > _______________________________________________
> > openfabrics-ewg mailing list
> > openfabrics-ewg at openib.org
> > http://openib.org/mailman/listinfo/openfabrics-ewg
> >
>