[openib-general] Announcing the release of MVAPICH2 0.9.0 (MPI-2 over InfiniBand and other RDMA Interconnects)

Dhabaleswar Panda panda at cse.ohio-state.edu
Tue Nov 1 20:54:49 PST 2005


The MVAPICH team is pleased to announce the release of MVAPICH2 0.9.0
for the following platforms, OS, compilers, and InfiniBand adapters:

  - Platforms: EM64T, Opteron, IA-32, and Mac G5 
  - Operating Systems: Linux, Solaris, and Mac OS X 
  - Compilers: gcc, intel, and pgi 
  - InfiniBand Adapters: Mellanox adapters with PCI-X 
    and PCI-Express (SDR and DDR with mem-full and mem-free cards) 

In addition to delivering high performance with the VAPI interface,
MVAPICH2 0.9.0 provides uDAPL support for portability across networks
and platforms while retaining high performance. The uDAPL interface
of this release has been tested with InfiniBand (OpenIB/Gen2 uDAPL,
IBGD/uDAPL, and Solaris IBTL/uDAPL), Ammasso GigE (Ammasso uDAPL), and
Myrinet (DAPL-GM beta).

Starting with this release, MVAPICH2 enables InfiniBand support in
the Solaris environment through its uDAPL interface.

MVAPICH2 0.9.0 is distributed as a single integrated package (with
MPICH2 1.0.2p1 and MVICH) and is available under the BSD license.

This new release has the following features:

      - MPI-2 functionalities (one-sided, collectives, datatype)
      - all MPI-1 functionalities
      - high performance and optimized support for all one-sided 
        operations (Get, Put, and Accumulate); see the sketch 
        after this list
      - support for active and passive synchronization
      - optimized two-sided operations with RDMA support
      - efficient memory registration/de-registration schemes 
        for RDMA operations
      - optimized intra-node shared memory support (bus-based and NUMA)
      - shared library support
      - ROMIO support
      - uDAPL support (tested for InfiniBand on Linux and Solaris, 
        Myrinet, and Ammasso GigE) 
      - scalable job start-up
      - optimized and tuned for the above platforms and different 
        network interfaces (PCI-X and PCI-Express with SDR and DDR)
      - support for multiple compilers (gcc, icc, and pgi) 
      - single code base for all of the above platforms and 
        operating systems
      - memory efficient scaling modes for medium and large clusters
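
As an illustration of the one-sided operations and active (fence)
synchronization listed above, here is a minimal MPI-2 sketch in C
(illustrative only, not taken from the MVAPICH2 sources; the window
contents and values are arbitrary). Rank 0 Puts a value into rank 1's
window and then Accumulates into it:

    /* one_sided.c -- minimal MPI-2 one-sided sketch (illustrative).
     * Run with at least 2 ranks, e.g.: mpiexec -n 2 ./one_sided */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int rank, buf = 0, val = 41, one = 1;
        MPI_Win win;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        /* Each rank exposes one integer through the window. */
        MPI_Win_create(&buf, sizeof(int), sizeof(int),
                       MPI_INFO_NULL, MPI_COMM_WORLD, &win);

        MPI_Win_fence(0, win);              /* open access epoch   */
        if (rank == 0)
            MPI_Put(&val, 1, MPI_INT, 1, 0, 1, MPI_INT, win);
        MPI_Win_fence(0, win);              /* complete the Put    */
        if (rank == 0)
            MPI_Accumulate(&one, 1, MPI_INT, 1, 0, 1, MPI_INT,
                           MPI_SUM, win);
        MPI_Win_fence(0, win);              /* complete Accumulate */

        if (rank == 1)
            printf("window value on rank 1: %d\n", buf);  /* 42 */

        MPI_Win_free(&win);
        MPI_Finalize();
        return 0;
    }

Passive synchronization replaces the fences with MPI_Win_lock and
MPI_Win_unlock on the target's window, so the target process does not
have to participate in the synchronization.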

Other features of this release include:

- Excellent performance: Sample performance numbers include:
 
  Two-sided operations on EM64T, PCI-Ex: 
     - 3.47 microsec one-way latency with IBA-SDR
     - 1502 MB/sec unidirectional bandwidth with IBA-DDR 
     - 2752 MB/sec bidirectional bandwidth with IBA-DDR 

  One-sided operations on EM64T, PCI-Ex, IBA-DDR:
     - 5.96 microsec Put latency 
     - 1503 MB/sec unidirectional PUT bandwidth 
     - 2759 MB/sec bidirectional PUT bandwidth 

  Two-sided operations with Solaris uDAPL/IBTL on Opteron, PCI-X,
  IBA-SDR:
     - 5.58 microsec one-way latency
     - 655 MB/sec unidirectional bandwidth
     - 799 MB/sec bidirectional bandwidth

  Two-sided operations with OpenIB/Gen2 uDAPL on Opteron, PCI-Ex
  IBA-SDR:
     - 3.63 microsec one-way latency
     - 962 MB/sec unidirectional bandwidth
     - 1869 MB/sec bidirectional bandwidth

  Performance numbers for all other platforms, system configurations,
  and operations can be viewed in the `Performance Results' section
  of the project's web page. 

- Similar performance to MVAPICH: With the new ADI-3-level design,
  MVAPICH2 0.9.0 delivers performance for two-sided operations similar
  to that of MVAPICH 0.9.5. Organizations and users interested in 
  getting the best performance for both two-sided and one-sided 
  operations may migrate from the MVAPICH code base to the MVAPICH2 
  code base. 

- A set of benchmarks to evaluate both two-sided and one-sided
  operations (Put, Get, and Accumulate)
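
  Latency figures such as those quoted above are typically obtained
  with a ping-pong micro-benchmark along the following lines (a
  minimal C sketch, not the actual OSU benchmark code; ITERS and the
  1-byte message size are arbitrary choices):

    /* pingpong.c -- minimal two-sided latency sketch (illustrative).
     * Run with exactly 2 ranks: mpiexec -n 2 ./pingpong */
    #include <mpi.h>
    #include <stdio.h>

    #define ITERS 1000

    int main(int argc, char **argv)
    {
        int rank, i;
        char msg = 0;
        double t0, t1;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        MPI_Barrier(MPI_COMM_WORLD);
        t0 = MPI_Wtime();
        for (i = 0; i < ITERS; i++) {
            if (rank == 0) {
                MPI_Send(&msg, 1, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
                MPI_Recv(&msg, 1, MPI_CHAR, 1, 0, MPI_COMM_WORLD,
                         MPI_STATUS_IGNORE);
            } else if (rank == 1) {
                MPI_Recv(&msg, 1, MPI_CHAR, 0, 0, MPI_COMM_WORLD,
                         MPI_STATUS_IGNORE);
                MPI_Send(&msg, 1, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
            }
        }
        t1 = MPI_Wtime();

        /* one-way latency = half of the averaged round-trip time */
        if (rank == 0)
            printf("latency: %.2f usec\n",
                   (t1 - t0) * 1e6 / (2.0 * ITERS));

        MPI_Finalize();
        return 0;
    }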

- An enhanced and detailed `User Guide' to assist users: 

       - to install this package on different platforms 
            with both interfaces (VAPI and uDAPL) and different options

       - to tune the parameters of the MPI installation to 
            extract maximum performance and achieve scalability, 
            especially on large-scale systems.

You are welcome to download the MVAPICH2 0.9.0 package and access
relevant information from the following URL:

http://nowlab.cse.ohio-state.edu/projects/mpi-iba/

A subsequent version with support for OpenIB/Gen2 will be available
soon.

All feedback, including bug reports and hints for performance tuning,
is welcome. Please send an e-mail to mvapich-help at cse.ohio-state.edu.

Thanks, 

MVAPICH Team at OSU/NBCL 

----------

PS: If you would like to be removed from this mailing list, please send
an e-mail to mvapich_request at cse.ohio-state.edu.


======================================================================
The MVAPICH/MVAPICH2 project is currently supported with funding from
the U.S. National Science Foundation, the U.S. DOE Office of Science,
Mellanox, Intel, Cisco Systems, Sun Microsystems, and Linux Networx;
and with equipment support from AMD, Ammasso, Apple, IBM, Intel,
Mellanox, Microway, PathScale, SilverStorm, and Sun Microsystems.
Other technology partners include Etnus.
======================================================================



