[openib-general] Announcing the release of MVAPICH 0.9.7 with support for Gen2, SRQ with Flow Control, Fault Tolerance and anonymous SVN access
Dhabaleswar Panda
panda at cse.ohio-state.edu
Tue Mar 14 21:46:11 PST 2006
The MVAPICH team is pleased to announce the release of MVAPICH 0.9.7
with the following new features:
- Shared Receive Queue (SRQ) support with flow control: The new
design uses significantly less memory for the MPI library - less than
300 MBytes of MPI internal buffers per process for clusters with 16K
processes! Performance benefits of the new scheme for sample
applications (HPL and NAS LU) can be obtained from the MVAPICH
project's page -> Performance.
- OpenIB/Gen2 support: All features (such as RDMA-based collectives,
multi-rail, scalable MPD-based startup, etc.) available with VAPI are
now available with OpenIB/Gen2.
- MVAPICH-Gen2 also enables the use of the IBM ehca adapter and the
PathScale adapter through OpenIB/Gen2 support.
- Support for Fault Tolerance: Mem-to-mem reliable data transfer
(detection of I/O bus errors with a 32-bit CRC). Additional fault
tolerance support (such as checkpoint/restart, automatic path
migration (APM), etc.) will be introduced in successive releases.
- Advanced AVL tree-based Resource-aware registration cache
- Tuning and Optimization of various collective algorithms for a wide
range of system sizes
- Multi-rail communication: Various schemes (multiple queue pairs per
port, multiple ports per adapter, and multiple adapters). Flexible
scheduling policies: round-robin for small and non-blocking
communication, and striping for large blocking communication.
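The two multi-rail scheduling policies described above can be sketched as follows. This is an illustrative sketch only, not MVAPICH's actual code; the class name, the `schedule` interface, and the 8 KB small-message cutoff are all made-up example values.

```python
from itertools import cycle

SMALL_MSG_THRESHOLD = 8 * 1024  # hypothetical cutoff in bytes, not MVAPICH's real tunable


class MultiRailScheduler:
    """Sketch of round-robin vs. striping message scheduling across rails."""

    def __init__(self, num_rails):
        self.rails = list(range(num_rails))
        self._rr = cycle(self.rails)  # round-robin iterator over rail ids

    def schedule(self, msg_len):
        """Return a list of (rail, offset, length) chunks for one message."""
        if msg_len <= SMALL_MSG_THRESHOLD:
            # Small message: send it whole on the next rail in round-robin order.
            return [(next(self._rr), 0, msg_len)]
        # Large message: stripe it roughly evenly across all rails.
        n = len(self.rails)
        base, extra = divmod(msg_len, n)
        chunks, offset = [], 0
        for i, rail in enumerate(self.rails):
            length = base + (1 if i < extra else 0)
            chunks.append((rail, offset, length))
            offset += length
        return chunks
```

With two rails, consecutive small messages alternate between rail 0 and rail 1, while a large message is split into two contiguous stripes, one per rail.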
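The mem-to-mem reliable transfer feature listed above rests on a simple idea: the sender attaches a 32-bit CRC to each message and the receiver verifies it, so corruption introduced between the two memories (e.g. by an I/O bus error) is detected. The sketch below illustrates only that idea; the function names are hypothetical and this is not MVAPICH's implementation.

```python
import zlib


def attach_crc(payload: bytes) -> bytes:
    # Sender side: append a CRC32 of the payload to the message.
    crc = zlib.crc32(payload) & 0xFFFFFFFF
    return payload + crc.to_bytes(4, "big")


def verify_crc(message: bytes) -> bytes:
    # Receiver side: recompute the CRC over the payload; a mismatch means
    # the data was corrupted in transit and the transfer must be retried.
    payload, received = message[:-4], int.from_bytes(message[-4:], "big")
    if zlib.crc32(payload) & 0xFFFFFFFF != received:
        raise IOError("CRC mismatch: data corrupted in transit")
    return payload
```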
The MVAPICH 0.9.7 release supports the Gen2, VAPI and uDAPL transport
interfaces. It also supports standard TCP/IP (provided by the MPICH
stack). It has been tested with the following architectures, operating
systems, compilers and InfiniBand adapters:
- Architecture: EM64T, Opteron, IA-32, IBM PPC and Mac G5
- Operating Systems: Linux, Solaris, AIX and Mac OSX
- Compilers: gcc, intel, pathscale and pgi
- InfiniBand Adapters:
- Mellanox adapters with PCI-X and PCI-Express
(SDR and DDR with mem-full and mem-free cards)
- PathScale adapter (through OpenIB/Gen2 support)
- IBM ehca adapter (through OpenIB/Gen2 support)
More details on all features and supported platforms can be obtained
by visiting the project's web page -> Overview -> features.
MVAPICH 0.9.7 is being distributed as a single integrated package
(with MPICH 1.2.7 and MVICH). It is available under the BSD license.
Starting with this 0.9.7 release, the MVAPICH team is also pleased to
announce the availability of the code base through anonymous SVN
access. Nightly tarballs are also available. A new mvapich-commit
mailing list has been established for users, developers and vendors to
keep track of all commits to the SVN repository. (SVN access to the
MVAPICH2 code base will be coming soon.)
MVAPICH 0.9.7 continues to deliver excellent performance. Performance
numbers for all platforms, system configurations and operations can
be viewed in the `Performance' section of the project's web page.
An enhanced and detailed `User Guide' is now available (in both HTML
and PDF forms) from the FAQ page.
To download the MVAPICH 0.9.7 package and access the anonymous
SVN, please visit the following URL:
http://nowlab.cse.ohio-state.edu/projects/mpi-iba/
A stripped down version of this release is also available at the
OpenIB SVN.
All feedback, including bug reports, hints for performance tuning,
patches and enhancements, is welcome. Please post it to the newly
established mvapich-discuss mailing list.
Thanks,
MVAPICH Team at OSU/NBCL
======================================================================
The MVAPICH/MVAPICH2 project is currently supported with funding from
the U.S. National Science Foundation, the U.S. DOE Office of Science,
Mellanox, Intel, Cisco Systems, Sun Microsystems and Linux Networx;
and with equipment support from AMD, Apple, Appro, IBM, Intel,
Mellanox, Microway, PathScale, SilverStorm and Sun Microsystems. Other
technology partners include Etnus.
======================================================================