[OFIWG-MPI] Priority feature list for MPI
Jeff Squyres (jsquyres)
jsquyres at cisco.com
Fri Apr 1 03:34:07 PDT 2016
On Mar 31, 2016, at 8:57 AM, Eva Mishra <evam at cdac.in> wrote:
>
> The libfabric man pages list many features that fulfill the requirements of different applications.
>
> Existing OFI providers have implemented some of these features and dropped others.
>
> For a new HPC provider, what are the most important OFI features needed for MPI (priority-wise)?
That may not be an answerable question.
MPI is a complex API with many modes and types of communication. All of MPI's functionality can be implemented with the basic send and receive operations of any underlying network API (e.g., you can implement all of MPI over POSIX sockets). Hence, you can implement all of MPI via the simplest modes of fi_send/fi_recv. Just by using standard OS-bypass techniques, you can get a large speedup (e.g., compared to POSIX sockets) and good performance out of MPI applications.
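To make that concrete, here's a rough (untested) sketch of a blocking send built from just those simplest primitives: post an fi_send, then spin on the completion queue. It assumes an endpoint (ep) and completion queue (cq) that have already been set up; all of that initialization boilerplate is omitted.

    /* Minimal "everything over fi_send" sketch; endpoint/CQ setup omitted. */
    #include <rdma/fabric.h>
    #include <rdma/fi_endpoint.h>
    #include <rdma/fi_eq.h>
    #include <rdma/fi_errno.h>

    /* Post a send and poll the CQ until it completes: the libfabric
       equivalent of a blocking MPI_Send built from the basic primitives. */
    static int blocking_send(struct fid_ep *ep, struct fid_cq *cq,
                             const void *buf, size_t len, fi_addr_t dest)
    {
        struct fi_cq_entry entry;
        ssize_t ret;

        do {
            ret = fi_send(ep, buf, len, NULL /* desc */, dest, NULL /* ctx */);
        } while (ret == -FI_EAGAIN);        /* retry if the TX queue is full */
        if (ret)
            return (int) ret;

        do {
            ret = fi_cq_read(cq, &entry, 1);   /* poll for the completion */
        } while (ret == -FI_EAGAIN);

        return (ret < 0) ? (int) ret : 0;
    }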
Additional libfabric features allow a much more "natural" implementation, more opportunity for overlap of communication and computation, and other types of optimizations. E.g., using NIC offload functionality allows message passing to progress without main CPU intervention, direct data placement eliminates copies at the receiver, patterned inbound message matching combined with inbound message steering can optimize MPI matching engines at the receiver, etc.
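As a sketch of that matching case: libfabric's tagged interface (fi_tsend/fi_trecv) lets the provider do MPI's (communicator, source, tag) matching for you. The 64-bit tag encoding below is a hypothetical example of mine; neither MPI nor libfabric mandates this particular bit split, and real implementations choose their own layout (and must respect the provider's mem_tag_format).

    /* Hypothetical encoding of MPI (context id, source, tag) into a 64-bit
       libfabric tag; the bit split below is illustrative only.  The
       communicator context id sits in the top 16 bits. */
    #include <stdint.h>
    #include <rdma/fabric.h>
    #include <rdma/fi_tagged.h>

    #define SRC_BITS  16   /* source rank */
    #define TAG_BITS  32   /* user tag */

    static uint64_t make_tag(uint64_t ctx, uint64_t src, uint64_t tag)
    {
        return (ctx << (SRC_BITS + TAG_BITS)) | (src << TAG_BITS) | tag;
    }

    /* Post a receive whose matching is done by the provider.  A wildcard
       like MPI_ANY_TAG becomes 1-bits in the "ignore" mask rather than a
       software matching loop on the host CPU. */
    static ssize_t post_matched_recv(struct fid_ep *ep, void *buf, size_t len,
                                     uint64_t ctx, uint64_t src, int any_tag)
    {
        uint64_t tag    = make_tag(ctx, src, 0);
        uint64_t ignore = any_tag ? ((1ULL << TAG_BITS) - 1) : 0;

        return fi_trecv(ep, buf, len, NULL /* desc */, FI_ADDR_UNSPEC,
                        tag, ignore, NULL /* context */);
    }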
So while the libfabric API gives many, many opportunities to MPI implementations, which ones you implement, and in which order, is a choice. It usually depends on your use cases (e.g., customer applications), the underlying hardware that you're trying to support, etc.
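One practical pattern that falls out of this: probe the provider at startup for the richer capability set and fall back if it isn't there, then pick your code path accordingly. A hedged sketch (assumes libfabric 1.x; error handling and the rest of the endpoint setup are omitted):

    /* Ask for provider tag matching first; fall back to plain send/recv. */
    #include <stddef.h>
    #include <rdma/fabric.h>

    static struct fi_info *probe_provider(void)
    {
        struct fi_info *hints = fi_allocinfo();
        struct fi_info *info = NULL;

        hints->ep_attr->type = FI_EP_RDM;   /* reliable, unconnected endpoint */
        hints->caps = FI_TAGGED;            /* prefer hardware tag matching */

        if (fi_getinfo(FI_VERSION(1, 1), NULL, NULL, 0, hints, &info)) {
            hints->caps = FI_MSG;           /* fall back to basic msg caps */
            if (fi_getinfo(FI_VERSION(1, 1), NULL, NULL, 0, hints, &info))
                info = NULL;                /* no usable provider found */
        }

        fi_freeinfo(hints);
        return info;    /* caller picks its code path from info->caps */
    }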
--
Jeff Squyres
jsquyres at cisco.com
For corporate legal information go to: http://www.cisco.com/web/about/doing_business/legal/cri/