From jsquyres at cisco.com  Fri Apr  1 03:34:07 2016
From: jsquyres at cisco.com (Jeff Squyres (jsquyres))
Date: Fri, 1 Apr 2016 10:34:07 +0000
Subject: [OFIWG-MPI] Priority feature list for MPI
In-Reply-To: <956261296.83176.1459429051005.JavaMail.open-xchange@webmail.cdac.in>
References: <956261296.83176.1459429051005.JavaMail.open-xchange@webmail.cdac.in>
Message-ID:

On Mar 31, 2016, at 8:57 AM, Eva Mishra wrote:
>
> The libfabric man pages list many features that fulfill the requirements of different applications.
>
> Existing OFI providers have implemented some of these features and dropped the others.
>
> For a new HPC provider, what are the most important OFI features needed for MPI (priority-wise)?

That may not be an answerable question.  MPI is a complex API that has many modes and types of communication.

All of MPI's functionality can be implemented with the basic send and receive of any underlying network API (e.g., you can implement all of MPI over POSIX sockets).  Hence, you can implement all of MPI via simple modes of fi_send/fi_recv.  Just by using standard OS-bypass techniques, you can get a lot of speedup (e.g., compared to POSIX sockets) and get good performance out of MPI applications.

Additional libfabric features allow a much more "natural" implementation and/or more opportunity to overlap communication and computation, among other optimizations.  E.g., NIC offload functionality allows message passing to progress without main CPU intervention, direct data placement allows elimination of copies at the receiver, patterned inbound message matching combined with inbound message steering can optimize MPI matching engines at the receiver, etc.

So while the libfabric API provides many, many opportunities for MPI implementations, it's a choice as to which ones you implement in which order.  That usually depends on your use cases (e.g., customer applications), the underlying hardware you're trying to support, etc.
-- 
Jeff Squyres
jsquyres at cisco.com
For corporate legal information go to: http://www.cisco.com/web/about/doing_business/legal/cri/


From sean.hefty at intel.com  Fri Apr  1 13:51:30 2016
From: sean.hefty at intel.com (Hefty, Sean)
Date: Fri, 1 Apr 2016 20:51:30 +0000
Subject: [OFIWG-MPI] Priority feature list for MPI
In-Reply-To: <956261296.83176.1459429051005.JavaMail.open-xchange@webmail.cdac.in>
References: <956261296.83176.1459429051005.JavaMail.open-xchange@webmail.cdac.in>
Message-ID: <1828884A29C6694DAF28B7E6B8A82373AB0286A0@ORSMSX109.amr.corp.intel.com>

> The libfabric man pages list many features that fulfill the requirements of
> different applications.
>
> Existing OFI providers have implemented some of these features and dropped
> the others.
>
> For a new HPC provider, what are the most important OFI features needed
> for MPI (priority-wise)?

All MPIs make use of internal communication abstractions.  For Intel MPI, the tagged matching interfaces align well with one of those layers.  As Jeff mentioned in his email, other MPIs and other internal layers will make use of different features.

- Sean


From jsquyres at cisco.com  Fri Apr  1 20:05:10 2016
From: jsquyres at cisco.com (Jeff Squyres (jsquyres))
Date: Sat, 2 Apr 2016 03:05:10 +0000
Subject: [OFIWG-MPI] 1.3.0rc2 releases
Message-ID:

Given that there were a bunch of changes today (including a bug fix within the last hour), it seemed prudent to *not* do a final release at 11pm on a Friday evening.

Instead, I have made 1.3.0rc2 and published it in the usual location:

    http://www.openfabrics.org/downloads/ofi/

I leave it to all of you to do the final release on Monday.  It would probably be good to get agreement from everyone in the room together before doing the final release, anyway.

The checklist items on https://github.com/ofiwg/libfabric/issues/1905 should guide you through what needs to be done for the release.
-- 
Jeff Squyres
jsquyres at cisco.com
For corporate legal information go to: http://www.cisco.com/web/about/doing_business/legal/cri/