[libfabric-users] [ofiwg] provider developer documentation
Kenneth Raffenetti
raffenet at mcs.anl.gov
Wed Feb 8 07:36:28 PST 2017
Similarly, the MPICH/CH3 support requires the FI_EP_RDM endpoint type
and the FI_TAGGED capability.
CH4 supports a more complex set of EPs and caps.
Ken
On 02/08/2017 09:00 AM, Howard Pritchard wrote:
> Hi Robert
>
> For the current OFI MTL inside Open MPI a provider needs to support
> FI_EP_RDM.
>
> Howard
>
> Jeff Squyres (jsquyres) <jsquyres at cisco.com> wrote on
> Wed, Feb 8, 2017 at 06:41:
>
> +1 -- each MPI is different.
>
> Open MPI nominally supports the same Libfabric tag matching
> interfaces as MPICH/ch4, but the exact requirements may be slightly
> different (I don't know offhand).
>
> Additionally, some vendors have chosen to support different
> Libfabric interfaces in Open MPI. For example, Cisco supports its
> usNIC hardware with the EP_DGRAM Libfabric interfaces.
>
>
> > On Feb 7, 2017, at 6:22 PM, Blocksome, Michael
> > <michael.blocksome at intel.com> wrote:
> >
> > For compiling MPICH/CH4/OFI, the pertinent ofi features required of
> > the provider are specified via “capability sets” (basically a
> > collection of ofi features).
> >
> >
> > https://github.com/pmodels/mpich/blob/master/src/mpid/ch4/netmod/ofi/ofi_capability_sets.h
> >
> > /*
> > * The definitions map to these capability sets:
> > *
> > * MPIDI_OFI_ENABLE_DATA                 fi_tsenddata (and other functions with immediate data)
> > *                                       Uses FI_REMOTE_CQ_DATA, FI_DIRECTED_RECV
> > * MPIDI_OFI_ENABLE_AV_TABLE             Use FI_AV_TABLE instead of FI_AV_MAP
> > * MPIDI_OFI_ENABLE_SCALABLE_ENDPOINTS   fi_scalable_ep instead of fi_ep
> > *                                       domain_attr.max_ep_tx_ctx > 1
> > * MPIDI_OFI_ENABLE_STX_RMA              Use shared transmit contexts for RMA
> > *                                       Uses FI_SHARED_CONTEXT
> > * MPIDI_OFI_ENABLE_MR_SCALABLE          Use FI_MR_SCALABLE instead of FI_MR_BASIC
> > *                                       If using runtime mode, this will be set to FI_MR_UNSPEC
> > * MPIDI_OFI_ENABLE_TAGGED               Use FI_TAGGED interface instead of FI_MSG
> > * MPIDI_OFI_ENABLE_AM                   Use FI_MSG and FI_MULTI_RECV for active messages
> > * MPIDI_OFI_ENABLE_RMA                  Use FI_ATOMICS and FI_RMA interfaces
> > * MPIDI_OFI_FETCH_ATOMIC_IOVECS         The maximum number of iovecs that can be used for fetch_atomic operations
> > */
> >
> > Most things you can turn off, but some are required. You need to
> > support one of the two address vector types, a memory region type
> > (if doing RMA and/or atomics), etc. In general, you need “FI_MSG and
> > FI_MULTI_RECV” to get MPICH bootstrapped; the rest is sugar.
> >
> > Of course all MPIs are different … this is just for MPICH.
> >
> > mike.
> >
> > -----Original Message-----
> > From: Libfabric-users [mailto:libfabric-users-bounces at lists.openfabrics.org] On Behalf Of Hefty, Sean
> > Sent: Tuesday, February 7, 2017 5:18 PM
> > To: Robert Cauble <rcauble at google.com>
> > Cc: ofiwg at lists.openfabrics.org; libfabric-users at lists.openfabrics.org
> > Subject: Re: [libfabric-users] provider developer documentation
> >
> >> You mention "minimal amount of the API" -- I would like to do that
> >> for starters. Are there guidelines WRT which operations/features are
> >> required by various MPI implementations and implications for not
> >> supporting them (loss of functionality vs loss of performance)?
> >
> > I think this greatly depends on the MPI. :) The target minimal
> > amount of work depends on your hardware. The expectation is that
> > the utility layers will provide the rest. Note that the utility
> > layers are designed for performance; they just may not be optimal
> > for the underlying hardware. (We are actively working on the
> > utility code, so full drop-in support isn't there yet.)
> >
> > If your target HW just sends/receives packets, you can aim for the
> > functionality supported by the UDP provider -- DGRAM EPs with simple
> > send/receive support. If your target HW works best with
> > connection-oriented communication, I would mimic the verbs provider
> > -- MSG EPs with RMA and shared receive contexts. For
> > reliable-unconnected hardware, I would implement support for the
> > tagged interfaces first.
> >
> > I'm not familiar enough with the MPI implementations to know what
> > features are optional versus required.
> > _______________________________________________
> > Libfabric-users mailing list
> > Libfabric-users at lists.openfabrics.org
> > http://lists.openfabrics.org/mailman/listinfo/libfabric-users
> > _______________________________________________
> > ofiwg mailing list
> > ofiwg at lists.openfabrics.org
> > http://lists.openfabrics.org/mailman/listinfo/ofiwg
>
>
> --
> Jeff Squyres
> jsquyres at cisco.com <mailto:jsquyres at cisco.com>
>