[libfabric-users] Multiple ranks/instances on the same node

Howard Pritchard hppritcha at gmail.com
Sat Sep 12 09:58:32 PDT 2020


Hi JB,

The GNI provider uses XPMEM for large intra-node message transfers.  Short
messages are still routed through the GNI SMSG path, hence the relatively
high latency compared to, say, psm2 intra-node.
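
If you want to experiment with XPMEM, the switch is exposed through the GNI
domain extension ops.  A rough, untested sketch from my reading of fi_gni(7)
and rdma/fi_ext_gni.h follows (double-check the exact names against your
install):

    #include <stdbool.h>
    #include <rdma/fabric.h>
    #include <rdma/fi_domain.h>
    #include <rdma/fi_ext_gni.h>

    /* Enable XPMEM on an already-opened GNI domain.
     * Returns 0 or a negative fi_errno value. */
    static int gni_enable_xpmem(struct fid_domain *domain)
    {
            struct fi_gni_ops_domain *gni_ops;
            bool xpmem = true;
            int ret;

            /* Look up the GNI-specific domain ops vtable. */
            ret = fi_open_ops(&domain->fid, FI_GNI_DOMAIN_OPS_1, 0,
                              (void **) &gni_ops, NULL);
            if (ret)
                    return ret;

            /* Turn on XPMEM for intra-node transfers on this domain. */
            return gni_ops->set_val(&domain->fid, GNI_XPMEM_ENABLE, &xpmem);
    }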

Howard


On Thu, 10 Sep 2020 at 15:36, Biddiscombe, John A. <biddisco at cscs.ch>
wrote:

> Thanks Sean, I had assumed that I'd have to handle the case myself;
> however, it shouldn't be too hard to enable a shared memory provider and
> then use a dedicated endpoint for ranks known to be on the same node.
> After quickly looking at the docs again, I see that I missed
> *GNI_XPMEM_ENABLE* for the GNI backend, so I'll play with that and see if
> it makes a difference.
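>
> In rough pseudocode, something like the following is what I have in mind
> for the shm side (untested; the "shm" provider name and the hints are my
> assumption of how fi_getinfo would be used here):
>
>     #include <string.h>
>     #include <rdma/fabric.h>
>     #include <rdma/fi_endpoint.h>
>
>     /* Ask libfabric specifically for the shared-memory provider; the
>      * returned fi_info is then used to open a second fabric/domain/
>      * endpoint reserved for ranks known to be on the same node. */
>     static struct fi_info *get_shm_info(void)
>     {
>             struct fi_info *hints = fi_allocinfo();
>             struct fi_info *info = NULL;
>
>             hints->fabric_attr->prov_name = strdup("shm");
>             hints->ep_attr->type = FI_EP_RDM;
>             hints->caps = FI_MSG | FI_TAGGED;
>
>             fi_getinfo(FI_VERSION(1, 9), NULL, NULL, 0, hints, &info);
>             fi_freeinfo(hints);
>             return info;    /* NULL if no shm provider is available */
>     }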
>
>
> Cheers
>
>
> JB
> ------------------------------
> *From:* Hefty, Sean <sean.hefty at intel.com>
> *Sent:* 10 September 2020 19:22:06
> *To:* Biddiscombe, John A.; libfabric-users at lists.openfabrics.org
> *Subject:* RE: Multiple ranks/instances on the same node
>
> > If I have multiple processes (ranks) on the same node and they send
> > messages to each other, does libfabric auto-magically do the right thing
> > and use some kind of shared memory for them? If not, can this be enabled
> > by using one of the shared memory providers (in conjunction with the gni
> > provider, for example)? And if that is the case, does one need to use a
> > special endpoint to communicate within the node, or will that be handled
> > automatically?
>
> The answer is provider-specific.  For example, psm/psm2 will use shared
> memory; other providers will not (I'm not sure about gni).  A generic
> solution is unlikely: shared memory support must be integrated on a
> provider-by-provider basis to support proper tag matching semantics.
> Ideally, at least rxm and rxd would integrate shared memory support.
> There's just a developer resource shortage to make that happen.
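>
> A quick way to see what you actually get at runtime is to dump the
> providers that fi_getinfo reports (off the top of my head, untested), or
> just run the fi_info utility that ships with libfabric:
>
>     #include <stdio.h>
>     #include <rdma/fabric.h>
>
>     /* List every provider/fabric combination this build can offer. */
>     int main(void)
>     {
>             struct fi_info *info, *cur;
>
>             if (fi_getinfo(FI_VERSION(1, 9), NULL, NULL, 0, NULL, &info))
>                     return 1;
>             for (cur = info; cur; cur = cur->next)
>                     printf("%s / %s\n", cur->fabric_attr->prov_name,
>                            cur->fabric_attr->name);
>             fi_freeinfo(info);
>             return 0;
>     }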
>
> - Sean
> _______________________________________________
> Libfabric-users mailing list
> Libfabric-users at lists.openfabrics.org
> https://lists.openfabrics.org/mailman/listinfo/libfabric-users
>