[ofiwg] input on intra-node implementation

Jeff Hammond jeff.science at gmail.com
Fri Feb 12 09:31:51 PST 2016


As far as I know, XPMEM ships as standard on Cray and SGI machines but
nowhere else.  Of course, any operator can install the kernel module, but
in my experience that is uncommon, as in, I have never seen it.
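
For reference, the XPMEM usage model is roughly the following (a sketch
against the xpmem.h interface as I understand it; error handling is omitted,
and the segid exchange between processes, e.g. over a shared-memory control
channel, is an assumption here):

#include <stddef.h>
#include <xpmem.h>

/* Exporter: expose a buffer and hand the returned segid to the peer
 * out of band (assumed to travel over a control channel). */
xpmem_segid_t export_buffer(void *buf, size_t len)
{
        return xpmem_make(buf, len, XPMEM_PERMIT_MODE, (void *)0666);
}

/* Importer: map the peer's buffer into our own address space, then
 * use ordinary loads and stores; no per-transfer system call. */
void *import_buffer(xpmem_segid_t segid, size_t len)
{
        xpmem_apid_t apid = xpmem_get(segid, XPMEM_RDWR,
                                      XPMEM_PERMIT_MODE, NULL);
        struct xpmem_addr addr = { .apid = apid, .offset = 0 };

        return xpmem_attach(addr, len, NULL);
}

The direct mapping is what makes it attractive: once attached, a transfer
is just a memcpy.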

XPMEM appears to be the most powerful option, but since CMA
(process_vm_readv/writev) is now standard in mainline Linux, I think that's
the better option to pursue.
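
On the CMA side, the data-transfer half of option 2 in your list is a single
syscall per copy.  A minimal sketch, assuming the peer pid and the remote
buffer address/length already arrived via a shared-memory control message
(the function name is illustrative, not a libfabric interface):

#define _GNU_SOURCE
#include <sys/types.h>
#include <sys/uio.h>
#include <stdio.h>

/* Pull len bytes from remote_addr in process peer into local_buf.
 * The kernel copies directly between the two address spaces; the caller
 * needs ptrace-level access (PTRACE_MODE_ATTACH) to the peer. */
static ssize_t cma_read(pid_t peer, void *local_buf, size_t len,
                        void *remote_addr)
{
        struct iovec local  = { .iov_base = local_buf,   .iov_len = len };
        struct iovec remote = { .iov_base = remote_addr, .iov_len = len };

        ssize_t n = process_vm_readv(peer, &local, 1, &remote, 1, 0);
        if (n < 0)
                perror("process_vm_readv");
        return n;
}

process_vm_writev covers the write direction, and small transfers can still
ride along with the control messages as you describe.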

I think respecting security concerns is essential to mainstream adoption.
And in HPC, security isn't just about protecting against malicious actors,
but also against bad programmers.
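
A loopback provider can get a lot of that protection cheaply: check the
requested RMA range against the target's registration before issuing the
copy.  A hypothetical sketch (the mr_region structure and the lookup around
it are assumptions for illustration, not libfabric types):

#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

struct mr_region {
        uint64_t base;    /* registered start address in the target */
        size_t   len;     /* registered length */
        uint64_t access;  /* read/write permission bits */
};

/* Reject the operation unless [addr, addr+len) sits entirely inside the
 * registered region and the required access bits are granted. */
static bool rma_access_ok(const struct mr_region *mr, uint64_t addr,
                          size_t len, uint64_t required_access)
{
        if ((mr->access & required_access) != required_access)
                return false;

        return addr >= mr->base && len <= mr->len &&
               addr - mr->base <= mr->len - len;
}

A check like that costs little and turns a stray pointer from silent
corruption of another process into a clean error.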

Best,

Jeff

On Tue, Feb 9, 2016 at 12:54 PM, Hefty, Sean <sean.hefty at intel.com> wrote:

> I want to provide an intra-node communication (i.e. loopback) utility to
> libfabric.  The loopback utility could be part of a stand-alone provider,
> or incorporated into other providers.  For this, I'm looking at selecting a
> single, easily maintained implementation.  These are my choices so far:
>
> 1. Control messages transfer over shared memory
>    Data transfers use shared memory bounce buffers
> 2. Control messages transfer over shared memory
>    Data transfers occur over CMA
>    (small transfers go with control messages)
> 3. Use XPMEM in some TBD way
>
> Some of these options are only available on Linux.  Does the portability
> of this solution matter?  FreeBSD and Solaris would fall back to using the
> network loopback device.
>
> How much concern needs to be given to security?  Should the loopback
> utility enforce RMA registrations?  Do we require processes to share a
> certain level of access, such as ptrace capability?
>
> I think we need input on this not just from the MPI community, but other
> users as well.
>
> - Sean
> _______________________________________________
> ofiwg mailing list
> ofiwg at lists.openfabrics.org
> http://lists.openfabrics.org/mailman/listinfo/ofiwg
>



-- 
Jeff Hammond
jeff.science at gmail.com
http://jeffhammond.github.io/