[ofiwg] [libfabric-users] feature requests
Faraj, Daniel
daniel.faraj at hpe.com
Tue Jun 6 17:26:47 PDT 2017
My bad, I did not mean the device; I meant the low-level messaging stack, PSM and the like.
Anyway, my suggestion was to do this, if possible, at the OFI level so that all middle layers reap the benefit.
--
Daniel Faraj
HPE Performance Engineering
651.683.7605 Office
daniel.faraj at hpe.com
________________________________
From: Hefty, Sean <sean.hefty at intel.com>
Sent: Tuesday, June 6, 2017 6:36:15 PM
To: Faraj, Daniel; Jeff Hammond
Cc: ofiwg at lists.openfabrics.org; libfabric-users at lists.openfabrics.org; Baron, John
Subject: RE: [libfabric-users] feature requests
> If MPI or another middle layer is to implement multirail, why bother
> even with OFI: implement directly on the device and avoid the extra
> OFI overhead.
Well, if MPI wants to write to device registers, I say go for it. Maybe write it in assembly too, to avoid the extra C overhead. :)
More seriously, I'd like to start by analyzing what it would take to add multi-rail support over reliable-datagram (RDM) endpoints, on the assumption that this would perform comparably to what MPI could do on its own. This has the added benefit that it could work across completely different networks, though I'm not sure that's a requirement.
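Roughly what I'm imagining, as a sketch only: each rail is brought up separately through the usual fi_getinfo/fi_fabric/fi_domain/fi_endpoint sequence, with CQs bound and the peer inserted into each rail's AV, and then a large send is striped round-robin across the rails. "struct rail", NRAILS, and striped_send below are illustrative names, not anything OFI defines:

#include <stddef.h>
#include <rdma/fabric.h>
#include <rdma/fi_domain.h>
#include <rdma/fi_endpoint.h>

#define NRAILS 2

struct rail {
    struct fid_fabric *fabric;
    struct fid_domain *domain;
    struct fid_ep     *ep;    /* FI_EP_RDM endpoint for this rail */
    fi_addr_t          peer;  /* peer's address in this rail's AV */
};

/* Stripe one buffer round-robin across the rails in fixed chunks.
 * Completions must still be reaped from each rail's CQ before the
 * buffer is reused; that part is omitted here. */
static int striped_send(struct rail *rails, const char *buf, size_t len)
{
    const size_t chunk = 64 * 1024;
    size_t off = 0;
    int r = 0;

    while (off < len) {
        size_t n = len - off < chunk ? len - off : chunk;

        /* desc is NULL, assuming the provider does not require
         * FI_MR_LOCAL registration for send buffers. */
        ssize_t ret = fi_send(rails[r].ep, buf + off, n, NULL,
                              rails[r].peer, NULL);
        if (ret)
            return (int)ret;  /* e.g. -FI_EAGAIN: caller retries */

        off += n;
        r = (r + 1) % NRAILS;
    }
    return 0;
}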
I guess the first thing to figure out is how addressing works when multi-rail is in use. Would we need some sort of super-address that's a union of the underlying fabric addresses?
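For example (purely illustrative, nothing like this exists in OFI today), the super-address could just be a small fixed array of per-rail fi_addr_t values, each resolved through that rail's own AV:

#include <stdint.h>
#include <rdma/fabric.h>   /* fi_addr_t */

#define MAX_RAILS 4        /* illustrative cap */

/* Hypothetical "super-address": the peer's fi_addr_t on each rail,
 * in a rail order both sides agree on. */
struct super_addr {
    uint32_t  nrails;                 /* rails actually populated */
    fi_addr_t rail_addr[MAX_RAILS];   /* entry i comes from rail i's AV */
};

A send on rail i would then target rail_addr[i], which avoids inventing a new wire-level address format but pushes the exchange of all per-rail addresses into connection setup.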