[ofiwg] DS/DA - updated slides for today
grun at cray.com
Tue Feb 2 14:26:13 PST 2016
Thanks Scott. I think I mostly agree with what you're saying. After this morning's meeting I did what I threatened to do, which was to add a rank of NVDIMMs to the client side. Doing so lets me pose the question that needs to be asked for both kernel and user mode. I think you've given us the answer for kernel mode below, but let's see. Please take a look at slide 6. The slide deck is a juiced-up version of what we worked on last fall.
From: Atchley, Scott [mailto:atchleyes at ornl.gov]
Sent: Tuesday, February 02, 2016 9:08 AM
To: Paul Grun <grun at cray.com>
Cc: ofiwg at lists.openfabrics.org
Subject: Re: [ofiwg] DS/DA - updated slides for today
Regarding slide 31 and your comment about adding yellow NVDIMM on the left side of the client, if we are talking about userspace processes, then I would definitely say yes. I assume that the userspace libfabric will have a shared memory provider that uses mmapped bounce buffers in the worst case and single-copy mechanisms such as Linux's Cross Memory Attach (CMA) or SGI's XPMEM when available. Because the processes have different address spaces and each process ends up with its own copy of the data, using message queues and RMA makes sense.
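For concreteness, here is a minimal sketch of the single-copy path via CMA (process_vm_readv); the wrapper name is made up, not anything in libfabric:

    #define _GNU_SOURCE
    #include <sys/uio.h>
    #include <unistd.h>

    /* Pull len bytes directly out of another process's address space.
     * The kernel performs one copy; no shared bounce buffer is touched.
     * Caller needs ptrace-level permission on src_pid (same user, typically). */
    ssize_t single_copy_read(pid_t src_pid, void *dst, void *src_remote, size_t len)
    {
        struct iovec local  = { .iov_base = dst,        .iov_len = len };
        struct iovec remote = { .iov_base = src_remote, .iov_len = len };
        return process_vm_readv(src_pid, &local, 1, &remote, 1, 0);
    }

The bounce-buffer path, by contrast, costs two copies: sender into the mmapped region, receiver out of it.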
In the kernel, however, there is a single address space, and I am not sure that having two copies is beneficial in any way. If it is not, then I am not sure it makes sense to access local resources via kfabric.
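To illustrate what the direct alternative looks like in a single address space, a hypothetical kernel-side sketch (PMEM_BASE and PMEM_SIZE are placeholders, not a real driver): map the NVDIMM region and copy once, with no kfabric round trip.

    #include <linux/io.h>
    #include <linux/errno.h>
    #include <linux/string.h>

    /* Hypothetical NVDIMM physical range; placeholders for illustration only */
    #define PMEM_BASE 0x100000000ULL
    #define PMEM_SIZE (1ULL << 30)

    static int pmem_direct_read(void *dst, size_t off, size_t len)
    {
            /* Map the persistent region write-back cacheable */
            void *pmem = memremap(PMEM_BASE, PMEM_SIZE, MEMREMAP_WB);
            if (!pmem)
                    return -ENOMEM;
            /* One copy, same address space: no message queue or RMA needed */
            memcpy(dst, pmem + off, len);
            memunmap(pmem);
            return 0;
    }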
> On Feb 2, 2016, at 10:54 AM, Paul Grun <grun at cray.com> wrote:
> Incorporated comments from Bernard and Stan
> Cray Inc.
> Office: (503) 620-8757
> Mobile: (503) 703-5382
> <kfabric-maintainer discussion__2016_0201.pptx>