[Ofvwg] OFVWG meeting notes - 1/5/2016

Liran Liss liranl at mellanox.com
Tue Jan 5 12:42:28 PST 2016


Shachar Raindel from Mellanox and Davide Rossetti from NVidia have shown how RDMA and GPU devices can synchronize compute and IO operations.

This technology is complementary to GPUDirect RDMA, which allows direct RDMA transfers to GPU memory and is now under review in upstream Linux.
The technology offers two mechanisms (both are sketched in the code below):

- A way for the GPU to poll for completion
- A way for the GPU to signal the execution of a pre-posted work request
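
As a rough sketch, the two mechanisms map naturally onto CUDA stream memory operations. The snippet below assumes the driver-API calls cuStreamWaitValue32/cuStreamWriteValue32; the two address arguments are placeholders for the memory locations the Verbs provider would report, not part of any released API:

    /* Minimal sketch of the two mechanisms, assuming CUDA stream memory
     * operations. cq_poll_addr and wr_trigger_addr are placeholders for
     * the locations the Verbs provider would report for polling and
     * triggering. */
    #include <cuda.h>

    static void enqueue_sync_ops(CUstream stream,
                                 CUdeviceptr cq_poll_addr,    /* location to poll  */
                                 CUdeviceptr wr_trigger_addr) /* location to write */
    {
        /* Mechanism 1: poll for completion. The stream blocks until the
         * HCA writes a value >= 1 at cq_poll_addr; work enqueued on the
         * stream after this point runs only once the completion is seen. */
        cuStreamWaitValue32(stream, cq_poll_addr, 1, CU_STREAM_WAIT_VALUE_GEQ);

        /* Mechanism 2: signal a pre-posted work request. Writing the
         * trigger location makes the HCA execute the WR with no CPU
         * involvement. */
        cuStreamWriteValue32(stream, wr_trigger_addr, 1,
                             CU_STREAM_WRITE_VALUE_DEFAULT);
    }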

These two mechanisms can be used in a variety of use cases, such as waiting for incoming data before executing a GPU kernel. Another example is to send out the results of a kernel computation and to continue computing only after the results have been successfully transmitted and acknowledged.
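
To make the second example concrete, here is a minimal pipeline sketch built from the same two primitives; the kernels, addresses, and launch parameters are all placeholders. The whole sequence is enqueued up front, so the CPU is not involved while the pipeline runs:

    #include <cuda.h>

    __global__ void compute(float *buf)    { /* consume received data   */ }
    __global__ void next_stage(float *buf) { /* continue after the ACK  */ }

    void enqueue_pipeline(CUstream s, float *buf,
                          CUdeviceptr recv_done_addr,
                          CUdeviceptr send_trigger_addr,
                          CUdeviceptr send_done_addr)
    {
        /* 1. Wait for incoming data before running the kernel. */
        cuStreamWaitValue32(s, recv_done_addr, 1, CU_STREAM_WAIT_VALUE_GEQ);
        compute<<<1, 256, 0, s>>>(buf);

        /* 2. Kick the pre-posted send carrying the results. */
        cuStreamWriteValue32(s, send_trigger_addr, 1,
                             CU_STREAM_WRITE_VALUE_DEFAULT);

        /* 3. Resume only after the send has completed (i.e. was ACKed). */
        cuStreamWaitValue32(s, send_done_addr, 1, CU_STREAM_WAIT_VALUE_GEQ);
        next_stage<<<1, 256, 0, s>>>(buf);
    }
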
Initial benchmarking shows considerably reduced latency (by up to 40% in simple transactional benchmarks), improved application scaling, and much lower CPU utilization on the host.

The underlying implementation comprises Verbs extensions for posting un-signaled work requests, and for querying the provider for the memory locations to read from (polling) or write to (triggering operations).
The CUDA library is extended to issue such memory operations.
Finally, another library, GDS, provides simple abstractions that implement the above operations on top of both the Verbs and CUDA interfaces.
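
A sketch of how the three pieces could layer; neither the Verbs extensions nor GDS were public at the time of writing, so every ibv_peer_* and gds_* name below is invented for illustration, and only the cuStream* calls correspond to real CUDA driver entry points:

    #include <infiniband/verbs.h>
    #include <cuda.h>

    void queue_send_on_stream(CUstream stream, struct ibv_qp *qp,
                              struct ibv_cq *cq, struct ibv_send_wr *wr)
    {
        struct ibv_send_wr *bad_wr;
        struct ibv_peer_locations loc;               /* hypothetical struct */

        /* 1. Verbs extensions (hypothetical names): post a dormant WR and
         *    ask the provider where to poll and where to write the trigger. */
        ibv_post_unsignaled_send(qp, wr, &bad_wr);   /* hypothetical call */
        ibv_peer_query_locations(qp, cq, &loc);      /* hypothetical call */

        /* 2. CUDA extension: turn those locations into stream operations. */
        cuStreamWriteValue32(stream, loc.trigger_addr, loc.trigger_val,
                             CU_STREAM_WRITE_VALUE_DEFAULT); /* fire the WR */
        cuStreamWaitValue32(stream, loc.poll_addr, loc.poll_val,
                            CU_STREAM_WAIT_VALUE_GEQ);       /* await CQE   */
    }

    /* 3. GDS would hide steps 1+2 behind a single call, e.g.
     *    gds_stream_queue_send(stream, qp, wr);  -- hypothetical name */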

Current plans are to submit the Verbs extensions for upstream review in a few weeks.
The GDS library is expected to be released in about two months.
The library will be open source, under either a BSD or an MIT license.

While the proof of concept was developed for HCA-GPU synchronization, the APIs are generic and apply to other IO devices as well.
There are no dependencies between the Verbs stack and GPU libraries.

--Liran
P.S. there will be no OFVWG meeting next week.
