[libfabric-users] TX/RX data structures and data processing mode
Mikhail Khalilov
miharulidze at gmail.com
Fri Mar 16 06:04:06 PDT 2018
Good afternoon everyone!
My group is working on implementing a new libfabric provider for our HPC
interconnect. Our current main goal is to run MPICH and OpenMPI over this
provider.
The problem is that this NIC has no hardware or software RX/TX queues for
send/recv operations, so we decided to implement them at the libfabric
provider level. I'm therefore looking for suitable data structures for
storing and processing these queues.
I took a look at the sockets provider code. As far as I understand, tx_ctx
stores pointers to all the information (flags, data, src_address, etc.)
about every message to send in a ring buffer, while rx_ctx stores every
rx_entry in a doubly linked list. What was the motivation for choosing
different data structures for the TX and RX queues?
Could you give any advice on implementing these queues, or point me to
other useful information on this topic?
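
To make my question concrete, here is a minimal sketch of the two shapes
I have in mind, in the spirit of what I saw in the sockets provider (all
names here, tx_entry, tx_ring, rx_queue, TX_RING_SIZE and so on, are my
own for illustration, not the provider's actual structures):

#include <stddef.h>
#include <stdint.h>

#define TX_RING_SIZE 1024                /* must be a power of two */

/* TX: entries are produced and consumed in order, so a fixed-size
 * ring buffer with wrapping head/tail indices is enough. */
struct tx_entry {
    uint64_t flags;
    const void *buf;
    size_t len;
    uint64_t dest_addr;
};

struct tx_ring {
    struct tx_entry entries[TX_RING_SIZE];
    uint32_t head;                       /* next slot to fill */
    uint32_t tail;                       /* next slot to drain */
};

static int tx_ring_push(struct tx_ring *r, const struct tx_entry *e)
{
    if (r->head - r->tail == TX_RING_SIZE)
        return -1;                       /* full: caller retries (FI_EAGAIN) */
    r->entries[r->head & (TX_RING_SIZE - 1)] = *e;
    r->head++;
    return 0;
}

/* RX: a posted receive may be matched by an incoming message in any
 * order, so each rx_entry must be removable from the middle of the
 * queue; a doubly linked list makes that O(1). */
struct rx_entry {
    uint64_t tag;
    void *buf;
    size_t len;
    struct rx_entry *prev, *next;
};

struct rx_queue {
    struct rx_entry *head, *tail;
};

static void rx_queue_unlink(struct rx_queue *q, struct rx_entry *e)
{
    if (e->prev) e->prev->next = e->next; else q->head = e->next;
    if (e->next) e->next->prev = e->prev; else q->tail = e->prev;
    e->prev = e->next = NULL;
}

Is this roughly the reasoning, i.e. TX completes in posting order while
RX must support out-of-order matching, or is there more to it?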
My second question is about choosing a suitable progress model. For CPU
performance reasons I want to use FI_PROGRESS_MANUAL as the primary mode
for processing asynchronous requests, but I don't quite understand how the
application thread is supposed to drive data progress. For example, is it
enough for the MPI implementation to call fi_cq_read() whenever it wants
to make progress?
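
To illustrate, something like the loop below is what I imagine the MPI
layer doing (fi_cq_read() and FI_EAGAIN are the real libfabric API; the
loop structure itself is only my guess at the intended usage):

#include <rdma/fabric.h>
#include <rdma/fi_eq.h>
#include <rdma/fi_errno.h>

static void mpi_make_progress(struct fid_cq *cq)
{
    struct fi_cq_tagged_entry comp;
    ssize_t ret;

    /* Under FI_PROGRESS_MANUAL, each call into fi_cq_read() gives the
     * provider a chance to advance its internal TX/RX queues before
     * (or instead of) reporting a completion. */
    do {
        ret = fi_cq_read(cq, &comp, 1);
        if (ret > 0) {
            /* hand the completion to the MPI request machinery */
        }
    } while (ret > 0);

    /* ret == -FI_EAGAIN: nothing completed this time around. */
}

In other words, does the provider's internal progress function get invoked
from inside fi_cq_read(), so that calling it in a loop like this is
sufficient, or does manual progress require something more?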
I will be extremely grateful for any help and advice on these issues!
BR,
Mikhail Khalilov