[Users] infiniband rdma poor transfer bw

David McMillen davem at systemfabricworks.com
Tue Aug 28 08:03:56 PDT 2012


On Tue, Aug 28, 2012 at 9:48 AM, Gaetano Mendola <mendola at gmail.com> wrote:

> ...
>
> Indeed my slot is a PCIe 2.0 x4, which is why the 1500 MB/sec is what
> I'm expecting.
>

OK - with a PCIe 2.0 x4 slot you are actually doing well to get 1500
MB/sec.  The slot is the bottleneck, so none of my other ideas for
improvement will help.
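For reference, the arithmetic: PCIe 2.0 signals at 5 GT/s per lane with
8b/10b encoding, so each lane moves at most 500 MB/sec, and an x4 link
tops out at 2000 MB/sec raw.  TLP/DLLP protocol overhead typically eats
another 10-20% of that, leaving roughly 1600-1800 MB/sec usable, so
1500 MB/sec is close to the wire limit.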


>
> ...
>
> This is what lspci says about that slot:
>
>
>
You need to use the -vv (two v characters in a row) option to lspci to see
width information.
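For example (output abridged, and the bus address is just a placeholder
- substitute whatever your HCA shows in a plain lspci listing):

    $ lspci -vv -s 03:00.0
    ...
            LnkCap: Speed 5GT/s, Width x8 ...
            LnkSta: Speed 5GT/s, Width x4 ...

LnkCap is what the card is capable of; LnkSta is what was actually
negotiated.  A LnkSta width narrower than LnkCap confirms the slot is
the limit.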


> ...
>
> I'll try IBV_WR_RDMA_WRITE_WITH_IMM to avoid the separate "DONE" send
> message; as I understand it, with IBV_WR_RDMA_WRITE_WITH_IMM the
> receiver is notified, right?
>

Yes, when the write is finished the target will get the immediate data
value.  If things are otherwise going well, and with large transfers
like yours, there is no significant performance difference in simply
following the RDMA write with a send message.  You can queue (post)
both at the same time, since if the write has an error the send will
not happen.
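To make that concrete, here is a minimal sketch of chaining the two
work requests into one ibv_post_send() call (the function name, wr_id
values, and buffer/MR arguments are placeholders; the QP and memory
registrations are assumed to be set up by your existing connection
code):

#include <stdint.h>
#include <infiniband/verbs.h>

/* Post an RDMA write immediately followed by a small "DONE" send.
 * The send is chained behind the write, so it only goes on the wire
 * if the write succeeds. */
static int post_write_then_done(struct ibv_qp *qp,
                                void *data, uint32_t data_len,
                                struct ibv_mr *data_mr,
                                void *done, uint32_t done_len,
                                struct ibv_mr *done_mr,
                                uint64_t remote_addr, uint32_t rkey)
{
    struct ibv_sge wsge = { .addr = (uintptr_t)data, .length = data_len,
                            .lkey = data_mr->lkey };
    struct ibv_sge dsge = { .addr = (uintptr_t)done, .length = done_len,
                            .lkey = done_mr->lkey };

    struct ibv_send_wr done_wr = {
        .wr_id      = 2,
        .sg_list    = &dsge,
        .num_sge    = 1,
        .opcode     = IBV_WR_SEND,
        /* One signaled completion covers the pair (assumes the QP was
         * created with sq_sig_all = 0). */
        .send_flags = IBV_SEND_SIGNALED,
    };
    struct ibv_send_wr write_wr = {
        .wr_id   = 1,
        .next    = &done_wr,            /* chain the send behind the write */
        .sg_list = &wsge,
        .num_sge = 1,
        .opcode  = IBV_WR_RDMA_WRITE,
        .wr.rdma = { .remote_addr = remote_addr, .rkey = rkey },
    };

    struct ibv_send_wr *bad_wr;
    return ibv_post_send(qp, &write_wr, &bad_wr);
}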


>
> ...
>
> Yes, that's an idea.  I have to be sure (as is already the case) that
> the buffers are not continuously allocated/deallocated.
> I'll try to create a hash table mapping buffer -> memory region to
> avoid those registrations/deregistrations, and I'll post what I get.
>
>
It could make your life easier if you created a private allocation pool for
these buffers.  You could create a memory region to cover the entire pool,
and then anything allocated from it would be covered by that MR.
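Something along these lines (a sketch only - the fixed-size slots, the
sizes, and all the names here are illustrative; any allocator that
hands out pieces of one registered region gives you the same benefit):

#include <stdlib.h>
#include <infiniband/verbs.h>

#define POOL_SLOTS 64
#define SLOT_SIZE  (1UL << 20)          /* 1 MiB per buffer */

/* One big registration up front; every buffer handed out afterwards
 * is covered by pool->mr, so no per-transfer ibv_reg_mr() calls. */
struct buf_pool {
    char          *base;
    struct ibv_mr *mr;                  /* covers the whole pool */
    void          *free_list[POOL_SLOTS];
    int            nfree;
};

static int pool_init(struct buf_pool *p, struct ibv_pd *pd)
{
    if (posix_memalign((void **)&p->base, 4096, POOL_SLOTS * SLOT_SIZE))
        return -1;
    p->mr = ibv_reg_mr(pd, p->base, POOL_SLOTS * SLOT_SIZE,
                       IBV_ACCESS_LOCAL_WRITE | IBV_ACCESS_REMOTE_WRITE);
    if (!p->mr)
        return -1;
    for (p->nfree = 0; p->nfree < POOL_SLOTS; p->nfree++)
        p->free_list[p->nfree] = p->base + (size_t)p->nfree * SLOT_SIZE;
    return 0;
}

/* Buffers from pool_get() use p->mr->lkey / p->mr->rkey directly. */
static void *pool_get(struct buf_pool *p)
{
    return p->nfree ? p->free_list[--p->nfree] : NULL;
}

static void pool_put(struct buf_pool *p, void *buf)
{
    p->free_list[p->nfree++] = buf;     /* no overflow check - sketch */
}

This would also replace the hash table you mentioned: the buffer
address alone is enough, since every buffer shares the pool's keys.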

Dave