[ofa-general] verbs/hardware question

Michael Krause krause at cup.hp.com
Thu Oct 11 14:04:38 PDT 2007


At 11:40 AM 10/11/2007, Steve Wise wrote:


>Doug Ledford wrote:
>>On Thu, 2007-10-11 at 12:39 -0500, Steve Wise wrote:
>>>Doug Ledford wrote:
>>>>So, one of the options when creating a QP is the max inline data size.
>>>>If I understand this correctly, for any send up to that size, the
>>>>payload of that send will be transmitted to the receiving side along
>>>>with the request to send.
>>>What it really means is that the payload is DMA'd to the HW on the local 
>>>side in the work request itself, as opposed to being DMA'd down in a 
>>>second transaction after the WR is DMA'd and processed.

Typically it is a series of coalesced MMIO writes, not DMA operations, on 
the PCI bus.  In-line eliminates the latency of an MMIO write that triggers 
the device to generate a DMA Read request and then the subsequent 
completion(s), which is then followed by another DMA Read request and one 
or more completions.
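For illustration, here is a minimal sketch of requesting inline support at 
QP creation time; it assumes pd and cq have already been set up, and the 
values are illustrative only (a provider may round max_inline_data up, or 
fail the request, depending on HW limits):

    #include <infiniband/verbs.h>
    #include <string.h>

    /* Request 64 bytes of inline data when creating an RC QP.
     * The provider copies inline payloads into its HW-specific
     * WR format, so this cap only sizes the send queue entries. */
    struct ibv_qp_init_attr attr;
    memset(&attr, 0, sizeof(attr));
    attr.send_cq          = cq;
    attr.recv_cq          = cq;
    attr.qp_type          = IBV_QPT_RC;
    attr.cap.max_send_wr  = 64;
    attr.cap.max_recv_wr  = 64;
    attr.cap.max_send_sge = 1;
    attr.cap.max_recv_sge = 1;
    attr.cap.max_inline_data = 64;

    struct ibv_qp *qp = ibv_create_qp(pd, &attr);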

>>>  It has no end-to-end significance.

Correct.  WR + data in-line is a technique that has been common across a 
variety of I/O solutions for a number of years now.  The degree of 
performance gain from write coalescing varies by processor / chipset as 
well as over time.  The in-line operation itself is purely a local 
optimization.

>>>  Other than to reduce the latency needed to transfer the data.
>>OK, that clears things up for me ;-)
>>
>>>>This reduces back and forth packet counts on
>>>>the wire in the case that the receiving side is good to go, because it
>>>>basically just responds with "OK, got it" and you're done.
>>>I don't think this is true.  Definitely not with iWARP.  INLINE is just 
>>>an optimization to push small amounts of data down to the local adapter 
>>>as part of the work request, thus avoiding two DMAs.

Correct.  It is a local-only operation.

Mike


>>>Even though you create the QP with the inline option, only WRs that pass 
>>>in the IBV_SEND_INLINE flag will do inline processing, so you can 
>>>control this functionality on a per-WR basis.
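For example, here is a minimal sketch of a per-WR inline send; qp, buf, 
and len are assumed to come from the surrounding code, and 
IBV_SEND_SIGNALED is added only so this WR generates a completion:

    #include <infiniband/verbs.h>
    #include <stdint.h>
    #include <string.h>

    /* Post a send whose payload is copied inline into the
     * HW-specific WR; other WRs on the same QP may omit the
     * flag and use ordinary DMA gather instead. */
    struct ibv_sge sge;
    sge.addr   = (uintptr_t)buf;
    sge.length = len;            /* must be <= max_inline_data */
    sge.lkey   = 0;              /* not checked for inline sends */

    struct ibv_send_wr wr, *bad_wr;
    memset(&wr, 0, sizeof(wr));
    wr.wr_id      = 1;
    wr.sg_list    = &sge;
    wr.num_sge    = 1;
    wr.opcode     = IBV_WR_SEND;
    wr.send_flags = IBV_SEND_INLINE | IBV_SEND_SIGNALED;

    int rc = ibv_post_send(qp, &wr, &bad_wr);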
>>Hmm..that raises a question on my part.  You don't call ibv_reg_mr on
>>the wr itself, so if the data is pushed with the wr, do you still need
>>to call ibv_reg_mr on the data separately?
>
>The WR DMA'd by the HW is actually built in memory that is set up for the 
>adapter to DMA from.  Whether that is really done via ibv_reg_mr or some 
>other method is provider/vendor specific.  So the WR you pass into 
>ibv_post_send() is always copied and munged into the HW-specific memory 
>and format.  For inline sends, the data you pass in via the SGL is copied 
>into the HW-specific WR memory as well.
>
>And from the man page on ibv_post_send(), I conclude you do _not_ have to 
>register the payload memory used in an INLINE send:
>
>>        IBV_SEND_INLINE  Send data in given gather list as inline data
>>               in a send WQE.  Valid only for Send and RDMA Write.
>>               The L_Key will not be checked.
>
>
>Steve.
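Putting those two points together, an inline send can use a payload that 
was never registered.  A minimal sketch, assuming qp was created with 
max_inline_data >= sizeof(msg):

    #include <infiniband/verbs.h>
    #include <stdint.h>
    #include <string.h>

    /* Stack buffer, never passed to ibv_reg_mr.  Because the
     * provider copies the bytes into the HW-specific WR and the
     * L_Key is not checked, no memory registration is needed. */
    char msg[32] = "hello, inline world";

    struct ibv_sge sge;
    sge.addr   = (uintptr_t)msg;
    sge.length = sizeof(msg);
    sge.lkey   = 0;              /* unused for inline sends */

    struct ibv_send_wr wr, *bad_wr;
    memset(&wr, 0, sizeof(wr));
    wr.sg_list    = &sge;
    wr.num_sge    = 1;
    wr.opcode     = IBV_WR_SEND;
    wr.send_flags = IBV_SEND_INLINE;

    int rc = ibv_post_send(qp, &wr, &bad_wr);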

