[openib-general] [mthca] Creation of a SRQ with many WR (> 16K) in kernel level fails
Bernard King-Smith
wombat2 at us.ibm.com
Thu Feb 1 05:21:36 PST 2007
> ----- Message from "Or Gerlitz" <ogerlitz at voltaire.com> on Thu, 01 Feb 2007 11:17:53 +0200 -----
>
> Dotan Barak wrote:
> > I think that now, when an implementation of IPoIB CM is available and
> > SRQ is being used, one may need to use an SRQ with more than 16K WRs.
>
> IPoIB UD uses an SRQ by nature (since RX from all peers consumes buffers
> from the --only-- RQ) and lives fine with 32 buffers (or 64, you can
> check in the code). Moreover, my assumption is that
>
> pps(RC) <= pps(UC) <= pps(UD)
>
> which means that whatever number of RX buffers is "enough" for UD with a
> 2K MTU to see no (or close to zero) packet loss under some traffic
> pattern, the same pattern can be served by IPoIB CM using an SRQ of the
> same size.
I would expect that you will need more than 32 or 64 buffers when using RC
and SRQ. With larger packets, receive processing of each packet under RC
takes longer: the checksum and the copy to the socket cover up to 60K of
data instead of 2K, so the residency time of each buffer on the receive
queue is longer. In the traffic pattern where one adapter receives from
many adapters across the fabric, there is a larger imbalance between the
aggregate send rate and the rate at which receives drain the queue. And
given TCP send and receive windows large enough for a single socket to
reach peak bandwidth, multiple sockets will have even more packets in
flight toward a single destination at the same time in this pattern.
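Put as a back-of-envelope relation (a rough framing, not a measurement):
the number of posted receive buffers you need is roughly the packet
arrival rate times the per-packet residency time, so if an RC receive
takes several times longer to process than a 2K UD packet, the SRQ needs
to be several times deeper to absorb the same arrival rate without drops.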
>
> Or.
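For reference, the kernel-level SRQ creation the subject line refers to
boils down to filling ib_srq_init_attr.attr.max_wr and calling
ib_create_srq(). A minimal sketch, assuming a protection domain already
allocated with ib_alloc_pd() and using ib_query_device() to read the
device's advertised max_srq_wr (the helper name create_deep_srq is just
for illustration):

#include <linux/err.h>
#include <linux/kernel.h>
#include <rdma/ib_verbs.h>

/*
 * Sketch only: request a deep SRQ from kernel code, clamping the WR
 * count to the limit the HCA advertises in max_srq_wr.  'pd' is
 * assumed to come from an earlier ib_alloc_pd() call.
 */
static struct ib_srq *create_deep_srq(struct ib_pd *pd, u32 wanted_wr)
{
        struct ib_device_attr dev_attr;
        struct ib_srq_init_attr init_attr = { };
        int ret;

        ret = ib_query_device(pd->device, &dev_attr);
        if (ret)
                return ERR_PTR(ret);

        /* e.g. wanted_wr > 16K; the device may cap it lower */
        init_attr.attr.max_wr  = min_t(u32, wanted_wr, dev_attr.max_srq_wr);
        init_attr.attr.max_sge = 1;

        return ib_create_srq(pd, &init_attr);
}

If ib_create_srq() still fails for a max_wr that is well within the
advertised max_srq_wr, that would be the >16K failure the subject line
describes.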
Bernie King-Smith
IBM Corporation
Server Group
Cluster System Performance
wombat2 at us.ibm.com (845)433-8483
Tie. 293-8483 or wombat2 on NOTES
"We are not responsible for the world we are born into, only for the world
we leave when we die.
So we have to accept what has gone before us and work to change the only
thing we can,
-- The Future." William Shatner