[openib-general] [PATCH] splitting IPoIB CQ

Shirley Ma xma at us.ibm.com
Wed Apr 19 09:35:33 PDT 2006


Roland Dreier <rdreier at cisco.com> wrote on 04/18/2006 01:45:17 PM:

>  > > Actually, do you have some explanation for why this helps
>  > > performance?  My intuition would be that it just generates more
>  > > interrupts for the same workload.
> 
>  > The only lock contention I saw in IPoIB is tx_lock. Separating the
>  > completion queue into two, each with its own completion handler,
>  > could improve performance. I didn't look at the driver code; it
>  > might have some impact there?
> 
> A clever way to avoid taking the TX lock on send completions would be
> very nice, but I never saw a way to do it.  Does splitting the CQ
> reduce contention?  I don't see why that would be, since the
> contention is between sending and getting send completions.  The
> receive path of course never touches the tx_lock.

tx_lock contention blocks the CQ handler from processing the next work
completion (WC) in the CQ, which could be a recv WC or a send WC.
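To make that concrete, here is a minimal sketch of a combined
completion handler (illustrative only, not the actual ipoib_ib.c code;
handle_recv_wc() and handle_send_wc() are hypothetical stand-ins for
the driver's WC dispatch). The send branch takes priv->tx_lock, so a
sender holding that lock on another CPU stalls the whole poll loop,
including any recv completions queued behind the send completion:

static void ipoib_combined_completion(struct ib_cq *cq, void *dev_ptr)
{
	struct net_device *dev = dev_ptr;
	struct ipoib_dev_priv *priv = netdev_priv(dev);
	struct ib_wc wc;

	ib_req_notify_cq(cq, IB_CQ_NEXT_COMP);

	while (ib_poll_cq(cq, 1, &wc) > 0) {
		if (wc.wr_id & IPOIB_OP_RECV) {
			/* recv path never touches tx_lock */
			handle_recv_wc(dev, &wc);
		} else {
			unsigned long flags;

			/* contention point: serializes with senders */
			spin_lock_irqsave(&priv->tx_lock, flags);
			handle_send_wc(dev, &wc);
			spin_unlock_irqrestore(&priv->tx_lock, flags);
		}
	}
}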

> By the way, are your numbers with mthca or ehca?  I don't know ehca
> very well, but at least with current mthca, all CQ events will be
> delivered on the same interrupt and hence all CQ handling will run on
> the same CPU.  So I'm puzzled why changing things from:
> 
>     -> interrupt
>     -> CQ event handler
>     -> handle all IPoIB completions
> 
> to:
> 
>     -> interrupt
>     -> TX CQ event handler
>     -> handle TX completions
>     [possibly another interrupt]
>     -> RX CQ event handler
>     -> handle RX completions
> 
> helps throughput.  It just seems like it's more CQ locking/unlocking
> and in general more work.
> 
>  - R.

If recv completions and send completions arrive at different rates,
splitting the CQ would reduce CQ locking and unlocking, since each
handler polls only its own queue.
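
For reference, a hedged sketch of the split-CQ setup (hypothetical
names: the in-tree driver of this era creates a single priv->cq in
ipoib_transport_dev_init(), and the send_cq/recv_cq fields plus the
ipoib_send_completion()/ipoib_recv_completion() handlers are invented
for illustration). It assumes the 2.6.16-era ib_create_cq() signature,
which has no comp_vector argument yet:

static int ipoib_create_split_cqs(struct net_device *dev)
{
	struct ipoib_dev_priv *priv = netdev_priv(dev);

	priv->send_cq = ib_create_cq(priv->ca, ipoib_send_completion,
				     NULL, dev, ipoib_sendq_size);
	if (IS_ERR(priv->send_cq))
		return PTR_ERR(priv->send_cq);

	priv->recv_cq = ib_create_cq(priv->ca, ipoib_recv_completion,
				     NULL, dev, ipoib_recvq_size);
	if (IS_ERR(priv->recv_cq)) {
		ib_destroy_cq(priv->send_cq);
		return PTR_ERR(priv->recv_cq);
	}

	/* arm both CQs so each raises its own completion event */
	ib_req_notify_cq(priv->send_cq, IB_CQ_NEXT_COMP);
	ib_req_notify_cq(priv->recv_cq, IB_CQ_NEXT_COMP);

	return 0;
}

The QP would then be created with ib_qp_init_attr.send_cq and .recv_cq
pointing at the two CQs, so only the send handler ever needs tx_lock,
and recv completions are never queued behind a send completion that is
waiting for it.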

Thanks
Shirley Ma
IBM Linux Technology Center
15300 SW Koll Parkway
Beaverton, OR 97006-6063
Phone(Fax): (503) 578-7638
