[openib-general] [PATCH] splitting IPoIB CQ

Shirley Ma xma at us.ibm.com
Tue Apr 18 08:57:25 PDT 2006


Roland,

Roland Dreier <rdreier at cisco.com> wrote on 04/17/2006 01:12:38 PM:
> Have you ever seen this hurt performance?  It seems that splitting
> receives and send CQs will increase the number of events generated and
> possibly use more CPU.
The performance gain was not free: it cost 3-5% more CPU utilization. 
I don't have a comparison of the interrupt counts at the same 
throughput.

> Actually, do you have some explanation for why this helps performance?
> My intuition would be that it just generates more interrupts for the
> same workload.
The only lock contention I saw in IPoIB is on tx_lock. Separating the 
completion queue so that each direction has its own completion handler 
could improve performance. I didn't look at the driver code; the split 
might have some impact there too?

I did see high interrupt rates, and I had patched IPoIB, as I mentioned 
before, to use different NUM_WC values under different workloads. That 
can cut the number of interrupts by a factor of N for the same 
throughput, and gain better throughput at the same CPU utilization. 
I am still investigating the interrupts/CPU utilization/throughput 
trade-offs.
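
For reference, the stock completion handler drains the CQ in batches of 
IPOIB_NUM_WC, so a larger batch means fewer ib_poll_cq round trips and 
fewer re-arm events for the same amount of traffic; making that batch 
size workload-dependent is the part my earlier patch changed. The loop 
below mirrors the in-tree ipoib_ib_completion:

        static void ipoib_ib_completion(struct ib_cq *cq, void *dev_ptr)
        {
                struct net_device *dev = dev_ptr;
                struct ipoib_dev_priv *priv = netdev_priv(dev);
                int n, i;

                /* re-arm up front; anything that completes while the
                 * loop below runs is still picked up by the polling */
                ib_req_notify_cq(cq, IB_CQ_NEXT_COMP);
                do {
                        n = ib_poll_cq(cq, IPOIB_NUM_WC, priv->ibwc);
                        for (i = 0; i < n; ++i)
                                ipoib_ib_handle_wc(dev, priv->ibwc + i);
                } while (n == IPOIB_NUM_WC);
        }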

> One specific question:
> 
>  > -       struct ib_wc ibwc[IPOIB_NUM_WC];
>  > +       struct ib_wc *send_ibwc;
>  > +       struct ib_wc *recv_ibwc;
> 
> Why are you changing these to be dynamically allocated outside of the
> main structure?  Is it to avoid false sharing of cachelines?
Yep, this was one of the reasons.
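
The allocation itself is just two separate kzalloc()s (error unwinding 
trimmed; the IPOIB_NUM_SEND_WC/IPOIB_NUM_RECV_WC names are 
placeholders), so each array sits in its own slab object instead of 
sharing cachelines with its neighbor inside ipoib_dev_priv:

        priv->send_ibwc = kzalloc(IPOIB_NUM_SEND_WC * sizeof(struct ib_wc),
                                  GFP_KERNEL);
        priv->recv_ibwc = kzalloc(IPOIB_NUM_RECV_WC * sizeof(struct ib_wc),
                                  GFP_KERNEL);
        if (!priv->send_ibwc || !priv->recv_ibwc)
                goto err_free_wc;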

> It might be better to sort the whole structure so that we have all the
> common, read-mostly stuff first, then TX stuff (marked with
> ____cacheline_aligned_in_smp) and then RX stuff, also marked to be
> cacheline aligned.
> 
>  - R.
Sure. I will replace it and rerun the test to see the difference.
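If I read the suggestion right, something along these lines, with 
____cacheline_aligned_in_smp from linux/cache.h (the field grouping is 
illustrative, nowhere near the full ipoib_dev_priv):

        struct ipoib_dev_priv {
                /* read-mostly fields shared by both paths come first */
                struct ib_device *ca;
                struct ib_cq     *send_cq;
                struct ib_cq     *recv_cq;

                /* TX state starts on its own cacheline */
                spinlock_t        tx_lock ____cacheline_aligned_in_smp;
                struct ib_wc      send_ibwc[IPOIB_NUM_WC];

                /* RX state is likewise isolated from the TX fields */
                struct ib_wc      recv_ibwc[IPOIB_NUM_WC]
                                          ____cacheline_aligned_in_smp;
        };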

Thanks
Shirley Ma
IBM Linux Technology Center
15300 SW Koll Parkway
Beaverton, OR 97006-6063
Phone(Fax): (503) 578-7638