[openib-general] Re: Re: [PATCH] IPoIB splitting CQ, increase both send/recv poll NUM_WC & interval

Shirley Ma xma at us.ibm.com
Sat Apr 29 16:04:09 PDT 2006


Michael,

"Michael S. Tsirkin" <mst at mellanox.co.il> wrote on 04/29/2006 03:23:51 PM:
> Quoting r. Shirley Ma <xma at us.ibm.com>:
> > Subject: Re: [openib-general] Re: Re: [PATCH] IPoIB splitting CQ,
> increase both send/recv poll NUM_WC & interval
> > 
> > 
> > Michael,
> > 
> > An SMP kernel on a UP machine gives very bad results: it dropped throughput 40%.
> > A UP kernel on UP also dropped throughput, and cpu idle time dropped
> from 75% to 52%.
> 
> Hmm. So far it seems the approach only works well on 2 CPUs.

I did a clean 2.6.16 uniprocessor kernel build on both sides, with:
+ patch1 (splitting CQ & handler)
+ patch2 (tune CQ polling interval)
+ patch3 (use work queue in CQ handler) 
+ patch4 (remove tx_ring) (rx_ring removal hasn't been done yet)

Without tuning, I got a 1-3% throughput increase with an average 10%
cpu utilization reduction on the netserver side. Without the patches,
the netperf side runs at 100% cpu utilization.

The best result I've gotten so far with tuning is a 25% throughput increase
plus a 2-5% cpu utilization saving on the netperf side.

> > I didn't see latency difference. I used TCP_RR test.
> 
> This is somewhat surprising, isn't it? One would expect the extra
> context switch to have some effect on latency, would one not?
> 
> -- 
> MST

I got around a 4% latency decrease on UP, with lower cpu utilization.

Thanks
Shirley Ma
IBM Linux Technology Center
15300 SW Koll Parkway
Beaverton, OR 97006-6063
Phone(Fax): (503) 578-7638




