[openib-general] [PATCH] IB/ipoib: NAPI

Shirley Ma xma at us.ibm.com
Tue Sep 26 22:55:11 PDT 2006






"Michael S. Tsirkin" <mst at mellanox.co.il> wrote on 09/26/2006 09:59:30 PM:

> Quoting r. Shirley Ma <xma at us.ibm.com>:
> > Subject: Re: [PATCH] IB/ipoib: NAPI
> >
> > We did some touch test on ehca driver, we saw performance drop somehow.
>
> Hmm, it seems ehca still defers the completion event to a tasklet.  It
> always seemed weird to me.  So that could be the reason - with NAPI you
> now get 2 tasklet schedules, as you are actually doing part of what NAPI
> does, inside the low level driver.  Try ripping that out and calling the
> event handler directly, and see what it does to performance with NAPI.
The reason for this ehca implementation is that two ports/links share one
EQ. We are implementing multiple-EQ support per adapter now. If that works,
we can change the ehca code to dispatch completions directly, as mthca
does. Actually, mthca has the same problem as ehca with two links on the
same adapter: performance over two links on the same adapter is very bad
and does not scale at all.
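
To make the difference concrete, here is a rough sketch of the two dispatch
styles: deferring the completion event to a tasklet versus invoking the
consumer's comp_handler straight from the EQ dispatch path. The hca_cq
structure and the function names are made up for illustration; this is not
actual ehca or mthca code.

/*
 * Illustrative sketch only, not ehca or mthca source.  "hca_cq" and the
 * function names are hypothetical.
 */
#include <linux/interrupt.h>
#include <rdma/ib_verbs.h>

struct hca_cq {
	struct ib_cq		*ibcq;
	struct tasklet_struct	comp_task;	/* ehca-style deferral */
};

/* ehca-style: the EQ dispatch path only schedules a tasklet ... */
static void hca_dispatch_comp_deferred(struct hca_cq *cq)
{
	tasklet_schedule(&cq->comp_task);
}

/* ... and the tasklet later invokes the consumer's completion handler.
 * With NAPI, that handler itself just schedules the ->poll routine, so
 * every completion event now costs two softirq-level scheduling hops. */
static void hca_comp_tasklet(unsigned long data)
{
	struct hca_cq *cq = (struct hca_cq *)data;

	cq->ibcq->comp_handler(cq->ibcq, cq->ibcq->cq_context);
}

/* mthca-style: call the consumer's completion handler directly from the
 * EQ dispatch path; NAPI then does the only deferral that happens. */
static void hca_dispatch_comp_direct(struct hca_cq *cq)
{
	cq->ibcq->comp_handler(cq->ibcq, cq->ibcq->cq_context);
}

The second, direct variant is what is being suggested we try for ehca once
per-port EQs remove the need to demultiplex events in a tasklet.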

> > I strongly recommend NAPI as a configurable option in ipoib, so
> > customers can turn it on/off based on their configurations.
>
> I still hope ehca NAPI performance can be fixed. But if not, maybe we
> should have the low level driver set a disable_napi flag rather than have
> users play with module options.
>
> --
> MST
We have been working on this issue for some time; that's the reason we
didn't post our NAPI patch. Hopefully we can fix it. If we can show that
NAPI performance (latency, BW, CPU utilization) is better in all cases (UP
vs. SMP, one socket vs. multiple sockets, one link vs. multiple links,
different message sizes, different socket sizes...), I will agree to
turning NAPI on by default.
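
For reference, a configurable option need not be more than a module
parameter guarding the receive completion handler. The sketch below is only
an illustration, not the posted patch: the parameter name, the handler, and
the fallback helper are hypothetical, and it assumes the 2.6.18-era NAPI
interface where netif_rx_schedule() hands work to the net_device's ->poll
routine registered at init time.

/*
 * Illustrative sketch only.  "use_napi", ipoib_rx_comp_handler() and
 * ipoib_rx_poll_inline() are hypothetical names.
 */
#include <linux/module.h>
#include <linux/moduleparam.h>
#include <linux/netdevice.h>
#include <rdma/ib_verbs.h>

static int ipoib_use_napi = 1;
module_param_named(use_napi, ipoib_use_napi, int, 0444);
MODULE_PARM_DESC(use_napi, "Poll receive completions with NAPI (default: 1)");

/* Stand-in for the existing non-NAPI path, which polls the CQ and
 * re-arms it directly from the completion event handler. */
static void ipoib_rx_poll_inline(struct ib_cq *cq, void *dev_ptr)
{
	/* ib_poll_cq() loop plus ib_req_notify_cq(), as ipoib does today */
}

/* CQ completion handler: either hand off to the ->poll routine (NAPI)
 * or keep the current inline polling path. */
static void ipoib_rx_comp_handler(struct ib_cq *cq, void *dev_ptr)
{
	struct net_device *dev = dev_ptr;

	if (ipoib_use_napi)
		netif_rx_schedule(dev);
	else
		ipoib_rx_poll_inline(cq, dev_ptr);
}

Such a knob could also be overridden by a low level driver flag (like the
disable_napi idea above) so that users never have to touch module options.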

Thanks
Shirley Ma
IBM Linux Technology Center
15300 SW Koll Parkway
Beaverton, OR 97006-6063
Phone(Fax): (503) 578-7638