[openib-general] [PATCH/RFC 1/2] IB: Return "maybe_missed_event" hint from ib_req_notify_cq()
xma at us.ibm.com
Tue Nov 14 12:11:23 PST 2006
Roland Dreier <rdreier at cisco.com> wrote on 11/13/2006 08:45:52 AM:
> > Sorry, I did not intend to send the previous email; it accidentally
> > went out. What I thought was that there would be a problem if
> > missed_event always returns 1: then this NAPI poll could keep
> > polling forever.
> Well, it's limited by the quota that the net stack gives it, so
> there's no possibility of looping forever. However....
> > How about deferring the rotting-packet processing until later? Like this:
> That seems like it is still correct.
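The deferral patch itself is not preserved in this archive, so here is a
minimal sketch of the idea in the old (pre-2.6.24) dev->poll() style. The
names ipoib_poll, ipoib_handle_rx_wc, and struct ipoib_dev_priv are
hypothetical stand-ins, and the sketch assumes, per this RFC, that
ib_req_notify_cq() returns a positive value when an event may have been
missed:

#include <linux/kernel.h>
#include <linux/netdevice.h>
#include <rdma/ib_verbs.h>

struct ipoib_dev_priv {		/* hypothetical: only the field used here */
	struct ib_cq *cq;
};

static void ipoib_handle_rx_wc(struct net_device *dev, struct ib_wc *wc)
{
	/* hypothetical: turn the work completion into an skb and hand
	 * it to netif_receive_skb() */
}

static int ipoib_poll(struct net_device *dev, int *budget)
{
	struct ipoib_dev_priv *priv = netdev_priv(dev);
	int max = min(*budget, dev->quota);
	int done = 0;
	struct ib_wc wc;

	/* Drain up to the quota the net stack gave us. */
	while (max && ib_poll_cq(priv->cq, 1, &wc) > 0) {
		ipoib_handle_rx_wc(dev, &wc);
		++done;
		--max;
	}

	dev->quota -= done;
	*budget -= done;

	if (max) {
		/* CQ drained within quota: leave polling and re-arm. */
		netif_rx_complete(dev);
		/*
		 * A completion may have slipped in between the last
		 * ib_poll_cq() and the re-arm (the "rotting packet").
		 * Instead of draining again right here, which on ehca
		 * can repeat indefinitely, reschedule and handle it on
		 * the next ->poll() call.
		 */
		if (ib_req_notify_cq(priv->cq, IB_CQ_NEXT_COMP) > 0 &&
		    netif_rx_reschedule(dev, done))
			return 1;
		return 0;
	}
	return 1;	/* quota exhausted, more completions pending */
}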
> > With this patch, I could get the NAPI + non-scaling code throughput
> > from 1XX Mb/s to 7XX Mb/s. Anyway, there are some other problems I am
> > investigating now.
> But I wonder why it gives you a factor of 4 in performance?? Why does
> it make a difference? I would have thought that the rotting packet
> situation would be rare enough that it doesn't really matter for
> performance exactly how we handle it.
> What are the other problems you're investigating?
> - R.
The rotting packet situation happens consistently with the ehca driver, and
NAPI could poll forever with your original patch. That's the reason I defer
the rotting-packet processing to the next NAPI poll. It does help
performance, from 1XX Mb/s to 7XX Mb/s, but not to the expected 3XXX Mb/s.
With the deferred rotting-packet processing patch, I can see a packet
out-of-order problem at the TCP layer. Is it possible there is a race
somewhere causing two NAPI polls at the same time? mthca seems to use IRQ
auto-affinity, but ehca uses round-robin IRQ delivery.
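One note on the double-poll question: in the old NAPI scheme, scheduling of
the poll is serialized by a per-device scheduled bit, so two simultaneous
polls of the same device should not be possible unless that guard is broken.
A sketch of the CQ event handler path (the function name is a stand-in for
the driver's actual handler):

#include <linux/netdevice.h>
#include <rdma/ib_verbs.h>

static void ipoib_ib_completion(struct ib_cq *cq, void *dev_ptr)
{
	struct net_device *dev = dev_ptr;

	/*
	 * netif_rx_schedule_prep() does a test_and_set_bit() on the
	 * device's RX-scheduled state, so only the first event after
	 * netif_rx_complete() actually schedules a poll; later events
	 * find the bit already set and do nothing.
	 */
	if (netif_rx_schedule_prep(dev))
		__netif_rx_schedule(dev);
}

If that guard is intact, the out-of-order packets would need another
explanation, e.g. an interaction with the deferred rotting-packet
processing itself.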