[ofa-general] Re: [PATCH 2/3][NET_BATCH] net core use batching

David Miller davem at davemloft.net
Tue Oct 9 17:04:35 PDT 2007


From: jamal <hadi at cyberus.ca>
Date: Tue, 09 Oct 2007 17:56:46 -0400

> if the h/ware queues are full because of link pressure etc, you drop. We
> drop today when the s/ware queues are full. The driver txmit lock takes
> place of the qdisc queue lock etc. I am assuming there is still need for
> that locking. The filter/classification scheme still works as is and
> select classes which map to rings. tc still works as is etc.

I understand your suggestion.

We have to keep in mind, however, that the sw queue right now is 1000
packets.  I heavily discourage any driver author from trying to use any
single TX queue of that size.  Which means that just dropping on back
pressure might not work so well.

Or it might be perfect and signal TCP to backoff, who knows! :-)

While working out this issue in my mind, it occurred to me that we
can put the sw queue into the driver as well.

The idea is that the network stack, as in the pure hw queue scheme,
unconditionally always submits new packets to the driver.  Therefore
even if the hw TX queue is full, the driver can still queue to an
internal sw queue with some limit (say 1000 for ethernet, as is used
now).

When the hw TX queue gains space, the driver self-batches packets
from the sw queue to the hw queue.

It sort of obviates the need for mid-level queue batching in the
generic networking.  Compared to letting the driver self-batch,
the mid-level batching approach is pure overhead.

We all seem to be mentioning similar ideas.  For example, you can get
the above kind of scheme today by using a mid-level queue length of
zero, and I believe this idea was mentioned by Stephen Hemminger
earlier.

I may experiment with this in the NIU driver.
