[ofa-general] Re: [PATCH 1/4] [NET_SCHED] explict hold dev tx lock
David Miller
davem at davemloft.net
Sun Oct 7 21:51:24 PDT 2007
From: jamal <hadi at cyberus.ca>
Date: Mon, 24 Sep 2007 19:38:19 -0400
> How is the policy to define the qdisc queues locked/mapped to tx rings?
For these high-performance 10Gbit cards it's a load balancing
function, really, as all of the transmit queues go out to the same
physical port, so you could:
1) Load balance on CPU number.
2) Load balance on "flow".
3) Load balance on destination MAC.
etc. etc. etc.
It's something that really sits logically between the qdisc and the
card, not something that is a qdisc thing.
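As a rough illustration (plain user-space C, with made-up struct and
function names rather than anything in the kernel), the selection
step would boil down to something like:

#include <stdint.h>
#include <stdio.h>

/* Hypothetical, simplified view of what a selector might look at. */
struct pkt {
	uint32_t saddr, daddr;      /* IPv4 addresses  */
	uint16_t sport, dport;      /* transport ports */
	unsigned int sending_cpu;   /* CPU that queued the packet */
};

/* Toy flow hash: mix the 4-tuple down to one word. */
static uint32_t flow_hash(const struct pkt *p)
{
	uint32_t h = p->saddr ^ p->daddr;

	h ^= ((uint32_t)p->sport << 16) | p->dport;
	h ^= h >> 16;
	return h;
}

/* Pick a TX queue purely as a load balancing decision: every queue
 * reaches the same physical port, so any stable spreading works. */
static unsigned int select_txq(const struct pkt *p, unsigned int nqueues)
{
	/* Spread by CPU: return p->sending_cpu % nqueues;        */
	/* or spread by flow, keeping each flow on one queue:     */
	return flow_hash(p) % nqueues;
}

int main(void)
{
	struct pkt p = { 0x0a000001, 0x0a000002, 12345, 80, 3 };

	printf("queue %u of 8\n", select_txq(&p, 8));
	return 0;
}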
In some ways it's similar to bonding, but using anything like
bonding's infrastructure (stacking devices) would be way overkill for this.
And then we have the virtualization network devices, where the queue
selection has to be made precisely so that the packet reaches the
proper destination, rather than as a performance improvement.
It is also a situation where the TX queue selection has to happen
between qdisc activity and hitting the device.
I think we will initially have to live with taking the centralized
qdisc lock for the device, getting in and out of that as fast as
possible, and then taking only the TX queue lock of the selected queue.
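Roughly, modeling the two locks with pthread mutexes (the names and
the xmit path here are illustrative only, not the actual netdev
code), the ordering would look like:

#include <pthread.h>
#include <stdio.h>

#define NUM_TXQ 4

struct pkt { unsigned int flow; };

static pthread_mutex_t qdisc_lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t txq_lock[NUM_TXQ];

/* Stand-in for pulling one packet off the qdisc. */
static int qdisc_dequeue(struct pkt *p)
{
	static unsigned int n;

	if (n >= 8)
		return 0;
	p->flow = n++;
	return 1;
}

/* Stand-in for handing the packet to one hardware queue. */
static void hw_xmit(const struct pkt *p, unsigned int q)
{
	printf("flow %u -> txq %u\n", p->flow, q);
}

static void xmit_one(void)
{
	struct pkt p;
	unsigned int q;

	/* 1. Central qdisc lock: held only long enough to dequeue
	 *    and pick the TX queue, then dropped.                  */
	pthread_mutex_lock(&qdisc_lock);
	if (!qdisc_dequeue(&p)) {
		pthread_mutex_unlock(&qdisc_lock);
		return;
	}
	q = p.flow % NUM_TXQ;           /* queue selection */
	pthread_mutex_unlock(&qdisc_lock);

	/* 2. Only the lock of the selected TX queue is taken for the
	 *    actual transmit, so the other queues stay uncontended. */
	pthread_mutex_lock(&txq_lock[q]);
	hw_xmit(&p, q);
	pthread_mutex_unlock(&txq_lock[q]);
}

int main(void)
{
	unsigned int i;

	for (i = 0; i < NUM_TXQ; i++)
		pthread_mutex_init(&txq_lock[i], NULL);
	for (i = 0; i < 8; i++)
		xmit_one();
	return 0;
}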
After we get things that far we can try to find some clever lockless
algorithm for handling the qdisc to get rid of that hot spot.
These queue selection schemes want a common piece of generic code: a
set of load balancing algorithms, a "select TX queue by MAC with a
default fallback on no match" scheme for virtualization, and
interfaces for both drivers and userspace to change the queue
selection scheme.
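For the virtualization case, the MAC-with-fallback lookup is
basically a small table walk; a toy user-space sketch (the table,
names, and default queue are just placeholders):

#include <string.h>
#include <stdio.h>

/* Hypothetical per-device table: guest MAC -> TX queue, with a
 * default queue used when no entry matches. */
struct mac_map {
	unsigned char mac[6];
	unsigned int  txq;
};

static const struct mac_map map[] = {
	{ { 0x00, 0x16, 0x3e, 0x00, 0x00, 0x01 }, 1 },
	{ { 0x00, 0x16, 0x3e, 0x00, 0x00, 0x02 }, 2 },
};

#define DEFAULT_TXQ 0

static unsigned int select_txq_by_mac(const unsigned char *dst_mac)
{
	size_t i;

	for (i = 0; i < sizeof(map) / sizeof(map[0]); i++)
		if (!memcmp(map[i].mac, dst_mac, 6))
			return map[i].txq;

	return DEFAULT_TXQ;    /* no match: fall back */
}

int main(void)
{
	unsigned char guest[6] = { 0x00, 0x16, 0x3e, 0x00, 0x00, 0x02 };
	unsigned char other[6] = { 0xff, 0xff, 0xff, 0xff, 0xff, 0xff };

	printf("guest -> txq %u\n", select_txq_by_mac(guest));
	printf("other -> txq %u\n", select_txq_by_mac(other));
	return 0;
}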