[ofa-general] Re: [PATCH 1/4] [NET_SCHED] explicitly hold dev tx lock

Stephen Hemminger shemminger at linux-foundation.org
Mon Sep 24 17:14:11 PDT 2007


On Mon, 24 Sep 2007 16:47:06 -0700
"Waskiewicz Jr, Peter P" <peter.p.waskiewicz.jr at intel.com> wrote:

> > On Mon, 2007-24-09 at 15:57 -0700, Waskiewicz Jr, Peter P wrote:
> > 
> > > I've looked at that as a candidate to use.  The lock for enqueue
> > > would be needed when actually placing the skb into the appropriate
> > > software queue for the qdisc, so it'd be quick.
> > 
> > The enqueue is easy to comprehend. The single device queue 
> > lock should suffice. The dequeue is interesting:
> 
> We should make sure we're symmetric with the locking on enqueue and
> dequeue.  If we use the single device queue lock on enqueue, then
> dequeue will also need to take that lock in addition to the individual
> queue lock.  Getting those details right is more straightforward than
> making the actual dequeue efficient, though.
> 
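For illustration, a minimal sketch of that symmetry, with a hypothetical
per-band lock taken alongside the existing device queue_lock (every name
below is invented for the example; none of this is from the posted
patches):

#include <linux/netdevice.h>
#include <linux/skbuff.h>
#include <linux/spinlock.h>

/* Sketch: enqueue runs under the device's single queue_lock, as it
 * does today, and briefly takes the per-band lock around the list
 * insert.  Dequeue would take the same pair, keeping the paths
 * symmetric.
 */
static int enqueue_sketch(struct sk_buff *skb, spinlock_t *band_lock,
                          struct sk_buff_head *list)
{
        /* caller (dev_queue_xmit) already holds dev->queue_lock */
        spin_lock(band_lock);
        __skb_queue_tail(list, skb);
        spin_unlock(band_lock);

        return NET_XMIT_SUCCESS;
}
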
> > Maybe you can point me to some doc, or describe the dequeue aspect:
> > are you planning to have an array of tx locks, one per ring?
> > What is the policy for how the qdisc queues are locked/mapped to
> > tx rings?
> 
> The dequeue locking would be pushed into the qdisc itself.  This is how
> I had it originally; it did make the code more complex, but it was
> successful at breaking the heavily-contended queue_lock apart.  I have a
> subqueue structure in netdev right now, which only has queue_state (for
> netif_{start|stop}_subqueue).  That state is checked in sch_prio's
> dequeue right now for both prio and rr.  My approach is to add a
> queue_lock to that struct, so each queue allocated by the driver would
> have its own lock.  Then in dequeue, that lock would be taken when the
> skb is about to be dequeued.
> 
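A rough sketch of that structure change, purely for illustration (the
layout and names here are guesses for the example, not the actual
patch):

#include <linux/skbuff.h>
#include <linux/spinlock.h>

/* Sketch: extend the existing per-subqueue state with its own lock. */
struct subqueue_sketch {
        unsigned long state;       /* netif_{start|stop}_subqueue bits */
        spinlock_t    queue_lock;  /* proposed: one lock per tx ring */
};

/* Dequeue path: take only the lock of the band being pulled from. */
static struct sk_buff *dequeue_band_sketch(struct subqueue_sketch *sq,
                                           struct sk_buff_head *list)
{
        struct sk_buff *skb;

        spin_lock(&sq->queue_lock);
        skb = __skb_dequeue(list);
        spin_unlock(&sq->queue_lock);

        return skb;
}
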
> The skb->queue_mapping field also maps directly to the queue index
> itself, so the lock can be released easily outside the context of the
> dequeue function.  The policy would be to use a spin_trylock() in
> dequeue, so that dequeue can still do work if enqueue or another
> dequeue is busy.  And the allocation of qdisc queues to device queues
> is assumed to be one-to-one (that's how the qdisc behaves now).
> 
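And a sketch of that trylock policy, reusing the subqueue_sketch struct
from above (again, everything here is invented for illustration): on
success the band lock stays held across the return, and is released
later in the tx path by looking the queue back up through
skb->queue_mapping:

/* Skip a busy band instead of spinning on it. */
static struct sk_buff *try_dequeue_band(struct subqueue_sketch *queues,
                                        int band,
                                        struct sk_buff_head *list)
{
        struct sk_buff *skb;

        if (!spin_trylock(&queues[band].queue_lock))
                return NULL;            /* busy: caller tries another band */

        skb = __skb_dequeue(list);
        if (!skb) {
                spin_unlock(&queues[band].queue_lock);
                return NULL;
        }

        skb->queue_mapping = band;      /* lets the lock be found later */
        return skb;                     /* lock intentionally still held */
}

/* After hard_start_xmit, the lock is released from the skb itself. */
static void release_band(struct subqueue_sketch *queues, struct sk_buff *skb)
{
        spin_unlock(&queues[skb->queue_mapping].queue_lock);
}

Holding the lock across the return is what makes the "unlock outside
dequeue" step possible, and spin_trylock() is what keeps one busy band
from stalling the others.
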
> I really just need to put my nose to the grindstone and get the patches
> together and to the list...stay tuned.
> 
> Thanks,
> -PJ Waskiewicz


Since we are redoing this, is there any way to make the whole TX path
more lockless?  The existing model seems to be more of a monitor than
a real locking model.
-- 
Stephen Hemminger <shemminger at linux-foundation.org>


