[ofa-general] Re: [PATCH 3/9 Rev3] [sched] Modify qdisc_run to support batching

Evgeniy Polyakov johnpol at 2ka.mipt.ru
Wed Aug 8 05:14:02 PDT 2007


On Wed, Aug 08, 2007 at 03:01:45PM +0530, Krishna Kumar (krkumar2 at in.ibm.com) wrote:
> +static inline int get_skb(struct net_device *dev, struct Qdisc *q,
> +			  struct sk_buff_head *blist, struct sk_buff **skbp)
> +{
> +	if (likely(!blist || (!skb_queue_len(blist) && qdisc_qlen(q) <= 1))) {
> +		return likely((*skbp = dev_dequeue_skb(dev, q)) != NULL);
> +	} else {
> +		int max = dev->tx_queue_len - skb_queue_len(blist);
> +		struct sk_buff *skb;
> +
> +		while (max > 0 && (skb = dev_dequeue_skb(dev, q)) != NULL)
> +			max -= dev_add_skb_to_blist(skb, dev);
> +
> +		*skbp = NULL;
> +	return 1;	/* we have at least one skb in blist */
> +	}
> +}

Same here - is it possible to get a list of skbs in one go instead of
pulling them one-by-one, since that forces quite a few additional
unneeded lock grabs/releases? What about a dev_dequeue_number_skb(dev, q,
num), which would grab the lock once and move a list of skbs from the
queue onto a provided head?
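
Something along these lines, as a rough sketch (the body and locking
details are my assumption; only the name and intent come from the
suggestion above):

static int dev_dequeue_number_skb(struct net_device *dev, struct Qdisc *q,
				  struct sk_buff_head *head, int num)
{
	struct sk_buff *skb;
	int count = 0;

	/* take dev->queue_lock once for the whole batch */
	spin_lock(&dev->queue_lock);
	while (count < num && (skb = q->dequeue(q)) != NULL) {
		/* move the skb onto the caller-provided list */
		__skb_queue_tail(head, skb);
		count++;
	}
	spin_unlock(&dev->queue_lock);

	return count;	/* number of skbs moved onto 'head' */
}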

> @@ -158,7 +198,10 @@ static inline int qdisc_restart(struct n
>  	/* And release queue */
>  	spin_unlock(&dev->queue_lock);
>  
> -	ret = dev_hard_start_xmit(skb, dev);
> +	if (likely(skb))
> +		ret = dev_hard_start_xmit(skb, dev);
> +	else
> +		ret = dev->hard_start_xmit_batch(dev);

Perfectionism says that having an array of two functions and calling one
of them via array_func_pointer[!!skb] would be much faster. Just a
thought - on x86 at least, the indexed call is actually much faster than
an if/else, since it avoids a conditional branch.
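
For illustration, roughly what that dispatch could look like (a sketch
only - the wrapper is my assumption, needed because the two entry points
in the quoted patch have different signatures):

typedef int (*xmit_fn)(struct sk_buff *skb, struct net_device *dev);

/* skb is NULL by construction when this wrapper is selected, so go
 * straight to the batch entry point from the patch. */
static int xmit_batch_wrapper(struct sk_buff *skb, struct net_device *dev)
{
	return dev->hard_start_xmit_batch(dev);
}

static xmit_fn const xmit_dispatch[2] = {
	xmit_batch_wrapper,	/* !!skb == 0: nothing dequeued, batch path */
	dev_hard_start_xmit,	/* !!skb == 1: normal single-skb transmit */
};

so that qdisc_restart would simply do:

	ret = xmit_dispatch[!!skb](skb, dev);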

-- 
	Evgeniy Polyakov


