[openib-general] Dropping NETIF_F_SG since no checksum feature.

Michael Krause krause at cup.hp.com
Wed Oct 11 11:21:43 PDT 2006


At 02:46 AM 10/11/2006, Michael S. Tsirkin wrote:
>Quoting r. David Miller <davem at davemloft.net>:
> > Subject: Re: Dropping NETIF_F_SG since no checksum feature.
> >
> > From: "Michael S. Tsirkin" <mst at mellanox.co.il>
> > Date: Wed, 11 Oct 2006 11:05:04 +0200
> >
> > > So, it seems that if I set NETIF_F_SG but clear NETIF_F_ALL_CSUM,
> > > data will be copied over rather than sent directly.
> > > So why does dev.c have to force set NETIF_F_SG to off then?
> >
> > Because it's more efficient to copy into a linear destination
> > buffer of an SKB than page sub-chunks when doing checksum+copy.
> >
>
>Thanks for the explanation.
>Obviously it's true as long as you can allocate an skb that big.
>I think you won't realistically be able to get 64K in a
>linear SKB on a busy system, though, isn't that right?
>
>OTOH, having large MTU (e.g. 64K) helps performance a lot since it reduces 
>receive side processing overhead.

One thing to keep in mind is that while a large MTU may help
performance in a micro-benchmark, overall system performance, or the
QoS seen by other flows, can be negatively impacted depending upon the
implementation.  For example, consider multiple messages interleaving
(heaven help implementations that are not able to interleave multiple
messages) on either the transmit or receive HCA / RNIC, and how the
time-to-completion of any one message is stretched out as a result of
the interleave; a toy calculation below makes this concrete.  The
effective throughput in terms of useful units of work can be lower as
a result.  The same effect can be observed when a device is
simultaneously processing a significant number of connections.
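
To make the interleave effect concrete, here is a toy calculation
(illustrative numbers of my own, not measurements from real hardware):

	#include <stdio.h>

	/* Two equal-sized messages share one link.  Serially, message 1
	 * completes at time T and message 2 at 2T (average 1.5T).  Fully
	 * interleaved, both complete at ~2T (average 2T).  Link
	 * utilization is identical, but the average time-to-completion
	 * of a message -- the useful unit of work -- goes up. */
	int main(void)
	{
		double T = 1.0;	/* time to transmit one message alone */

		printf("serial avg completion:      %.2f T\n",
		       (1.0 * T + 2.0 * T) / 2.0);
		printf("interleaved avg completion: %.2f T\n",
		       (2.0 * T + 2.0 * T) / 2.0);
		return 0;
	}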

Also, if the copy-checksum is not performed on the processor where the
application resides, performance can be negatively impacted (you want
the right cache hot when the operation is initiated or concluded).
While the aggregate computational performance of systems may be
increasing at a significant rate (setting aside the per-core vs.
aggregate-core debate), memory performance gains are much smaller.  If
you examine the longer-term trends, memory performance improvements
may flatten out by 2009/10 without some major changes in the way
controllers and memory subsystems are designed.
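
On the copy-checksum point: the win comes from touching the data only
once, so the cost is dominated by memory bandwidth and cache locality
rather than ALU work.  A simplified sketch of a fused copy-and-checksum
(illustrative only; the kernel's csum_partial_copy routines are
arch-specific optimized code):

	#include <stdio.h>
	#include <stdint.h>
	#include <stddef.h>

	/* Copy src to dst and compute the Internet checksum in the
	 * same pass, so the data is read once instead of twice. */
	static uint16_t copy_and_csum(uint8_t *dst, const uint8_t *src,
				      size_t len)
	{
		uint32_t sum = 0;
		size_t i;

		for (i = 0; i + 1 < len; i += 2) {
			dst[i] = src[i];
			dst[i + 1] = src[i + 1];
			sum += ((uint32_t)src[i] << 8) | src[i + 1];
		}
		if (i < len) {			/* odd trailing byte */
			dst[i] = src[i];
			sum += (uint32_t)src[i] << 8;
		}
		while (sum >> 16)		/* fold carries */
			sum = (sum & 0xffff) + (sum >> 16);
		return (uint16_t)~sum;
	}

	int main(void)
	{
		uint8_t src[5] = { 0x45, 0x00, 0x00, 0x1c, 0x01 };
		uint8_t dst[5];

		printf("csum = 0x%04x\n",
		       copy_and_csum(dst, src, sizeof(src)));
		return 0;
	}

Whether the single pass runs on the processor with the data already in
cache is exactly what determines whether it is cheap or expensive.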

Mike 





