[ofa-general] Re: parallel networking

Michael Krause krause at cup.hp.com
Tue Oct 9 07:59:17 PDT 2007


At 06:53 PM 10/8/2007, Jeff Garzik wrote:
>David Miller wrote:
>>From: Jeff Garzik <jeff at garzik.org>
>>Date: Mon, 08 Oct 2007 10:22:28 -0400
>>
>>>In terms of overall parallelization, both for TX as well as RX, my gut 
>>>feeling is that we want to move towards an MSI-X, multi-core friendly 
>>>model where packets are LIKELY to be sent and received by the same set 
>>>of [cpus | cores | packages | nodes] as the [userland] processes 
>>>dealing with the data.
>>The problem is that the packet schedulers want global guarantees
>>on packet ordering, not flow centric ones.
>>That is the issue Jamal is concerned about.
>
>Oh, absolutely.
>
>I think, fundamentally, any amount of cross-flow resource management done 
>in software is an obstacle to concurrency.
>
>That's not a value judgement, just a statement of fact.

Correct.


>"traffic cops" are intentional bottlenecks we add to the process, to 
>enable features like priority flows, filtering, or even simple socket 
>fairness guarantees.  Each of those bottlenecks serves a valid purpose, 
>but at the end of the day, it's still a bottleneck.
>
>So, improving concurrency may require turning off useful features that 
>nonetheless hurt concurrency.

Software needs to get out of the main data path - another fact of life.



>>The more I think about it, the more inevitable it seems that we really
>>might need multiple qdiscs, one for each TX queue, to pull this full
>>parallelization off.
>>But the semantics of that don't smell so nice either.  If the user
>>attaches a new qdisc to "ethN", does it go to all the TX queues, or
>>what?
>>All of the traffic shaping technology deals with the device as a unary
>>object.  It doesn't fit multi-queue at all.
>
>Well the easy solutions to networking concurrency are
>
>* use virtualization to carve up the machine into chunks
>
>* use multiple net devices
>
>Since new NIC hardware is actively trying to be friendly to 
>multi-channel/virt scenarios, either of these is reasonably 
>straightforward given the current state of the Linux net stack.  Using 
>multiple net devices is especially attractive because it works very well 
>with the existing packet scheduling.
>
>Both unfortunately impose a burden on the developer and admin, to force 
>their apps to distribute flows across multiple [VMs | net devs].

Not the optimal approach.

>The third alternative is to use a single net device, with SMP-friendly 
>packet scheduling.  Here you run into the problems you described "device 
>as a unary object" etc. with the current infrastructure.
>
>With multiple TX rings, consider that we are pushing the packet scheduling 
>from software to hardware...  which implies
>* hardware-specific packet scheduling
>* some TC/shaping features not available, because the hardware doesn't support them

For a number of years now, we have designed interconnects to support a 
reasonable range of arbitration capabilities among hardware resource 
sets.  With reasonable classification by software to identify a hardware 
resource set (ideally an interpretation of the application's view of its 
priority, combined with policy management software that determines how that 
view should map among competing applications), one can eliminate most of 
the CPU cycles spent in today's implementations.  A number of us presented 
these concepts years ago during the development work that eventually led 
to IB and iWARP.
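
As a rough illustration of the classification piece (userspace side only, 
assuming a multi-queue / 802.1p aware driver underneath, and with the 
policy daemon left out), an application could express its view of its 
priority per socket and let policy software and the stack map that onto a 
hardware resource set, e.g. via SO_PRIORITY on Linux:

/* Minimal sketch: tag a socket with a priority class so the stack
 * (and a multi-queue / 802.1p aware driver) can steer the flow to a
 * matching hardware resource set.  The value chosen here is only an
 * example; policy management software would pick the real mapping. */
#include <stdio.h>
#include <sys/socket.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    int prio = 4;   /* application's view of its priority */
    if (setsockopt(fd, SOL_SOCKET, SO_PRIORITY, &prio, sizeof(prio)) < 0)
        perror("setsockopt(SO_PRIORITY)");

    /* skb->priority now carries the class; with a VLAN egress map it
     * also becomes the 802.1p value signalled on the wire. */
    return 0;
}

The policy decision of what value to set stays with management software; 
the data path itself adds no per-packet cost.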

- Each resource set can be assigned to a unique PCIe function or a function 
group to enable function / group arbitration to the PCIe link.

- Each resource set can be assigned to a unique PCIe TC and, with the 
improved ordering hints coming soon, can be used to eliminate false 
ordering dependencies.

- Each resource set can be assigned to a unique IB TC / SL, or an iWARP 
802.1p priority, to signal its priority.  These can then be used to program 
the respective link arbitration as well as path selection to enable 
multi-path load balancing.

- Many IHVs have picked up on these arbitration capabilities and extended 
them, as a number of us showed years ago, to enable resource set arbitration 
and a variety of QoS-based policies.  If software defines a reasonable (i.e. 
small) number of management and control knobs, these can be easily mapped 
to most h/w implementations (a rough sketch of such a knob set follows this 
list).  Some of us are working on how to do this for virtualized 
environments, and I expect the results to be applicable to all environments 
in the end.
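
To make the "small number of knobs" concrete, here is a minimal sketch of 
what such a per-resource-set policy might look like; the structure and the 
programming routine are hypothetical placeholders for whatever interface an 
IHV exposes, not an existing API:

/* Hypothetical sketch: a handful of per-resource-set knobs that policy
 * software fills in and a driver maps onto its arbitration hardware.
 * All names and fields here are illustrative only. */
#include <stdio.h>

struct rset_policy {
    unsigned int prio_class;    /* strict-priority class (e.g. 802.1p / SL) */
    unsigned int weight;        /* share within the class for WRR arbitration */
    unsigned int min_rate_mbps; /* optional rate floor; 0 = none */
    unsigned int max_rate_mbps; /* optional rate cap; 0 = unlimited */
};

/* Placeholder for the driver-specific call that would program the NIC's
 * function / TC arbitration tables from these knobs. */
static int program_hw_arbitration(int rset, const struct rset_policy *p)
{
    printf("rset %d -> class %u weight %u min %u max %u\n",
           rset, p->prio_class, p->weight, p->min_rate_mbps, p->max_rate_mbps);
    return 0;
}

int main(void)
{
    struct rset_policy policy[] = {
        { .prio_class = 6, .weight = 50, .min_rate_mbps = 1000 }, /* latency-sensitive */
        { .prio_class = 2, .weight = 25 },                        /* best effort */
        { .prio_class = 0, .weight = 10, .max_rate_mbps = 2000 }, /* bulk, capped */
    };

    for (int i = 0; i < 3; i++)
        program_hw_arbitration(i, &policy[i]);
    return 0;
}

The point is only that the knob set stays small and hardware-neutral; the 
mapping onto a given device's arbitration tables stays in the driver.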

One other key item to keep in mind: unless there is contention in the 
system, the majority of the QoS mechanisms are meaningless, and in a very 
large percentage of customer environments they simply don't scale with 
device and interconnect performance.  Many applications in fact remain 
processor / memory constrained and therefore do not stress the I/O 
subsystem or the external interconnects, making most of the software 
mechanisms rather moot in real customer environments.

The simple truth is that it is nearly always cheaper to over-provision the 
I/O and interconnects than to use the software approach, which, while quite 
applicable in many environments at 1 Gbps speeds and below, generally has 
less meaning / value as we move from 10 to 40 to 100 Gbps.  It does not 
really matter whether one believes in protocol off-load or protocol 
on-load; the interconnects will be able to handle all commercial workloads 
and perhaps all but the most extreme HPC (and even there one might contend 
that any software intermediary would be discarded in favor of reducing 
OS / kernel overhead in the main data path).  This isn't to say that 
software has no role to play, only that its role needs to shift from main 
data path overhead to policy shaping and programming of h/w-based 
arbitration.  This will hold true for both virtualized and non-virtualized 
environments.

Mike 