[openib-general] How do we prevent starvation, say between TCP over IPoIB and SRP traffic?

Caitlin Bestler caitlinb at broadcom.com
Wed Apr 19 10:30:55 PDT 2006


openib-general-bounces at openib.org wrote:
> Hi Rick,
> 
> On 4/19/06, Richard Frank <Richard.Frank at oracle.com> wrote:
>> Some application-level protocols require higher QoS levels than
>> others for various communication and I/O operations.
>> 
>> For example, cluster inter-node health messages have fixed latency
>> requirements that, if exceeded, may result in unexpected node
>> removals from the cluster.
>> 
>> Are there any mechanisms available to the client process to manage
>> the QoS level for the various supported ULPs
>> (SDP, TCP, UDP, RDS, SRP, iSER, etc.), either at the ULP level, some
>> combination of process and ULP, or perhaps even at the connection
>> level?
> 
> IB has the concept of Virtual Lanes (VLs) at the hardware
> level and Service Levels (SLs) at the software level.  There
> are always 16 SLs, which map onto however many VLs the
> hardware supports.  IB hardware has at minimum two VLs, VL0
> and VL15, the latter reserved for QP0 subnet management
> traffic (used to configure the fabric).
> 
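As an aside, the number of data VLs a port actually implements can be
read back with a standard verbs port query. A rough sketch, assuming
libibverbs, the first HCA on the host, and port 1 (both placeholders):

#include <stdio.h>
#include <infiniband/verbs.h>

int main(void)
{
        int num;
        struct ibv_device **list = ibv_get_device_list(&num);
        if (!list || num == 0)
                return 1;

        struct ibv_context *ctx = ibv_open_device(list[0]);
        struct ibv_port_attr attr;

        /* max_vl_num is an encoded capability: 1, 2, 4, 8 or 15 data VLs */
        if (ctx && !ibv_query_port(ctx, 1, &attr))
                printf("port 1 VL capability code: %u\n", attr.max_vl_num);

        if (ctx)
                ibv_close_device(ctx);
        ibv_free_device_list(list);
        return 0;
}
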
> A module parameter for each ULP could assign it an SL to
> achieve the prioritization you are looking for.  There could
> even be a limit on the SLs available to user mode, enforced
> by the kernel for connected QPs, though I don't know if the
> same can be said for UD QPs.
> 
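Purely as an illustration of what such a hook could look like on the
kernel side (the parameter name below is made up, not an existing knob
in any in-tree ULP):

#include <linux/module.h>
#include <linux/moduleparam.h>
#include <linux/kernel.h>
#include <linux/init.h>

/* Hypothetical knob: which IB Service Level this ULP should request
 * for the QPs it creates.  Read once at load time. */
static unsigned int service_level;
module_param(service_level, uint, 0444);
MODULE_PARM_DESC(service_level, "IB Service Level (0-15) for this ULP's QPs");

static int __init sl_demo_init(void)
{
        if (service_level > 15)
                service_level = 15;
        printk(KERN_INFO "ULP would request SL %u for its connections\n",
               service_level);
        return 0;
}

static void __exit sl_demo_exit(void)
{
}

module_init(sl_demo_init);
module_exit(sl_demo_exit);
MODULE_LICENSE("GPL");
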
> The SM configures the SL-to-VL mappings for each node, which
> creates something of a problem: you don't know exactly which
> VL any particular SL is mapped to.  Hardware that doesn't
> support all VLs could have multiple SLs mapped to the same
> VL.  This means that if you pick SL0 for SRP and SL1 for
> IPoIB, both of those *may* map to VL0.
> 
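To make that pitfall concrete, here is a toy SL2VL table for a port
with only two data VLs; the contents are invented for illustration,
since in practice they are whatever the SM programs:

#include <stdio.h>

int main(void)
{
        /* Invented SL2VL table for a port with data VLs 0 and 1 only;
         * the SM could fold the 16 SLs onto those two VLs like this. */
        unsigned char sl2vl[16] = { 0, 0, 0, 0, 1, 1, 1, 1,
                                    0, 0, 0, 0, 1, 1, 1, 1 };

        printf("SRP   on SL0 -> VL%u\n", sl2vl[0]);
        printf("IPoIB on SL1 -> VL%u\n", sl2vl[1]);  /* same VL as SRP here */
        return 0;
}
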

Any given fabric will have solutions to this. The question
is how a user of OpenFabrics ties their QPs and connections
into that fabric-specific traffic management.
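
For a verbs consumer that drives its own RC connection setup, the
tie-in point is the address vector supplied when the QP is moved to
RTR; the SL chosen there is what the fabric's SL-to-VL tables act on.
A sketch, with placeholder values standing in for everything the
application would get from its own connection exchange:

#include <string.h>
#include <infiniband/verbs.h>

int set_sl_on_rtr(struct ibv_qp *qp, uint32_t remote_qpn,
                  uint16_t remote_lid, uint8_t my_sl)
{
        struct ibv_qp_attr attr;

        memset(&attr, 0, sizeof(attr));
        attr.qp_state           = IBV_QPS_RTR;
        attr.path_mtu           = IBV_MTU_1024;
        attr.dest_qp_num        = remote_qpn;
        attr.rq_psn             = 0;
        attr.max_dest_rd_atomic = 1;
        attr.min_rnr_timer      = 12;
        attr.ah_attr.dlid       = remote_lid;
        attr.ah_attr.sl         = my_sl;    /* the QoS knob discussed above */
        attr.ah_attr.port_num   = 1;        /* placeholder port */

        return ibv_modify_qp(qp, &attr,
                             IBV_QP_STATE | IBV_QP_AV | IBV_QP_PATH_MTU |
                             IBV_QP_DEST_QPN | IBV_QP_RQ_PSN |
                             IBV_QP_MAX_DEST_RD_ATOMIC | IBV_QP_MIN_RNR_TIMER);
}

rdma_cm-style consumers don't fill this in by hand; the SL comes out of
the path record resolved for the connection, so the same question just
moves into path resolution.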



