[ofa-general][PATCH] mlx4_core: Multi Protocol support

Liran Liss liranl at mellanox.co.il
Thu Apr 17 05:59:48 PDT 2008


>  > +	if (vector == 0) {
>  > +		vector = priv->eq_table.last_comp_eq %
>  > +			priv->eq_table.num_comp_eqs + 1;
>  > +		priv->eq_table.last_comp_eq = vector;
>  > +	}
> 
> The current IB code is written assuming that 0 is a normal completion
> vector I think.  Making 0 be a special "round robin" value is a pretty
> big change of policy.
> 

This is indeed a change in policy, but the policy it changes was never
documented or configurable anywhere...
Generally, distributing the interrupt load (and the software interrupt
handling associated with it) among all CPUs is a good thing, especially
when the ULPs using these interrupts are unrelated.
For example, distributing TCP flows among multiple cores is important
for 10GE devices to sustain wire speed with many connections.

So, for applications that don't care how many vectors there are or
which vector they end up using, we should support some VECTOR_ANY value
that lets mlx4_core optimize and balance the interrupt load.
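
To make this concrete, the handling in the snippet quoted above would
simply key off the sentinel instead of 0. A minimal sketch, reusing the
field names from the quoted patch (MLX4_VECTOR_ANY is just a
placeholder name, not an agreed-upon value):

	/* Sketch: resolve a VECTOR_ANY request by round-robin over the
	 * available completion EQs.
	 */
	if (vector == MLX4_VECTOR_ANY) {
		vector = priv->eq_table.last_comp_eq %
			priv->eq_table.num_comp_eqs + 1;
		priv->eq_table.last_comp_eq = vector;
	}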

A round-robin scheme seems like a good start. We could also initially
make the VECTOR_ANY policy a module parameter (i.e., use either CPU0 or
round-robin) until we obtain more experience with actual deployments.
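
As a rough idea of what such a module parameter could look like (the
name and semantics below are only an illustration, not a proposal for
the final interface):

	/* Sketch: select the VECTOR_ANY policy at module load time.
	 * 0 = always map VECTOR_ANY to the first completion vector,
	 * 1 = round-robin across all completion vectors.
	 */
	static int comp_vector_policy;
	module_param(comp_vector_policy, int, 0444);
	MODULE_PARM_DESC(comp_vector_policy,
			 "VECTOR_ANY policy: 0 = first vector, 1 = round-robin");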

As for the VECTOR_ANY value, we can make it 0 (good for "porting" all
existing ULPs and user apps as-is, but it doesn't match the zero-based
CPU numbering) or some other designated value, e.g., 0xff (which will
require modifying all ULPs that don't request specific vectors).
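
For the second option, the mlx4-specific part would boil down to a
designated constant, something like (name and value purely
illustrative):

	/* Sketch: a non-zero "don't care" sentinel.  ULPs with no
	 * vector preference would pass this as the comp_vector argument.
	 */
	#define MLX4_VECTOR_ANY		0xff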

Any preferences?

--Liran





