[ofa-general] Re: IB/core: Add creation flags to QPs
Eli Cohen
eli at mellanox.co.il
Wed Mar 26 04:48:38 PDT 2008
I don't think it is a good idea to mix enumerated types with bitmasks,
since bitmask values waste too much of the enum's range (whose size
depends on the CPU architecture). I also prefer creation flags because
they give more freedom in choosing the semantics of each flag, without
having to worry whether it fits into the category of a kind of "qp type".
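
To make the option being discussed concrete, here is a minimal sketch of
the creation-flags approach: qp_type stays a plain enum and orthogonal
options become bits in a separate field. The flag names below are
illustrative, not necessarily the ones in the patch:

/*
 * Sketch of the "creation flags" approach. Flag names are illustrative.
 */
enum ib_qp_create_flags {
	IB_QP_CREATE_UD_LSO               = 1 << 0, /* LSO on a UD QP */
	IB_QP_CREATE_BLOCK_MCAST_LOOPBACK = 1 << 1, /* e.g. ConnectX block loopback */
};

struct ib_qp_init_attr {
	/* ... existing fields ... */
	enum ib_sig_type        sq_sig_type;
	enum ib_qp_type         qp_type;      /* IB_QPT_UD, IB_QPT_RC, ... */
	u8                      port_num;     /* special QP types only */
	enum ib_qp_create_flags create_flags; /* bitwise OR of the flags above */
};

A consumer would then request, for example, qp_type = IB_QPT_UD together
with create_flags = IB_QP_CREATE_UD_LSO, without multiplying the number
of QP types.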
On Wed, 2008-03-26 at 13:02 +0200, Or Gerlitz wrote:
> Roland Dreier wrote:
> > > @@ -505,6 +509,7 @@ struct ib_qp_init_attr {
> > > enum ib_sig_type sq_sig_type;
> > > enum ib_qp_type qp_type;
> > > u8 port_num; /* special QP types only */
> > > + enum qp_create_flags create_flags;
> > > };
> >
> > I'm dubious about this. It seems to me like everything (including the
> > mlx4 low-level driver changes for LSO) would be simpler to implement
> > and use if we just extend the qp_type to include IB_QPT_UD_LSO.
> Roland, All,
>
> How about making the qp_type field a bit mask, such that in the ipoib
> LSO use case it would be (UD | LSO), in the ipoib ehca case (UD | LL),
> etc.? The bit mask change would also be propagated up to libibverbs and
> defined in a way that preserves backward compatibility of the qp_type
> field for users of libibverbs that did not change their code.
>
> I don't think it would make the XRC merge harder, and it would be very
> helpful in deploying the "block loopback" feature of ConnectX, which in
> OFED 1.3 was implemented system wide (set or unset, see the patch below)
> and could now be set per application, per QP, so IPoIB would create its
> QP as (UD | BL[=block-loopback] | LSO) when running over ConnectX.
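
For comparison, a rough sketch of what such a bit-mask encoding could look
like, assuming the low bits keep today's IB_QPT_* values for backward
compatibility; all names and bit positions here are illustrative only, not
a proposed ABI:

/* Hypothetical bit-mask encoding of qp_type: the low byte keeps the
 * existing IB_QPT_* values so unmodified libibverbs users still work;
 * higher bits carry modifiers. Illustrative only. */
#define IB_QPT_TYPE_MASK	0x000000ff	/* existing IB_QPT_* values */
#define IB_QPT_MOD_LSO		(1u << 8)	/* large send offload */
#define IB_QPT_MOD_LL		(1u << 9)	/* ehca low-latency UD */
#define IB_QPT_MOD_BL		(1u << 10)	/* block multicast loopback */

/* IPoIB over ConnectX would then ask for:
 *	qp_type = IB_QPT_UD | IB_QPT_MOD_LSO | IB_QPT_MOD_BL;
 * and a driver would split it back apart with:
 *	base_type = qp_type & IB_QPT_TYPE_MASK;
 *	want_lso  = !!(qp_type & IB_QPT_MOD_LSO);
 */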
>
> > mlx4: enable discarding/passing multicast loopback packets by FW/HW.
> >
> > Multicast (and broadcast) loopback is handled by the network stack:
> > any MC or BC packet that needs to reach receive sockets on the local
> > machine is duplicated and fed back into the RX path by the IP stack.
> > The HCA also loops back all outgoing multicast packets, so that any
> > attached QP can receive them as well.
> >
> > The IPoIB module therefore needs to discard its own looped-back
> > packets, and does so by comparing the SLID and QPN of each received
> > packet against its own.
> >
> > This patch controls the ConnectX HCA's blocking of multicast loopback
> > (blck_lb) for the self QP.
> >
> > It enables or disables, in FW/HW, blocking of self-QP multicast
> > packets on all QPs created on the ConnectX HCA.
> >
> > Multicast packets between different QPs on the same HCA are still
> > delivered.
> >
> > The /sys/module/mlx4_core/block_loopback attribute controls the
> > policy flag. Its default value is blocking-enabled (non-zero). The
> > flag can be read and set/unset through sysfs.
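
The software filter mentioned in the description above looks roughly like
the following; the helper and field names are approximate, not the exact
IPoIB code:

/* Approximate sketch of the per-packet check in the IPoIB RX path that
 * HW/FW block-loopback makes unnecessary: a completion whose source LID
 * and source QPN match our own UD QP is our own looped-back packet. */
static bool is_own_mcast_loopback(const struct ib_wc *wc,
				  u16 local_lid, u32 own_qpn)
{
	return wc->slid == local_lid && wc->src_qp == own_qpn;
}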
> >
> > Signed-off-by: Alex Rosenbaum <alexr at voltaire.com>
> > Signed-off-by: Merav Havuv <meravh at voltaire.com>
> > Signed-off-by: Jack Morgenstein <jackm at dev.mellanox.co.il>
> >
> > Index: ofed_kernel/drivers/net/mlx4/mcg.c
> > ===================================================================
> > --- ofed_kernel.orig/drivers/net/mlx4/mcg.c	2007-12-05 10:34:53.519969000 +0200
> > +++ ofed_kernel/drivers/net/mlx4/mcg.c	2008-02-19 08:45:33.257352000 +0200
> > @@ -206,13 +206,14 @@ int mlx4_multicast_attach(struct mlx4_de
> > }
> >
> > for (i = 0; i < members_count; ++i)
> > - if (mgm->qp[i] == cpu_to_be32(qp->qpn)) {
> > + if ((be32_to_cpu(mgm->qp[i]) & MGM_QPN_MASK) == qp->qpn) {
> > mlx4_dbg(dev, "QP %06x already a member of MGM\n", qp->qpn);
> > err = 0;
> > goto out;
> > }
> >
> > - mgm->qp[members_count++] = cpu_to_be32(qp->qpn);
> > + mgm->qp[members_count++] = cpu_to_be32((qp->qpn & MGM_QPN_MASK) |
> > + (!!mlx4_blck_lb << MGM_BLCK_LB_BIT));
> > mgm->members_count = cpu_to_be32(members_count);
> >
> > err = mlx4_WRITE_MCG(dev, index, mailbox);
> > @@ -287,7 +288,7 @@ int mlx4_multicast_detach(struct mlx4_de
> >
> > members_count = be32_to_cpu(mgm->members_count);
> > for (loc = -1, i = 0; i < members_count; ++i)
> > - if (mgm->qp[i] == cpu_to_be32(qp->qpn))
> > + if ((be32_to_cpu(mgm->qp[i]) & MGM_QPN_MASK) == qp->qpn)
> > loc = i;
> >
> > if (loc == -1) {
> > Index: ofed_kernel/drivers/net/mlx4/main.c
> > ===================================================================
> > --- ofed_kernel.orig/drivers/net/mlx4/main.c	2008-02-19 08:38:33.145870000 +0200
> > +++ ofed_kernel/drivers/net/mlx4/main.c	2008-02-19 08:42:17.836566000 +0200
> > @@ -59,6 +59,10 @@ MODULE_PARM_DESC(debug_level, "Enable de
> >
> > #endif /* CONFIG_MLX4_DEBUG */
> >
> > +int mlx4_blck_lb=1;
> > +module_param_named(block_loopback, mlx4_blck_lb, int, 0644);
> > +MODULE_PARM_DESC(block_loopback, "Block multicast loopback packets if > 0");
> > +
> > #ifdef CONFIG_PCI_MSI
> >
> > static int msi_x = 1;
> > Index: ofed_kernel/drivers/net/mlx4/mlx4.h
> > ===================================================================
> > --- ofed_kernel.orig/drivers/net/mlx4/mlx4.h	2008-02-19 08:38:31.356932000 +0200
> > +++ ofed_kernel/drivers/net/mlx4/mlx4.h	2008-02-19 08:42:17.840568000 +0200
> > @@ -106,6 +106,10 @@ extern int mlx4_debug_level;
> > #define mlx4_warn(mdev, format, arg...) \
> > dev_warn(&mdev->pdev->dev, format, ## arg)
> >
> > +#define MGM_QPN_MASK 0x00FFFFFF
> > +#define MGM_BLCK_LB_BIT 30
> > +extern int mlx4_blck_lb;
> > +
> > struct mlx4_bitmap {
> > u32 last;
> > u32 top;
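
For clarity, the mcg.c hunk above packs the block-loopback policy into the
per-member MGM entry next to the 24-bit QPN. A small sketch of the encoding
implied by the mlx4.h masks (the helper names here are hypothetical):

/* Bits 0-23 of an MGM member entry hold the QPN, bit 30 the
 * block-loopback flag; helper names are hypothetical. */
static u32 mgm_member_entry(u32 qpn, int block_loopback)
{
	return (qpn & MGM_QPN_MASK) |
	       ((u32)!!block_loopback << MGM_BLCK_LB_BIT);
}

/* Matching an entry against a QPN must now mask off the flag bits: */
static int mgm_entry_matches_qpn(__be32 entry, u32 qpn)
{
	return (be32_to_cpu(entry) & MGM_QPN_MASK) == qpn;
}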