[ofa-general] IPOIB CM (NOSRQ) patch for review

Pradeep Satyanarayana pradeep at us.ibm.com
Mon Mar 12 12:13:36 PDT 2007


Roland Dreier <rdreier at cisco.com> wrote on 03/09/2007 03:10:44 PM:

>  > +EXTRA_CFLAGS += -DIPOIB_CM_NOSRQ
> 
> This type of compile-time selection is obviously unacceptable for
> anything that we actually merge upstream.  What is needed is for the
> IPoIB driver to decide at runtime what to do, so that on a system with
> multiple different types of HCAs installed, IPoIB CM uses SRQs on the
> HCAs that support SRQs, and does not use SRQs on the HCAs that don't.

I dug through the spec and found that ib_query_device() reports whether
the HCA supports SRQs. Is that what you had in mind?
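
For instance (a minimal sketch, assuming the check is done once at
device-init time; ipoib_cm_has_srq() is a hypothetical helper, but
max_srq is the attribute that ib_query_device() fills in):

#include <rdma/ib_verbs.h>

/* Hypothetical helper: returns nonzero if the HCA supports SRQs.
 * An HCA with no SRQ support reports max_srq == 0. */
static int ipoib_cm_has_srq(struct ib_device *ca)
{
        struct ib_device_attr attr;

        if (ib_query_device(ca, &attr))
                return 0;       /* be conservative if the query fails */

        return attr.max_srq > 0;
}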

> 
> Not to mention the fact that basically mixing together two different
> implementations with a liberal sprinkling of #ifdef IPOIB_CM_NOSRQ
> makes the code basically unreadable and unmaintainable.

One way to alleviate this problem would be to duplicate mainly the
receive-side functions and name them something like xyz_nosrq().
However, there would still be many instances of

if (SRQ)
        xyz();
else
        xyz_nosrq();

Is that a better solution than the #ifdef IPOIB_CM_NOSRQ approach? On
the other hand, this will add duplicate code and may pose
maintainability issues of its own in the future. I would like to
understand which of the two is the preferred approach.
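
To make the runtime variant concrete, here is a rough sketch of what I
had in mind (the field and function names such as priv->cm.has_srq and
the _srq/_nosrq suffixes are hypothetical; has_srq would be set once
from the ib_query_device() check above when the device is brought up):

static void ipoib_cm_post_receive(struct net_device *dev, int id)
{
        struct ipoib_dev_priv *priv = netdev_priv(dev);

        /* Branch on the flag cached at device init */
        if (priv->cm.has_srq)
                ipoib_cm_post_receive_srq(dev, id);
        else
                ipoib_cm_post_receive_nosrq(dev, id);
}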

> 
>  > +       /* This is missing in Michael's code -Do we need this */
> 
> seems like it would be easy to answer this question -- just try it
> without the change.  And I think the answer is no, there's no reason
> to move QPs that are not used to send data to the RTS state.

Yes, you are right. I should have dropped this before submitting the 
patch.


> 
>  > void ipoib_cm_handle_rx_wc(struct net_device *dev, struct ib_wc *wc)
> 
>  > +#ifdef IPOIB_CM_NOSRQ
>  > +        spin_lock_irqsave(&priv->lock, flags);
>  > +        list_for_each_entry(rx_ptr, &priv->cm.passive_ids, list)
>  > +                if (qp_num == rx_ptr->qp->qp_num) {
>  > +                        found = 1;
>  > +                        break;
>  > +                }
>  > +        spin_unlock_irqrestore(&priv->lock, flags);
> 
> This seems crazy -- you do a linear search through a list of QPs
> (which potentially has 100s of entries) for every receive completion!
> Just the spinlock alone is something we would want to avoid in the hot
> performance path.

I envisaged the NOSRQ case for small clusters only. Otherwise, this may
be a memory hog and affect (other) application performance.
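
That said, if the NOSRQ path ever needs to scale, one way to avoid the
per-completion list walk might be to encode the connection's slot into
the 64-bit receive wr_id and index a per-device table on completion (a
very rough sketch; the rx_table field and all names here are made up):

#define IPOIB_CM_CONN_SHIFT     32

/* When posting a receive, stash the connection index in the
 * high bits of the work request id. */
static inline u64 ipoib_cm_nosrq_wr_id(u32 conn_index, u32 buf_index)
{
        return ((u64) conn_index << IPOIB_CM_CONN_SHIFT) | buf_index;
}

void ipoib_cm_handle_rx_wc_nosrq(struct net_device *dev, struct ib_wc *wc)
{
        struct ipoib_dev_priv *priv = netdev_priv(dev);
        u32 conn_index = wc->wr_id >> IPOIB_CM_CONN_SHIFT;
        struct ipoib_cm_rx *rx_ptr = priv->cm.rx_table[conn_index];

        /* ... handle the completion with rx_ptr directly,
         * no lock or O(n) search in the hot path ... */
}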

Pradeep
pradeep at us.ibm.com