[ofa-general] [PATCH 2/3] RDMA/cxgb3: fix page shift calculation in build_phys_page_list()
Bryan S Rosenburg
rosnbrg at us.ibm.com
Mon Feb 11 08:07:17 PST 2008
Steve Wise <swise at opengridcomputing.com> wrote on 02/11/2008 10:39:38 AM:
> Bryan, I assume you sign-off on this patch?
Steve, yes, I "sign-off" on this patch, although I posted it just for
discussion. I haven't tested (or even compiled) it for cxgb3. Let me
know if you want me to do anything more formal.
- Bryan
>
>
> Bryan S Rosenburg wrote:
> >
> > On Mon Jan 21 12:39:36 PST 2008, Steve Wise wrote:
> > > RDMA/cxgb3: fix page shift calculation in build_phys_page_list()
> > >
> > > The existing logic incorrectly maps this buffer list:
> > >
> > > 0: addr 0x10001000, size 0x1000
> > > 1: addr 0x10002000, size 0x1000
> > >
> > > To this bogus page list:
> > >
> > > 0: 0x10000000
> > > 1: 0x10002000
> > >
> > > The shift calculation must also take into account the address of the
> > > first entry masked by the page_mask as well as the last address+size
> > > rounded up to the next page size.
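(Concretely, for the two-buffer example above, assuming the mask ORs
together the page-masked start, the interior boundary and the page-rounded
end: 0x10001000 | 0x10002000 | 0x10003000 = 0x10003000, so the lowest set
bit at or above PAGE_SHIFT gives shift = 12 and the intended page list
0x10001000, 0x10002000. Without the start and end terms the mask
presumably held only the interior boundary 0x10002000, so the loop stopped
at shift = 13 and produced the bogus list shown.)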
> >
> > I think the problem can still occur, even with the patch, if the
> > buffer list has just one entry.
> >
> > A single entry (addr 0x10001000, size 0x2000) will get converted to
> > page address 0x10000000 with a page size of 0x4000. The patch as it
> > stands doesn't address the single-buffer case, but in fact it allows
> > the subsequent single-buffer special case to be eliminated entirely.
> > Because the mask now includes the (page-adjusted) starting and ending
> > addresses, the general case works for the single-buffer case as well:
> >
> >
================================================================================
> >
> >
> > diff --git a/drivers/infiniband/hw/cxgb3/iwch_mem.c b/drivers/infiniband/hw/cxgb3/iwch_mem.c
> > index 73bfd16..b8797c6 100644
> > --- a/drivers/infiniband/hw/cxgb3/iwch_mem.c
> > +++ b/drivers/infiniband/hw/cxgb3/iwch_mem.c
> > @@ -136,14 +136,8 @@ int build_phys_page_list(struct ib_phys_buf *buffer_list,
> >
> >      /* Find largest page shift we can use to cover buffers */
> >      for (*shift = PAGE_SHIFT; *shift < 27; ++(*shift))
> > -        if (num_phys_buf > 1) {
> > -            if ((1ULL << *shift) & mask)
> > -                break;
> > -        } else
> > -            if (1ULL << *shift >=
> > -                buffer_list[0].size +
> > -                (buffer_list[0].addr & ((1ULL << *shift) - 1)))
> > -                break;
> > +        if ((1ULL << *shift) & mask)
> > +            break;
> >
> >      buffer_list[0].size += buffer_list[0].addr & ((1ULL << *shift) - 1);
> >      buffer_list[0].addr &= ~0ull << *shift;
> >
================================================================================
> >
> >
> > Don't try this without applying Steve's patch first.
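
For reference, below is a small standalone sketch of the mask-based shift
selection being discussed, run against the two example buffer lists from
this thread. It is not the kernel function itself: the struct, macro and
helper names (phys_buf, SKETCH_*, pick_shift) are made up for
illustration, and the mask construction assumes the scheme described
above (page-masked start, interior boundaries, page-rounded end).

#include <stdio.h>
#include <stdint.h>

#define SKETCH_PAGE_SHIFT 12
#define SKETCH_PAGE_SIZE  (1ULL << SKETCH_PAGE_SHIFT)
#define SKETCH_PAGE_MASK  (~(SKETCH_PAGE_SIZE - 1))

struct phys_buf {
        uint64_t addr;
        uint64_t size;
};

/* OR together the page-masked start address, the interior buffer
 * boundaries and the page-rounded end address, then pick the lowest
 * usable power-of-two shift at or above the base page shift. */
static int pick_shift(const struct phys_buf *bufs, int n)
{
        uint64_t mask = 0;
        int shift, i;

        for (i = 0; i < n; ++i) {
                mask |= (i == 0) ? (bufs[i].addr & SKETCH_PAGE_MASK)
                                 : bufs[i].addr;
                mask |= (i == n - 1)
                        ? ((bufs[i].addr + bufs[i].size +
                            SKETCH_PAGE_SIZE - 1) & SKETCH_PAGE_MASK)
                        : (bufs[i].addr + bufs[i].size);
        }

        for (shift = SKETCH_PAGE_SHIFT; shift < 27; ++shift)
                if ((1ULL << shift) & mask)
                        break;
        return shift;
}

int main(void)
{
        /* The two 4KB buffers from the commit message. */
        struct phys_buf two[] = {
                { 0x10001000ULL, 0x1000ULL },
                { 0x10002000ULL, 0x1000ULL },
        };
        /* The single 8KB buffer from the follow-up discussion. */
        struct phys_buf one[] = {
                { 0x10001000ULL, 0x2000ULL },
        };

        printf("two buffers -> shift %d\n", pick_shift(two, 2));
        printf("one buffer  -> shift %d\n", pick_shift(one, 1));
        return 0;
}

Both lists come out with shift 12, i.e. plain 4KB pages covering exactly
the buffers given; the old single-buffer branch would instead have widened
the second case to a 16KB page starting at 0x10000000, as noted above.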
> >
> > Incidentally, I've been tracking down exactly the bug that Steve
> > fixed, but in mthca_reg_phys_mr() rather than in the cxgb3
> > build_phys_page_list(). I'll submit a patch for mthca, unless someone
> > else applies Steve's fix there soon.
> >
> > - Bryan Rosenburg - IBM Research
> >
> >
>