Steve Wise <swise@opengridcomputing.com> wrote on 02/11/2008 10:39:38 AM:
> Bryan, I assume you sign-off on this patch?

Steve, yes, I "sign-off" on this patch, although I posted it just for
discussion. I haven't tested (or even compiled) it for cxgb3. Let me know
if you want me to do anything more formal.

- Bryan

>
>
> Bryan S Rosenburg wrote:
> >
> > On Mon Jan 21 12:39:36 PST 2008, Steve Wise wrote:
> > > RDMA/cxgb3: fix page shift calculation in build_phys_page_list()
> > >
> > > The existing logic incorrectly maps this buffer list:
> > >
> > >    0: addr 0x10001000, size 0x1000
> > >    1: addr 0x10002000, size 0x1000
> > >
> > > To this bogus page list:
> > >
> > >    0: 0x10000000
> > >    1: 0x10002000
> > >
> > > The shift calculation must also take into account the address of the
> > > first entry masked by the page_mask as well as the last address+size
> > > rounded up to the next page size.
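
For concreteness, here is a standalone sketch (my own illustration, not the
cxgb3 code itself; it assumes the usual PAGE_SHIFT of 12 and builds the mask
the way the commit message describes) showing how the old and corrected masks
pick different shifts for the example above:

================================================================================
#include <stdio.h>
#include <stdint.h>

#define PAGE_SHIFT 12
#define PAGE_SIZE  (1ULL << PAGE_SHIFT)
#define PAGE_MASK  (~(PAGE_SIZE - 1))

/* Illustration-only helper: same search loop as build_phys_page_list(),
 * i.e. pick the smallest page size whose bit is set in the mask. */
static int find_shift(uint64_t mask)
{
	int shift;

	for (shift = PAGE_SHIFT; shift < 27; ++shift)
		if ((1ULL << shift) & mask)
			break;
	return shift;
}

int main(void)
{
	uint64_t addr[2] = { 0x10001000ULL, 0x10002000ULL };
	uint64_t size[2] = { 0x1000ULL, 0x1000ULL };

	/* Old logic: only the addresses of entries after the first
	 * contribute to the mask. */
	uint64_t old_mask = addr[1];

	/* Fixed logic, per the commit message: also fold in the first
	 * address masked by the page mask and the end of the last entry
	 * rounded up to the next page boundary. */
	uint64_t new_mask = (addr[0] & PAGE_MASK) | addr[1] |
			    ((addr[1] + size[1] + PAGE_SIZE - 1) & PAGE_MASK);

	/* old mask 0x10002000 -> shift 13 (0x2000 pages, the bogus list)
	 * new mask 0x10003000 -> shift 12 (0x1000 pages, the correct list) */
	printf("old: shift %d\n", find_shift(old_mask));
	printf("new: shift %d\n", find_shift(new_mask));
	return 0;
}
================================================================================
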
> >
> > I think the problem can still occur, even with the patch, if the
> > buffer list has just one entry.
> >
> > A single entry (addr 0x10001000, size 0x2000) will get converted to
> > page address 0x10000000 with a page size of 0x4000. The patch as it
> > stands doesn't address the single buffer case, but in fact it allows
> > the subsequent single-buffer special case to be eliminated entirely.
> > Because the mask now includes the (page adjusted) starting and ending
> > addresses, the general case works for the single buffer case as well:
> >
> > ================================================================================
> >
> > diff --git a/drivers/infiniband/hw/cxgb3/iwch_mem.c b/drivers/infiniband/hw/cxgb3/iwch_mem.c
> > index 73bfd16..b8797c6 100644
> > --- a/drivers/infiniband/hw/cxgb3/iwch_mem.c
> > +++ b/drivers/infiniband/hw/cxgb3/iwch_mem.c
> > @@ -136,14 +136,8 @@ int build_phys_page_list(struct ib_phys_buf *buffer_list,
> >  
> >  	/* Find largest page shift we can use to cover buffers */
> >  	for (*shift = PAGE_SHIFT; *shift < 27; ++(*shift))
> > -		if (num_phys_buf > 1) {
> > -			if ((1ULL << *shift) & mask)
> > -				break;
> > -		} else
> > -			if (1ULL << *shift >=
> > -			    buffer_list[0].size +
> > -			    (buffer_list[0].addr & ((1ULL << *shift) - 1)))
> > -				break;
> > +		if ((1ULL << *shift) & mask)
> > +			break;
> >  
> >  	buffer_list[0].size += buffer_list[0].addr & ((1ULL << *shift) - 1);
> >  	buffer_list[0].addr &= ~0ull << *shift;
> > ================================================================================
> >
> > Don't try this without applying Steve's patch first.
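
Walking the numbers for that single-entry case (again assuming the mask is
built as Steve's commit message describes, with PAGE_SHIFT = 12): the mask
becomes (0x10001000 & PAGE_MASK) | ((0x10001000 + 0x2000 + 0xfff) & PAGE_MASK)
= 0x10001000 | 0x10003000 = 0x10003000, so the loop stops at shift 12 and the
buffer is covered exactly by the two 4K pages at 0x10001000 and 0x10002000,
rather than by the single 16K page at 0x10000000 that the special case
produces.
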
> >
> > Incidentally, I've been tracking down exactly the bug that Steve fixed,
> > but in mthca_reg_phys_mr() rather than in the cxgb3
> > build_phys_page_list(). I'll submit a patch for mthca, unless someone
> > else applies Steve's fix there soon.
> >
> > - Bryan Rosenburg - IBM Research
> >
> >
>