[ofa-general] [PATCH 2/3] RDMA/cxgb3: fix page shift calculation in build_phys_page_list()
Bryan S Rosenburg
rosnbrg at us.ibm.com
Wed Jan 30 12:41:33 PST 2008
Roland Dreier <rdreier at cisco.com> wrote on 01/30/2008 03:05:04 PM:
> Looks almost identical to what I came up with (below), except:
>
> > + shift = __ffs((unsigned long) ((mask & ((1ull<<31)-1)) | (1ull<<31)));
>
> I think there's no reason to mask off the top bit of mask if we're
> going to set it immediately;
I was actually masking off the top 33 bits before setting bit 31. I agree
that it's unnecessary, but I wanted it to be clear that the possible
truncation from 64 to 32 bits (on 32-bit target machines) is okay. I'd
have left off the masking and the cast if the kernel provided something
like __ffsll() or __ffs64() that took a 64-bit argument even on 32-bit
machines, but as far as I can tell it doesn't. You've changed mask
itself into an unsigned long:
> - u64 mask;
> + unsigned long mask;
so the truncation (on 32-bit machines) now happens as the mask is
constructed. In this case the truncation is okay, because ultimately you
only care about the low-order bits anyway, but implicit truncations always
raise a red flag for me.
In any case, I think your patch does the job and does it efficiently.
- Bryan