[ewg] ofa_1_5_kernel 20091104-0200 daily build status
Jeff Becker
Jeffrey.C.Becker at nasa.gov
Thu Nov 19 09:26:17 PST 2009
Hi.
Vladimir Sokolovsky wrote:
> Brian J. Murrell wrote:
>
>> On Thu, 2009-11-19 at 16:28 +0200, Vladimir Sokolovsky wrote:
>>
>> You recall that this was a change you suggested in response to the
>> previous failure:
>>
>> http://www.mail-archive.com/ewg@lists.openfabrics.org/msg07773.html
>>
>> which Jeff Becker is also reporting here:
>>
>> http://www.mail-archive.com/ewg@lists.openfabrics.org/msg07854.html
>>
>> Jeff includes more detail about the commit that actually broke this:
>>
>> committer Jack Morgenstein <ja... at dev.mellanox.co.il>
>> Thu, 11 Jun 2009 13:17:33 +0000 (16:17 +0300)
>> commit 1f462241bd18d9b5727ddea90459e7763b69e11c
>> backports: 2.6.16_sles10_sp2: patches and add-ons based on kernel 2.6.18 backport
>>
>>
Jon Mason suggested adding an "#ifndef ipv6_addr_loopback" guard around
the function definition in
kernel_addons/backport/2.6.16_sles10_sp2/include/net/ipv6.h. I'll look
into this today.
-jeff
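For reference, the guard pattern can be sketched in a self-contained form. Everything below is illustrative, not the actual OFED backport header: struct in6_addr and be32() are minimal stand-ins for the kernel's types and htonl(), and the helper body follows the upstream ipv6_addr_loopback. Note that #ifndef only sees macros, so the idiom pairs the definition with a self-referential #define; it can only avoid a collision if the kernel's own copy is likewise visible to the preprocessor.

```c
#include <stdint.h>

/* Minimal stand-in for the kernel's struct in6_addr (illustrative only). */
struct in6_addr {
    uint32_t s6_addr32[4];
};

/* Unconditional byte swap standing in for the kernel's htonl(); used
 * consistently on both sides of the comparison, so host endianness
 * does not affect the result. */
static inline uint32_t be32(uint32_t x)
{
    return ((x & 0x000000ffU) << 24) | ((x & 0x0000ff00U) << 8) |
           ((x & 0x00ff0000U) >> 8)  | ((x & 0xff000000U) >> 24);
}

/*
 * Backported helper, guarded so a kernel header that already provides
 * ipv6_addr_loopback does not collide with this copy.  Since #ifndef
 * only sees macros, the definition is paired with a self-referential
 * #define -- the usual kernel idiom for making a function's presence
 * testable by the preprocessor.
 */
#ifndef ipv6_addr_loopback
#define ipv6_addr_loopback ipv6_addr_loopback
static inline int ipv6_addr_loopback(const struct in6_addr *a)
{
    /* ::1 -- first three words zero, last word is big-endian 1 */
    return ((a->s6_addr32[0] | a->s6_addr32[1] |
             a->s6_addr32[2] | (a->s6_addr32[3] ^ be32(1))) == 0);
}
#endif
```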
>>> Then this kernel requires another backport directory based on 2.6.16_sles10_sp2 under
>>> kernel_patches/backport/ and kernel_addons/backport/, with a corresponding change in ofed_scripts/get_backport_dir.sh
>>> (e.g. 2.6.16_sles10_sp2_lustre).
>>>
>> ^^^^^^
>> This breakage has got nothing to do with Lustre, per Jeff's report. In
>> any case, creating an entire new and mostly duplicate backport for a
>> single change that is simply not portable seems like a sledgehammer of a
>> solution, no?
>>
>>
>>> Please prepare backports for this kernel and I'll apply them to the OFED 1.5 kernel git tree.
>>>
>> I don't think I am going to have time to prepare an entire backport
>> (nor do I believe it's the correct solution) for this problem, but it
>> seems it must be fixed before GA; per Jeff's independent report of the
>> same failure, anyone using this newer SLES10_SP2 kernel will run into
>> this problem.
>>
>> Cheers,
>> b.
>>
>>
>
> OK, I will provide a solution next week.
>
> Regards,
> Vladimir
>
>
>