[ewg] Infiniband Interoperability

richard@informatix-sol.com richard at informatix-sol.com
Wed Jun 30 23:13:30 PDT 2010


If you only have vanilla OpenSM instances, then everything is fine: they negotiate among themselves so that only one is master, and status reports correctly.  The problem is in mixed environments, since the vendors only test in their own environment.  You need to decide which SM to use and explicitly disable all the others.
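A quick way to verify is sminfo from infiniband-diags; the GUID, LIDs and
counts below are only placeholders:

    # Query the master SM via the local port
    $ sminfo
    sminfo: sm lid 6 sm guid 0x0002c90200123456, activity count 4721 priority 14 state 3 SMINFO_MASTER
    # Ask a specific node (e.g. LID 3) whether it is also running an SM
    $ sminfo 3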

Richard

----- Reply message -----
From: "Ira Weiny" <weiny2 at llnl.gov>
Date: Wed, Jun 30, 2010 19:56
Subject: [ewg] Infiniband Interoperability
To: "richard at informatix-sol.com" <richard at informatix-sol.com>
Cc: "ewg at lists.openfabrics.org" <ewg at lists.openfabrics.org>, "Matt Breitbach" <matthewb at flash.shanje.com>


On Wed, 30 Jun 2010 11:13:50 -0700
"richard at informatix-sol.com" <richard at informatix-sol.com> wrote:

> I'm still suspicious that you have more than one SM running. Mellanox switches have it enabled by default.
> It's common for ARP requests, such as those triggered by ping, to result in multicast group activity.
> InfiniBand creates these groups on demand and tears them down when there are no current members. There is no broadcast address; a dedicated MC group is used instead.
> They all seem to originate from LID 6, so you can trace the source.
> 
> If you have ports at non-optimal speeds, try toggling their enable state. This often fixes it; see the sketch below.
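> For example, with ibportstate from infiniband-diags (the LID and port here
> are just placeholders):
> 
>     # Check what the link trained at
>     $ ibportstate 3 19 query
>     # Bounce the port so the link retrains
>     $ ibportstate 3 19 disable
>     $ ibportstate 3 19 enable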

One other way of checking for SMs is to use the console in OpenSM.  The "status" command will list the SMs it sees and which one is currently master.
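If the console isn't enabled, it's an opensm.conf option (these names are
from memory of the OpenSM 3.x defaults, so double-check your version):

    # /etc/opensm/opensm.conf
    # "local" attaches the console to opensm's terminal; "socket" listens
    # on TCP (port 10000 by default), reachable with e.g. telnet
    console local
    #console_port 10000

At the prompt, "status" shows the SMs and which is master; "help" lists the
other commands.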

As for the network config, could you send the iblinkinfo output?  I would be curious to see it.
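In case it helps, iblinkinfo prints one line per link with the width and
speed it trained at; the output looks roughly like this (mocked up, not
from your fabric):

    Switch 0x0002c90200000001 "InfiniScale-III Mellanox Technologies":
       3    1[  ]  ==( 4X 5.0 Gbps Active /   LinkUp)==>   4    1[  ] "blade01 HCA-1" ( )
       3    2[  ]  ==( 4X 2.5 Gbps Active /   LinkUp)==>   6    1[  ] "blade02 HCA-1" ( )

A 4X 2.5 Gbps link on DDR-capable hardware would be the kind of suboptimal
speed Richard mentioned.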

Thanks,
Ira

> 
> Richard
> 
> ----- Reply message -----
> From: "Matt Breitbach" <matthewb at flash.shanje.com>
> Date: Wed, Jun 30, 2010 15:33
> Subject: [ewg] Infiniband Interoperability
> To: <ewg at lists.openfabrics.org>
> 
> Well, let me lay out a little about the environment:
> 
> We are running one SuperMicro 4U system with a Mellanox InfiniHost III EX
> card w/ 128MB RAM.  This box is the OpenSolaris box; it's running the
> OpenSolaris InfiniBand stack, but no SM.  Both of its ports are cabled to
> ports 1 and 2 of the IB switch.
> 
> The other systems are in a SuperMicro BladeCenter.  The switch in the
> BladeCenter is an InfiniScale III switch with 10 internal ports and 10
> external ports.
> 
> Three blades are connected with Mellanox ConnectX Mezzanine cards.  One
> blade is connected with an InfiniHost III EX Mezzanine card.
> 
> One of the blades is running CentOS and the 1.5.1 OFED release.  OpenSM is
> running on that system, and is the only SM running on the network.  This
> blade is using a ConnectX Mezzanine card.
> 
> One blade is running Windows 2008 with the latest OFED drivers installed.
> It is using an InfiniHost III EX Mezzanine card.
> 
> One blade is running Windows 2008 R2 with the latest OFED drivers installed.
> It is using a ConnectX Mezzanine card.
> 
> One blade has been switching between Windows 2008 R2 and CentOS with Xen.
> Under Windows it runs the latest OFED drivers; under CentOS it runs the
> 1.5.2 RC2.  That blade is using a ConnectX Mezzanine card.
> 
> All of the firmware has been updated on the Mezzanine cards, the PCI-E
> InfiniHost III EX card, and the switch.  All of the Windows boxes are
> configured to use Connected mode.  I have not changed any other settings on
> the Linux boxes.
> 
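> (Aside: on the Linux side, the IPoIB mode is per interface under sysfs;
> a sketch, assuming the interface is ib0:)
> 
>     # Check the current IPoIB mode
>     $ cat /sys/class/net/ib0/mode
>     datagram
>     # Switch to connected mode
>     $ echo connected > /sys/class/net/ib0/mode
> 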
> As of right now, the network seems stable.  I've been running pings for the
> last 12 hours, and nothing has dropped.
> 
> I did notice some odd entries in the OpenSM log, though, that I do not
> believe belong there:
> 
> Jun 30 06:56:26 832438 [B5723B90] 0x02 -> log_notice: Reporting Generic Notice type:3 num:67 (Mcast group deleted) from LID:6 GID:ff12:1405:ffff::3333:1:2
> Jun 30 06:57:53 895990 [B5723B90] 0x02 -> log_notice: Reporting Generic Notice type:3 num:66 (New mcast group created) from LID:6 GID:ff12:1405:ffff::3333:1:2
> Jun 30 07:18:06 770861 [B6124B90] 0x02 -> log_notice: Reporting Generic Notice type:3 num:67 (Mcast group deleted) from LID:6 GID:ff12:1405:ffff::3333:1:2
> Jun 30 07:19:14 835273 [B5723B90] 0x02 -> log_notice: Reporting Generic Notice type:3 num:66 (New mcast group created) from LID:6 GID:ff12:1405:ffff::3333:1:2
> 
> I would not think that mcast groups should be created or deleted when no
> new adapters are being added to the network, especially in a network this
> small.  Is it odd to see those messages?
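> 
> (Those GIDs can be cross-checked against the SA's view.  The 3333:1:2 tail
> looks like a mapped IPv6 multicast MAC (33:33:...), which would fit
> transient neighbour-discovery traffic rather than new adapters joining.
> A sketch with saquery from infiniband-diags:)
> 
>     # List the multicast groups the SA currently knows about
>     $ saquery -g
>     # List the members joined to each group
>     $ saquery -m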
> 
> Also, I get a warning when I run ibdiagnet: "Suboptimal rate for group.
> Lowest member rate: 20Gbps > group-rate: 10gbps" (see the note below).
> 
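> (Note: that warning usually means the group, typically the IPoIB broadcast
> group, was created at the default 10 Gbps rate while every member port
> links at 20 Gbps, so group traffic is capped below what the members could
> do.  If so, one common approach, sketched from the stock opensm partition
> syntax (worth verifying against your opensm version's docs), is to raise
> the rate in /etc/opensm/partitions.conf and restart opensm; rate=3 means
> 10 Gbps, rate=6 means 20 Gbps:)
> 
>     # /etc/opensm/partitions.conf
>     # Default partition with IPoIB, group rate raised from SDR to DDR
>     Default=0x7fff, ipoib, rate=6 : ALL=full;
> 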
> I also have a few things that I'm concerned about in the "PM Counters Info"
> section of ibdiagnet, as follows:
> 
> -W- lid=0x0003 guid=0x003048ffffa12591 dev=47396 Port=1
>      Performance Monitor counter     : Value
>      symbol_error_counter            : 0xffff (overflow)
> -W- lid=0x0004 guid=0x0002c9020029a492 dev=25208 MT25208/P2
>      Performance Monitor counter     : Value
>      symbol_error_counter            : 0xffff (overflow)
> -W- lid=0x0003 guid=0x003048ffffa12591 dev=47396 Port=18
>      Performance Monitor counter     : Value
>      symbol_error_counter            : 0xffff (overflow)
>      port_xmit_constraint_errors     : 0xff (overflow)
> -W- lid=0x0003 guid=0x003048ffffa12591 dev=47396 Port=19
>      Performance Monitor counter     : Value
>      symbol_error_counter            : 0xffff (overflow)
> 
> I'm not sure whether those counters are bad, or whether they point to any
> sort of problem, but that's what I'm seeing.
> 
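> (Overflowed counters say little on their own, since they may have been
> accumulating since the last reset.  A common next step, sketched here with
> perfquery from infiniband-diags using the LID/port from the warnings above,
> is to clear them and watch whether they climb again:)
> 
>     # Read and then reset all counters on LID 3, port 19
>     $ perfquery -R 3 19
>     # Re-check after running traffic for a while; a symbol error count that
>     # climbs quickly usually points at a marginal cable or connector
>     $ perfquery 3 19
> 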
> Hopefully this gives a clear enough picture of the environment.