[ewg] Infiniband Interoperability
richard at informatix-sol.com
Wed Jun 30 23:04:45 PDT 2010
This looks consistent across multiple systems. I'd suspect that some parts are only rated at 10Gb/s, particularly if you are using CX4 cables.
It all looks like pretty old stock you are trying to use. We used to have lots of cable issues in the past; I rarely see them now with modern cables, even at 40Gb/s.
Reset your fabric counters so you can see the rate at which they increase. High rates of symbol errors will cause the interfaces to downgrade. I have start-of-day scripts that do this across all switches in my 6 clusters.
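For example, a minimal start-of-day sketch (assuming the infiniband-diags utilities are installed on a node with access to the fabric) would be:

  # clear the error counters (symbol errors, link downs, etc.) on every port in the fabric
  ibclearerrors
  # clear the traffic counters as well, so any increase is easy to spot
  ibclearcounters

Once the counters are zeroed, ibcheckerrors (or perfquery against a specific LID and port) will show which links are accumulating symbol errors.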
Richard
----- Reply message -----
From: "Matt Breitbach" <matthewb at flash.shanje.com>
Date: Wed, Jun 30, 2010 20:09
Subject: [ewg] Infiniband Interoperability
To: "'Ira Weiny'" <weiny2 at llnl.gov>, <richard at informatix-sol.com>
Cc: <ewg at lists.openfabrics.org>
Switch 0x003048ffffa12591 MT47396 Infiniscale-III Mellanox Technologies:
3 1[ ] ==( 4X 5.0 Gbps Active/ LinkUp)==> 1 1[ ]
"MT25208 InfiniHostEx Mellanox Technologies" ( )
3 2[ ] ==( 4X 5.0 Gbps Active/ LinkUp)==> 4 2[ ]
"MT25208 InfiniHostEx Mellanox Technologies" ( )
3 3[ ] ==( 4X 2.5 Gbps Down/ Polling)==> [ ]
"" ( )
3 4[ ] ==( 4X 2.5 Gbps Down/ Polling)==> [ ]
"" ( )
3 5[ ] ==( 4X 2.5 Gbps Down/ Polling)==> [ ]
"" ( )
3 6[ ] ==( 4X 2.5 Gbps Down/ Polling)==> [ ]
"" ( )
3 7[ ] ==( 4X 2.5 Gbps Down/ Polling)==> [ ]
"" ( )
3 8[ ] ==( 4X 2.5 Gbps Down/ Polling)==> [ ]
"" ( )
3 9[ ] ==( 4X 2.5 Gbps Down/ Polling)==> [ ]
"" ( )
3 10[ ] ==( 4X 2.5 Gbps Down/ Polling)==> [ ]
"" ( )
3 11[ ] ==( 4X 5.0 Gbps Active/ LinkUp)==> 7 1[ ]
"MT25218 InfiniHostEx Mellanox Technologies" ( )
3 12[ ] ==( 4X 2.5 Gbps Down/ Polling)==> [ ]
"" ( )
3 13[ ] ==( 4X 2.5 Gbps Down/ Polling)==> [ ]
"" ( )
3 14[ ] ==( 4X 2.5 Gbps Down/ Polling)==> [ ]
"" ( )
3 15[ ] ==( 4X 2.5 Gbps Down/ Polling)==> [ ]
"" ( )
3 16[ ] ==( 4X 2.5 Gbps Down/ Polling)==> [ ]
"" ( )
3 17[ ] ==( 4X 2.5 Gbps Down/ Polling)==> [ ]
"" ( )
3 18[ ] ==( 4X 5.0 Gbps Active/ LinkUp)==> 6 1[ ]
"ibcontrol HCA-1" ( )
3 19[ ] ==( 4X 5.0 Gbps Active/ LinkUp)==> 2 1[ ]
"xen1 HCA-1" ( )
3 20[ ] ==( 4X 5.0 Gbps Active/ LinkUp)==> 5 1[ ]
"MT25408 ConnectX Mellanox Technologies" ( )
3 21[ ] ==( 4X 2.5 Gbps Down/ Polling)==> [ ]
"" ( )
3 22[ ] ==( 4X 2.5 Gbps Down/ Polling)==> [ ]
"" ( )
3 23[ ] ==( 4X 2.5 Gbps Down/ Polling)==> [ ]
"" ( )
3 24[ ] ==( 4X 2.5 Gbps Down/ Polling)==> [ ]
"" ( )
-----Original Message-----
From: Ira Weiny [mailto:weiny2 at llnl.gov]
Sent: Wednesday, June 30, 2010 1:57 PM
To: richard at informatix-sol.com
Cc: Matt Breitbach; ewg at lists.openfabrics.org
Subject: Re: [ewg] Infiniband Interoperability
On Wed, 30 Jun 2010 11:13:50 -0700
"richard at informatix-sol.com" <richard at informatix-sol.com> wrote:
> I'm still suspicious that you have more than one SM running. Mellanox
> switches have it enabled by default.
> It's common that ARP requests, such as those caused by ping, will result in
> multicast group activity.
> InfiniBand creates these on demand and tears them down if there are no
> current members. There is no broadcast address; a dedicated MC group is used
> instead.
> They all seem to originate from LID 6, so you can trace the source.
>
> If you have ports at non-optimal speeds, try toggling their enable state.
> This often fixes it.
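For reference, a port's enable state can be toggled with ibportstate from infiniband-diags; the switch LID (3) and port number (11) below are placeholders only:

  ibportstate 3 11 disable   # take the port down
  ibportstate 3 11 enable    # bring it back up and renegotiate
  ibportstate 3 11 query     # confirm the resulting width and speed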
One other way of checking for SMs is to use the console in OpenSM. The
"status" command will list the SMs it sees and which one is currently master.
As for the network config, could you send the iblinkinfo output? I would be
curious to see it.
Thanks,
Ira
>
> Richard
>
> ----- Reply message -----
> From: "Matt Breitbach" <matthewb at flash.shanje.com>
> Date: Wed, Jun 30, 2010 15:33
> Subject: [ewg] Infiniband Interoperability
> To: <ewg at lists.openfabrics.org>
>
> Well, let me throw out a little about the environment :
>
>
>
> We are running one SuperMicro 4U system with a Mellanox InfiniHost III EX
> card w/ 128MB RAM. This box is the OpenSolaris box. It's running the
> OpenSolaris InfiniBand stack, but no SM. Both ports are cabled to ports 1
> and 2 of the IB switch.
>
>
>
> The other systems are in a SuperMicro BladeCenter. The switch in the
> BladeCenter is an InfiniScale III switch with 10 internal ports and 10
> external ports.
>
>
>
> Three blades are connected with Mellanox ConnectX Mezzanine cards. One blade
> is connected with an InfiniHost III EX Mezzanine card.
>
>
>
> One of the blades is running CentOS and the 1.5.1 OFED release. OpenSM is
> running on that system, and is the only SM running on the network. This
> blade is using a ConnectX Mezzanine card.
>
>
>
> One blade is running Windows 2008 with the latest OFED drivers installed.
> It is using an InfiniHost III EX Mezzanine card.
>
>
>
> One blade is running Windows 2008 R2 with the latest OFED drivers installed.
> It is using a ConnectX Mezzanine card.
>
>
>
> One blade has been switching between Windows 2008 R2 and CentOS with Xen.
> Windows 2008 is running the latest OFED drivers, and CentOS is running the
> 1.5.2 RC2. That blade is using a ConnectX Mezzanine card.
>
>
>
> All of the firmware has been updated on the Mezzanine cards, the PCI-E
> InfiniHost III EX card, an