[Users] Troubles with InfiniBand QDR MT26428 VPI and VMware ESXi 5.5

German Anders ganders at despegar.com
Wed Dec 9 05:58:35 PST 2015


Hi all,

I'm having some trouble with the installation and configuration of InfiniBand
on ESXi 5.5 on an HP blade enclosure with QDR:


/opt # *esxcfg-nics --list*
Name       PCI            Driver   Link  Speed      Duplex  MAC Address        MTU   Description
vmnic0     0000:04:00.00  elxnet   Up    10000Mbps  Full    9c:b6:54:74:ce:0c  1500  Emulex Corporation HP NC551i Dual Port FlexFabric 10Gb Converged Network Adapter
vmnic1     0000:04:00.01  elxnet   Up    10000Mbps  Full    9c:b6:54:74:ce:10  1500  Emulex Corporation HP NC551i Dual Port FlexFabric 10Gb Converged Network Adapter
vmnic2     0000:05:00.00  elxnet   Up    10000Mbps  Full    9c:b6:54:74:ce:14  1500  Emulex Corporation HP NC551i Dual Port FlexFabric 10Gb Converged Network Adapter
vmnic3     0000:05:00.01  elxnet   Up    10000Mbps  Full    9c:b6:54:74:ce:18  1500  Emulex Corporation HP NC551i Dual Port FlexFabric 10Gb Converged Network Adapter

*vmnic_ib0  0000:41:00.00           Down  0Mbps      Half    f4:52:14:dd:30:65  1500  Mellanox Technologies MT26428 [ConnectX VPI - 10GigE / IB QDR, PCIe 2.0 5GT/s]*
*vmnic_ib1  0000:41:00.00           Down  0Mbps      Half    f4:52:14:dd:30:66  1500  Mellanox Technologies MT26428 [ConnectX VPI - 10GigE / IB QDR, PCIe 2.0 5GT/s]*


/opt # *esxcli network nic get --nic-name=vmnic_ib0*
   Advertised Auto Negotiation: false
   Advertised Link Modes: 10000baseT/Full
   Auto Negotiation: false
   Cable Type: FIBRE
   Current Message Level: 0
   Driver Info:
         Bus Info: 0000:41:00.0
         Driver: ib_ipoib
         Firmware Version: 2.9.1530
         Version: 1.8.2.0
   Link Detected: false
   Link Status: Down
   Name: vmnic_ib0
   PHYAddress: 0
   Pause Autonegotiate: false
   Pause RX: false
   Pause TX: false
   Supported Ports:
   Supports Auto Negotiation: false
   Supports Pause: true
   Supports Wakeon: false
   Transceiver: internal
   Wakeon: None
/opt #


/opt # *esxcli software vib list | grep Mellanox*
net-ib-cm        1.8.2.0-1OEM.500.0.0.472560   Mellanox  PartnerSupported  2015-12-03
net-ib-core      1.8.2.0-1OEM.500.0.0.472560   Mellanox  PartnerSupported  2015-12-03
net-ib-ipoib     1.8.2.0-1OEM.500.0.0.472560   Mellanox  PartnerSupported  2015-12-03
net-ib-mad       1.8.2.0-1OEM.500.0.0.472560   Mellanox  PartnerSupported  2015-12-03
net-ib-sa        1.8.2.0-1OEM.500.0.0.472560   Mellanox  PartnerSupported  2015-12-03
net-ib-umad      1.8.2.0-1OEM.500.0.0.472560   Mellanox  PartnerSupported  2015-12-03
net-mlx4-core    1.8.2.0-1OEM.500.0.0.472560   Mellanox  PartnerSupported  2015-12-03
net-mlx4-ib      1.8.2.0-1OEM.500.0.0.472560   Mellanox  PartnerSupported  2015-12-03
scsi-ib-srp      1.8.2.0-1OEM.500.0.0.472560   Mellanox  PartnerSupported  2015-12-03


Does anyone know what the problem could be, or can anyone send me some hints
on how to bring these connections up? The IBSW in the enclosure works fine,
since other blades in the same enclosure run Ubuntu and their ports are up
and running, but the blade running ESXi 5.5 is not working at all. Any
advice would be really appreciated.
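
If it helps, I can also collect the following output and post it here. These
are just the standard ESXi 5.5 diagnostics I know of (module state for the
mlx4/ib drivers, the mlx4_core parameters, and any mlx4 messages in the
vmkernel log); let me know if something else would be more useful:

/opt # *esxcli system module list | grep -E 'mlx4|ib_'*
/opt # *esxcli system module parameters list -m mlx4_core*
/opt # *grep -i mlx4 /var/log/vmkernel.log | tail -n 50*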

Thanks in advance,

*German*