[Users] Question about InfiniBand Bond Conf Ubuntu BLc & Topology
German Anders
ganders at despegar.com
Fri Nov 13 03:58:24 PST 2015
Hi all,
I'm really new to IB and I'd like to ask about a possible configuration for a
Mellanox Technologies MT26428 [ConnectX VPI PCIe 2.0 5GT/s - IB
QDR / 10GigE] mezzanine card with two ports:
# ibstat
CA 'mlx4_0'
        CA type: MT26428
        Number of ports: 2
        Firmware version: 2.9.1530
        Hardware version: b0
        Node GUID: 0xf452140300dd3294
        System image GUID: 0xf452140300dd3297
        Port 1:
                State: Active
                Physical state: LinkUp
                Rate: 40
                Base lid: 32
                LMC: 0
                SM lid: 2
                Capability mask: 0x02510868
                Port GUID: 0xf452140300dd3295
                Link layer: InfiniBand
        Port 2:
                State: Active
                Physical state: LinkUp
                Rate: 40
                Base lid: 33
                LMC: 0
                SM lid: 2
                Capability mask: 0x02510868
                Port GUID: 0xf452140300dd3296
                Link layer: InfiniBand
on an HP Blade system with Ubuntu 14.04, kernel 3.13.0-63-generic. Are there
any special considerations for creating a bond interface here that uses both
ports 1 and 2? Are there any configuration examples out there to share, or is
there a better way to use the two ports? The scheme is like this:
[image: Inline image 1 (topology diagram): <http://lists.openfabrics.org/pipermail/users/attachments/20151113/908fb375/attachment.png>]
Any advice regarding this configuration? Improvements and recommendations
would be really appreciated, and if a different topology would work better
than this one, please point that out too. Also note that the correct info for
the Blade servers is (Ubuntu + Mellanox Technologies MT26428).
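For reference, here is a minimal, untested sketch of what I was planning to
try, assuming the two ports show up as ib0 and ib1 and that the ifenslave
package is installed; the address is just a placeholder. As far as I
understand, active-backup is the only bonding mode supported over IPoIB, so
this would give failover rather than aggregated bandwidth:

# /etc/network/interfaces (fragment) - untested sketch
# Slave interfaces: no address of their own, just point at the bond
auto ib0
iface ib0 inet manual
        bond-master bond0

auto ib1
iface ib1 inet manual
        bond-master bond0

# Bond interface carrying the IPoIB address
auto bond0
iface bond0 inet static
        address 192.168.100.10   # placeholder address
        netmask 255.255.255.0
        bond-slaves none         # slaves are declared via bond-master above
        bond-mode active-backup  # only mode supported for IPoIB slaves, AFAIK
        bond-miimon 100
        bond-primary ib0

After bringing it up (ifup bond0), the bond state can be checked with:

        cat /proc/net/bonding/bond0

Does that look reasonable, or am I missing something IB-specific here?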
Thanks in advance,
Best,
*German*