[ofa-general] Bandwidth of performance with multirail IB

Peter Kjellstrom cap at nsc.liu.se
Tue Feb 24 00:41:53 PST 2009


On Tuesday 24 February 2009, Jie Cai wrote:
> I have implemented a uDAPL program to measure the bandwidth on IB with
> multirail connections.
>
> The HCA used in the cluster is Mellanox ConnectX HCA. Each HCA has two
> ports.
>
> The program utilizes the two ports on each node of the cluster to build
> multirail IB connections.
>
> The peak bandwidth I can get is ~1.3 GB/s (not bi-directional), which is
> almost the same as with a single-rail connection.

Assuming you have a 2.5 GT/s PCI-Express x8 slot, that speed is the result of 
the bus not being able to keep up with the HCA. Since the bus is already 
holding a single DDR IB port back, you see no improvement from adding a second 
port.

To fully drive a DDR IB port you need either x16 PCI-Express at 2.5 GT/s or an 
x8 at 5 GT/s. For one QDR port or two DDR ports you'll need even more...
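
For reference, a rough back-of-envelope sketch of the link rates involved 
(assuming 8b/10b encoding on both 2.5/5 GT/s PCI-Express and SDR/DDR/QDR IB, 
and treating PCIe packet/protocol overhead as a rough 20-25% haircut; the 
exact overhead depends on MaxPayload size and the chipset):

/* Back-of-envelope link-rate sketch; numbers are theoretical, not measured. */
#include <stdio.h>

/* GT/s per lane times lane count gives raw Gbit/s; with 8b/10b encoding
 * 10 bits on the wire carry one data byte, so dividing by 10 gives GB/s. */
static double link_gbytes(double gt_per_s, int lanes)
{
    return gt_per_s * lanes / 10.0;
}

int main(void)
{
    printf("PCIe 2.5 GT/s x8       : %.1f GB/s (minus ~20-25%% overhead: ~1.5 GB/s)\n",
           link_gbytes(2.5, 8));
    printf("IB DDR 4x, one port    : %.1f GB/s\n", link_gbytes(5.0, 4));
    printf("IB DDR 4x, two ports   : %.1f GB/s\n", 2 * link_gbytes(5.0, 4));
    printf("PCIe 5 GT/s x8 or 2.5 GT/s x16: %.1f GB/s\n", link_gbytes(5.0, 8));
    printf("IB QDR 4x, one port    : %.1f GB/s\n", link_gbytes(10.0, 4));
    return 0;
}

So a single DDR 4x port can already deliver about 2 GB/s of data, which a 
2.5 GT/s x8 slot cannot absorb; two DDR ports or one QDR port want roughly 
4 GB/s, i.e. at least a 5 GT/s x8 or 2.5 GT/s x16 slot.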

/Peter

> Does anyone have similar experience?