[ofa-general] Bandwidth of performance with multirail IB

Jie Cai Jie.Cai at cs.anu.edu.au
Thu Feb 26 22:29:47 PST 2009


Hi Peter,

A question on implementing multi-rail connections with uDAPL.

What I did is open two IAs (corresponding to the two ports on the HCA) on 
each node, then create one EP for each IA and connect each EP to the 
corresponding EP on the other node.

Data is then transferred over both EP connections.
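
In outline, the per-rail setup looks like this (a simplified sketch of 
what I do, not my exact code; the device names, queue sizes and the 
out-of-band exchange of remote addresses are illustrative):

    #include <dat/udat.h>

    /* One rail = one IA (one HCA port) + one EP.  The device names follow
     * the OFED uDAPL naming convention and may differ on other systems. */
    static char *rails[2] = { "ofa-v2-mlx4_0-1", "ofa-v2-mlx4_0-2" };

    /* Open one rail and initiate a connection to the peer's matching EP.
     * The remote IA address and connection qualifier are assumed to have
     * been exchanged out of band (e.g. over a TCP socket). */
    static DAT_RETURN open_rail(char *dev, DAT_IA_ADDRESS_PTR remote,
                                DAT_CONN_QUAL conn_qual,
                                DAT_EP_HANDLE *ep_out)
    {
        DAT_IA_HANDLE  ia;
        DAT_EVD_HANDLE async_evd = DAT_HANDLE_NULL;
        DAT_EVD_HANDLE dto_evd, conn_evd;
        DAT_PZ_HANDLE  pz;
        DAT_RETURN     ret;

        ret = dat_ia_open(dev, 8, &async_evd, &ia);
        if (ret != DAT_SUCCESS)
            return ret;

        dat_pz_create(ia, &pz);
        dat_evd_create(ia, 64, DAT_HANDLE_NULL, DAT_EVD_DTO_FLAG, &dto_evd);
        dat_evd_create(ia, 8, DAT_HANDLE_NULL, DAT_EVD_CONNECTION_FLAG,
                       &conn_evd);

        /* recv EVD, request EVD, connect EVD */
        ret = dat_ep_create(ia, pz, dto_evd, dto_evd, conn_evd, NULL, ep_out);
        if (ret != DAT_SUCCESS)
            return ret;

        /* I pass DAT_CONNECT_DEFAULT_FLAG here; DAT_CONNECT_MULTIPATH_FLAG
         * is the flag I ask about below. */
        return dat_ep_connect(*ep_out, remote, conn_qual,
                              DAT_TIMEOUT_INFINITE, 0, NULL,
                              DAT_QOS_BEST_EFFORT, DAT_CONNECT_DEFAULT_FLAG);
    }

The passive side creates one PSP per IA (dat_psp_create) and accepts each 
incoming request with dat_cr_accept; both EPs are then driven concurrently 
and the bandwidth summed over the two connections.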

I have noticed that there is a MULTIPATH connection flag in DAPL, but I 
did not use it. What is it for?

Cheers,
Jie


-- 
Mr. Jie Cai




Peter Kjellstrom wrote:
> On Tuesday 24 February 2009, Jie Cai wrote:
>   
>> I have implemented a uDAPL program to measure the bandwidth on IB with
>> multirail connections.
>>
>> The HCAs used in the cluster are Mellanox ConnectX HCAs. Each HCA has
>> two ports.
>>
>> The program utilizes the two ports on each node of the cluster to build
>> multirail IB connections.
>>
>> The peak bandwidth I can get is ~1.3 GB/s (uni-directional), which is
>> almost the same as a single-rail connection.
>>     
>
> Assuming you have a 2.5 GT/s PCI Express x8 slot, that speed is a result of 
> the bus not being able to keep up with the HCA. Since the bus holds back 
> even a single DDR IB port, you see no improvement with two ports.
>
> To fully drive a DDR IB port you need either x16 PCI Express at 2.5 GT/s or 
> x8 at 5 GT/s. For one QDR port or two DDR ports you'll need even more...
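>
> (Rough numbers: a 2.5 GT/s x8 slot carries 8 x 2.5 GT/s = 20 Gbit/s raw, 
> i.e. 16 Gbit/s = 2 GB/s after 8b/10b encoding, and PCIe protocol overhead 
> brings the achievable rate down to roughly 1.3-1.6 GB/s. A 4x DDR IB port 
> alone carries 16 Gbit/s = 2 GB/s of data, so a single port already 
> saturates the bus.)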
>
> /Peter
>
>   
>> Does anyone have similar experience?
>>     


