[ofa-general] Bandwidth performance with multirail IB

Jie Cai Jie.Cai at cs.anu.edu.au
Tue Feb 24 16:44:08 PST 2009


Peter Kjellstrom wrote:
> On Tuesday 24 February 2009, Jie Cai wrote:
>   
>> I have implemented a uDAPL program to measure the bandwidth on IB with
>> multirail connections.
>>
>> The HCAs used in the cluster are Mellanox ConnectX HCAs. Each HCA has two
>> ports.
>>
>> The program utilizes the two ports on each node of the cluster to build
>> multirail IB connections.
>>
>> The peak bandwidth I can get is ~1.3 GB/s (not bi-directional), which
>> is almost the same as with a single-rail connection.
>>     
>
> Assuming you have a 2.5 GT/s PCI-Express x8, that speed is a result of the
> bus not being able to keep up with the HCA. Since the bus is holding back
> even a single DDR IB port, you see no improvement with two ports.
>
>   
I do connect the HCA to an x16 PCIe slot on each node.
However, I am trying to drive the two ports simultaneously.

The workstation I am using is a Sun Ultra 24,
and the HCA is a Mellanox ConnectX MHGH28-XTC.
The relevant data for the HCA and the Ultra 24 are:

MHGH28-XTC:
IB ports: Dual Copper 4X 20 Gb/s
Host Bus: PCIe 2.0, 2.5 GT/s

Ultra 24 workstation:

1333 MHz front-side bus with DDR2 memory support (up to 10.67 GB/s of
bandwidth)
PCI Express Slots

    * Two full-length x16 Gen-2 slots (the HCA is connected to one of these)
    * One full-length x8 slot
    * One full-length x1 slot

So the bottleneck may not be in the bus.
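
For reference, here is the back-of-the-envelope arithmetic I am working
from. It is only a rough sketch: it assumes 8b/10b encoding on the IB DDR
links and ignores all protocol overhead, so real achievable rates will be
somewhat lower.

    /* Upper-bound data rates for the two 4X DDR ports on the HCA.
     * 8b/10b encoding means 80% of the signalling rate carries data;
     * protocol overhead is ignored, so these are optimistic numbers.
     */
    #include <stdio.h>

    int main(void)
    {
        double lane_gbps = 5.0;              /* DDR: 5 Gb/s signalling per lane */
        double port_gbps = 4 * lane_gbps;    /* 4X link -> 20 Gb/s signalling   */
        double port_data = port_gbps * 0.8;  /* 8b/10b  -> 16 Gb/s of data      */
        double port_GBps = port_data / 8.0;  /* = 2.0 GB/s per port             */
        double two_rails = 2 * port_GBps;    /* = 4.0 GB/s for both ports       */

        printf("one 4X DDR port : %.1f GB/s\n", port_GBps);
        printf("two 4X DDR ports: %.1f GB/s\n", two_rails);
        printf("measured peak   : ~1.3 GB/s (uni-directional)\n");
        return 0;
    }

Even the 4 GB/s aggregate for both ports is well below the 10.67 GB/s
memory bandwidth quoted above for the Ultra 24.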


> To fully drive a DDR IB port you need either x16 PCI-Express at 2.5 GT/s or
> x8 at 5 GT/s. For one QDR port or two DDR ports you'll need even more...
>
>   

The PCIe slot in the Ultra 24 is PCI Express Gen2 x16. The data transfer
rate is 5 GT/s per lane.
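
Assuming the link actually trains at the full x16 width and at 5 GT/s (note
that the HCA data above lists its host bus as PCIe 2.0 at 2.5 GT/s, so the
negotiated speed is an assumption here), the raw PCIe numbers work out as in
this sketch, again with 8b/10b encoding and no protocol overhead:

    /* Raw PCIe data rates for the link configurations discussed above.
     * Both 2.5 GT/s and 5.0 GT/s PCIe use 8b/10b encoding, so the usable
     * data rate is 80% of the signalling rate; overhead is ignored.
     */
    #include <stdio.h>

    static double pcie_GBps(double gts_per_lane, int lanes)
    {
        return gts_per_lane * 0.8 * lanes / 8.0;   /* GB/s of raw data */
    }

    int main(void)
    {
        printf("x8  at 2.5 GT/s: %.1f GB/s\n", pcie_GBps(2.5,  8));  /* 2.0 */
        printf("x16 at 2.5 GT/s: %.1f GB/s\n", pcie_GBps(2.5, 16));  /* 4.0 */
        printf("x8  at 5.0 GT/s: %.1f GB/s\n", pcie_GBps(5.0,  8));  /* 4.0 */
        printf("x16 at 5.0 GT/s: %.1f GB/s\n", pcie_GBps(5.0, 16));  /* 8.0 */
        return 0;
    }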

Will this be sufficient to drive the two DDR ports on the MHGH28-XTC?

Or are there any other possible reasons?
> /Peter
>
>   
>> Does anyone have similar experience?
>>     


