[openib-general] Multi-port HCA
Bernard King-Smith
wombat2 at us.ibm.com
Tue Oct 3 07:35:47 PDT 2006
John,
Whose adapter (which manufacturer) are you using? It is usually an adapter
implementation or driver issue that occurs when you cannot scale across
multiple links. The fact that you don't scale up from one link, and instead
appear to share a fixed amount of bandwidth across N links, suggests a
driver or stack issue. At one time I think that IPoIB and maybe other IB
drivers used only one event queue across multiple links, which would be a
bottleneck. We added code to the IBM EHCA driver to get around this
bottleneck.
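A toy illustration (plain Python, nothing like the actual driver code) of why
a single shared event queue is a bottleneck: with one queue, every port's
completions funnel through one handler; with a queue per port, handling can
be split across handlers. The port names and event counts here are invented
for the sketch, and CPython's GIL means this shows only the structural
difference, not a real speedup.

```python
import queue
import threading

PORTS = ["port1", "port2"]
EVENTS_PER_PORT = 1000

# --- Shared design: all ports post completions to ONE event queue ---
shared_q = queue.Queue()
for port in PORTS:
    for i in range(EVENTS_PER_PORT):
        shared_q.put((port, i))

handled_shared = 0
while not shared_q.empty():
    shared_q.get()          # a single consumer drains everything serially
    handled_shared += 1

# --- Per-port design: each port gets its OWN queue and its own handler ---
per_port_q = {port: queue.Queue() for port in PORTS}
for port in PORTS:
    for i in range(EVENTS_PER_PORT):
        per_port_q[port].put((port, i))

handled = {}

def drain(port):
    n = 0
    while not per_port_q[port].empty():
        per_port_q[port].get()
        n += 1
    handled[port] = n

threads = [threading.Thread(target=drain, args=(p,)) for p in PORTS]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(handled_shared)        # all 2000 events went through one handler
print(sum(handled.values())) # same 2000 events, split across handlers
```

Both designs process the same events; the difference is that the shared
queue serializes completion processing on one handler (one core), which
caps aggregate throughput no matter how many links are active.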
Are your measurements using MPI or IP? If IP, are you using separate
tasks/sockets per link, with each link on a different subnet?
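For IPoIB in particular, a common pitfall is putting both links' interfaces
on the same IP subnet, in which case the kernel routes all traffic out one
interface. A quick sanity check (the interface names and addresses below are
made-up examples, not taken from John's setup) is that each link's interface
sits on its own subnet:

```python
import ipaddress

# Hypothetical IPoIB interface addresses -- one per IB link.
# Substitute the actual addresses of the ib0/ib1 interfaces.
links = {
    "ib0": "10.10.1.1/24",
    "ib1": "10.10.2.1/24",
}

networks = {name: ipaddress.ip_interface(addr).network
            for name, addr in links.items()}

# If every interface is on a distinct subnet, traffic bound to each
# local address can be steered over its own physical link.
distinct = len(set(networks.values())) == len(networks)
print(networks["ib0"], networks["ib1"], distinct)
```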
Bernie King-Smith
IBM Corporation
Server Group
Cluster System Performance
wombat2 at us.ibm.com (845)433-8483
Tie. 293-8483 or wombat2 on NOTES
"We are not responsible for the world we are born into, only for the world
we leave when we die.
So we have to accept what has gone before us and work to change the only
thing we can,
-- The Future." William Shatner
"john t" <johnt1johnt2 at gmail.com> wrote on 10/03/2006 09:42:24 AM:
>
> Hi,
>
> I have two HCA cards, each having two ports and each connected to a
> separate PCI-E x8 slot.
>
> Using one HCA port I get end-to-end BW of 11.6 Gb/sec (uni-directional
> RDMA).
> If I use two ports of the same HCA or of different HCAs, I get between 5
> and 6.5 Gb/sec point-to-point BW on each port. BW on each port
> further reduces if I use more ports. I am not able to understand
> this behaviour. Is there any limitation on max. BW that a system can
> provide? Does the available BW get divided among multiple HCA ports
> (which means having multiple ports will not increase the BW)?
>
>
> Regards,
> John T
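For what it's worth, the numbers John reports are roughly consistent with a
host-side bottleneck rather than the IB links themselves. A back-of-the-
envelope check, assuming PCIe 1.x x8 slots (a guess about the hardware, era-
appropriate for 2006):

```python
# All figures in Gb/s; the PCIe 1.x x8 assumption is a guess.
lanes = 8
gt_per_lane = 2.5            # PCIe 1.x signals at 2.5 GT/s per lane
encoding = 8 / 10            # 8b/10b line encoding: 20% line overhead
pcie_raw = lanes * gt_per_lane * encoding   # data bandwidth per x8 slot
print(pcie_raw)              # 16.0 Gb/s

# Observed single-port RDMA BW is 11.6 Gb/s, i.e. roughly 72% of the
# slot's 16 Gb/s -- a plausible fraction once PCIe packet headers and
# flow control are accounted for.
observed_single = 11.6
print(round(observed_single / pcie_raw, 2))

# If two active ports end up sharing that same ~11.6 Gb/s of host
# bandwidth, each would see roughly half:
per_port = observed_single / 2
print(round(per_port, 1))    # ~5.8 Gb/s, matching the reported
                             # 5-6.5 Gb/sec per port
```

This doesn't explain why two HCAs in *separate* x8 slots also split the
bandwidth; for that, a shared resource further in (chipset or memory
bandwidth, or the single-event-queue issue above) is the more likely suspect.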