[openib-general] Multi-port HCA

Shannon V. Davidson svdavidson at charter.net
Thu Oct 5 07:45:41 PDT 2006


John,

In our testing with dual-port Mellanox SDR HCAs, we found that not all 
PCI Express implementations are equal.  Depending on the PCIe chipset, 
we measured unidirectional SDR dual-rail bandwidth ranging from 1100 to 
1500 MB/sec, and bidirectional SDR dual-rail bandwidth ranging from 
1570 to 2600 MB/sec.  YMMV, but we had good luck with Intel and Nvidia 
chipsets, and less success with the Broadcom ServerWorks HT-1000 and 
HT-2000 chipsets.  As of my last report (June 2006), Broadcom was 
working to improve their PCI Express performance.
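
To put rough numbers on the PCIe side (a back-of-the-envelope sketch, 
assuming the x8 slots involved are PCIe 1.1, which I am guessing from 
the era rather than from anything John posted):

    raw x8 link rate:   8 lanes x 2.5 GT/s         = 20 Gb/s per direction
    8b/10b encoding:    20 Gb/s x 0.8              = 16 Gb/s  (2000 MB/sec)
    TLP/flow-control overhead (~20-30%):           ~ 1400-1600 MB/sec usable

That usable range brackets the 1100-1500 MB/sec we measured, and John's 
11.6 Gb/sec (about 1450 MB/sec) single-port number sits right at that 
ceiling, which is why two ports behind one x8 slot can only split it.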

Regards,
Shannon

john t wrote:
> Hi Bernard,
>  
> I had a configuration issue. I fixed it, and now I get the same BW 
> (i.e., around 10 Gb/sec) on each port, provided I use ports on 
> different HCA cards. If I use two ports of the same HCA card, the BW 
> gets divided between the two ports. I am using Mellanox HCA cards and 
> doing simple send/recv using uverbs.
>  
> Do you think this could be an issue with the Mellanox driver, or could 
> it be due to a system/PCI-E limitation?
>  
> Regards,
> John T.
>
>  
> On 10/3/06, *Bernard King-Smith* <wombat2 at us.ibm.com 
> <mailto:wombat2 at us.ibm.com>> wrote:
>
>
>     John,
>
>     Whose adapter (manufacturer) are you using? It is usually an
>     adapter implementation or driver issue that occurs when you
>     cannot scale across multiple links. The fact that you don't scale
>     up from one link, and that the links instead appear to share a
>     fixed bandwidth across N links, means that there is a driver or
>     stack issue. At one time I think that IPoIB, and maybe other IB
>     drivers, used only one event queue across multiple links, which
>     would be a bottleneck. We added code to the IBM eHCA driver to
>     get around this bottleneck.
>
>     Are your measurements using MPI or IP? Are you using separate
>     tasks/sockets per link, and different subnets if using IP?
>
>     Bernie King-Smith  
>     IBM Corporation
>     Server Group
>     Cluster System Performance  
>     wombat2 at us.ibm.com <mailto:wombat2 at us.ibm.com>    (845)433-8483
>     Tie. 293-8483 or wombat2 on NOTES
>
>     "We are not responsible for the world we are born into, only for
>     the world we leave when we die.
>     So we have to accept what has gone before us and work to change
>     the only thing we can,
>     -- The Future." William Shatner
>
>     "john t" <johnt1johnt2 at gmail.com <mailto:johnt1johnt2 at gmail.com>>
>     wrote on 10/03/2006 09:42:24 AM:
>
>     >
>     > Hi,
>     >  
>     > I have two HCA cards, each having two ports and each connected to a
>     > separate PCI-E x8 slot.
>     >  
>     > Using one HCA port I get an end-to-end BW of 11.6 Gb/sec
>     > (unidirectional RDMA). If I use two ports of the same HCA or of
>     > different HCAs, I get between 5 and 6.5 Gb/sec point-to-point BW
>     > on each port. The BW on each port drops further if I use more
>     > ports. I am not able to understand this behaviour. Is there any
>     > limitation on the max. BW that a system can provide? Does the
>     > available BW get divided among multiple HCA ports (which would
>     > mean that having multiple ports does not increase the BW)?
>     >  
>     >  
>     > Regards,
>     > John T
>
>
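
P.S. For anyone reproducing the test John describes, the sketch below 
shows one way to give each port its own completion channel and CQ, which 
sidesteps the shared-event-queue bottleneck Bernie mentions above. This 
is a minimal, untested skeleton against the libibverbs API: the CQ depth 
is arbitrary, error handling is thin, and the QP setup, memory 
registration, and send/recv loop are omitted.

#include <stdio.h>
#include <infiniband/verbs.h>

int main(void)
{
    int num_devices;
    struct ibv_device **devs = ibv_get_device_list(&num_devices);
    if (!devs || num_devices == 0) {
        fprintf(stderr, "no RDMA devices found\n");
        return 1;
    }

    for (int i = 0; i < num_devices; i++) {
        struct ibv_context *ctx = ibv_open_device(devs[i]);
        if (!ctx)
            continue;

        struct ibv_device_attr attr;
        if (ibv_query_device(ctx, &attr))
            continue;

        /* One completion channel + CQ per port (ports are numbered
         * from 1), so completions for the two ports never serialize
         * on a single event queue. */
        for (int port = 1; port <= attr.phys_port_cnt; port++) {
            struct ibv_comp_channel *chan = ibv_create_comp_channel(ctx);
            struct ibv_cq *cq =
                ibv_create_cq(ctx, 256 /* arbitrary depth */, NULL, chan, 0);
            if (!chan || !cq) {
                fprintf(stderr, "CQ setup failed on %s port %d\n",
                        ibv_get_device_name(devs[i]), port);
                continue;
            }
            /* ... create a QP bound to this port and CQ, register
             * buffers, and run the send/recv or RDMA loop here ... */
        }
        /* Cleanup (destroy CQs/channels, close device) omitted. */
    }
    ibv_free_device_list(devs);
    return 0;
}

Binding each stream to its own CQ (and, where the stack supports it, a 
distinct comp_vector so interrupts land on different CPUs) is what lets 
two ports run independently instead of contending on one queue.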


-- 
____________________________________________

Shannon V. Davidson <svdavidson at charter.net>
Senior Software Engineer            Raytheon
636-479-7465 office         443-383-0331 fax
____________________________________________

