[ewg] disappointing IPoIB performance

Richard Croucher Richard.croucher at informatix-sol.com
Mon Feb 20 12:20:18 PST 2012


There is nothing special about this particular configuration.  Both
measurements were made on the same physical server with the same OS
configuration, so that can be ruled out as a factor in the difference.
I've seen similar results on many other systems as well.  They had
been getting closer for some time, but now 10G Ethernet is lower
latency than InfiniBand.  I'm sure you are also aware that there are
similar results comparing verbs-level programs on RoCE and InfiniBand,
which show that Ethernet has the edge.  I'm a big fan of InfiniBand,
but it has to deliver the results.
-- 

Richard Croucher
www.informatix-sol.com
+44-7802-213901 

On Mon, 2012-02-20 at 17:03 +0000, Gilad Shainer wrote:
> Richard,
> 
> The critical missing piece is the setup information.  What is the
> server, CPU, etc.?  Can you please provide it?
> 
> Gilad
> 
> From: ewg-bounces at lists.openfabrics.org
> [mailto:ewg-bounces at lists.openfabrics.org] On Behalf Of Richard
> Croucher
> Sent: Monday, February 20, 2012 8:50 AM
> To: ewg at lists.openfabrics.org
> Subject: [ewg] disappointing IPoIB performance
> 
> I've been undertaking some internal QA testing of the Mellanox CX3s.
> 
> An observation I've made for some time, and one that is most likely
> down to the IPoIB implementation rather than the HCA, is that IPoIB
> latency compares increasingly poorly with that of the standard kernel
> TCP/IP stack over 10G Ethernet.
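> 
> IPoIB runs in one of two modes, datagram or connected, and both
> appear in the figures below.  The mode is a per-interface sysfs
> attribute of the kernel IPoIB driver; a minimal sketch of switching
> it programmatically (the interface name ib0 is only an example):
> 
>     /* Sketch: flip an IPoIB interface between "datagram" and
>      * "connected" mode by writing the per-interface sysfs attribute
>      * exposed by the kernel IPoIB driver.  "ib0" below is just an
>      * example interface name. */
>     #include <stdio.h>
> 
>     static int set_ipoib_mode(const char *ifname, const char *mode)
>     {
>         char path[64];
>         FILE *f;
>         int rc;
> 
>         snprintf(path, sizeof(path), "/sys/class/net/%s/mode", ifname);
>         f = fopen(path, "w");
>         if (!f) {
>             perror(path);
>             return -1;
>         }
>         /* The attribute accepts the strings "datagram" and "connected". */
>         rc = (fprintf(f, "%s\n", mode) > 0) ? 0 : -1;
>         fclose(f);
>         return rc;
>     }
> 
>     int main(void)
>     {
>         /* e.g. put ib0 into connected mode before a connected-mode run */
>         return set_ipoib_mode("ib0", "connected") ? 1 : 0;
>     }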
> 
> Looking at the results for the CX3s running both 10G Ethernet and 40G
> InfiniBand on the same server hardware, I get the following median
> latencies with my test setup.  The results come from my own test
> program, so they are only meaningful as a comparison between the
> configurations running the same test.
> 
> Running OFED 1.5.3 and RHEL 6.0:
> 
> IPoIB (connected)   TCP  33.67 us   (switchless)
> IPoIB (datagram)    TCP  31.63 us   (switchless)
> IPoIB (connected)   UDP  24.78 us   (switchless)
> IPoIB (datagram)    UDP  24.28 us   (switchless)
> IPoIB (connected)   UDP  25.37 us   (1 hop, between ports on the same switch)
> IPoIB (connected)   TCP  34.48 us   (1 hop)
> 10G Ethernet        UDP  24.04 us   (2 hops, across a LAG-connected
>                                      pair of Ethernet switches)
> 10G Ethernet        TCP  34.59 us   (2 hops)
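> 
> The comparison is straightforward to reproduce with a simple UDP
> ping-pong: one end echoes every datagram, the other times the round
> trips and takes the median.  What follows is only a stripped-down
> sketch of that kind of test, not my actual test program; the port
> number, message size and iteration count are arbitrary.
> 
>     /* Minimal UDP ping-pong latency sketch.
>      * Reflector:  ./udplat server
>      * Initiator:  ./udplat client <server-ip>
>      * The initiator prints the median round-trip time in us. */
>     #include <arpa/inet.h>
>     #include <netinet/in.h>
>     #include <stdio.h>
>     #include <stdlib.h>
>     #include <string.h>
>     #include <sys/socket.h>
>     #include <time.h>
> 
>     #define PORT   19000              /* arbitrary */
>     #define ITERS  10000
>     #define MSGLEN 32
> 
>     static double now_us(void)        /* monotonic clock, in us */
>     {
>         struct timespec ts;
>         clock_gettime(CLOCK_MONOTONIC, &ts);
>         return ts.tv_sec * 1e6 + ts.tv_nsec / 1e3;
>     }
> 
>     static int cmp_double(const void *a, const void *b)
>     {
>         double d = *(const double *)a - *(const double *)b;
>         return (d > 0) - (d < 0);
>     }
> 
>     int main(int argc, char **argv)
>     {
>         int s = socket(AF_INET, SOCK_DGRAM, 0);
>         struct sockaddr_in addr = { .sin_family = AF_INET,
>                                     .sin_port   = htons(PORT) };
>         char buf[MSGLEN] = { 0 };
> 
>         if (argc > 1 && strcmp(argv[1], "server") == 0) {
>             bind(s, (struct sockaddr *)&addr, sizeof(addr));
>             struct sockaddr_in peer;
>             for (;;) {                /* echo every datagram back */
>                 socklen_t plen = sizeof(peer);
>                 ssize_t n = recvfrom(s, buf, sizeof(buf), 0,
>                                      (struct sockaddr *)&peer, &plen);
>                 if (n > 0)
>                     sendto(s, buf, n, 0, (struct sockaddr *)&peer, plen);
>             }
>         }
> 
>         if (argc < 3) {
>             fprintf(stderr, "usage: %s server | client <ip>\n", argv[0]);
>             return 1;
>         }
>         inet_pton(AF_INET, argv[2], &addr.sin_addr);
>         connect(s, (struct sockaddr *)&addr, sizeof(addr));
> 
>         static double rtt[ITERS];
>         for (int i = 0; i < ITERS; i++) {
>             double t0 = now_us();
>             send(s, buf, sizeof(buf), 0);     /* ping */
>             recv(s, buf, sizeof(buf), 0);     /* pong */
>             rtt[i] = now_us() - t0;
>         }
>         qsort(rtt, ITERS, sizeof(rtt[0]), cmp_double);
>         printf("median RTT: %.2f us\n", rtt[ITERS / 2]);
>         return 0;
>     }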
> 
> The Mellanox Ethernet drivers are tuned for low latency rather than
> throughput, but I would have hoped that, with 4x the bandwidth
> available, the InfiniBand drivers would still come out ahead.
> 
> I've seen similar results for the CX2.  10G Ethernet increasingly
> looks like the better option for low latency, particularly with the
> current generation of low-latency Ethernet switches.  Switchless
> Ethernet has had lower latency than switchless InfiniBand for some
> time, but that now looks to be the case in switched environments as
> well.  I think this reflects the considerable effort that has gone
> into tweaking and tuning TCP/IP over Ethernet and its low-level
> drivers, with very little activity on the IPoIB front.  Unless we see
> improvements here, it will become increasingly difficult to justify
> InfiniBand deployments.
> 
> --
> 
> Richard Croucher
> www.informatix-sol.com
> +44-7802-213901 