[ofa-general] scp performance over IPoIB

Scott Weitzenkamp (sweitzen) sweitzen at cisco.com
Wed Sep 12 14:21:04 PDT 2007


What does "cat /sys/class/net/ib0/mode" report?  If "datagram", you need
to run "echo connected > /sys/class/net/ib0/mode", then you can raise
the MTU.
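
For example, a minimal sequence (assuming the interface is ib0 and your
IPoIB driver supports connected mode) might look like:

    cat /sys/class/net/ib0/mode               # check the current mode
    echo connected > /sys/class/net/ib0/mode  # switch from datagram mode
    ifconfig ib0 mtu 65520                    # the larger MTU should now be accepted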

Scott Weitzenkamp
SQA and Release Manager
Server Virtualization Business Unit
Cisco Systems

> -----Original Message-----
> From: general-bounces at lists.openfabrics.org 
> [mailto:general-bounces at lists.openfabrics.org] On Behalf Of 
> Sufficool, Stanley
> Sent: Wednesday, September 12, 2007 2:15 PM
> To: Arlin Davis; general
> Subject: RE: [ofa-general] scp performance over IPoIB
> 
> How exactly do you set the MTU for IPoIB?
> 
> I am running the latest unpatched git branch of vofed kernel 
> 1.2.5 and I
> get "SIOCSIFMTU: Invalid argument" when I try ifconfig ib0 mtu 65520.
> Anything above the preset 2044 returns this error.
> 
> 
> 
> -----Original Message-----
> From: general-bounces at lists.openfabrics.org
> [mailto:general-bounces at lists.openfabrics.org] On Behalf Of Rick Jones
> Sent: Wednesday, September 12, 2007 11:28 AM
> To: Arlin Davis
> Cc: general; Davis,Arlin R
> Subject: Re: [ofa-general] scp performance over IPoIB
> 
> Arlin Davis wrote:
> > Rick Jones wrote:
> > 
> >> Davis, Arlin R wrote:
> >>
> >>> Can someone explain why scp performance over IPoIB would be 10x
> >>> slower than on GBE? The netperf numbers look normal.
> >>
> >> So, you could try tweaking the MTU on the IPoIB interfaces.
> >>
> > 
> > Rick,
> > 
> > Thanks for the suggestion. Looks like we may need to change the 
> > default MTU for IPoIB. It would be interesting to see results from 
> > other distributions.
> > 
> > (Woodcrest, Xeon 5160, DDR, RHEL4U4)
> > 
> > MTU      SCP       NetPerf
> > 
> > 1024    41 MB/s    151 MB/s
> > 2048    50 MB/s    313 MB/s
> > 4096    50 MB/s    485 MB/s
> > 8192    50 MB/s    641 MB/s
> > 16384   25 MB/s    761 MB/s
> > 32768   50 MB/s    700 MB/s
> > 65520   8  MB/s    440 MB/s
> 
> I'm actually a trifle surprised that netperf was affected by the 65520
> MTU - I'm guessing you were using all defaults, which on "linux" IIRC
> means netperf was making 16KB (K == 1024) sends.  I suspect that if you
> were to make 64K sends from netperf (test-specific -m 64K), the
> numbers for 65520 might be better.
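> 
> A sketch of what that might look like (the remote host name here is
> just a placeholder) would be:
> 
>     netperf -H <remote_ib0_host> -t TCP_STREAM -- -m 64K
> 
> where everything after the "--" is test-specific and -m sets the send
> size (K == 1024 here too).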
> 
> I'm really shaky on scp behaviour knowledge, but suspect that perhaps
> with the "HPN" (High Performance Network) patches in place (check the
> archives pointed to by:
> https://lists.mindrot.org/mailman/listinfo/openssh-unix-dev) it might be
> possible to get good SCP performance out of a 65520 byte MTU.  I'm
> _guessing_ that by default scp isn't trying to put out > 65520 bytes
> worth of data in the sum of its sends with its own windowing, and so gets
> hit by issues with Nagle.  I.e. it is doing write, write, read and the
> second write at least is sub-MSS.  Some strace tracing of the scp
> transfer could confirm/deny that hypothesis.
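> 
> For instance (the file name and destination are placeholders),
> something along the lines of:
> 
>     strace -f -tt -e trace=read,write -o scp.trace \
>         scp somefile remotehost:/tmp/
> 
> and then looking in scp.trace for small back-to-back writes on the
> socket followed by a read would tend to confirm or deny the write,
> write, read pattern.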
> 
> So, it may not be necessary to shrink the MTU.
> 
> rick jones