[ofa-general] IPoIB Connected Mode Throughput Question
Wittawat Tantisiriroj
wtantisi at cs.cmu.edu
Thu Feb 28 13:32:42 PST 2008
Thanks Eli,
I hadn't thought that the CPU was our bottleneck, so I focused on
tuning TCP and kernel parameters. However, it does turn out to be the
case. With iperf, I can run two threads simultaneously and use both
cores to handle the network traffic. As a result, this yields much
better throughput, ~850 MB/s.
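(For reference, a two-stream iperf run of this sort looks roughly like
the following; iperf's -P flag starts that many parallel client threads,
and the 30-second duration is just illustrative, reusing the same hosts
as the netperf benchmark quoted below.)

Server: ib265
# iperf -s

Client: ib266
# iperf -c ib265 -P 2 -t 30 -f M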
Sadly, this is likely to be a big issue for TCP once 10GE becomes
widely used.
Wittawat
On 02/28/2008 05:32 AM, Eli Cohen wrote:
> On Wed, 2008-02-27 at 17:39 -0500, Wittawat Tantisiriroj wrote:
>
>> Hi,
>> We have set up a small InfiniBand cluster to do several network
>> storage experiments over IPoIB. However, we have had trouble getting
>> good throughput with IPoIB connected mode, so we followed the same
>> benchmark that Michael S. Tsirkin ran in
>> "http://lists.openfabrics.org/pipermail/general/2006-November/029500.html".
>> We found that even in this simple scenario we still get only ~620 MB/s
>> of throughput from IPoIB-CM. I searched around with Google, but could
>> not find any information regarding IPoIB-CM throughput.
>>
>> My question is:
>>
>> - Is this throughput typical/normal on most systems?
>>
> You can't say there is a typical result for this check -- it depends on
> the "strength" of your system. One thing you can do is watch how much of
> the CPU is used - you can use htop for that (it gives you per CPU
> utilization). If you are at 100% CPU, then stronger machines will give
> higher results.
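> (For example, in a second terminal while netperf runs, mpstat from the
> sysstat package gives the same per-CPU picture non-interactively:
>
> # mpstat -P ALL 1
>
> If one core sits at ~100%, often mostly in softirq time for IPoIB, the
> CPU is the limiting factor.)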
>
> On my systems (AMD @2.4 Ghz) / mt25204 I get:
> [root at sw186 ~]# netperf -H 11.4.3.185 -f M
> TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to 11.4.3.185
> (11.4.3.185) port 0 AF_INET
> Recv   Send    Send
> Socket Socket  Message  Elapsed
> Size   Size    Size     Time     Throughput
> bytes  bytes   bytes    secs.    MBytes/sec
>
> 87380  16384   16384    10.01      755.20
>
> On other systems with Arbel (mt25218) I get 980 MB/s etc.
>> - Is there any necessary tuning of TCP, ib_ipoib, or kernel
>> parameters in order to get ~900 MB/s? I have tried increasing the TCP
>> buffer size, but it still does not improve the throughput.
>>
>> - Should I use an OFED distribution instead of the standard IPoIB
>> driver built into a stock kernel? (I hope it does not matter)
>>
>> System
>> =====
>> Processor: Intel(R) Pentium(R) D CPU 3.00GHz
>> Memory: 4GB
>> OS: Debian Etch with 2.6.24.2 kernel
>>
>> Network
>> ======
>> Network card: Mellanox MT25204 (InfiniHost III Lx HCA) (4x, 10Gbps)
>> Switch: Mellanox Gazelle (MTS9600) 96 ports with 4X (10 Gb/s) each
>> Network Software Stack: Standard IPoIB built into the 2.6.24.2 kernel
>> IPoIB Configuration: Connected mode with MTU=65520
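>> (Connected mode itself is normally selected through sysfs before the
>> MTU is raised, e.g.:
>> # echo connected > /sys/class/net/ib0/mode
>> # ifconfig ib0 mtu 65520
>> )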
>>
>> Benchmark
>> ========
>> Server: ib265
>> # ifconfig ib0 mtu 65520
>> # netserver
>>
>> Client: ib266
>> # ifconfig ib0 mtu 65520
>> # netperf -H ib265 -f M
>>
>> TCP STREAM TEST to ib265
>> Recv   Send    Send
>> Socket Socket  Message  Elapsed
>> Size   Size    Size     Time     Throughput
>> bytes  bytes   bytes    secs.    MBytes/sec
>> 87380  16384   16384    10.01      620.27
>>
>> Thanks in advance,
>> Wittawat