[ofa-general] IPoIB Connected Mode Throughput Question

Wittawat Tantisiriroj wtantisi at cs.cmu.edu
Wed Feb 27 14:39:54 PST 2008


Hi,
    We have set up a small InfiniBand cluster to run several network 
storage experiments over IPoIB. However, we have had trouble getting 
good throughput in IPoIB connected mode, so we repeated the same 
benchmark that Michael S. Tsirkin ran in 
"http://lists.openfabrics.org/pipermail/general/2006-November/029500.html". 
We found that even in this simple scenario we still get only ~620 MB/s 
of throughput from IPoIB-CM. I searched around with Google, but could 
not find any information on typical IPoIB-CM throughput.

My questions are:

- Is this throughput typical/normal on most systems?

- Are there any TCP, ib_ipoib, or kernel parameter tunings needed to 
reach ~900 MB/s? I have tried increasing the TCP buffer sizes (see the 
sysctl sketch after these questions), but the throughput still does 
not improve.

- Should I use an OFED distribution instead of the IPoIB driver built 
into a standard kernel? (I hope it does not matter.)
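
For reference, here is roughly how I enlarged the TCP buffers via 
sysctl; the exact values are just ones I experimented with, not 
recommendations (rmem_max/wmem_max cap the socket buffer sizes, and 
tcp_rmem/tcp_wmem give the min/default/max used by TCP auto-tuning):

# sysctl -w net.core.rmem_max=16777216
# sysctl -w net.core.wmem_max=16777216
# sysctl -w net.ipv4.tcp_rmem="4096 87380 16777216"
# sysctl -w net.ipv4.tcp_wmem="4096 65536 16777216"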

System
=====
Processor: Intel(R) Pentium(R) D CPU 3.00GHz
Memory: 4GB
OS: Debian Etch with a 2.6.24.2 kernel

Network
======
Network card: Mellanox MT25204 (InfiniHost III Lx HCA), 4X (10 Gb/s)
Switch: Mellanox Gazelle (MTS9600) 96 ports with 4X (10 Gb/s) each
Network Software Stack: standard IPoIB driver built into the 2.6.24.2 kernel
IPoIB Configuration: Connected mode with MTU=65520
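
For completeness, connected mode is selected per interface through 
sysfs (this assumes the in-kernel driver was built with 
CONFIG_INFINIBAND_IPOIB_CM):

# echo connected > /sys/class/net/ib0/mode
# cat /sys/class/net/ib0/mode
connected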

Benchmark
========
Server: ib265
# ifconfig ib0 mtu 65520
# netserver

Client: ib266
# ifconfig ib0 mtu 65520
# netperf -H ib265 -f M

TCP STREAM TEST to ib265
Recv   Send    Send                          
Socket Socket  Message  Elapsed              
Size   Size    Size     Time     Throughput  
bytes  bytes   bytes    secs.    MBytes/sec 
 87380  16384  16384    10.01     620.27
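
I also tried forcing larger buffers directly from netperf; this is 
just a sketch of the kind of invocation I used (-s/-S set the 
local/remote socket buffer sizes, -m sets the send message size):

# netperf -H ib265 -f M -- -s 1048576 -S 1048576 -m 65536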

Thanks in advance,
Wittawat


