[openib-general] ip over ib throughput
Paul Baxter
paul.baxter at dsl.pipex.com
Tue Jan 4 11:47:34 PST 2005
> Quoting Michael S. Tsirkin (mst at mellanox.co.il) "[openib-general] ip
> over ib throughput":
>> Hi!
>> What kind of performance do people see with ip over ib on gen2?
>> I see about 100Mbyte/sec at 99% CPU utilisation on send,
>> on an express card, Xeon 2.8GHz, SSE doorbells enabled.
>
> And 159 Mbyte/sec without SSE doorbells.
> The test is netperf-2.2pl5, BTW.
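(For anyone wanting to reproduce Michael's test, I'd guess at an invocation
along these lines; the receiver address and message size are placeholders of
mine, not his actual parameters:

    netperf -H 192.168.0.2 -l 60 -t TCP_STREAM -- -m 65536

with netserver running on the remote end; -l is the test length in seconds
and -m the send message size.)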
I didn't see any response to Michael's numbers other than another mail today
from Josh England saying 'great numbers till it dies'.
Are these results considered good on a 4x link? I realise there is
significant TCP/IP overhead, but is this significantly better or worse than
the vendor-specific equivalents using different drivers?
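For context, my back-of-envelope arithmetic on the link budget (assuming a
4x SDR link and decimal megabytes):

    4 lanes x 2.5 Gbit/s = 10 Gbit/s signalling rate
    8b/10b encoding      =>  8 Gbit/s, i.e. ~1000 Mbyte/s of payload
    159 Mbyte/s          =>  roughly 16% of that ceiling

so even the better number leaves a lot of headroom before the wire itself is
the bottleneck.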
What (short-term?) plans are there for implementing a higher-performance
link that can show better transfer rates?
I'm interested in any bottlenecks this might reveal in the current kernel
submission, and in understanding in broad handfuls what sort of
optimisation/stabilisation period will be necessary before I can look to
using openib for high-bandwidth message transfers. Is more functionality or
performance optimisation the goal for the next 6 months?
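(For what it's worth, the way I'd expect to hunt for such bottlenecks is a
kernel profile taken under netperf load, e.g. with oprofile; a sketch only,
the vmlinux path and host are placeholders:

    opcontrol --vmlinux=/boot/vmlinux --start
    netperf -H 192.168.0.2 -l 60
    opcontrol --stop
    opreport --symbols

That should at least show whether the CPU time is going into the stack, the
driver, or data copies.)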
I've largely written off SDP going higher than ~300MB/s even with high CPU
utilisation. (Assumptions: licensing concerns make it unlikely to be one of
the first things openib tackles, and it is not trivial to implement with
zero copy or asynchronous I/O on Linux.)
Am I right in thinking that the ib_verbs layer, used directly or ideally
via MPI/uDAPL, will be my best bet in the next 6 months for a portable,
vendor-neutral implementation that might achieve 600MB/s transfers, or
slightly lower but with <25% CPU utilisation, on PCIe?
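To make that concrete, the sort of direct verbs data path I have in mind
looks roughly like the function below. This is only a sketch against the
userspace verbs API: it assumes a QP, CQ and memory region that were created
and connected elsewhere, and the name send_one is mine, not from any tree.

#include <stdint.h>
#include <string.h>
#include <infiniband/verbs.h>

/* Sketch: post one send on an already-connected QP and spin until it
 * completes.  qp, cq, mr, buf and len are assumed set up elsewhere. */
static int send_one(struct ibv_qp *qp, struct ibv_cq *cq,
                    struct ibv_mr *mr, void *buf, uint32_t len)
{
	struct ibv_sge sge;
	struct ibv_send_wr wr, *bad_wr;
	struct ibv_wc wc;
	int n;

	memset(&sge, 0, sizeof sge);
	sge.addr   = (uintptr_t) buf;	/* must be a registered buffer */
	sge.length = len;
	sge.lkey   = mr->lkey;		/* key from ibv_reg_mr() */

	memset(&wr, 0, sizeof wr);
	wr.wr_id      = 1;
	wr.sg_list    = &sge;
	wr.num_sge    = 1;
	wr.opcode     = IBV_WR_SEND;
	wr.send_flags = IBV_SEND_SIGNALED;	/* ask for a completion */

	if (ibv_post_send(qp, &wr, &bad_wr))
		return -1;

	/* Busy-poll the CQ for the send completion. */
	do {
		n = ibv_poll_cq(cq, 1, &wc);
	} while (n == 0);

	return (n < 0 || wc.status != IBV_WC_SUCCESS) ? -1 : 0;
}

The setup code (device, PD, CQ, QP and the connection handshake) is the bulk
of a real program; the hot path above is what the 600MB/s question comes
down to.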
Paul