[openib-general] How about ib_send_page() ?

Jeff Carr jcarr at linuxmachines.com
Mon May 16 18:55:38 PDT 2005


Roland Dreier wrote:
>     Jeff> (side note: it would seem IPoIB could be re-written to
>     Jeff> dramatically improve its performance).
> 
> Out of curiosity, what would the rewrite change to obtain better
> performance?

Could the MTU be increased to something much larger than 2044, and 
would that help?
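For what it's worth, the 2044 figure appears to fall out of the IPoIB 
encapsulation itself: the 2048-byte IB UD MTU minus the 4-byte IPoIB 
header (the 2048-byte UD MTU is the assumption here):

```shell
# IPoIB datagram-mode MTU: IB UD MTU minus the 4-byte encapsulation header
echo $((2048 - 4))   # 2044
```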

Kernel NFS transfer of a cached 1GB file (some, but not much, overhead 
for NFS itself) hits 1.2 Gbit/s:

# dd if=1GB_file of=/dev/null bs=1M
1000+0 records in
1000+0 records out
1048576000 bytes transferred in 6.661947 seconds (157397829 bytes/sec)

Server 30% idle, client 0%.
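The 1.2 Gbit/s figure is just the dd output recomputed; a quick awk 
sanity check:

```shell
# Recompute the quoted dd throughput (1048576000 bytes in 6.661947 s)
awk 'BEGIN { bytes = 1048576000; secs = 6.661947
             printf "%.2f Gbit/s\n", bytes * 8 / secs / 1e9 }'
# prints "1.26 Gbit/s"
```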

client ifconfig start / end:
           RX packets:1248583 errors:0 dropped:0 overruns:0 frame:0
           TX packets:4083901 errors:0 dropped:0 overruns:0 carrier:0
           RX packets:1792589 errors:0 dropped:0 overruns:0 frame:0
           TX packets:4115907 errors:0 dropped:0 overruns:0 carrier:0

server ifconfig start / end:
           RX packets:4083900 errors:0 dropped:0 overruns:0 frame:0
           TX packets:1248584 errors:0 dropped:0 overruns:0 carrier:0
           RX packets:4115906 errors:0 dropped:0 overruns:0 frame:0
           TX packets:1792590 errors:0 dropped:0 overruns:0 carrier:0

So, about 544k server TX packets (~2KB each) and about 32k client TX 
packets.

This transfer generates about 100k interrupts. So this test moves 
almost as many packets/sec as netperf, with half the interrupts, at 
about half of netperf's speed. Perhaps a bigger MTU and/or batching 
more transmits per interrupt would improve performance.
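A rough packets-per-interrupt figure, taking the client-side ifconfig 
deltas above and assuming the ~100k interrupts were counted on the 
client:

```shell
# Client TX + RX deltas from the ifconfig snapshots quoted above
pkts=$(( (4115907 - 4083901) + (1792589 - 1248583) ))
echo "client packets: $pkts"                    # 576012
echo "packets/interrupt: $((pkts / 100000))"    # 5
```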

Then again, a raw dd of a memory-cached 1GB file only hits ~1GB/sec 
locally; the kernel spends quite a lot of processing time on it. Still, 
one might hope that a smartly configured NFS or NBD could hit 500MB/sec 
between machines when going over IB purely RAM to RAM.

Jeff


