[ofa-general] [SDP] How does buffer size affect SDP bandwidth performance?
linyao
linyao at ict.ac.cn
Tue Jul 15 05:54:44 PDT 2008
Hi, all
When I used netperf to test SDP bandwidth performance, I ran into a puzzling result: the measured bandwidth is better when the send and receive buffer sizes are smaller. How can this be explained?
How does buffer size affect SDP bandwidth performance?
The table below shows my results. The second column was measured with a 16M send buffer and an 8M receive buffer, the third column with 256K send and receive buffers, and the last column with a 16K send buffer and an 87380-byte receive buffer.
Message size (bytes)   16M snd / 8M rcv   256K snd / 256K rcv   16K snd / 87380B rcv
                       (Mbps)             (Mbps)                (Mbps)
128                    684.51             682.55                725.43
256                    1095.85            979.69                945.82
512                    1498.62            1527.56               2474.6
1024                   3191.12            4210.58               5147.91
2048                   3994.1             5356.29               7212.09
4096                   4546.15            6208.94               8127.58
8192                   5962.7             7260.09               8621.43
16384                  5100.91            6364.46               6575.32
32768                  7446.83            6870.21               8889.46
65536                  6617.12            7044.84               8992.61
131072                 6587.27            6909.3                8867.24
My test runs on two nodes. Each node is a Dell SC430 PC server with one 2.8GHz Intel Pentium 4 processor and 1GB of DDR2-400 main memory. The two nodes are interconnected with both Gigabit Ethernet and Mellanox InfiniBand DDR HCAs.
Each node runs Red Hat AS 4.0 with Linux kernel 2.6.20. I use OpenFabrics OFED-1.2 and HCA firmware 1.2.0. Netperf has two options, "-s" and "-S", to set the sizes of the socket send and receive buffers; the former applies to the local end and the latter to the remote end. I used these two options to set different send and receive buffer sizes on the local and remote sides. I assume they correspond to the SDP send and receive buffer sizes, is that right?
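To show what I think those options control underneath, here is a minimal sketch of the setsockopt() calls made on the data socket. This is my own illustration, not netperf source, and it assumes an SDP socket honors SO_SNDBUF/SO_RCVBUF the same way a TCP socket does (e.g. when the socket is redirected to SDP through the libsdp preload):

    /* Minimal sketch: request send/receive buffer sizes on a stream socket,
     * then read back what the kernel actually granted.  Assumes SDP applies
     * SO_SNDBUF/SO_RCVBUF like TCP does. */
    #include <stdio.h>
    #include <unistd.h>
    #include <sys/types.h>
    #include <sys/socket.h>

    int main(void)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);  /* redirected to SDP by libsdp preload */
        if (fd < 0) {
            perror("socket");
            return 1;
        }

        int sndbuf = 256 * 1024;   /* requested send buffer size, bytes (like netperf -s) */
        int rcvbuf = 256 * 1024;   /* requested receive buffer size, bytes (like netperf -S) */
        setsockopt(fd, SOL_SOCKET, SO_SNDBUF, &sndbuf, sizeof(sndbuf));
        setsockopt(fd, SOL_SOCKET, SO_RCVBUF, &rcvbuf, sizeof(rcvbuf));

        /* The kernel may clamp or adjust the requested values, so read back
         * the effective sizes. */
        socklen_t len = sizeof(sndbuf);
        getsockopt(fd, SOL_SOCKET, SO_SNDBUF, &sndbuf, &len);
        len = sizeof(rcvbuf);
        getsockopt(fd, SOL_SOCKET, SO_RCVBUF, &rcvbuf, &len);
        printf("effective SO_SNDBUF=%d SO_RCVBUF=%d\n", sndbuf, rcvbuf);

        close(fd);
        return 0;
    }

Since the kernel may clamp or round the requested sizes, the sketch reads the effective values back with getsockopt(); please correct me if this picture of what "-s"/"-S" do is wrong for SDP.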
Any reply would be appreciated! Thank you!
2008-07-15
linyao