<br><tt><font size=2>"Michael S. Tsirkin"<mst@mellanox.co.il>
wrote on on Wed, 29 Nov 2006 16:00:16 +0200 -----</font></tt>
<br><tt><font size=2>> <br>
> To:</font></tt>
<br><tt><font size=2>> <br>
> openib-general@openib.org</font></tt>
<br><tt><font size=2>> <br>
> Subject:</font></tt>
<br><tt><font size=2>> <br>
> [openib-general] IPoIB CM</font></tt>
<br><tt><font size=2>> <br>
> Hi!<br>
> Wanted to show you guys the IPoIB connected mode code I've written<br>
> in the last couple of weeks. I put it at ~mst/linux-2.6/.git ipoib_cm_branch.<br>
> With this code, I'm able to get 800MByte/sec or more with netperf<br>
> without options on a Mellanox 4x back-to-back DDR system.<br>
</font></tt>
<br><tt><font size=2>These are very good results close to what I expected.
However see some tuning suggestions below.</font></tt>
<br>
<br><tt><font size=2>> <br>
> This is still "work in progress", but comments are welcome.<br>
> <br>
> Here's a short description of what I have so far:<br>
> <br>
> a. The code's here:<br>
> git://staging.openfabrics.org/~mst/linux-2.6/.git ipoib_cm_branch<br>
> This is based on 2.6.19-rc6, so<br>
> ~>git diff v2.6.19-rc6..ipoib_cm_branch<br>
> will show what I have done so far.<br>
> Note this currently includes the patch <br>
> 073ae841d6a5098f7c6e17fc1f329350d950d1ce<br>
> which will be cleaned out when next I rebase against Linus.<br>
> <br>
> b. How to activate:<br>
> Server:<br>
> #modprobe ib_ipoib<br>
> #/sbin/ifconfig ib0 mtu 65520<br>
> #./netperf-2.4.2/src/netserver<br>
> <br>
> Client:<br>
> #modprobe ib_ipoib<br>
> #/sbin/ifconfig ib0 mtu 65520<br>
> #./netperf-2.4.2/src/netperf -H 11.4.3.68 -f M<br>
> TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET
to 11.4.3.<br>
> 68 (11.4.3.68)<br>
> port 0 AF_INET : demo<br>
> Recv Send Send<br>
> Socket Socket Message Elapsed<br>
> Size Size Size Time
Throughput<br>
> bytes bytes bytes secs.
MBytes/sec<br>
> <br>
> 87380 16384 16384 10.01
891.21<br>
</font></tt>
<br><tt><font size=2>With a MTU of 64K, why are you using such small send
and receive socket sizes and message size? Can you try setting the send
and receive socket sizes to 512K and the send message size to 128K. This
way you send 2 packets per socket write and can receive up to 8 packets
in the socket buffers. These are typical sizes I have used on other network
adapters using a MTU of 64K. </font></tt>
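For example, the client run might look like this (netperf's test-specific
-s/-S/-m options after the "--" set the local and remote socket buffer
sizes and the send message size, in bytes; note the kernel clamps socket
buffers to net.core.rmem_max/net.core.wmem_max, so those may need raising
on both ends first):

#sysctl -w net.core.rmem_max=524288
#sysctl -w net.core.wmem_max=524288
#./netperf-2.4.2/src/netperf -H 11.4.3.68 -f M -- -s 524288 -S 524288 -m 131072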
<br>
<br><tt><font size=2>> <br>
> c. TODO list<br>
> 1. Clean up stale connections<br>
> 2. Clean up ipoib_neigh (move all new fields to ipoib_cm_tx)<br>
> 3. Add IPOIB_CM config option, make it depend on EXPERIMENTAL<br>
> 4. S/G support<br>
> 5. Make CM use same CQ IPoIB uses for UD<br>
> <br>
> d. Limitations<br>
> UDP multicast and UDP connections to IPoIB UD mode<br>
> currently don't work since we get packets that are too large to<br>
> send over a UD QP.<br>
> As a work around, one can now create separate interfaces<br>
> for use with CM and UD mode.<br>
> <br>
> e. Some notes on code<br>
> 1. SRQ is used for scalability to large cluster sizes<br>
> 2. Only RC connections are used (UC does not support SRQ now)<br>
> 3. Retry count is set to 0 since spec draft warns against retries<br>
> 4. Each connection is used for data transfers in only 1 direction,<br>
> so each connection is either active(TX) or passive (RX).<br>
> 2 sides that want to communicate create 2 connections.<br>
> 5. Each active (TX) connection has a separate CQ for send completions
-<br>
> this keeps the code simple without CQ resize and other
tricks<br>
> <br>
> I'm looking at ways to limit the path mtu<br>
> for these connections, to make it work.<br>
> <br>
> -- <br>
> MST<br>
> <br>
</font></tt>
<br><font size=2 face="sans-serif"><br>
Bernie King-Smith <br>
IBM Corporation<br>
Server Group<br>
Cluster System Performance <br>
wombat2@us.ibm.com (845)433-8483<br>
Tie. 293-8483 or wombat2 on NOTES <br>
<br>
"We are not responsible for the world we are born into, only for the
world we leave when we die.<br>
So we have to accept what has gone before us and work to change the only
thing we can,<br>
-- The Future." William Shatner</font>