Hi Eli,

Thanks for sharing the results with us. It is great to see the
reduction in interrupts. Could you please share the netperf test
parameters (message size and socket buffer size)? I am wondering what
the numbers would look like with large socket and message sizes (128K
and 64K, respectively). The reason for the request is to make sure we
are not hitting a TCP-related bottleneck while comparing the NAPI
vs. no-NAPI cases. Please let me know what you think.
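For example, something along these lines (just a sketch, assuming
netperf 2.x test-specific options: -s/-S set the local/remote socket
buffer sizes, -m sets the send message size, and <server> is a
placeholder for the remote host):

    netperf -H <server> -t TCP_STREAM -l 600 -- -s 131072 -S 131072 -m 65536

That would match your 600-second run while using 128K socket buffers
and a 64K message size.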
Thanks,
harish

On 9/21/06, Eli Cohen <eli@dev.mellanox.co.il> wrote:
> Hi,
>
> I have a draft implementation of NAPI in ipoib and got the following
> results:
>
> System description
> ==================
> Quad CPU EM64T 2.4 GHz
> 4 GB RAM
> MT25204 Sinai HCA
>
> I used netperf for benchmarking; the BW test ran for 600 seconds with
> 8 clients and 8 servers.
>
> The results I received are below:
>
> netperf TCP_STREAM:
>                BW [MByte/sec]  client side [irqs/sec]  server side [irqs/sec]
>                --------------  ----------------------  ----------------------
> without NAPI:       506                86441                   66311
> with NAPI:          550                 6830                   13600
>
> netperf TCP_RR:
>                rate [tran/sec]
>                ---------------
> without NAPI:      39600
> with NAPI:         39470
>
> Please note this is still a work in progress; we plan to run more
> tests and take measurements on other devices.