[openfabrics-ewg] [openib-general] Multicast traffic performance of OFED 1.0 ipoib

Moni Levy monil at voltaire.com
Thu Aug 3 02:03:33 PDT 2006


Mike,
On 8/2/06, Michael Krause <krause at cup.hp.com> wrote:
>
>
> Is the performance being measured on an identical topology and hardware set
> as before?  Multicast by its very nature is sensitive to topology, hardware
> components used (buffer depth, latency, etc.) and workload occurring within
> the fabric.  Loss occurs as a function of congestion or lack of forward
> progress resulting in a timeout and thus a toss of a packet.   If the
> hardware is different or the settings chosen are changed, then the results
> would be expected to change.
>
> It is not clear what you hope to achieve with such tests as there will be
> other workloads flowing over the fabric which will create random HOL
> blocking which can result in packet loss.  Multicast workloads should be
> tolerant of such loss.
>
> Mike

I'm sorry about not being clear. What I meant in the last sentence
was that we got the better results (120k-140k PPS) with our
proprietary IB stack, not with a previous OpenIB snapshot. The
tests were run on the same setup, which by the way was dedicated only
to that traffic. I'm aware of the network implications of the test; I
was looking for hints about improvements needed in the ipoib
implementation.

-- Moni


>
>
>
>
> At 04:30 AM 8/2/2006, Moni Levy wrote:
>
> Hi,
>     we are doing some performance testing of multicast traffic over
> ipoib. The tests are run with iperf on dual 1.6 GHz AMD PCI-X servers
> with PCI-X Tavor cards running 3.4 firmware. Below are the commands
> that may be used to run the test.
>
> Iperf server:
> route add -net 224.0.0.0 netmask 240.0.0.0 dev ib0
> /home/qa/testing-tools/iperf-2.0.2/iperf -us -B 224.4.4.4 -i 1
>
> Iperf client:
> route add -net 224.0.0.0 netmask 240.0.0.0 dev ib0
> /home/qa/testing-tools/iperf-2.0.2/iperf -uc 224.4.4.4 -i 1 -b 100M -t
> 400 -l 100
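>
> (As a rough check on the offered load, assuming iperf's -b counts UDP
> payload only: with -l 100 each datagram carries 100 bytes, so -b 100M
> amounts to about 100,000,000 / (100 * 8) = 125,000 packets per second.)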
>
> We are looking for the maximum PPS rate (100-byte packet size) without
> losses, by changing the bandwidth (-b) parameter and looking for the
> point where no losses are reported. The best result we got was around
> 50k PPS. I remember that we previously got some 120k-140k packets per
> second of the same size running without losses.
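>
> A sweep along these lines can automate the search for the zero-loss
> point. A sketch only (the bandwidth steps and the tail count are
> guesses, adjust for the local install):
>
> # step the offered rate; the last few lines of client output carry
> # the server report with lost/total datagrams
> for bw in 20M 40M 60M 80M 100M 120M; do
>     echo "=== offered rate $bw ==="
>     /home/qa/testing-tools/iperf-2.0.2/iperf -uc 224.4.4.4 -i 1 \
>         -b $bw -t 30 -l 100 | tail -n 3
> done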
>
> We are going to look into it and try to see where the time is spent,
> but any ideas are welcome.
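>
> A short oprofile run around the test is one way to see where the
> kernel time goes (a sketch, assuming oprofile is installed and a
> vmlinux with symbols is available at the path shown):
>
> opcontrol --vmlinux=/boot/vmlinux-`uname -r`   # or --no-vmlinux
> opcontrol --start
> # ... run the iperf client for a while ...
> opcontrol --stop
> opreport -l | head -n 30   # top symbols by sample count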
>
> Best regards,
> Moni
>