[openib-general] Re: netperf for RDS needed
Leonid Arsh
leonida at voltaire.com
Sun Apr 30 05:47:15 PDT 2006
Ranjit,
We run it on dual-CPU Intel(R) Xeon(TM) 3.00GHz x86_64 machines,
with hyper-threading enabled (two hyper-threads on every CPU).
We run IBED-1.0-rc3 on both machines.
One machine runs SUSE Linux Enterprise Server 10 Beta8,
kernel 2.6.16-rc6-git1-4-smp;
The second one runs Red Hat Enterprise Linux AS release 4 (Nahant Update 1)
with kernel 2.6.15 from kernel.org.
We get approximately the same dmesg output on the server side, regardless
of which machine is the server.
The test without RDS was done over IPoIB (the same run line, but without
the '-r' command-line switch).
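For reference, the two run lines had roughly this shape. This is a hedged sketch, not the verbatim command: the '-r' switch comes from the RDS-enabled netperf patch discussed in this thread, the host address is a placeholder, and the 8192-byte message size matches the tables below.

```shell
# RDS run: the patched netperf's '-r' switch selects RDS transport
# (host address is a placeholder; '-m 8192' sets the message size).
netperf -H 192.168.0.2 -r -- -m 8192

# IPoIB baseline: identical run line, minus '-r'.
netperf -H 192.168.0.2 -- -m 8192
```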
Regards,
Leonid
Ranjit Pandit wrote:
> On 4/27/06, Leonid Arsh <leonida at voltaire.com> wrote:
>
>> Ranjit
>> Thank you for the patch again. I applied it and it ran successfully.
>> Looks very nice.
>>
>> These are the results for RDS:
>> Socket  Message  Elapsed  Messages
>> Size    Size     Time     Okay     Errors  Throughput
>> bytes   bytes    secs     #        #       10^6bits/sec
>> 262144  8192     10.01    653574   1       4280.59
>> 118784           10.01    653574           4280.59
>>
>> These are the results without RDS:
>> Socket  Message  Elapsed  Messages
>> Size    Size     Time     Okay     Errors  Throughput
>> bytes   bytes    secs     #        #       10^6bits/sec
>> 262144  8192     10.00    356180   0       2333.90
>> 118784           10.00    211005           1382.63
>>
>>
>
> What kind of systems are you running on, cpu and memory?
>
> Are the results without-RDS on IPoIB?
>
> The second line of the output is more interesting, as it shows the
> "useful" bandwidth (as seen by the receiver) and therefore accounts for
> any lost/dropped packets.
>
> RDS shows a ~3x improvement in receiver-side bandwidth (4280.59 vs. 1382.63).
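As a sanity check on those figures, the throughput column can be recomputed from the message count, message size, and elapsed time in the tables above. This is a minimal sketch; netperf's internal timing is more precise than the printed two-decimal elapsed time, so the results are approximate rather than exact matches.

```python
# Recompute netperf throughput from the reported table columns:
# throughput (10^6 bits/sec) = messages * message_size_bytes * 8 / secs / 1e6

def throughput_mbps(messages, msg_size_bytes, elapsed_secs):
    return messages * msg_size_bytes * 8 / elapsed_secs / 1e6

# Receiver-side lines from the tables above (8192-byte messages).
rds = throughput_mbps(653574, 8192, 10.01)    # ~4279, vs. reported 4280.59
ipoib = throughput_mbps(211005, 8192, 10.00)  # ~1383, vs. reported 1382.63

print(f"RDS:   {rds:.2f} Mbit/s")
print(f"IPoIB: {ipoib:.2f} Mbit/s")
print(f"ratio: {rds / ipoib:.1f}x")
```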
>
>
>> During the run we get error messages in dmesg on the server side.
>> Have you seen anything like this?
>> Please see the dmesg output below:
>>
>
> What kernel are you on?
> 32bit or 64bit system?
>
> I will see if I can reproduce it.
>
>
>>
>> swapper: page allocation failure. order:1, mode:0x20
>>
>> Call Trace: <IRQ> <ffffffff801572ae>{__alloc_pages+662}
>> <ffffffff801184c7>{smp_apic_timer_interrupt+54}
>> <ffffffff8010e63c>{apic_timer_interrupt+132}
>> <ffffffff8015a0ff>{cache_grow+288}
>> <ffffffff8015a4ef>{cache_alloc_refill+419}
>> <ffffffff80159fb2>{kmem_cache_alloc+87}
>> <ffffffff8824c01c>{:ib_rds:rds_alloc_buf+16}
>> <ffffffff8824c0f1>{:ib_rds:rds_alloc_recv_buffer+12}
>> <ffffffff8824b377>{:ib_rds:rds_post_new_recv+23}
>> <ffffffff8824bfc3>{:ib_rds:rds_recv_completion+85}
>> <ffffffff88249b6f>{:ib_rds:rds_cq_callback+87}
>> <ffffffff8814882b>{:ib_mthca:mthca_eq_int+119}
>> <ffffffff801102d8>{do_IRQ+50} <ffffffff8010de1e>{ret_from_intr+0}
>> <ffffffff88148b45>{:ib_mthca:mthca_tavor_interrupt+91}
>> <ffffffff80151bd5>{handle_IRQ_event+41}
>> <ffffffff80151ca2>{__do_IRQ+156}
>> <ffffffff801102d3>{do_IRQ+45} <ffffffff8010de1e>{ret_from_intr+0}
>> <EOI> <ffffffff8010be87>{mwait_idle+54}
>> <ffffffff8010be37>{cpu_idle+93}
>> <ffffffff8052733d>{start_secondary+1131}
>>