[ofa-general] SRP/mlx4 interrupts throttling performance

Vladislav Bolkhovitin vst at vlnb.net
Fri Oct 24 11:16:14 PDT 2008


Cameron Harr wrote:
> Cameron Harr wrote:
>>
>> Vladislav Bolkhovitin wrote:
>>> Cameron Harr wrote:
>>>> Vladislav Bolkhovitin wrote:
>>>>> Cameron Harr wrote:
>>>>>> Vladislav Bolkhovitin wrote:
>>>>>>> I guess you use regular caching I/O? The smallest request size 
>>>>>>> it can produce is PAGE_SIZE (4K); the target can't change that. 
>>>>>>> You can get smaller requests only with O_DIRECT or the sg 
>>>>>>> interface, but I'm not sure it would be effective performance-wise.
>>>>>> I do everything with Direct IO, which is automatic when using the 
>>>>>> BLOCKIO method in SCST.
>>>>> I meant on initiator(s), not on the target.
>>>>>
>>>> Sorry - but yes, I always run the benchmark apps with direct I/O.
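
Just to make the terms concrete: "direct I/O on the initiator" means the 
benchmark opens the device with O_DIRECT, so requests smaller than 
PAGE_SIZE can reach the target at their original size. A minimal sketch 
with dd (device name and count are placeholders, and it assumes a 
512-byte logical block size):

    # 512-byte writes that bypass the page cache (O_DIRECT), so they
    # are not rounded up to PAGE_SIZE by writeback
    dd if=/dev/zero of=/dev/sdX bs=512 count=100000 oflag=direct

Without oflag=direct the writes go through the page cache and reach the 
target in 4K units at minimum.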
>>> Then there's one more reason why we should find out the cause of 
>>> such a big variation between runs. Can you repeat all the tests with 
>>> the latest SCST SVN trunk/, including the SRPT driver, with each run 
>>> lasting at least a few minutes?
>> From a little testing, the updated SCST tree doesn't work with the 
>> OFED-1.3.1 SRP stack, though I have gotten it working with the 
>> InfiniBand drivers in the normal distribution kernel. Shall I use 
>> those modules?
> 
> Ok, I've done some testing with elevator=noop, with scst_threads set 
> to 1, 2 and 3 and srpt thread set to 0 and 1. I ran random writes with 
> both 4k and 512B blocks, 60s per test. Unfortunately, I can't seem to 
> reproduce the numbers I had before - I believe the reporting mechanism 
> I used earlier (a script that uses /proc/diskstats) gave me invalid 
> results. This time I have calculated IOPS straight from the FIO 
> results. One interesting note is that in almost every case srpt 
> thread=1 gives better performance.

Strange, indeed.

Do you use the latest SVN trunk?

Did you use the real drives or NULLIO?

What is your FIO script?
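
For comparison, a 512B random write run like you describe could look 
roughly like this (the device name, I/O engine and queue depth here are 
only guesses, not necessarily what you used):

    # 60 seconds of 512-byte random writes, bypassing the page cache
    fio --name=randwrite --filename=/dev/sdX --rw=randwrite --bs=512 \
        --direct=1 --ioengine=libaio --iodepth=32 --runtime=60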

How do you calculate the IOPS rate?
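
If you take them from fio's end-of-run summary, that should be reliable. 
To cross-check against the block layer, a sketch like this could be run 
during a test (device name and interval are placeholders; field 8 of a 
whole-device line in /proc/diskstats is "writes completed"):

    DEV=sdX
    INTERVAL=60
    # sample completed writes before and after the interval and
    # compute the average write IOPS over it
    w1=$(awk -v d=$DEV '$3 == d { print $8 }' /proc/diskstats)
    sleep $INTERVAL
    w2=$(awk -v d=$DEV '$3 == d { print $8 }' /proc/diskstats)
    echo "write IOPS: $(( (w2 - w1) / INTERVAL ))"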

It would be interesting to see "vmstat 1" and "top d1" output during the 
runs. Top should show stats for all CPUs, not only the aggregate value.
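
Something along these lines would capture them for a 60s run (mpstat is 
from the sysstat package; in batch mode top normally prints only the 
summary CPU line, so mpstat -P ALL is an easy way to get the per-CPU 
breakdown into a log):

    # collect memory/IO, per-CPU and process stats once per second
    vmstat 1 60 > vmstat.log &
    mpstat -P ALL 1 60 > mpstat.log &
    top -b -d 1 -n 60 > top.log &
    wait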

> type=randwrite  bs=4k   drives=1 scst_threads=1 srptthread=0 iops=51134.20
> type=randwrite  bs=4k   drives=1 scst_threads=1 srptthread=1 iops=63461.86
> type=randwrite  bs=4k   drives=1 scst_threads=2 srptthread=0 iops=52383.10
> type=randwrite  bs=4k   drives=1 scst_threads=2 srptthread=1 iops=54065.52
> type=randwrite  bs=4k   drives=1 scst_threads=3 srptthread=0 iops=48827.27
> type=randwrite  bs=4k   drives=1 scst_threads=3 srptthread=1 iops=52703.82
> type=randwrite  bs=4k   drives=2 scst_threads=1 srptthread=0 iops=64619.11
> type=randwrite  bs=4k   drives=2 scst_threads=1 srptthread=1 iops=62605.09
> type=randwrite  bs=4k   drives=2 scst_threads=2 srptthread=0 iops=67961.56
> type=randwrite  bs=4k   drives=2 scst_threads=2 srptthread=1 iops=78884.72
> type=randwrite  bs=4k   drives=2 scst_threads=3 srptthread=0 iops=70340.04
> type=randwrite  bs=4k   drives=2 scst_threads=3 srptthread=1 iops=76253.60
> type=randwrite  bs=4k   drives=3 scst_threads=1 srptthread=0 iops=53777.02
> type=randwrite  bs=4k   drives=3 scst_threads=1 srptthread=1 iops=64661.21
> type=randwrite  bs=4k   drives=3 scst_threads=2 srptthread=0 iops=91073.05
> type=randwrite  bs=4k   drives=3 scst_threads=2 srptthread=1 iops=90127.98
> type=randwrite  bs=4k   drives=3 scst_threads=3 srptthread=0 iops=92012.13
> type=randwrite  bs=4k   drives=3 scst_threads=3 srptthread=1 iops=96848.61
> type=randwrite  bs=512  drives=1 scst_threads=1 srptthread=0 iops=55040.20
> type=randwrite  bs=512  drives=1 scst_threads=1 srptthread=1 iops=62057.33
> type=randwrite  bs=512  drives=1 scst_threads=2 srptthread=0 iops=60237.05
> type=randwrite  bs=512  drives=1 scst_threads=2 srptthread=1 iops=63465.54
> type=randwrite  bs=512  drives=1 scst_threads=3 srptthread=0 iops=58716.01
> type=randwrite  bs=512  drives=1 scst_threads=3 srptthread=1 iops=60089.11
> type=randwrite  bs=512  drives=2 scst_threads=1 srptthread=0 iops=64978.41
> type=randwrite  bs=512  drives=2 scst_threads=1 srptthread=1 iops=64018.47
> type=randwrite  bs=512  drives=2 scst_threads=2 srptthread=0 iops=78128.56
> type=randwrite  bs=512  drives=2 scst_threads=2 srptthread=1 iops=94561.47
> type=randwrite  bs=512  drives=2 scst_threads=3 srptthread=0 iops=82526.52
> type=randwrite  bs=512  drives=2 scst_threads=3 srptthread=1 iops=105874.51
> type=randwrite  bs=512  drives=3 scst_threads=1 srptthread=0 iops=56730.70
> type=randwrite  bs=512  drives=3 scst_threads=1 srptthread=1 iops=62147.04
> type=randwrite  bs=512  drives=3 scst_threads=2 srptthread=0 iops=87507.15
> type=randwrite  bs=512  drives=3 scst_threads=2 srptthread=1 iops=95781.40
> type=randwrite  bs=512  drives=3 scst_threads=3 srptthread=0 iops=91645.99
> type=randwrite  bs=512  drives=3 scst_threads=3 srptthread=1 iops=114164.39