[Scst-devel] [ofa-general] SRP/mlx4 interrupts throttling performance

Vladislav Bolkhovitin vst at vlnb.net
Tue Jan 13 03:58:27 PST 2009


Cameron Harr, on 01/13/2009 02:56 AM wrote:
> Vladislav Bolkhovitin wrote:
>>>> I think srptthread=0 performs worse in this case because, with it,
>>>> part of the processing is done in SIRQ context, and the scheduler
>>>> seems to put that work on the same CPU as fct0-worker, which does
>>>> the job of transferring data to your SSD device. That thread already
>>>> consumes about 100% of a CPU, so it gets less CPU time, hence lower
>>>> overall performance.
>>>>
>>>> So, try to pin fctX-worker, the SCST threads, and SIRQ processing to
>>>> different CPUs and check again. You can set thread affinity using the
>>>> utility from
>>>> http://www.kernel.org/pub/linux/kernel/people/rml/cpu-affinity/; for
>>>> how to set IRQ affinity, see Documentation/IRQ-affinity.txt in your
>>>> kernel tree.
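For illustration, pinning a thread to one CPU from inside a program could look like the minimal sketch below; CPU 4 is an arbitrary example, and sched_setaffinity() is just the generic Linux API, not an SCST interface. For IRQs, the equivalent is writing a hex CPU mask to /proc/irq/<N>/smp_affinity, as Documentation/IRQ-affinity.txt describes.

/* pin_self.c - restrict the calling thread to a single CPU.
 * A minimal sketch: CPU 4 is an arbitrary example, and this is the
 * generic Linux sched_setaffinity() API, not an SCST interface. */
#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    cpu_set_t set;

    CPU_ZERO(&set);
    CPU_SET(4, &set);           /* allow this thread on CPU 4 only */

    /* pid 0 means "the calling thread" */
    if (sched_setaffinity(0, sizeof(set), &set) != 0) {
        perror("sched_setaffinity");
        exit(EXIT_FAILURE);
    }

    printf("now running on CPU 4 only\n");
    return 0;
}

For an already-running process, the taskset utility from the URL above does the same from the command line.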
> 
> I ran with the two fct-worker threads pinned to CPUs 7-8, the scsi_tgt 
> threads pinned to CPUs 4, 5, or 6, and irqbalance pinned to CPUs 1-3. I 
> wasn't sure whether I should play with the 8 ksoftirqd processes, since 
> there is one process per CPU. From these results, I don't see a big 
> difference, 

Hmm, earlier you sent me the following results:

type=randwrite  bs=4k   drives=1 scst_threads=1 srptthread=1 iops=54934.31
type=randwrite  bs=4k   drives=1 scst_threads=1 srptthread=0 iops=50199.90
type=randwrite  bs=4k   drives=1 scst_threads=2 srptthread=1 iops=51510.68
type=randwrite  bs=4k   drives=1 scst_threads=2 srptthread=0 iops=49951.89
type=randwrite  bs=4k   drives=1 scst_threads=3 srptthread=1 iops=51924.17
type=randwrite  bs=4k   drives=1 scst_threads=3 srptthread=0 iops=49874.57
type=randwrite  bs=4k   drives=2 scst_threads=1 srptthread=1 iops=79680.42
type=randwrite  bs=4k   drives=2 scst_threads=1 srptthread=0 iops=74504.65
type=randwrite  bs=4k   drives=2 scst_threads=2 srptthread=1 iops=78558.77
type=randwrite  bs=4k   drives=2 scst_threads=2 srptthread=0 iops=75224.25
type=randwrite  bs=4k   drives=2 scst_threads=3 srptthread=1 iops=75411.52
type=randwrite  bs=4k   drives=2 scst_threads=3 srptthread=0 iops=73238.46

I see quite a big improvement. For instance, in the drives=1 
scst_threads=1 srptthread=1 case it is 36% (74990.87 vs. 54934.31 IOPS). 
Or do you use different hardware, so those results can't be compared?

> but they would still give srptthread=1 a slight performance advantage.

At this level, CPU caches start to play an essential role. To get the 
maximum performance, all processing of a given command should use the 
same L2+ cache(s), i.e. be done on the same physical CPU, but on 
different cores. Most likely, the affinity you assigned was worse than 
the scheduler's decisions. What's your CPU configuration? Please send me 
the top/vmstat output from the target during the tests, as well as the 
target's dmesg from just after boot.
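
If it helps, the following minimal sketch prints which physical package and core each logical CPU belongs to, so you can pick cores that share a cache; it assumes only the standard Linux sysfs topology files, nothing SCST-specific.

/* topo.c - print which physical package and core each logical CPU
 * belongs to. A minimal sketch assuming only the standard Linux
 * sysfs topology files; nothing here is SCST-specific. */
#include <stdio.h>

int main(void)
{
    char path[128];
    int cpu, pkg, core;
    FILE *f;

    for (cpu = 0; ; cpu++) {
        snprintf(path, sizeof(path),
                 "/sys/devices/system/cpu/cpu%d/topology/physical_package_id",
                 cpu);
        f = fopen(path, "r");
        if (!f)
            break;                      /* no more CPUs */
        if (fscanf(f, "%d", &pkg) != 1)
            pkg = -1;
        fclose(f);

        snprintf(path, sizeof(path),
                 "/sys/devices/system/cpu/cpu%d/topology/core_id", cpu);
        f = fopen(path, "r");
        if (!f)
            break;
        if (fscanf(f, "%d", &core) != 1)
            core = -1;
        fclose(f);

        printf("cpu%d: package %d, core %d\n", cpu, pkg, core);
    }
    return 0;
}

CPUs that report the same physical_package_id but different core_id values are the "same physical CPU, different cores" case meant above.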

> type=randwrite  bs=4k   drives=1 scst_threads=1 srptthread=1 iops=74990.87
> type=randwrite  bs=4k   drives=2 scst_threads=1 srptthread=1 iops=84005.58
> type=randwrite  bs=4k   drives=1 scst_threads=2 srptthread=1 iops=72369.04
> type=randwrite  bs=4k   drives=2 scst_threads=2 srptthread=1 iops=91147.19
> type=randwrite  bs=4k   drives=1 scst_threads=3 srptthread=1 iops=70463.27
> type=randwrite  bs=4k   drives=2 scst_threads=3 srptthread=1 iops=91755.24
> type=randwrite  bs=4k   drives=1 scst_threads=1 srptthread=0 iops=68000.68
> type=randwrite  bs=4k   drives=2 scst_threads=1 srptthread=0 iops=87982.08
> type=randwrite  bs=4k   drives=1 scst_threads=2 srptthread=0 iops=73380.33
> type=randwrite  bs=4k   drives=2 scst_threads=2 srptthread=0 iops=87223.54
> type=randwrite  bs=4k   drives=1 scst_threads=3 srptthread=0 iops=70918.08
> type=randwrite  bs=4k   drives=2 scst_threads=3 srptthread=0 iops=88843.35