[ofa-general] SRP/mlx4 interrupts throttling performance
Vu Pham
vuhuong at mellanox.com
Fri Oct 3 15:57:54 PDT 2008
Cameron Harr wrote:
> Vu Pham wrote:
>>
>>> Alternatively, is there anything in the SCST layer I should tweak? I'm
>>> still running rev 245 of that code (kinda old, but works with OFED 1.3.1
>>> w/o hacks).
>>>
>>
>> What is the mode (pass thru, blockio...)?
> blockio
>> What is the scst_threads=<xx> parameter?
> Default, which I believe is #cpus
With blockio I get the best performance and stability with scst_threads=1.
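A minimal sketch of setting that, assuming scst_threads is a load-time
parameter of the scst core module in your build (verify with "modinfo
scst"); any modules that depend on scst, e.g. scst_vdisk and ib_srpt,
have to be unloaded before reloading it:

modprobe scst scst_threads=1

You can also put "options scst scst_threads=1" in /etc/modprobe.conf so
the setting survives a reload.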
>>
>>>
>>>>
>>>>
>>>> My target server (with DAS) contains eight 2.8 GHz CPU cores and can
>>>> sustain over 200K IOPS locally, but only around 73K IOPS over SRP.
>>
>> Is this number from one initiator or multiple?
> One initiator. At first I thought it might be a limitation of SRP, so I
> added a second initiator, but the aggregate performance of the two was
> about equal to that of a single initiator.
Try again with scst_threads=1. I expect that you can get ~140K IOPS with
two initiators.
>
>>
>>>> Looking at /proc/interrupts, I see that the mlx4_core (comp) device
>>>> is pushing about 135K interrupts/s on one of two CPUs. All CPUs are
>>>> enabled for that PCI-E slot, but it only ever uses two of the CPUs,
>>>> and only one at a time. None of the other CPUs has an interrupt rate
>>>> of more than about 40-50K/s.
>>>>
>>
>> The number of interrupts can be cut down if there are more completions
>> to be processed by software per event, i.e. please test with multiple
>> QPs between one initiator and your target, and with multiple initiators
>> against your target.
>>
> A couple of questions here on my side. How would more QP connections
> reduce interrupts? It seems like they'd still need to come through the
> same mlx device, causing the same number, or more, of interrupts. More
> importantly though, how would one increase the number of QPs between an
> initiator and a target? I did have my ib_srpt threads up; would that be
> comparable?
ib_srpt processes completions in its event callback handler. With more
QPs there are more completions pending per interrupt, instead of one
completion event per interrupt.
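One way to check the effect is to watch the completion interrupt rate
while you rerun the same load with one QP and then with several. A rough
sketch, assuming the completion vector shows up with "mlx" in its name in
/proc/interrupts as in your output:

watch -d -n 1 'grep -i mlx /proc/interrupts'

If completions are being batched, the IOPS number should climb faster
than the interrupt rate does.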
You can have multiple QPs between the initiator and the target by using
different initiator_ext values, i.e.
echo id_ext=xxx,ioc_guid=yyy,...,initiator_ext=1 > \
    /sys/class/infiniband_srp/.../add_target
echo id_ext=xxx,ioc_guid=yyy,...,initiator_ext=2 > \
    /sys/class/infiniband_srp/.../add_target
echo id_ext=xxx,ioc_guid=yyy,...,initiator_ext=3 > \
    /sys/class/infiniband_srp/.../add_target
...
For example, you see /dev/sda and /dev/sdb through the first
connection/QP and /dev/sdc and /dev/sdd through the second
connection/QP. Then you can do I/O to sda and sdd through different QPs.
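For instance, once both targets are logged in, you could run direct I/O
against one device from each connection in parallel (sda/sdd are just the
example names above; substitute whatever devices actually appear on your
initiator):

dd if=/dev/sda of=/dev/null bs=4k count=100000 iflag=direct &
dd if=/dev/sdd of=/dev/null bs=4k count=100000 iflag=direct &
wait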
-vu