[ofa-general] SRP/mlx4 interrupts throttling performance

Cameron Harr cameron at harr.org
Wed Oct 8 15:30:33 PDT 2008


Cameron Harr wrote:
>>>
>>> Also a little disconcerting is that my average request size on the 
>>> target has gotten larger. I'm always writing 512B packets, and when 
>>> I run on one initiator, the average reqsz is around 600-800B. When I 
>>> add an initiator, the average reqsz basically doubles and is now 
>>> around 1200 - 1600B. I'm specifying direct IO in the test and scst 
>>> is configured as blockio (and thus direct IO), but it appears 
>>> requests are being buffered and coalesced somewhere along the path when 
>>> another initiator is involved. Does this seem odd or normal? This 
>>> holds true whether the initiators are writing to different 
>>> partitions on the same LUN or to the same LUN with no partitions. 
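
(For context: each test write is a single 512B buffer issued with O_DIRECT, 
roughly along the lines of the sketch below; the device path is only a 
placeholder for the SRP-attached LUN.)

#!/usr/bin/env python
# Sketch of one test write: a single 512B buffer submitted with O_DIRECT,
# so the initiator's page cache can't be what grows the requests.
# "/dev/sdb" is a placeholder for the SRP-attached LUN.
import mmap, os

DEV = "/dev/sdb"                 # assumed initiator-side device
buf = mmap.mmap(-1, 512)         # page-aligned, as O_DIRECT requires
buf.write(b"\xa5" * 512)         # arbitrary payload pattern

fd = os.open(DEV, os.O_WRONLY | os.O_DIRECT)
try:
    os.write(fd, buf)            # exactly one 512B request goes down the stack
finally:
    os.close(fd)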

I've been doing some testing to determine why my average request size is 
bloated beyond the 512B packets I'm sending. It appears to be caused by 
heavy utilization of the middleware: SRPT or SCST. As I add processes on an 
initiator, the average request size goes up, and it jumps sharply once I 
have more than two processes (running on one or two initiators) or when I'm 
writing to the same target LUN. My hunch is that the average request size, 
calculated over a 1s interval, is skewed because some requests have to wait 
on either the IB layer or the SCST layer.
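
One way to watch that number directly on the target is to diff 
/proc/diskstats once a second; a rough sketch is below (the device name 
"sdb" is only a placeholder for whatever backs the SCST LUN). If nothing 
were being merged below the submitter, it would sit at 512B.

#!/usr/bin/env python
# Rough sketch: diff /proc/diskstats once a second and print the average
# write request size.  "sdb" is a placeholder for whatever device backs
# the SCST LUN on the target.
import time

DEV = "sdb"
SECTOR = 512          # /proc/diskstats counts 512-byte sectors

def writes_and_sectors(dev):
    with open("/proc/diskstats") as f:
        for line in f:
            fields = line.split()
            if fields[2] == dev:
                # fields[7] = writes completed, fields[9] = sectors written
                return int(fields[7]), int(fields[9])
    raise ValueError("device %s not in /proc/diskstats" % dev)

prev_w, prev_s = writes_and_sectors(DEV)
while True:
    time.sleep(1)
    w, s = writes_and_sectors(DEV)
    dw, ds = w - prev_w, s - prev_s
    prev_w, prev_s = w, s
    if dw:
        print("avg write request size: %.0f B" % (ds * SECTOR / float(dw)))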

Thinking that the srpt_thread might be a cause, I turned off threading 
there, but that made the request sizing much more erratic: it never dropped 
to 512B and grew to as much as 4KB. Using the deadline scheduler instead of 
the default cfq scheduler didn't seem to make a difference.
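
For anyone who wants to repeat the scheduler comparison: the elevator can be 
flipped per device through sysfs without rebooting, and kernels that expose 
the nomerges knob can also turn off block-layer merging, which should show 
whether the elevator is what's coalescing the 512B writes. A rough sketch, 
with "sdb" again standing in for the backing device:

#!/usr/bin/env python
# Rough sketch: switch the I/O scheduler for one device through sysfs and,
# on kernels that expose it, turn off block-layer merging via nomerges.
# "sdb" is again a placeholder.
import os

QUEUE = "/sys/block/sdb/queue"

def show_and_set(attr, value):
    path = os.path.join(QUEUE, attr)
    if not os.path.exists(path):
        print("%s not available on this kernel" % path)
        return
    with open(path) as f:
        print("%s was: %s" % (attr, f.read().strip()))
    with open(path, "w") as f:
        f.write(value)

show_and_set("scheduler", "deadline")   # the active one shows in [brackets]
show_and_set("nomerges", "1")           # ask the block layer not to merge requests

If the average request size stays above 512B even with merging disabled, 
that would point at coalescing above the block layer (SCST/SRPT) rather 
than in the elevator.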

Cameron


