[openib-general] [PATCH 1/2] mthca support for max_map_per_fmr device attribute

Talpey, Thomas Thomas.Talpey at netapp.com
Tue May 23 04:30:33 PDT 2006


Doesn't this change only *increase* the window of vulnerability
from which FMRs suffer? I.e. when you say "dirty", you mean "still
mapped", right?

Tom.

At 07:11 AM 5/23/2006, Or Gerlitz wrote:
>Or Gerlitz wrote:
>> The max fmr remaps device attribute is not set by the driver, so the generic
>> fmr_pool uses a default of 32. Enlarging this quantity would lower the
>> amortized cost of remaps. With the current mthca "default profile" on a
>> mem-full HCA, 17 bits are used for MPT addressing, so an FMR can be
>> remapped 2^15 - 1 >> 32 times.
>
>Actually, the bigger problem (beyond the unmap amortized cost) I was
>facing with the remap count being very low is the following: say my app
>publishes N credits and serving each credit consumes one FMR, so my app
>implementation created the pool with 2N FMRs and set the watermark to N.
>
>When "requests" come fast enough, there's a window in time when an
>unmapping of N FMRs is running as a batch, but out of the remaining N
>FMRs some are already dirty and can't be used to serve a credit. So the
>app fails temporarily... Setting the watermark to 0.5N might solve this,
>but since enlarging the number of remaps is trivial, I'd like to do that
>first.
>
>The app I am talking about is a SCSI LLD (e.g. iSER, SRP), where each
>SCSI command consumes one FMR and the LLD posts to the SCSI ML how many
>commands can be issued in parallel.
>
>Or.
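The pool-sizing arithmetic above can be sketched as follows (a minimal illustration in Python; the function and its parameters are made up for exposition and are not the kernel `ib_fmr_pool` API):

```python
def clean_fmrs_during_flush(pool_size, flush_batch, newly_dirty):
    """Clean (immediately usable) FMRs left while `flush_batch` FMRs
    are being unmapped as a batch and `newly_dirty` of the remaining
    FMRs have already been consumed by new requests.

    Illustrative model only -- not the kernel FMR pool API.
    """
    return pool_size - flush_batch - newly_dirty

N = 16  # credits published by the app; one FMR per credit

# Watermark N: while the batch of N dirty FMRs is being unmapped,
# N in-flight requests can exhaust the other half of the pool.
print(clean_fmrs_during_flush(pool_size=2 * N, flush_batch=N, newly_dirty=N))       # 0

# Watermark 0.5N: the smaller flush batch leaves headroom even with
# N FMRs consumed by outstanding credits.
print(clean_fmrs_during_flush(pool_size=2 * N, flush_batch=N // 2, newly_dirty=N))  # 8
```

Under this model, a smaller dirty watermark shrinks the flush batch and so keeps some clean FMRs available during the batched unmap, while raising the remap limit attacks the same problem from the other side by making flushes less frequent.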
>
>_______________________________________________
>openib-general mailing list
>openib-general at openib.org
>http://openib.org/mailman/listinfo/openib-general
