[ofa-general] Re: Demand paging for memory regions

Steve Wise swise at opengridcomputing.com
Tue Feb 12 17:45:47 PST 2008


Jason Gunthorpe wrote:
> On Tue, Feb 12, 2008 at 05:01:17PM -0800, Christoph Lameter wrote:
>> On Tue, 12 Feb 2008, Jason Gunthorpe wrote:
>>
>>> Well, certainly today the memfree IB devices store the page tables in
>>> host memory so they are already designed to hang onto packets during
>>> the page lookup over PCIE, adding in faulting makes this time
>>> larger.
>> You really do not need a page table to use it. What needs to be maintained 
>> is knowledge on both side about what pages are currently shared across 
>> RDMA. If the VM decides to reclaim a page then the notification is used to 
>> remove the remote entry. If the remote side then tries to access the page 
>> again then the page fault on the remote side will stall until the local 
>> page has been brought back. RDMA can proceed after both sides again agree 
>> on that page now being sharable.
> 
> The problem is that the existing wire protocols do not have a
> provision for doing an 'are you ready' or 'I am not ready' exchange
> and they are not designed to store page tables on both sides as you
> propose. The remote side can send RDMA WRITE traffic at any time after
> the RDMA region is established. The local side must be able to handle
> it. There is no way to signal that a page is not ready and the remote
> should not send.
> 
> This means the only possible implementation is to stall/discard at the
> local adaptor when an RDMA WRITE is received for a page that has been
> reclaimed. This is what leads to deadlock/poor performance..
>

If these events are few and far between, then this model is probably ok. 
For iWARP, it means TCP retransmit and slow start and all that, but if 
it's an infrequent event, then it's ok if it helps the host better 
manage memory.

Maybe... ;-)
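To make the trade-off concrete, here's a toy model (purely illustrative, nothing below is real verbs or driver code, and all names are made up) of the only option Jason says existing wire protocols allow: the local adaptor drops an incoming RDMA WRITE that hits a reclaimed page, forcing the remote side to retransmit until the local fault handler has brought the page back:

```python
# Toy simulation of drop-and-retransmit for RDMA WRITE against a
# reclaimed page. Illustrative only; not a real RDMA/verbs API.

class LocalSide:
    def __init__(self, pages):
        # True = page resident and registered, False = reclaimed by the VM
        self.resident = {p: True for p in pages}

    def reclaim(self, page):
        self.resident[page] = False   # VM took the page back

    def fault_in(self, page):
        self.resident[page] = True    # page brought back into memory

    def rdma_write(self, page, data, store):
        # Existing protocols: the write just arrives unannounced; if the
        # page is gone, the adaptor can only drop it (which on iWARP
        # means TCP retransmit and slow start) or stall.
        if not self.resident[page]:
            return "dropped"
        store[page] = data
        return "ok"

def remote_write_with_retry(local, page, data, store, max_tries=3):
    """Remote peer retransmits until the local side has the page again."""
    for attempt in range(1, max_tries + 1):
        if local.rdma_write(page, data, store) == "ok":
            return attempt
        local.fault_in(page)          # local fault handler restores page
    raise RuntimeError("write never completed")

mem = {}
local = LocalSide(pages=[0, 1])
local.reclaim(1)                      # page 1 reclaimed before the writes
first = remote_write_with_retry(local, 0, b"hello", mem)   # resident page
second = remote_write_with_retry(local, 1, b"world", mem)  # reclaimed page
print(first, second, mem)
```

The point of the sketch is the cost asymmetry: a write to a resident page completes on the first try, while a write to a reclaimed page eats at least one full retransmit round trip, which is why this is only tolerable if reclaim of RDMA-registered pages is rare.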


Steve.
