[Openib-windows] Windows DMA model
Leonid Keller
leonid at mellanox.co.il
Tue Jan 24 06:28:13 PST 2006
> -----Original Message-----
> From: Jan Bottorff [mailto:jbottorff at xsigo.com]
> Sent: Tuesday, January 24, 2006 3:45 AM
> To: Fab Tillier; Leonid Keller
> Cc: openib-windows at openib.org
> Subject: RE: [Openib-windows] Windows DMA model
>
> Hi,
>
> My understanding is MmMapLockedPagesSpecifyCache was defined
> because there was no way to assure the match between
> underlying cache properties and what was set in the page
> table entries when creating a new mapping.
> Someplace (a MS KB article?) this is explained in detail.
>
> If you look in the DDK under MEMORY_CACHING_TYPE for the 3rd
> parameter to MmMapLockedPagesSpecifyCache, it says "Processor
> translation buffers cache virtual to physical address
> translations. These translation buffers allow many virtual
> addresses to map a single physical address.
> However, only one caching behavior is allowed for any given
> physical address translation. Therefore, if a driver maps two
> different virtual address ranges to the same physical
> address, it must ensure that it specifies the same caching
> behavior for both. Otherwise, the processor behavior is
> undefined with unpredictable system results."
>
> This certainly sounds like if you have memory with a virtual
> address, you can't call MmMapLockedPagesSpecifyCache to
> create an alternative mapping with a different cache
> behavior. Pretty much all the allocation calls return a
> virtual address to describe the allocation, so any call to
> MmMapLockedPagesSpecifyCache needs to use a matching cache behavior.
> The end result is you can't just arbitrarily turn some memory
> into cached or uncached memory. The Intel processor manual
> also warns about mismatched mappings being invalid.
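The rule quoted above boils down to a simple invariant: every mapping of a given physical page must carry the caching attribute of the page's first mapping. A toy user-mode model of that invariant (this is illustrative C, not the Windows API):

```c
#include <stdbool.h>
#include <stddef.h>

/* Toy model: each physical page records the caching attribute of its
 * first mapping; any later mapping must match it, mirroring the rule
 * in the DDK text quoted above. */
enum cache_type { CACHE_UNSET = 0, CACHE_CACHED, CACHE_NONCACHED };

#define NPAGES 16
static enum cache_type page_attr[NPAGES];

/* Returns true if a new mapping of page 'pfn' with attribute 'want'
 * is legal; the first mapping establishes the attribute. */
static bool map_page(size_t pfn, enum cache_type want)
{
    if (page_attr[pfn] == CACHE_UNSET) {
        page_attr[pfn] = want;      /* first mapping sets the attribute */
        return true;
    }
    return page_attr[pfn] == want;  /* later mappings must match */
}
```

In this model, once a buffer has been handed out with a cached mapping (as malloc-style allocations are), a second non-cached mapping of the same pages is exactly the mismatch the DDK and the Intel manual warn against.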
I was afraid of exactly that case; that is why I suggested an API that merely asks for buffer non-cacheability but is prepared to work with cacheable buffers.
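A rough sketch of that proposal (all names are invented for illustration; this is not the actual API): the allocator requests non-cached memory but reports to the caller whether the request was honored, so the caller can fall back to explicit cache maintenance around DMA.

```c
#include <stdbool.h>
#include <stdlib.h>

/* Hypothetical illustration, not the Windows API: the caller ASKS for a
 * non-cached buffer but must cope if it receives a cached one, since an
 * existing virtual mapping already fixes the caching attributes of the
 * underlying pages. */
typedef struct {
    void *addr;
    bool  cached;  /* true: the non-cached request was not honored */
} dma_buf;

/* Stand-in for the platform's ability to hand out non-cached pages. */
static bool platform_supports_uncached = false;

static dma_buf alloc_dma_buf(size_t len)
{
    dma_buf b;
    b.addr = malloc(len);
    /* Fall back to cached memory when non-cached is unavailable; the
     * caller then has to flush/invalidate around DMA explicitly. */
    b.cached = !platform_supports_uncached;
    return b;
}
```

A caller would check the `cached` flag after allocation and choose between the fast non-cached path and the flush-based path accordingly.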
>
> - Jan
>
>
> > -----Original Message-----
> > From: Fab Tillier [mailto:ftillier at silverstorm.com]
> > Sent: Monday, January 23, 2006 3:57 PM
> > To: Jan Bottorff; Leonid Keller
> > Cc: openib-windows at openib.org
> > Subject: RE: [Openib-windows] Windows DMA model
> >
> > > From: Jan Bottorff [mailto:jbottorff at xsigo.com]
> > > Sent: Monday, January 23, 2006 3:48 PM
> > >
> > > > In my understanding, a buffer allocated by malloc is cacheable,
> > > > and we want to MAKE it uncacheable both for CPU and DMA. I
> > > > guess that in your understanding a Common Buffer is already
> > > > such a one (i.e. - twice uncacheable).
> > >
> > > It may be impossible to do this. I believe at boot time memory is
> > > chopped up into regions and the processor MTRR registers are
> > > programmed with the memory properties (like caching or not) for
> > > each region. Most of memory is in a region that is fully
> > > cacheable/prefetchable. Memory reserved for common buffers MAY be
> > > in a different region, although exactly what properties are set
> > > will depend on things like the I/O bridge architecture.
> > >
> > > There also are bits in the page tables that indicate caching, and
> > > these MUST match the properties of the underlying memory
> > > (allocated from some MTRR-defined region). If the MTRR and page
> > > tables don't have matching attributes, I believe the Intel
> > > processor manual describes processor behavior as undefined.
> >
> > Calls like MmMapLockedPagesSpecifyCache let you specify cache
> > behavior, so I would expect the OS to be able to handle runtime
> > cache behavior specification.
> >
> > > It's not just a matter of dreaming up any API that we want, it's
> > > a matter of defining APIs which are implementable on current,
> > > recent past, and future processors based on underlying hardware
> > > constraints.
> >
> > I totally agree. From my queries into Microsoft, they are aware of
> > the deficiencies of the current DMA APIs for RDMA and similar
> > hardware, and are working on a solution. I don't know more than
> > that, however.
> >
> > - Fab
> >
> >
> >
>