[ofa-general] Re: iommu dma mapping alignment requirements
Benjamin Herrenschmidt
benh at au1.ibm.com
Thu Dec 20 15:49:45 PST 2007
> Sounds good. Thanks!
>
> Note that these smaller, sub-host-page-sized mappings might pollute the
> address space, causing fully aligned host-page-size maps to become
> scarce... Maybe there's a clever way to keep those in their own segment
> of the address space?
We already have a large vs. small split in the iommu virtual space to
alleviate this (though it's not a hard constraint; we can still get
into the "other" side if the default one is full).
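For illustration only, here is a standalone sketch of that kind of split
allocator; the 15-page "large" threshold, the table size and the bitmap
helper are assumptions for the example, not the actual iommu_range_alloc()
internals:

/* Sketch of a large/small split over an allocation table: small
 * allocations come from the lower half, large ones from the upper
 * half, so long-lived small maps don't fragment the space needed
 * for large aligned maps.  Illustrative only. */
#include <stdio.h>
#include <string.h>

#define TBL_SIZE  64            /* iommu table entries (pages) */
#define HALFPOINT (TBL_SIZE / 2)

static char used[TBL_SIZE];     /* 1 = iommu page allocated */

/* Find and claim npages consecutive free entries in [start, limit);
 * return the first index, or -1 if the range is full. */
static long find_range(long start, long limit, long npages)
{
        long i, j;

        for (i = start; i + npages <= limit; i++) {
                for (j = 0; j < npages && !used[i + j]; j++)
                        ;
                if (j == npages) {
                        memset(used + i, 1, npages);
                        return i;
                }
        }
        return -1;
}

static long range_alloc(long npages)
{
        int large = npages > 15;        /* assumed large/small threshold */
        long n;

        /* Search the preferred half first... */
        n = find_range(large ? HALFPOINT : 0,
                       large ? TBL_SIZE : HALFPOINT, npages);
        /* ...but the split is not a hard constraint: spill over into
         * the other half if the preferred one is full. */
        if (n < 0)
                n = find_range(large ? 0 : HALFPOINT,
                               large ? HALFPOINT : TBL_SIZE, npages);
        return n;
}

int main(void)
{
        printf("small alloc -> %ld\n", range_alloc(4));  /* lower half: 0 */
        printf("large alloc -> %ld\n", range_alloc(20)); /* upper half: 32 */
        return 0;
}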
Try that patch and let me know:
Index: linux-work/arch/powerpc/kernel/iommu.c
===================================================================
--- linux-work.orig/arch/powerpc/kernel/iommu.c	2007-12-21 10:39:39.000000000 +1100
+++ linux-work/arch/powerpc/kernel/iommu.c	2007-12-21 10:46:18.000000000 +1100
@@ -278,6 +278,7 @@ int iommu_map_sg(struct iommu_table *tbl
         unsigned long flags;
         struct scatterlist *s, *outs, *segstart;
         int outcount, incount, i;
+        unsigned int align;
         unsigned long handle;
 
         BUG_ON(direction == DMA_NONE);
@@ -309,7 +310,11 @@ int iommu_map_sg(struct iommu_table *tbl
                 /* Allocate iommu entries for that segment */
                 vaddr = (unsigned long) sg_virt(s);
                 npages = iommu_num_pages(vaddr, slen);
-                entry = iommu_range_alloc(tbl, npages, &handle, mask >> IOMMU_PAGE_SHIFT, 0);
+                align = 0;
+                if (IOMMU_PAGE_SHIFT < PAGE_SHIFT && (vaddr & ~PAGE_MASK) == 0)
+                        align = PAGE_SHIFT - IOMMU_PAGE_SHIFT;
+                entry = iommu_range_alloc(tbl, npages, &handle,
+                                          mask >> IOMMU_PAGE_SHIFT, align);
 
                 DBG("  - vaddr: %lx, size: %lx\n", vaddr, slen);
 
@@ -572,7 +577,7 @@ dma_addr_t iommu_map_single(struct iommu
 {
         dma_addr_t dma_handle = DMA_ERROR_CODE;
         unsigned long uaddr;
-        unsigned int npages;
+        unsigned int npages, align;
 
         BUG_ON(direction == DMA_NONE);
 
@@ -580,8 +585,13 @@ dma_addr_t iommu_map_single(struct iommu
         npages = iommu_num_pages(uaddr, size);
 
         if (tbl) {
+                align = 0;
+                if (IOMMU_PAGE_SHIFT < PAGE_SHIFT &&
+                    ((unsigned long)vaddr & ~PAGE_MASK) == 0)
+                        align = PAGE_SHIFT - IOMMU_PAGE_SHIFT;
+
                 dma_handle = iommu_alloc(tbl, vaddr, npages, direction,
-                                         mask >> IOMMU_PAGE_SHIFT, 0);
+                                         mask >> IOMMU_PAGE_SHIFT, align);
                 if (dma_handle == DMA_ERROR_CODE) {
                         if (printk_ratelimit()) {
                                 printk(KERN_INFO "iommu_alloc failed, "
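To make the alignment arithmetic concrete: on a kernel with 64K host
pages and 4K iommu pages (PAGE_SHIFT = 16, IOMMU_PAGE_SHIFT = 12; shift
values assumed for the example), a page-aligned buffer gets
align = 16 - 12 = 4, i.e. an allocation aligned to 1 << 4 = 16 iommu
pages = 64K, so the returned DMA address preserves the buffer's
host-page alignment. A minimal standalone sketch of that test:

/* Worked example of the align computation from the patch above,
 * outside the kernel.  Shift values are assumptions for the example. */
#include <stdio.h>

#define PAGE_SHIFT       16             /* 64K host pages (assumed) */
#define PAGE_SIZE        (1UL << PAGE_SHIFT)
#define PAGE_MASK        (~(PAGE_SIZE - 1))
#define IOMMU_PAGE_SHIFT 12             /* 4K iommu pages (assumed) */

int main(void)
{
        unsigned long vaddr = 0x10000;  /* host-page-aligned buffer */
        unsigned int align = 0;

        /* Same test as the patch: only ask for extra alignment when
         * iommu pages are smaller than host pages and the buffer
         * itself is host-page aligned. */
        if (IOMMU_PAGE_SHIFT < PAGE_SHIFT && (vaddr & ~PAGE_MASK) == 0)
                align = PAGE_SHIFT - IOMMU_PAGE_SHIFT;

        /* Prints: align order 4 = 16 iommu pages (64K) */
        printf("align order %u = %lu iommu pages (%luK)\n",
               align, 1UL << align, (1UL << align) * 4);
        return 0;
}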