Message ID: 1362372609-877-1-git-send-email-iamjoonsoo.kim@lge.com (mailing list archive)
State: New, archived
On Mon, 4 Mar 2013, Joonsoo Kim wrote:

> In kmap_atomic(), kmap_high_get() is invoked for checking already
> mapped area. In __flush_dcache_page() and dma_cache_maint_page(),
> we explicitly call kmap_high_get() before kmap_atomic()
> when cache_is_vipt(), so kmap_high_get() can be invoked twice.
> This is useless operation, so remove one.
>
> Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>

Acked-by: Nicolas Pitre <nico@linaro.org>

> diff --git a/arch/arm/mm/dma-mapping.c b/arch/arm/mm/dma-mapping.c
> index c7e3759..b7711be 100644
> --- a/arch/arm/mm/dma-mapping.c
> +++ b/arch/arm/mm/dma-mapping.c
> @@ -822,16 +822,16 @@ static void dma_cache_maint_page(struct page *page, unsigned long offset,
>  	if (PageHighMem(page)) {
>  		if (len + offset > PAGE_SIZE)
>  			len = PAGE_SIZE - offset;
> -		vaddr = kmap_high_get(page);
> -		if (vaddr) {
> -			vaddr += offset;
> -			op(vaddr, len, dir);
> -			kunmap_high(page);
> -		} else if (cache_is_vipt()) {
> -			/* unmapped pages might still be cached */
> +		if (cache_is_vipt()) {
>  			vaddr = kmap_atomic(page);
>  			op(vaddr + offset, len, dir);
>  			kunmap_atomic(vaddr);
> +		} else {
> +			vaddr = kmap_high_get(page);
> +			if (vaddr) {
> +				op(vaddr + offset, len, dir);
> +				kunmap_high(page);
> +			}
>  		}
>  	} else {
>  		vaddr = page_address(page) + offset;
> diff --git a/arch/arm/mm/flush.c b/arch/arm/mm/flush.c
> index 1c8f7f5..e6a03d0 100644
> --- a/arch/arm/mm/flush.c
> +++ b/arch/arm/mm/flush.c
> @@ -170,15 +170,18 @@ void __flush_dcache_page(struct address_space *mapping, struct page *page)
>  	if (!PageHighMem(page)) {
>  		__cpuc_flush_dcache_area(page_address(page), PAGE_SIZE);
>  	} else {
> -		void *addr = kmap_high_get(page);
> -		if (addr) {
> -			__cpuc_flush_dcache_area(addr, PAGE_SIZE);
> -			kunmap_high(page);
> -		} else if (cache_is_vipt()) {
> -			/* unmapped pages might still be cached */
> +		void *addr;
> +
> +		if (cache_is_vipt()) {
>  			addr = kmap_atomic(page);
>  			__cpuc_flush_dcache_area(addr, PAGE_SIZE);
>  			kunmap_atomic(addr);
> +		} else {
> +			addr = kmap_high_get(page);
> +			if (addr) {
> +				__cpuc_flush_dcache_area(addr, PAGE_SIZE);
> +				kunmap_high(page);
> +			}
>  		}
>  	}
>
> --
> 1.7.9.5
On Mon, Mar 04, 2013 at 01:50:09PM +0900, Joonsoo Kim wrote:
> In kmap_atomic(), kmap_high_get() is invoked for checking already
> mapped area. In __flush_dcache_page() and dma_cache_maint_page(),
> we explicitly call kmap_high_get() before kmap_atomic()
> when cache_is_vipt(), so kmap_high_get() can be invoked twice.
> This is useless operation, so remove one.
>
> Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>
>
> diff --git a/arch/arm/mm/dma-mapping.c b/arch/arm/mm/dma-mapping.c
> index c7e3759..b7711be 100644
> --- a/arch/arm/mm/dma-mapping.c
> +++ b/arch/arm/mm/dma-mapping.c
> @@ -822,16 +822,16 @@ static void dma_cache_maint_page(struct page *page, unsigned long offset,
>  	if (PageHighMem(page)) {
>  		if (len + offset > PAGE_SIZE)
>  			len = PAGE_SIZE - offset;
> -		vaddr = kmap_high_get(page);
> -		if (vaddr) {
> -			vaddr += offset;
> -			op(vaddr, len, dir);
> -			kunmap_high(page);
> -		} else if (cache_is_vipt()) {
> -			/* unmapped pages might still be cached */
> +		if (cache_is_vipt()) {

This should be:

	if (cache_is_vipt_nonaliasing())

to make it _explicit_ that this technique is only for non-aliasing VIPT
caches (this doesn't work on any other of our cache types.)  Yes, I
know we don't support highmem with VIPT aliasing caches - but still,
we should ensure that this is self-documented in this code.

Same for arch/arm/mm/flush.c
Hello, Russell.

On Thu, Mar 07, 2013 at 01:26:23PM +0000, Russell King - ARM Linux wrote:
> On Mon, Mar 04, 2013 at 01:50:09PM +0900, Joonsoo Kim wrote:
> > In kmap_atomic(), kmap_high_get() is invoked for checking already
> > mapped area. In __flush_dcache_page() and dma_cache_maint_page(),
> > we explicitly call kmap_high_get() before kmap_atomic()
> > when cache_is_vipt(), so kmap_high_get() can be invoked twice.
> > This is useless operation, so remove one.
> >
> > Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>
> >
> > diff --git a/arch/arm/mm/dma-mapping.c b/arch/arm/mm/dma-mapping.c
> > index c7e3759..b7711be 100644
> > --- a/arch/arm/mm/dma-mapping.c
> > +++ b/arch/arm/mm/dma-mapping.c
> > @@ -822,16 +822,16 @@ static void dma_cache_maint_page(struct page *page, unsigned long offset,
> >  	if (PageHighMem(page)) {
> >  		if (len + offset > PAGE_SIZE)
> >  			len = PAGE_SIZE - offset;
> > -		vaddr = kmap_high_get(page);
> > -		if (vaddr) {
> > -			vaddr += offset;
> > -			op(vaddr, len, dir);
> > -			kunmap_high(page);
> > -		} else if (cache_is_vipt()) {
> > -			/* unmapped pages might still be cached */
> > +		if (cache_is_vipt()) {
>
> This should be:
>
> 	if (cache_is_vipt_nonaliasing())
>
> to make it _explicit_ that this technique is only for non-aliasing VIPT
> caches (this doesn't work on any other of our cache types.)  Yes, I
> know we don't support highmem with VIPT aliasing caches - but still,
> we should ensure that this is self-documented in this code.
>
> Same for arch/arm/mm/flush.c

Okay. I will re-work and will send v2 soon.
diff --git a/arch/arm/mm/dma-mapping.c b/arch/arm/mm/dma-mapping.c
index c7e3759..b7711be 100644
--- a/arch/arm/mm/dma-mapping.c
+++ b/arch/arm/mm/dma-mapping.c
@@ -822,16 +822,16 @@ static void dma_cache_maint_page(struct page *page, unsigned long offset,
 	if (PageHighMem(page)) {
 		if (len + offset > PAGE_SIZE)
 			len = PAGE_SIZE - offset;
-		vaddr = kmap_high_get(page);
-		if (vaddr) {
-			vaddr += offset;
-			op(vaddr, len, dir);
-			kunmap_high(page);
-		} else if (cache_is_vipt()) {
-			/* unmapped pages might still be cached */
+		if (cache_is_vipt()) {
 			vaddr = kmap_atomic(page);
 			op(vaddr + offset, len, dir);
 			kunmap_atomic(vaddr);
+		} else {
+			vaddr = kmap_high_get(page);
+			if (vaddr) {
+				op(vaddr + offset, len, dir);
+				kunmap_high(page);
+			}
 		}
 	} else {
 		vaddr = page_address(page) + offset;
diff --git a/arch/arm/mm/flush.c b/arch/arm/mm/flush.c
index 1c8f7f5..e6a03d0 100644
--- a/arch/arm/mm/flush.c
+++ b/arch/arm/mm/flush.c
@@ -170,15 +170,18 @@ void __flush_dcache_page(struct address_space *mapping, struct page *page)
 	if (!PageHighMem(page)) {
 		__cpuc_flush_dcache_area(page_address(page), PAGE_SIZE);
 	} else {
-		void *addr = kmap_high_get(page);
-		if (addr) {
-			__cpuc_flush_dcache_area(addr, PAGE_SIZE);
-			kunmap_high(page);
-		} else if (cache_is_vipt()) {
-			/* unmapped pages might still be cached */
+		void *addr;
+
+		if (cache_is_vipt()) {
 			addr = kmap_atomic(page);
 			__cpuc_flush_dcache_area(addr, PAGE_SIZE);
 			kunmap_atomic(addr);
+		} else {
+			addr = kmap_high_get(page);
+			if (addr) {
+				__cpuc_flush_dcache_area(addr, PAGE_SIZE);
+				kunmap_high(page);
+			}
 		}
 	}
In kmap_atomic(), kmap_high_get() is invoked for checking already
mapped area. In __flush_dcache_page() and dma_cache_maint_page(),
we explicitly call kmap_high_get() before kmap_atomic()
when cache_is_vipt(), so kmap_high_get() can be invoked twice.
This is useless operation, so remove one.

Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>