Message ID | 20181109082448.150302-2-drinkcat@chromium.org (mailing list archive)
---|---
State | Superseded
Series | iommu/io-pgtable-arm-v7s: Use DMA32 zone for page tables
On 11/9/18 9:24 AM, Nicolas Boichat wrote:
> Some callers, namely iommu/io-pgtable-arm-v7s, expect the physical
> address returned by kmem_cache_alloc with the GFP_DMA parameter to
> be a 32-bit address.
>
> Instead of adding a separate SLAB_CACHE_DMA32 (and then auditing
> all the calls to check if they require memory from the DMA or DMA32
> zone), we simply allocate the SLAB_CACHE_DMA cache in the DMA32
> region, if CONFIG_ZONE_DMA32 is set.
>
> Fixes: ad67f5a6545f ("arm64: replace ZONE_DMA with ZONE_DMA32")
> Signed-off-by: Nicolas Boichat <drinkcat@chromium.org>
> ---
>  include/linux/slab.h | 13 ++++++++++++-
>  mm/slab.c            |  2 +-
>  mm/slub.c            |  2 +-
>  3 files changed, 14 insertions(+), 3 deletions(-)
>
> diff --git a/include/linux/slab.h b/include/linux/slab.h
> index 918f374e7156f4..390afe90c5dec0 100644
> --- a/include/linux/slab.h
> +++ b/include/linux/slab.h
> @@ -30,7 +30,7 @@
>  #define SLAB_POISON          ((slab_flags_t __force)0x00000800U)
>  /* Align objs on cache lines */
>  #define SLAB_HWCACHE_ALIGN   ((slab_flags_t __force)0x00002000U)
> -/* Use GFP_DMA memory */
> +/* Use GFP_DMA or GFP_DMA32 memory */
>  #define SLAB_CACHE_DMA       ((slab_flags_t __force)0x00004000U)
>  /* DEBUG: Store the last owner for bug hunting */
>  #define SLAB_STORE_USER      ((slab_flags_t __force)0x00010000U)
> @@ -126,6 +126,17 @@
>  #define ZERO_OR_NULL_PTR(x) ((unsigned long)(x) <= \
>                               (unsigned long)ZERO_SIZE_PTR)
>
> +/*
> + * When ZONE_DMA32 is defined, have SLAB_CACHE_DMA allocate memory with
> + * GFP_DMA32 instead of GFP_DMA, as this is what some of the callers
> + * require (instead of duplicating cache for DMA and DMA32 zones).
> + */
> +#ifdef CONFIG_ZONE_DMA32
> +#define SLAB_CACHE_DMA_GFP GFP_DMA32
> +#else
> +#define SLAB_CACHE_DMA_GFP GFP_DMA
> +#endif

AFAICS this will break e.g. x86, which can have both ZONE_DMA and
ZONE_DMA32, and now you would make kmalloc(__GFP_DMA) return objects
from ZONE_DMA32 instead of ZONE_DMA, which can break something.

Also I'm probably missing the point of this all. In patch 3 you use
__get_dma32_pages(), thus __get_free_pages(__GFP_DMA32), which uses
alloc_pages, thus the page allocator directly, and there are no slab
caches involved. It makes little sense to involve slab for page table
allocations anyway, as those tend to be aligned to a page size (or a
high-order page size). So what am I missing?
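To make the first objection concrete, here is a minimal hypothetical
sketch (illustrative only, not code from the thread) of a caller that
the GFP_DMA -> GFP_DMA32 remap would break on x86:

#include <linux/slab.h>

/*
 * Hypothetical caller that relies on kmalloc(GFP_DMA) returning memory
 * below 16MB (ZONE_DMA), e.g. a legacy ISA-style device driver.
 */
static void *alloc_isa_buffer(void)
{
	/*
	 * With the patch applied and CONFIG_ZONE_DMA32 set (as on x86),
	 * the dma kmalloc caches allocate their backing pages with
	 * SLAB_CACHE_DMA_GFP == GFP_DMA32, so this object may land
	 * anywhere below 4GB and is no longer guaranteed to sit below
	 * 16MB.
	 */
	return kmalloc(64, GFP_DMA);
}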
On Fri, Nov 9, 2018 at 6:43 PM Vlastimil Babka <vbabka@suse.cz> wrote:
>
> On 11/9/18 9:24 AM, Nicolas Boichat wrote:
> > Some callers, namely iommu/io-pgtable-arm-v7s, expect the physical
> > address returned by kmem_cache_alloc with the GFP_DMA parameter to
> > be a 32-bit address.
> >
> > Instead of adding a separate SLAB_CACHE_DMA32 (and then auditing
> > all the calls to check if they require memory from the DMA or DMA32
> > zone), we simply allocate the SLAB_CACHE_DMA cache in the DMA32
> > region, if CONFIG_ZONE_DMA32 is set.
> >
> > [snip]
> >
> > +/*
> > + * When ZONE_DMA32 is defined, have SLAB_CACHE_DMA allocate memory with
> > + * GFP_DMA32 instead of GFP_DMA, as this is what some of the callers
> > + * require (instead of duplicating cache for DMA and DMA32 zones).
> > + */
> > +#ifdef CONFIG_ZONE_DMA32
> > +#define SLAB_CACHE_DMA_GFP GFP_DMA32
> > +#else
> > +#define SLAB_CACHE_DMA_GFP GFP_DMA
> > +#endif
>
> AFAICS this will break e.g. x86, which can have both ZONE_DMA and
> ZONE_DMA32, and now you would make kmalloc(__GFP_DMA) return objects
> from ZONE_DMA32 instead of ZONE_DMA, which can break something.

Oh, I was not aware that both ZONE_DMA and ZONE_DMA32 can be defined
at the same time. I guess the test should be inverted, something like
this (can be simplified...):

#ifdef CONFIG_ZONE_DMA
#define SLAB_CACHE_DMA_GFP GFP_DMA
#elif defined(CONFIG_ZONE_DMA32)
#define SLAB_CACHE_DMA_GFP GFP_DMA32
#else
#define SLAB_CACHE_DMA_GFP GFP_DMA // ?
#endif

> Also I'm probably missing the point of this all. In patch 3 you use
> __get_dma32_pages(), thus __get_free_pages(__GFP_DMA32), which uses
> alloc_pages, thus the page allocator directly, and there are no slab
> caches involved.

__get_dma32_pages() fixes the level 1 page allocations in patch 3.

This change fixes the level 2 page allocations
(kmem_cache_zalloc(data->l2_tables, gfp | GFP_DMA)), by transparently
remapping GFP_DMA to an underlying ZONE_DMA32.

The alternative would be to create a new SLAB_CACHE_DMA32 when
CONFIG_ZONE_DMA32 is defined, but then I'm concerned that the callers
would need to choose between the two (GFP_DMA or GFP_DMA32...), and
also need to use some ifdefs (but maybe that's not a valid concern?).

> It makes little sense to involve slab for page table
> allocations anyway, as those tend to be aligned to a page size (or a
> high-order page size). So what am I missing?

Level 2 tables are ARM_V7S_TABLE_SIZE(2) => 1KB, so we'd waste 3KB if
we allocated a full page.

Thanks,
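For context, a simplified sketch of the two allocation paths under
discussion, reconstructed from the calls quoted in this thread (names
follow drivers/iommu/io-pgtable-arm-v7s.c, but details are abridged,
so this is not the verbatim driver code):

#include <linux/gfp.h>
#include <linux/slab.h>

static void *__arm_v7s_alloc_table(int lvl, gfp_t gfp,
				   struct arm_v7s_io_pgtable *data)
{
	size_t size = ARM_V7S_TABLE_SIZE(lvl);
	void *table = NULL;

	if (lvl == 1)
		/* Level 1: page-sized; patch 3 moves this to the page
		 * allocator with __GFP_DMA32. */
		table = (void *)__get_free_pages(gfp | __GFP_ZERO | __GFP_DMA32,
						 get_order(size));
	else if (lvl == 2)
		/* Level 2: 1KB objects from a slab cache; this is the
		 * allocation the GFP_DMA remap in this patch targets. */
		table = kmem_cache_zalloc(data->l2_tables, gfp | GFP_DMA);
	return table;
}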
On 11/9/18 12:57 PM, Nicolas Boichat wrote:
> On Fri, Nov 9, 2018 at 6:43 PM Vlastimil Babka <vbabka@suse.cz> wrote:
>> Also I'm probably missing the point of this all. In patch 3 you use
>> __get_dma32_pages(), thus __get_free_pages(__GFP_DMA32), which uses
>> alloc_pages, thus the page allocator directly, and there are no slab
>> caches involved.
>
> __get_dma32_pages() fixes the level 1 page allocations in patch 3.
>
> This change fixes the level 2 page allocations
> (kmem_cache_zalloc(data->l2_tables, gfp | GFP_DMA)), by transparently
> remapping GFP_DMA to an underlying ZONE_DMA32.
>
> The alternative would be to create a new SLAB_CACHE_DMA32 when
> CONFIG_ZONE_DMA32 is defined, but then I'm concerned that the callers
> would need to choose between the two (GFP_DMA or GFP_DMA32...), and
> also need to use some ifdefs (but maybe that's not a valid concern?).
>
>> It makes little sense to involve slab for page table
>> allocations anyway, as those tend to be aligned to a page size (or a
>> high-order page size). So what am I missing?
>
> Level 2 tables are ARM_V7S_TABLE_SIZE(2) => 1KB, so we'd waste 3KB if
> we allocated a full page.

Oh, I see. Well, I think the most transparent approach would indeed be
to support SLAB_CACHE_DMA32. The callers of kmem_cache_zalloc() would
then not need to add anything special to gfp, as the constraint is
stored internally upon kmem_cache_create(). Of course, SLAB_BUG_MASK
would no longer have to treat __GFP_DMA32 as unexpected. It would
still be unexpected when passed to kmalloc(), which doesn't have
special dma32 caches, but for a cache explicitly created to allocate
from ZONE_DMA32, I don't see why not. I'm somewhat surprised that
there hasn't been a need for this earlier, so maybe I'm still missing
something.

> Thanks,
>
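A minimal sketch of how the suggested SLAB_CACHE_DMA32 could look at
the cache creation and allocation sites; the flag bit and cache name
here are assumptions for illustration, not merged code:

/* Assumed flag value: the next free bit after SLAB_CACHE_DMA (0x4000). */
#define SLAB_CACHE_DMA32	((slab_flags_t __force)0x00008000U)

/* The zone constraint is recorded once, at cache creation time... */
data->l2_tables = kmem_cache_create("io-pgtable_l2",
				    ARM_V7S_TABLE_SIZE(2),
				    ARM_V7S_TABLE_SIZE(2),
				    SLAB_CACHE_DMA32, NULL);

/* ...so callers no longer pass any GFP_DMA/GFP_DMA32 flag themselves. */
table = kmem_cache_zalloc(data->l2_tables, gfp);

SLAB/SLUB would then translate the flag into GFP_DMA32 in the cache's
allocflags, mirroring the existing SLAB_CACHE_DMA handling visible in
the diff below.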
diff --git a/include/linux/slab.h b/include/linux/slab.h
index 918f374e7156f4..390afe90c5dec0 100644
--- a/include/linux/slab.h
+++ b/include/linux/slab.h
@@ -30,7 +30,7 @@
 #define SLAB_POISON          ((slab_flags_t __force)0x00000800U)
 /* Align objs on cache lines */
 #define SLAB_HWCACHE_ALIGN   ((slab_flags_t __force)0x00002000U)
-/* Use GFP_DMA memory */
+/* Use GFP_DMA or GFP_DMA32 memory */
 #define SLAB_CACHE_DMA       ((slab_flags_t __force)0x00004000U)
 /* DEBUG: Store the last owner for bug hunting */
 #define SLAB_STORE_USER      ((slab_flags_t __force)0x00010000U)
@@ -126,6 +126,17 @@
 #define ZERO_OR_NULL_PTR(x) ((unsigned long)(x) <= \
                              (unsigned long)ZERO_SIZE_PTR)
 
+/*
+ * When ZONE_DMA32 is defined, have SLAB_CACHE_DMA allocate memory with
+ * GFP_DMA32 instead of GFP_DMA, as this is what some of the callers
+ * require (instead of duplicating cache for DMA and DMA32 zones).
+ */
+#ifdef CONFIG_ZONE_DMA32
+#define SLAB_CACHE_DMA_GFP GFP_DMA32
+#else
+#define SLAB_CACHE_DMA_GFP GFP_DMA
+#endif
+
 #include <linux/kasan.h>
 
 struct mem_cgroup;
diff --git a/mm/slab.c b/mm/slab.c
index 2a5654bb3b3ff3..8810daa052dcdc 100644
--- a/mm/slab.c
+++ b/mm/slab.c
@@ -2121,7 +2121,7 @@ int __kmem_cache_create(struct kmem_cache *cachep, slab_flags_t flags)
 	cachep->flags = flags;
 	cachep->allocflags = __GFP_COMP;
 	if (flags & SLAB_CACHE_DMA)
-		cachep->allocflags |= GFP_DMA;
+		cachep->allocflags |= SLAB_CACHE_DMA_GFP;
 	if (flags & SLAB_RECLAIM_ACCOUNT)
 		cachep->allocflags |= __GFP_RECLAIMABLE;
 	cachep->size = size;
diff --git a/mm/slub.c b/mm/slub.c
index e3629cd7aff164..fdd05323e54cbd 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -3575,7 +3575,7 @@ static int calculate_sizes(struct kmem_cache *s, int forced_order)
 	s->allocflags |= __GFP_COMP;
 
 	if (s->flags & SLAB_CACHE_DMA)
-		s->allocflags |= GFP_DMA;
+		s->allocflags |= SLAB_CACHE_DMA_GFP;
 
 	if (s->flags & SLAB_RECLAIM_ACCOUNT)
 		s->allocflags |= __GFP_RECLAIMABLE;
Some callers, namely iommu/io-pgtable-arm-v7s, expect the physical
address returned by kmem_cache_alloc with the GFP_DMA parameter to
be a 32-bit address.

Instead of adding a separate SLAB_CACHE_DMA32 (and then auditing
all the calls to check if they require memory from the DMA or DMA32
zone), we simply allocate the SLAB_CACHE_DMA cache in the DMA32
region, if CONFIG_ZONE_DMA32 is set.

Fixes: ad67f5a6545f ("arm64: replace ZONE_DMA with ZONE_DMA32")
Signed-off-by: Nicolas Boichat <drinkcat@chromium.org>
---
 include/linux/slab.h | 13 ++++++++++++-
 mm/slab.c            |  2 +-
 mm/slub.c            |  2 +-
 3 files changed, 14 insertions(+), 3 deletions(-)