Message ID | 20210505200610.13943-3-longman@redhat.com (mailing list archive)
---|---
State | New, archived
Series | mm: memcg/slab: Fix objcg pointer array handling problem
On Wed, May 05, 2021 at 04:06:09PM -0400, Waiman Long wrote:
> There are currently two problems in the way the objcg pointer array
> (memcg_data) in the page structure is being allocated and freed.
>
> On its allocation, it is possible that the allocated objcg pointer
> array comes from the same slab that requires memory accounting. If this
> happens, the slab will never become empty again as there is at least
> one object left (the obj_cgroup array) in the slab.
>
> When it is freed, the objcg pointer array object may be the last one
> in its slab and hence causes kfree() to be called again. With the
> right workload, the slab cache may be set up in a way that allows the
> recursive kfree() calling loop to nest deep enough to cause a kernel
> stack overflow and panic the system.
>
> One way to solve this problem is to split the kmalloc-<n> caches
> (KMALLOC_NORMAL) into two separate sets - a new set of kmalloc-<n>
> (KMALLOC_NORMAL) caches for unaccounted objects only and a new set of
> kmalloc-cg-<n> (KMALLOC_CGROUP) caches for accounted objects only. All
> the other caches can still allow a mix of accounted and unaccounted
> objects.
>
> With this change, all the objcg pointer array objects will come from
> KMALLOC_NORMAL caches which won't have their objcg pointer arrays. So
> both the recursive kfree() problem and the non-freeable slab problem
> are gone.
>
> Since both the KMALLOC_NORMAL and KMALLOC_CGROUP caches no longer have
> mixed accounted and unaccounted objects, this will slightly reduce the
> number of objcg pointer arrays that need to be allocated and save a bit
> of memory. On the other hand, creating a new set of kmalloc caches does
> have the effect of reducing cache utilization. So it is probably a wash.
>
> The new KMALLOC_CGROUP is added between KMALLOC_NORMAL and
> KMALLOC_RECLAIM so that the first for loop in create_kmalloc_caches()
> will include the newly added caches without change.
>
> Suggested-by: Vlastimil Babka <vbabka@suse.cz>
> Signed-off-by: Waiman Long <longman@redhat.com>
> Reviewed-by: Shakeel Butt <shakeelb@google.com>

Acked-by: Roman Gushchin <guro@fb.com>
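As an illustration of what the split means for callers (a hypothetical demo
module, not part of the series; the 64-byte size is arbitrary), the two
allocations below are served from different kmalloc caches after this change:

#include <linux/module.h>
#include <linux/slab.h>

static int __init kmalloc_cg_demo_init(void)
{
        /* Unaccounted: served from kmalloc-64 (KMALLOC_NORMAL), which
         * after this series never needs an objcg pointer array. */
        void *plain = kmalloc(64, GFP_KERNEL);

        /* Accounted: served from kmalloc-cg-64 (KMALLOC_CGROUP). */
        void *accounted = kmalloc(64, GFP_KERNEL | __GFP_ACCOUNT);

        kfree(accounted);
        kfree(plain);
        return 0;
}

static void __exit kmalloc_cg_demo_exit(void)
{
}

module_init(kmalloc_cg_demo_init);
module_exit(kmalloc_cg_demo_exit);
MODULE_LICENSE("GPL");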
On 5/5/21 10:06 PM, Waiman Long wrote:
> There are currently two problems in the way the objcg pointer array
> (memcg_data) in the page structure is being allocated and freed.
> [...]
> Suggested-by: Vlastimil Babka <vbabka@suse.cz>
> Signed-off-by: Waiman Long <longman@redhat.com>
> Reviewed-by: Shakeel Butt <shakeelb@google.com>

A last nitpick: the new caches -cg should perhaps not be created when
cgroup_memory_nokmem == true because kmemcg was disabled by the respective
boot param.
On 5/5/21 5:41 PM, Vlastimil Babka wrote:
> On 5/5/21 10:06 PM, Waiman Long wrote:
>> There are currently two problems in the way the objcg pointer array
>> (memcg_data) in the page structure is being allocated and freed.
>> [...]
> A last nitpick: the new caches -cg should perhaps not be created when
> cgroup_memory_nokmem == true because kmemcg was disabled by the respective
> boot param.

It is a nice-to-have feature. However, the nokmem kernel parameter isn't
used that often, and the cgroup_memory_nokmem variable is private to
memcontrol.c and not directly accessible elsewhere. I will take a look at
that, but it will be a follow-on patch. I am not planning to change the
current patchset unless other issues come up.

Cheers,
Longman
On 5/5/21 10:06 PM, Waiman Long wrote:
> There are currently two problems in the way the objcg pointer array
> (memcg_data) in the page structure is being allocated and freed.
> [...]
> Suggested-by: Vlastimil Babka <vbabka@suse.cz>
> Signed-off-by: Waiman Long <longman@redhat.com>
> Reviewed-by: Shakeel Butt <shakeelb@google.com>

Reviewed-by: Vlastimil Babka <vbabka@suse.cz>

I still believe the cgroup.memory=nokmem parameter should be respected,
otherwise the caches are not only created, but also used. I offer this followup
for squashing into your patch if you and Andrew agree:

----8<----
From c87378d437d9a59b8757033485431b4721c74173 Mon Sep 17 00:00:00 2001
From: Vlastimil Babka <vbabka@suse.cz>
Date: Thu, 6 May 2021 17:53:21 +0200
Subject: [PATCH] mm: memcg/slab: don't create kmalloc-cg caches with
 cgroup.memory=nokmem

The caches should not be created when kmemcg is disabled on boot, otherwise
they are also filled by kmalloc(__GFP_ACCOUNT) allocations. When booted with
cgroup.memory=nokmem, link the kmalloc_caches[KMALLOC_CGROUP] entries to
KMALLOC_NORMAL entries instead.

Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
---
 mm/internal.h    | 5 +++++
 mm/memcontrol.c  | 2 +-
 mm/slab_common.c | 9 +++++++--
 3 files changed, 13 insertions(+), 3 deletions(-)

diff --git a/mm/internal.h b/mm/internal.h
index ef5f336f59bd..b2d60b3403c7 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -135,6 +135,11 @@ extern void putback_lru_page(struct page *page);
  */
 extern pmd_t *mm_find_pmd(struct mm_struct *mm, unsigned long address);
 
+/*
+ * in mm/memcontrol.c:
+ */
+extern bool cgroup_memory_nokmem;
+
 /*
  * in mm/page_alloc.c
  */
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 5e3b4f23b830..b9ec01f2b4f6 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -83,7 +83,7 @@ DEFINE_PER_CPU(struct mem_cgroup *, int_active_memcg);
 static bool cgroup_memory_nosocket;
 
 /* Kernel memory accounting disabled? */
-static bool cgroup_memory_nokmem;
+bool cgroup_memory_nokmem;
 
 /* Whether the swap controller is active */
 #ifdef CONFIG_MEMCG_SWAP
diff --git a/mm/slab_common.c b/mm/slab_common.c
index bbaf41a7c77e..363f90215401 100644
--- a/mm/slab_common.c
+++ b/mm/slab_common.c
@@ -832,10 +832,15 @@ void __init setup_kmalloc_cache_index_table(void)
 static void __init
 new_kmalloc_cache(int idx, enum kmalloc_cache_type type, slab_flags_t flags)
 {
-	if (type == KMALLOC_RECLAIM)
+	if (type == KMALLOC_RECLAIM) {
 		flags |= SLAB_RECLAIM_ACCOUNT;
-	else if (IS_ENABLED(CONFIG_MEMCG_KMEM) && (type == KMALLOC_CGROUP))
+	} else if (IS_ENABLED(CONFIG_MEMCG_KMEM) && (type == KMALLOC_CGROUP)) {
+		if (cgroup_memory_nokmem) {
+			kmalloc_caches[type][idx] = kmalloc_caches[KMALLOC_NORMAL][idx];
+			return;
+		}
 		flags |= SLAB_ACCOUNT;
+	}
 
 	kmalloc_caches[type][idx] = create_kmalloc_cache(
 					kmalloc_info[idx].name[type],
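The notable design choice in this followup is to alias the KMALLOC_CGROUP
array slots to the existing KMALLOC_NORMAL caches rather than leave them
NULL, so kmalloc_type() callers need no extra runtime check when accounting
is disabled. A minimal userspace sketch of that trick (an illustration only,
not kernel code; the names mirror the kernel's but the values are made up):

#include <stdio.h>

enum cache_type { KM_NORMAL, KM_CGROUP, NR_TYPES };

static const char *kmalloc_caches[NR_TYPES];

/* Mirrors new_kmalloc_cache() above: with accounting disabled on boot,
 * the CGROUP slot simply reuses the NORMAL cache. */
static void create_caches(int nokmem)
{
        kmalloc_caches[KM_NORMAL] = "kmalloc-64";
        kmalloc_caches[KM_CGROUP] = nokmem ? kmalloc_caches[KM_NORMAL]
                                           : "kmalloc-cg-64";
}

int main(void)
{
        create_caches(1);       /* as if booted with cgroup.memory=nokmem */
        /* Accounted allocations fall back to the normal cache. */
        printf("__GFP_ACCOUNT allocations use: %s\n",
               kmalloc_caches[KM_CGROUP]);
        return 0;
}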
On Thu, May 6, 2021 at 9:00 AM Vlastimil Babka <vbabka@suse.cz> wrote:
>
> On 5/5/21 10:06 PM, Waiman Long wrote:
> > There are currently two problems in the way the objcg pointer array
> > (memcg_data) in the page structure is being allocated and freed.
> > [...]
>
> Reviewed-by: Vlastimil Babka <vbabka@suse.cz>
>
> I still believe the cgroup.memory=nokmem parameter should be respected,
> otherwise the caches are not only created, but also used. I offer this followup
> for squashing into your patch if you and Andrew agree:
>
> ----8<----
> From c87378d437d9a59b8757033485431b4721c74173 Mon Sep 17 00:00:00 2001
> From: Vlastimil Babka <vbabka@suse.cz>
> Date: Thu, 6 May 2021 17:53:21 +0200
> Subject: [PATCH] mm: memcg/slab: don't create kmalloc-cg caches with
>  cgroup.memory=nokmem
>
> The caches should not be created when kmemcg is disabled on boot, otherwise
> they are also filled by kmalloc(__GFP_ACCOUNT) allocations. When booted with
> cgroup.memory=nokmem, link the kmalloc_caches[KMALLOC_CGROUP] entries to
> KMALLOC_NORMAL entries instead.
> [...]
> Signed-off-by: Vlastimil Babka <vbabka@suse.cz>

Yes this makes sense:

Reviewed-by: Shakeel Butt <shakeelb@google.com>
On Thu, May 06, 2021 at 06:00:16PM +0200, Vlastimil Babka wrote:
> On 5/5/21 10:06 PM, Waiman Long wrote:
> > There are currently two problems in the way the objcg pointer array
> > (memcg_data) in the page structure is being allocated and freed.
> > [...]
>
> Reviewed-by: Vlastimil Babka <vbabka@suse.cz>
>
> I still believe the cgroup.memory=nokmem parameter should be respected,
> otherwise the caches are not only created, but also used.

+1

> I offer this followup
> for squashing into your patch if you and Andrew agree:
>
> ----8<----
> From c87378d437d9a59b8757033485431b4721c74173 Mon Sep 17 00:00:00 2001
> From: Vlastimil Babka <vbabka@suse.cz>
> Date: Thu, 6 May 2021 17:53:21 +0200
> Subject: [PATCH] mm: memcg/slab: don't create kmalloc-cg caches with
>  cgroup.memory=nokmem
>
> The caches should not be created when kmemcg is disabled on boot, otherwise
> they are also filled by kmalloc(__GFP_ACCOUNT) allocations. When booted with
> cgroup.memory=nokmem, link the kmalloc_caches[KMALLOC_CGROUP] entries to
> KMALLOC_NORMAL entries instead.
> [...]
> Signed-off-by: Vlastimil Babka <vbabka@suse.cz>

Acked-by: Roman Gushchin <guro@fb.com>

Thanks!
On 5/6/21 12:00 PM, Vlastimil Babka wrote:
> On 5/5/21 10:06 PM, Waiman Long wrote:
>> There are currently two problems in the way the objcg pointer array
>> (memcg_data) in the page structure is being allocated and freed.
>> [...]
> Reviewed-by: Vlastimil Babka <vbabka@suse.cz>
>
> I still believe the cgroup.memory=nokmem parameter should be respected,
> otherwise the caches are not only created, but also used. I offer this followup
> for squashing into your patch if you and Andrew agree:
>
> ----8<----
> From c87378d437d9a59b8757033485431b4721c74173 Mon Sep 17 00:00:00 2001
> From: Vlastimil Babka <vbabka@suse.cz>
> Date: Thu, 6 May 2021 17:53:21 +0200
> Subject: [PATCH] mm: memcg/slab: don't create kmalloc-cg caches with
>  cgroup.memory=nokmem
>
> The caches should not be created when kmemcg is disabled on boot, otherwise
> they are also filled by kmalloc(__GFP_ACCOUNT) allocations. When booted with
> cgroup.memory=nokmem, link the kmalloc_caches[KMALLOC_CGROUP] entries to
> KMALLOC_NORMAL entries instead.
>
> Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
> [...]

Thanks, the patch looks good to me.

Acked-by: Waiman Long <longman@redhat.com>

Cheers,
Longman
diff --git a/include/linux/slab.h b/include/linux/slab.h
index 0c97d788762c..a51cad5f561c 100644
--- a/include/linux/slab.h
+++ b/include/linux/slab.h
@@ -305,12 +305,23 @@ static inline void __check_heap_object(const void *ptr, unsigned long n,
 /*
  * Whenever changing this, take care of that kmalloc_type() and
  * create_kmalloc_caches() still work as intended.
+ *
+ * KMALLOC_NORMAL can contain only unaccounted objects whereas KMALLOC_CGROUP
+ * is for accounted but unreclaimable and non-dma objects. All the other
+ * kmem caches can have both accounted and unaccounted objects.
  */
 enum kmalloc_cache_type {
 	KMALLOC_NORMAL = 0,
+#ifdef CONFIG_MEMCG_KMEM
+	KMALLOC_CGROUP,
+#else
+	KMALLOC_CGROUP = KMALLOC_NORMAL,
+#endif
 	KMALLOC_RECLAIM,
 #ifdef CONFIG_ZONE_DMA
 	KMALLOC_DMA,
+#else
+	KMALLOC_DMA = KMALLOC_NORMAL,
 #endif
 	NR_KMALLOC_TYPES
 };
@@ -319,24 +330,36 @@ enum kmalloc_cache_type {
 extern struct kmem_cache *
 kmalloc_caches[NR_KMALLOC_TYPES][KMALLOC_SHIFT_HIGH + 1];
 
+/*
+ * Define gfp bits that should not be set for KMALLOC_NORMAL.
+ */
+#define KMALLOC_NOT_NORMAL_BITS					\
+	(__GFP_RECLAIMABLE |					\
+	(IS_ENABLED(CONFIG_ZONE_DMA)   ? __GFP_DMA : 0) |	\
+	(IS_ENABLED(CONFIG_MEMCG_KMEM) ? __GFP_ACCOUNT : 0))
+
 static __always_inline enum kmalloc_cache_type kmalloc_type(gfp_t flags)
 {
-#ifdef CONFIG_ZONE_DMA
 	/*
 	 * The most common case is KMALLOC_NORMAL, so test for it
-	 * with a single branch for both flags.
+	 * with a single branch for all the relevant flags.
 	 */
-	if (likely((flags & (__GFP_DMA | __GFP_RECLAIMABLE)) == 0))
+	if (likely((flags & KMALLOC_NOT_NORMAL_BITS) == 0))
 		return KMALLOC_NORMAL;
 
 	/*
-	 * At least one of the flags has to be set. If both are, __GFP_DMA
-	 * is more important.
+	 * At least one of the flags has to be set. Their priorities in
+	 * decreasing order are:
+	 *  1) __GFP_DMA
+	 *  2) __GFP_RECLAIMABLE
+	 *  3) __GFP_ACCOUNT
 	 */
-	return flags & __GFP_DMA ? KMALLOC_DMA : KMALLOC_RECLAIM;
-#else
-	return flags & __GFP_RECLAIMABLE ? KMALLOC_RECLAIM : KMALLOC_NORMAL;
-#endif
+	if (IS_ENABLED(CONFIG_ZONE_DMA) && (flags & __GFP_DMA))
+		return KMALLOC_DMA;
+	if (!IS_ENABLED(CONFIG_MEMCG_KMEM) || (flags & __GFP_RECLAIMABLE))
+		return KMALLOC_RECLAIM;
+	else
+		return KMALLOC_CGROUP;
 }
 
 /*
diff --git a/mm/slab_common.c b/mm/slab_common.c
index f8833d3e5d47..bbaf41a7c77e 100644
--- a/mm/slab_common.c
+++ b/mm/slab_common.c
@@ -727,21 +727,25 @@ struct kmem_cache *kmalloc_slab(size_t size, gfp_t flags)
 }
 
 #ifdef CONFIG_ZONE_DMA
-#define INIT_KMALLOC_INFO(__size, __short_size)			\
-{								\
-	.name[KMALLOC_NORMAL]  = "kmalloc-" #__short_size,	\
-	.name[KMALLOC_RECLAIM] = "kmalloc-rcl-" #__short_size,	\
-	.name[KMALLOC_DMA]     = "dma-kmalloc-" #__short_size,	\
-	.size = __size,						\
-}
+#define KMALLOC_DMA_NAME(sz)	.name[KMALLOC_DMA] = "dma-kmalloc-" #sz,
 #else
+#define KMALLOC_DMA_NAME(sz)
+#endif
+
+#ifdef CONFIG_MEMCG_KMEM
+#define KMALLOC_CGROUP_NAME(sz)	.name[KMALLOC_CGROUP] = "kmalloc-cg-" #sz,
+#else
+#define KMALLOC_CGROUP_NAME(sz)
+#endif
+
 #define INIT_KMALLOC_INFO(__size, __short_size)			\
 {								\
 	.name[KMALLOC_NORMAL]  = "kmalloc-" #__short_size,	\
 	.name[KMALLOC_RECLAIM] = "kmalloc-rcl-" #__short_size,	\
+	KMALLOC_CGROUP_NAME(__short_size)			\
+	KMALLOC_DMA_NAME(__short_size)				\
 	.size = __size,						\
 }
-#endif
 
 /*
  * kmalloc_info[] is to make slub_debug=,kmalloc-xx option work at boot time.
@@ -830,6 +834,8 @@ new_kmalloc_cache(int idx, enum kmalloc_cache_type type, slab_flags_t flags)
 {
 	if (type == KMALLOC_RECLAIM)
 		flags |= SLAB_RECLAIM_ACCOUNT;
+	else if (IS_ENABLED(CONFIG_MEMCG_KMEM) && (type == KMALLOC_CGROUP))
+		flags |= SLAB_ACCOUNT;
 
 	kmalloc_caches[type][idx] = create_kmalloc_cache(
 					kmalloc_info[idx].name[type],
@@ -847,6 +853,9 @@ void __init create_kmalloc_caches(slab_flags_t flags)
 	int i;
 	enum kmalloc_cache_type type;
 
+	/*
+	 * Including KMALLOC_CGROUP if CONFIG_MEMCG_KMEM defined
+	 */
 	for (type = KMALLOC_NORMAL; type <= KMALLOC_RECLAIM; type++) {
 		for (i = KMALLOC_SHIFT_LOW; i <= KMALLOC_SHIFT_HIGH; i++) {
 			if (!kmalloc_caches[type][i])
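The heart of the patch is the kmalloc_type() dispatch above: a single mask
test keeps the common KMALLOC_NORMAL path at one branch, and the remaining
flags are resolved in fixed priority order. The following standalone mirror
of that logic (a sketch with made-up flag values, assuming both
CONFIG_ZONE_DMA and CONFIG_MEMCG_KMEM are enabled; it is not the kernel's
code) makes the priority order checkable:

#include <assert.h>
#include <stdio.h>

#define __GFP_DMA         0x01u
#define __GFP_RECLAIMABLE 0x02u
#define __GFP_ACCOUNT     0x04u

enum kmalloc_cache_type {
        KMALLOC_NORMAL,
        KMALLOC_CGROUP,
        KMALLOC_RECLAIM,
        KMALLOC_DMA
};

#define KMALLOC_NOT_NORMAL_BITS \
        (__GFP_RECLAIMABLE | __GFP_DMA | __GFP_ACCOUNT)

static enum kmalloc_cache_type kmalloc_type(unsigned int flags)
{
        /* Common case: none of the special bits set. */
        if ((flags & KMALLOC_NOT_NORMAL_BITS) == 0)
                return KMALLOC_NORMAL;
        /* Priority order: DMA, then RECLAIMABLE, then ACCOUNT. */
        if (flags & __GFP_DMA)
                return KMALLOC_DMA;
        if (flags & __GFP_RECLAIMABLE)
                return KMALLOC_RECLAIM;
        return KMALLOC_CGROUP;
}

int main(void)
{
        assert(kmalloc_type(0) == KMALLOC_NORMAL);
        assert(kmalloc_type(__GFP_ACCOUNT) == KMALLOC_CGROUP);
        /* __GFP_DMA wins over both other bits. */
        assert(kmalloc_type(__GFP_DMA | __GFP_RECLAIMABLE | __GFP_ACCOUNT)
               == KMALLOC_DMA);
        /* __GFP_RECLAIMABLE wins over __GFP_ACCOUNT. */
        assert(kmalloc_type(__GFP_RECLAIMABLE | __GFP_ACCOUNT)
               == KMALLOC_RECLAIM);
        puts("all dispatch cases ok");
        return 0;
}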