[bpf-next,v5,6/7] mm: Make failslab, kfence, kmemleak aware of trylock mode

Message ID 20250115021746.34691-7-alexei.starovoitov@gmail.com (mailing list archive)
State New
Series	bpf, mm: Introduce try_alloc_pages()

Commit Message

Alexei Starovoitov Jan. 15, 2025, 2:17 a.m. UTC
From: Alexei Starovoitov <ast@kernel.org>

When gfpflags_allow_spinning() == false, spinlocks cannot be taken.
Make failslab, kfence, and kmemleak compliant.

Signed-off-by: Alexei Starovoitov <ast@kernel.org>
---
 mm/failslab.c    | 3 +++
 mm/kfence/core.c | 4 ++++
 mm/kmemleak.c    | 3 +++
 3 files changed, 10 insertions(+)
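
For context, gfpflags_allow_spinning() is introduced earlier in this series.
A minimal sketch of the helper, condensed from that patch:

	static inline bool gfpflags_allow_spinning(const gfp_t gfp_flags)
	{
		/*
		 * !__GFP_DIRECT_RECLAIM -> direct reclaim is not allowed.
		 * !__GFP_KSWAPD_RECLAIM -> it is not safe to wake up kswapd.
		 * All GFP_* combinations, including GFP_NOWAIT, set at least
		 * one of these bits; try_alloc_pages() passes neither, which
		 * is how trylock mode is recognized here.
		 */
		return !!(gfp_flags & __GFP_RECLAIM);
	}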

Comments

Vlastimil Babka Jan. 15, 2025, 5:57 p.m. UTC | #1
On 1/15/25 03:17, Alexei Starovoitov wrote:
> From: Alexei Starovoitov <ast@kernel.org>
> 
> When gfpflags_allow_spinning() == false, spinlocks cannot be taken.
> Make failslab, kfence, and kmemleak compliant.
> 
> Signed-off-by: Alexei Starovoitov <ast@kernel.org>

All these are related to slab, so this would rather belong in a follow-up
series that expands the support from the page allocator to slab, no?

Alexei Starovoitov Jan. 16, 2025, 2:23 a.m. UTC | #2
On Wed, Jan 15, 2025 at 9:57 AM Vlastimil Babka <vbabka@suse.cz> wrote:
>
> On 1/15/25 03:17, Alexei Starovoitov wrote:
> > From: Alexei Starovoitov <ast@kernel.org>
> >
> > When gfpflags_allow_spinning() == false, spinlocks cannot be taken.
> > Make failslab, kfence, and kmemleak compliant.
> >
> > Signed-off-by: Alexei Starovoitov <ast@kernel.org>
>
> All these are related to slab, so this would rather belong to a followup
> series that expands the support from page allocator to slab, no?

Sure. I can drop it for now.
It was more of a preview of things to come,
and of how gfpflags_allow_spinning() fits in other places.
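
To illustrate the pattern being referred to, a lock-taking path can honor
trylock mode roughly as follows. This is a hypothetical sketch, not code
from this series; some_lock and the surrounding function are placeholders:

	unsigned long flags;

	if (gfpflags_allow_spinning(gfp)) {
		spin_lock_irqsave(&some_lock, flags);	/* spinning is allowed */
	} else if (!spin_trylock_irqsave(&some_lock, flags)) {
		return NULL;	/* trylock mode: give up rather than spin */
	}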

Patch

diff --git a/mm/failslab.c b/mm/failslab.c
index c3901b136498..86c7304ef25a 100644
--- a/mm/failslab.c
+++ b/mm/failslab.c
@@ -27,6 +27,9 @@ int should_failslab(struct kmem_cache *s, gfp_t gfpflags)
 	if (gfpflags & __GFP_NOFAIL)
 		return 0;
 
+	if (!gfpflags_allow_spinning(gfpflags))
+		return 0;
+
 	if (failslab.ignore_gfp_reclaim &&
 			(gfpflags & __GFP_DIRECT_RECLAIM))
 		return 0;
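
In should_failslab(), a zero return means "do not inject a failure", so the
new check makes trylock-mode allocations opt out of fault injection
entirely; the should_fail() machinery further down can itself end up taking
locks (ratelimited printing, for instance). On the caller side the hook is
consulted before the real allocation, roughly like this (a simplified
sketch, not the exact slab code):

	if (should_failslab(s, gfpflags))
		return NULL;	/* fault injection decided this allocation fails */
	/* otherwise proceed with the real allocation */
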
diff --git a/mm/kfence/core.c b/mm/kfence/core.c
index 67fc321db79b..e5f2d63f3220 100644
--- a/mm/kfence/core.c
+++ b/mm/kfence/core.c
@@ -1096,6 +1096,10 @@ void *__kfence_alloc(struct kmem_cache *s, size_t size, gfp_t flags)
 	if (s->flags & SLAB_SKIP_KFENCE)
 		return NULL;
 
+	/* Bail out, since kfence_guarded_alloc() needs to take a lock */
+	if (!gfpflags_allow_spinning(flags))
+		return NULL;
+
 	allocation_gate = atomic_inc_return(&kfence_allocation_gate);
 	if (allocation_gate > 1)
 		return NULL;
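
Returning NULL from __kfence_alloc() is not an error: the allocation is
simply not sampled into the KFENCE pool and the regular slab path serves
it, so trylock mode only loses KFENCE coverage. A sketch of the caller
side, where regular_slab_alloc() is a placeholder name:

	void *obj = kfence_alloc(s, size, flags);	/* gate check + __kfence_alloc() */
	if (!obj)
		obj = regular_slab_alloc(s, flags);	/* placeholder: normal slab path */
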
diff --git a/mm/kmemleak.c b/mm/kmemleak.c
index 2a945c07ae99..64cb44948e9e 100644
--- a/mm/kmemleak.c
+++ b/mm/kmemleak.c
@@ -648,6 +648,9 @@ static struct kmemleak_object *__alloc_object(gfp_t gfp)
 {
 	struct kmemleak_object *object;
 
+	if (!gfpflags_allow_spinning(gfp))
+		return NULL;
+
 	object = mem_pool_alloc(gfp);
 	if (!object) {
 		pr_warn("Cannot allocate a kmemleak_object structure\n");
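
For kmemleak, the bailout sits in front of mem_pool_alloc(), which first
tries a slab allocation (which may spin inside the allocator) and then
falls back to carving an object from the static pool under the
kmemleak_lock raw spinlock; neither is safe in trylock mode, so the object
is simply not tracked. A condensed sketch of that shape (pop_from_mem_pool()
is a placeholder for the freelist handling):

	static struct kmemleak_object *mem_pool_alloc(gfp_t gfp)
	{
		struct kmemleak_object *object;
		unsigned long flags;

		/* 1) slab allocation: may spin on internal locks */
		object = kmem_cache_alloc(object_cache, gfp);
		if (object)
			return object;

		/* 2) fallback: static pool guarded by a raw spinlock */
		raw_spin_lock_irqsave(&kmemleak_lock, flags);
		object = pop_from_mem_pool();
		raw_spin_unlock_irqrestore(&kmemleak_lock, flags);
		return object;
	}

Note that the early return also skips the pr_warn() below: in trylock mode
a missing tracking record is expected behavior, not an allocation failure
worth warning about.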