
[04/31] kasan, page_alloc: simplify kasan_poison_pages call site

Message ID b28f30ed5d662439fd2354b7a05e4d58a2889e5f.1638308023.git.andreyknvl@google.com (mailing list archive)
State New
Series kasan, vmalloc, arm64: add vmalloc tagging support for SW/HW_TAGS

Commit Message

andrey.konovalov@linux.dev Nov. 30, 2021, 9:39 p.m. UTC
From: Andrey Konovalov <andreyknvl@google.com>

Simplify the code around calling kasan_poison_pages() in
free_pages_prepare().

Reordering kasan_poison_pages() and kernel_init_free_pages() is OK,
since kernel_init_free_pages() can handle poisoned memory.

This patch makes no functional changes besides reordering the calls.

Signed-off-by: Andrey Konovalov <andreyknvl@google.com>
---
 mm/page_alloc.c | 18 +++++-------------
 1 file changed, 5 insertions(+), 13 deletions(-)
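
[Editorial sketch, not part of the patch.] To see that the old and new call sites request the same set of calls, the following standalone C model can be compiled and run. It is not kernel code: the kernel predicates are stubbed out as plain booleans, and the names (struct calls, old_site, new_site) are made up for the sketch. It enumerates every combination of kasan_has_integrated_init(), want_init_on_free(), and skip_kasan_poison and compares the two shapes of the logic; only the relative order of the poison and init calls differs, which the commit message argues is safe because kernel_init_free_pages() can handle poisoned memory.

#include <stdbool.h>
#include <stdio.h>

/* Which calls the call site would issue; a stand-in for the real effects. */
struct calls {
	bool poison;     /* kasan_poison_pages() */
	bool init_pages; /* kernel_init_free_pages() */
};

/* Old shape: if/else on kasan_has_integrated_init(). */
static struct calls old_site(bool integrated, bool init, bool skip_poison)
{
	struct calls c = { false, false };

	if (integrated) {
		if (!skip_poison)
			c.poison = true;
	} else {
		if (init)
			c.init_pages = true;
		if (!skip_poison)
			c.poison = true;
	}
	return c;
}

/* New shape: two flat conditions. */
static struct calls new_site(bool integrated, bool init, bool skip_poison)
{
	struct calls c = { false, false };

	if (!skip_poison)
		c.poison = true;
	if (init && !integrated)
		c.init_pages = true;
	return c;
}

int main(void)
{
	for (int i = 0; i < 8; i++) {
		bool integrated = i & 1, init = i & 2, skip = i & 4;
		struct calls a = old_site(integrated, init, skip);
		struct calls b = new_site(integrated, init, skip);

		if (a.poison != b.poison || a.init_pages != b.init_pages) {
			printf("mismatch for case %d\n", i);
			return 1;
		}
	}
	printf("old and new call sites request the same calls in all cases\n");
	return 0;
}

Built with any C compiler, this prints the success message for all eight cases, matching the "no functional changes besides reordering" claim.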

Comments

Marco Elver Dec. 1, 2021, 2:09 p.m. UTC | #1
On Tue, Nov 30, 2021 at 10:39PM +0100, andrey.konovalov@linux.dev wrote:
> From: Andrey Konovalov <andreyknvl@google.com>
> 
> Simplify the code around calling kasan_poison_pages() in
> free_pages_prepare().
> 
> Reordering kasan_poison_pages() and kernel_init_free_pages() is OK,
> since kernel_init_free_pages() can handle poisoned memory.

Why did they have to be reordered?

> This patch makes no functional changes besides reordering the calls.
> 
> Signed-off-by: Andrey Konovalov <andreyknvl@google.com>
> ---
>  mm/page_alloc.c | 18 +++++-------------
>  1 file changed, 5 insertions(+), 13 deletions(-)
> 
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index 3f3ea41f8c64..0673db27dd12 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -1289,6 +1289,7 @@ static __always_inline bool free_pages_prepare(struct page *page,
>  {
>  	int bad = 0;
>  	bool skip_kasan_poison = should_skip_kasan_poison(page, fpi_flags);

skip_kasan_poison is only used once now, so you could remove the
variable -- unless later code will use it in more than one place again.
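
[Editorial sketch, not part of the patch or the review.] Purely to illustrate the suggestion, dropping the local variable would make the call site read roughly:

	if (!should_skip_kasan_poison(page, fpi_flags))
		kasan_poison_pages(page, order, init);
	if (init && !kasan_has_integrated_init())
		kernel_init_free_pages(page, 1 << order);

All identifiers here already appear in the hunk below; only the inlining of the predicate is new.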

> +	bool init = want_init_on_free();
>  
>  	VM_BUG_ON_PAGE(PageTail(page), page);
>  
> @@ -1359,19 +1360,10 @@ static __always_inline bool free_pages_prepare(struct page *page,
>  	 * With hardware tag-based KASAN, memory tags must be set before the
>  	 * page becomes unavailable via debug_pagealloc or arch_free_page.
>  	 */
> -	if (kasan_has_integrated_init()) {
> -		bool init = want_init_on_free();
> -
> -		if (!skip_kasan_poison)
> -			kasan_poison_pages(page, order, init);
> -	} else {
> -		bool init = want_init_on_free();
> -
> -		if (init)
> -			kernel_init_free_pages(page, 1 << order);
> -		if (!skip_kasan_poison)
> -			kasan_poison_pages(page, order, init);
> -	}
> +	if (!skip_kasan_poison)
> +		kasan_poison_pages(page, order, init);
> +	if (init && !kasan_has_integrated_init())
> +		kernel_init_free_pages(page, 1 << order);
>  
>  	/*
>  	 * arch_free_page() can make the page's contents inaccessible.  s390
> -- 
> 2.25.1

Andrey Konovalov Dec. 6, 2021, 9:07 p.m. UTC | #2
On Wed, Dec 1, 2021 at 3:10 PM Marco Elver <elver@google.com> wrote:
>
> On Tue, Nov 30, 2021 at 10:39PM +0100, andrey.konovalov@linux.dev wrote:
> > From: Andrey Konovalov <andreyknvl@google.com>
> >
> > Simplify the code around calling kasan_poison_pages() in
> > free_pages_prepare().
> >
> > Reordering kasan_poison_pages() and kernel_init_free_pages() is OK,
> > since kernel_init_free_pages() can handle poisoned memory.
>
> Why did they have to be reordered?

It's for the next patch, I'll move the reordering there in v2.

> > This patch makes no functional changes besides reordering the calls.
> >
> > Signed-off-by: Andrey Konovalov <andreyknvl@google.com>
> > ---
> >  mm/page_alloc.c | 18 +++++-------------
> >  1 file changed, 5 insertions(+), 13 deletions(-)
> >
> > diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> > index 3f3ea41f8c64..0673db27dd12 100644
> > --- a/mm/page_alloc.c
> > +++ b/mm/page_alloc.c
> > @@ -1289,6 +1289,7 @@ static __always_inline bool free_pages_prepare(struct page *page,
> >  {
> >       int bad = 0;
> >       bool skip_kasan_poison = should_skip_kasan_poison(page, fpi_flags);
>
> skip_kasan_poison is only used once now, so you could remove the
> variable -- unless later code will use it in more than one place again.

Will do in v2.

Thanks!

Patch

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 3f3ea41f8c64..0673db27dd12 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -1289,6 +1289,7 @@  static __always_inline bool free_pages_prepare(struct page *page,
 {
 	int bad = 0;
 	bool skip_kasan_poison = should_skip_kasan_poison(page, fpi_flags);
+	bool init = want_init_on_free();
 
 	VM_BUG_ON_PAGE(PageTail(page), page);
 
@@ -1359,19 +1360,10 @@  static __always_inline bool free_pages_prepare(struct page *page,
 	 * With hardware tag-based KASAN, memory tags must be set before the
 	 * page becomes unavailable via debug_pagealloc or arch_free_page.
 	 */
-	if (kasan_has_integrated_init()) {
-		bool init = want_init_on_free();
-
-		if (!skip_kasan_poison)
-			kasan_poison_pages(page, order, init);
-	} else {
-		bool init = want_init_on_free();
-
-		if (init)
-			kernel_init_free_pages(page, 1 << order);
-		if (!skip_kasan_poison)
-			kasan_poison_pages(page, order, init);
-	}
+	if (!skip_kasan_poison)
+		kasan_poison_pages(page, order, init);
+	if (init && !kasan_has_integrated_init())
+		kernel_init_free_pages(page, 1 << order);
 
 	/*
 	 * arch_free_page() can make the page's contents inaccessible.  s390