[RFC,v2,06/27] mm: page_alloc: Allow an arch to hook early into free_pages_prepare()

Message ID 20231119165721.9849-7-alexandru.elisei@arm.com (mailing list archive)
State New, archived
Series [RFC,v2,01/27] arm64: mte: Rework naming for tag manipulation functions

Commit Message

Alexandru Elisei Nov. 19, 2023, 4:57 p.m. UTC
Add arch_free_pages_prepare() hook that is called before the page flags
are cleared. This will be used by arm64 when explicit management of tag
storage pages is enabled.

Signed-off-by: Alexandru Elisei <alexandru.elisei@arm.com>
---
 include/linux/pgtable.h | 4 ++++
 mm/page_alloc.c         | 4 +++-
 2 files changed, 7 insertions(+), 1 deletion(-)

Comments

David Hildenbrand Nov. 24, 2023, 7:36 p.m. UTC | #1
On 19.11.23 17:57, Alexandru Elisei wrote:
> Add arch_free_pages_prepare() hook that is called before the page flags
> are cleared. This will be used by arm64 when explicit management of tag
> storage pages is enabled.

Can you elaborate a bit what exactly will be done by that code with that 
information?
Alexandru Elisei Nov. 27, 2023, 1:03 p.m. UTC | #2
Hi,

On Fri, Nov 24, 2023 at 08:36:52PM +0100, David Hildenbrand wrote:
> On 19.11.23 17:57, Alexandru Elisei wrote:
> > Add arch_free_pages_prepare() hook that is called before the page flags
> > are cleared. This will be used by arm64 when explicit management of tag
> > storage pages is enabled.
> 
> Can you elaborate a bit what exactly will be done by that code with that
> information?

Of course.

The MTE code that is in the kernel today uses the PG_arch_2 page flag, which it
renames to PG_mte_tagged, to track if a page has been mapped with tagging
enabled. That flag is cleared by free_pages_prepare() when it does:

	page->flags &= ~PAGE_FLAGS_CHECK_AT_PREP;

When tag storage management is enabled, tag storage is reserved for a page if
and only if the page is mapped as tagged. When a page is freed, the code
checks the PG_mte_tagged flag to determine whether the page was mapped as
tagged, and therefore has tag storage reserved that must also be freed.

I have considered using arch_free_page(), but free_pages_prepare() calls the
function after the flags are cleared.
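
To make the ordering concrete, here is a heavily condensed view of
free_pages_prepare() with this patch applied; everything not relevant to the
lifetime of the page flags is elided:

	static __always_inline bool free_pages_prepare(struct page *page,
						unsigned int order, fpi_t fpi_flags)
	{
		/* ... */
		trace_mm_page_free(page, order);
		kmsan_free_page(page, order);

		arch_free_pages_prepare(page, order);	/* new hook: page flags still intact */

		/* ... */
		page->flags &= ~PAGE_FLAGS_CHECK_AT_PREP;	/* PG_mte_tagged is lost here */
		/* ... */
		arch_free_page(page, order);	/* existing hook: runs too late */
		/* ... */
		return true;
	}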

Does that answer your question?

Alex

> 
> -- 
> Cheers,
> 
> David / dhildenb
>
David Hildenbrand Nov. 28, 2023, 4:58 p.m. UTC | #3
On 27.11.23 14:03, Alexandru Elisei wrote:
> Hi,
> 
> On Fri, Nov 24, 2023 at 08:36:52PM +0100, David Hildenbrand wrote:
>> On 19.11.23 17:57, Alexandru Elisei wrote:
>>> Add arch_free_pages_prepare() hook that is called before the page flags
>>> are cleared. This will be used by arm64 when explicit management of tag
>>> storage pages is enabled.
>>
>> Can you elaborate a bit what exactly will be done by that code with that
>> information?
> 
> Of course.
> 
> The MTE code that is in the kernel today uses the PG_arch_2 page flag, which it
> renames to PG_mte_tagged, to track if a page has been mapped with tagging
> enabled. That flag is cleared by free_pages_prepare() when it does:
> 
> 	page->flags &= ~PAGE_FLAGS_CHECK_AT_PREP;
> 
> When tag storage management is enabled, tag storage is reserved for a page if
> and only if the page is mapped as tagged. When a page is freed, the code
> checks the PG_mte_tagged flag to determine whether the page was mapped as
> tagged, and therefore has tag storage reserved that must also be freed.
> 
> I have considered using arch_free_page(), but free_pages_prepare() calls the
> function after the flags are cleared.
> 
> Does that answer your question?

Yes, please add some of that to the patch description!
Alexandru Elisei Nov. 28, 2023, 5:17 p.m. UTC | #4
Hi,

On Tue, Nov 28, 2023 at 05:58:55PM +0100, David Hildenbrand wrote:
> On 27.11.23 14:03, Alexandru Elisei wrote:
> > Hi,
> > 
> > On Fri, Nov 24, 2023 at 08:36:52PM +0100, David Hildenbrand wrote:
> > > On 19.11.23 17:57, Alexandru Elisei wrote:
> > > > Add arch_free_pages_prepare() hook that is called before the page flags
> > > > are cleared. This will be used by arm64 when explicit management of tag
> > > > storage pages is enabled.
> > > 
> > > Can you elaborate a bit what exactly will be done by that code with that
> > > information?
> > 
> > Of course.
> > 
> > The MTE code that is in the kernel today uses the PG_arch_2 page flag, which it
> > renames to PG_mte_tagged, to track if a page has been mapped with tagging
> > enabled. That flag is cleared by free_pages_prepare() when it does:
> > 
> > 	page->flags &= ~PAGE_FLAGS_CHECK_AT_PREP;
> > 
> > When tag storage management is enabled, tag storage is reserved for a page if
> > and only if the page is mapped as tagged. When a page is freed, the code
> > checks the PG_mte_tagged flag to determine whether the page was mapped as
> > tagged, and therefore has tag storage reserved that must also be freed.
> > 
> > I have considered using arch_free_page(), but free_pages_prepare() calls the
> > function after the flags are cleared.
> > 
> > Does that answer your question?
> 
> Yes, please add some of that to the patch description!

Will do!

Thanks,
Alex

> 
> -- 
> Cheers,
> 
> David / dhildenb
>

Patch

diff --git a/include/linux/pgtable.h b/include/linux/pgtable.h
index b31f53e9ab1d..3f34f00ced62 100644
--- a/include/linux/pgtable.h
+++ b/include/linux/pgtable.h
@@ -880,6 +880,10 @@ static inline int arch_prep_new_page(struct page *page, int order, gfp_t gfp)
 }
 #endif
 
+#ifndef __HAVE_ARCH_FREE_PAGES_PREPARE
+static inline void arch_free_pages_prepare(struct page *page, int order) { }
+#endif
+
 #ifndef __HAVE_ARCH_UNMAP_ONE
 /*
  * Some architectures support metadata associated with a page. When a
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index b2782b778e78..86e4b1dac538 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -1086,6 +1086,8 @@ static __always_inline bool free_pages_prepare(struct page *page,
 	trace_mm_page_free(page, order);
 	kmsan_free_page(page, order);
 
+	arch_free_pages_prepare(page, order);
+
 	if (unlikely(PageHWPoison(page)) && !order) {
 		/*
 		 * Do not let hwpoison pages hit pcplists/buddy
@@ -3171,7 +3173,7 @@ static inline unsigned int gfp_to_alloc_flags_cma(gfp_t gfp_mask,
 	return alloc_flags;
 }
 
-#ifdef HAVE_ARCH_ALLOC_PAGE
+#ifdef HAVE_ARCH_PREP_NEW_PAGE
 static void return_page_to_buddy(struct page *page, int order)
 {
 	int migratetype = get_pfnblock_migratetype(page, pfn);
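
As a usage note, an architecture opts into the hook by defining
__HAVE_ARCH_FREE_PAGES_PREPARE and providing its own arch_free_pages_prepare().
A rough sketch of what the arm64 side might look like, using the existing
system_supports_mte() and page_mte_tagged() helpers and a hypothetical
free_tag_storage() standing in for whatever the rest of this series provides:

	/* Sketch only, not part of this patch: arm64 override in asm/pgtable.h */
	#define __HAVE_ARCH_FREE_PAGES_PREPARE
	static inline void arch_free_pages_prepare(struct page *page, int order)
	{
		/*
		 * Called before PAGE_FLAGS_CHECK_AT_PREP is cleared, so
		 * PG_mte_tagged still records whether tag storage was reserved.
		 */
		if (system_supports_mte() && page_mte_tagged(page))
			free_tag_storage(page, order);	/* hypothetical helper */
	}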