| Message ID | 20240607090939.89524-2-david@redhat.com (mailing list archive) |
|---|---|
| State | New, archived |
| Series | mm/memory_hotplug: use PageOffline() instead of PageReserved() for !ZONE_DEVICE |
On 07.06.24 11:09, David Hildenbrand wrote:
> In preparation for further changes, let's teach __free_pages_core()
> about the differences of memory hotplug handling.
> 
> Move the memory hotplug specific handling from generic_online_page() to
> __free_pages_core(), use adjust_managed_page_count() on the memory
> hotplug path, and spell out why memory freed via memblock
> cannot currently use adjust_managed_page_count().
> 
> Signed-off-by: David Hildenbrand <david@redhat.com>
> ---
>   mm/internal.h       |  3 ++-
>   mm/kmsan/init.c     |  2 +-
>   mm/memory_hotplug.c |  9 +--------
>   mm/mm_init.c        |  4 ++--
>   mm/page_alloc.c     | 17 +++++++++++++++--
>   5 files changed, 21 insertions(+), 14 deletions(-)
> 
> diff --git a/mm/internal.h b/mm/internal.h
> index 12e95fdf61e90..3fdee779205ab 100644
> --- a/mm/internal.h
> +++ b/mm/internal.h
> @@ -604,7 +604,8 @@ extern void __putback_isolated_page(struct page *page, unsigned int order,
>   				      int mt);
>   extern void memblock_free_pages(struct page *page, unsigned long pfn,
>   					unsigned int order);
> -extern void __free_pages_core(struct page *page, unsigned int order);
> +extern void __free_pages_core(struct page *page, unsigned int order,
> +			      enum meminit_context);
>   
>   /*
>    * This will have no effect, other than possibly generating a warning, if the
> diff --git a/mm/kmsan/init.c b/mm/kmsan/init.c
> index 3ac3b8921d36f..ca79636f858e5 100644
> --- a/mm/kmsan/init.c
> +++ b/mm/kmsan/init.c
> @@ -172,7 +172,7 @@ static void do_collection(void)
>   		shadow = smallstack_pop(&collect);
>   		origin = smallstack_pop(&collect);
>   		kmsan_setup_meta(page, shadow, origin, collect.order);
> -		__free_pages_core(page, collect.order);
> +		__free_pages_core(page, collect.order, MEMINIT_EARLY);
>   	}
>   }
>   
> diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
> index 171ad975c7cfd..27e3be75edcf7 100644
> --- a/mm/memory_hotplug.c
> +++ b/mm/memory_hotplug.c
> @@ -630,14 +630,7 @@ EXPORT_SYMBOL_GPL(restore_online_page_callback);
>   
>   void generic_online_page(struct page *page, unsigned int order)
>   {
> -	/*
> -	 * Freeing the page with debug_pagealloc enabled will try to unmap it,
> -	 * so we should map it first. This is better than introducing a special
> -	 * case in page freeing fast path.
> -	 */
> -	debug_pagealloc_map_pages(page, 1 << order);
> -	__free_pages_core(page, order);
> -	totalram_pages_add(1UL << order);
> +	__free_pages_core(page, order, MEMINIT_HOTPLUG);
>   }
>   EXPORT_SYMBOL_GPL(generic_online_page);
>   
> diff --git a/mm/mm_init.c b/mm/mm_init.c
> index 019193b0d8703..feb5b6e8c8875 100644
> --- a/mm/mm_init.c
> +++ b/mm/mm_init.c
> @@ -1938,7 +1938,7 @@ static void __init deferred_free_range(unsigned long pfn,
>   	for (i = 0; i < nr_pages; i++, page++, pfn++) {
>   		if (pageblock_aligned(pfn))
>   			set_pageblock_migratetype(page, MIGRATE_MOVABLE);
> -		__free_pages_core(page, 0);
> +		__free_pages_core(page, 0, MEMINIT_EARLY);
>   	}
>   }

The build bot just reminded me that I missed another case in this
function: (CONFIG_DEFERRED_STRUCT_PAGE_INIT)

diff --git a/mm/mm_init.c b/mm/mm_init.c
index feb5b6e8c8875..5a0752261a795 100644
--- a/mm/mm_init.c
+++ b/mm/mm_init.c
@@ -1928,7 +1928,7 @@ static void __init deferred_free_range(unsigned long pfn,
 	if (nr_pages == MAX_ORDER_NR_PAGES && IS_MAX_ORDER_ALIGNED(pfn)) {
 		for (i = 0; i < nr_pages; i += pageblock_nr_pages)
 			set_pageblock_migratetype(page + i, MIGRATE_MOVABLE);
-		__free_pages_core(page, MAX_PAGE_ORDER);
+		__free_pages_core(page, MAX_PAGE_ORDER, MEMINIT_EARLY);
 		return;
 	}
On Fri, Jun 07, 2024 at 11:09:36AM +0200, David Hildenbrand wrote:
> In preparation for further changes, let's teach __free_pages_core()
> about the differences of memory hotplug handling.
> 
> Move the memory hotplug specific handling from generic_online_page() to
> __free_pages_core(), use adjust_managed_page_count() on the memory
> hotplug path, and spell out why memory freed via memblock
> cannot currently use adjust_managed_page_count().
> 
> Signed-off-by: David Hildenbrand <david@redhat.com>

All looks good, but I am puzzled by something.

> +	} else {
> +		/* memblock adjusts totalram_pages() ahead of time. */
> +		atomic_long_add(nr_pages, &page_zone(page)->managed_pages);
> +	}

You say that memblock adjusts totalram_pages ahead of time, and I guess
you mean in memblock_free_all()

 pages = free_low_memory_core_early()
 totalram_pages_add(pages);

but that is not ahead; it looks like it is updating __after__ sending
them to buddy?
On 10.06.24 06:03, Oscar Salvador wrote:
> On Fri, Jun 07, 2024 at 11:09:36AM +0200, David Hildenbrand wrote:
>> In preparation for further changes, let's teach __free_pages_core()
>> about the differences of memory hotplug handling.
>>
>> Move the memory hotplug specific handling from generic_online_page() to
>> __free_pages_core(), use adjust_managed_page_count() on the memory
>> hotplug path, and spell out why memory freed via memblock
>> cannot currently use adjust_managed_page_count().
>>
>> Signed-off-by: David Hildenbrand <david@redhat.com>
> 
> All looks good, but I am puzzled by something.
> 
>> +	} else {
>> +		/* memblock adjusts totalram_pages() ahead of time. */
>> +		atomic_long_add(nr_pages, &page_zone(page)->managed_pages);
>> +	}
> 
> You say that memblock adjusts totalram_pages ahead of time, and I guess
> you mean in memblock_free_all()

And memblock_free_late(), which uses atomic_long_inc().

> 
>  pages = free_low_memory_core_early()
>  totalram_pages_add(pages);
> 
> but that is not ahead; it looks like it is updating __after__ sending
> them to buddy?

Right. (It's suboptimal, but not really problematic so far. Hopefully Wei
can clean it up and move it in here as well.)

For the time being:

"/* memblock adjusts totalram_pages() manually. */"

?

Thanks!
On Mon, Jun 10, 2024 at 10:38:05AM +0200, David Hildenbrand wrote:
> On 10.06.24 06:03, Oscar Salvador wrote:
> > On Fri, Jun 07, 2024 at 11:09:36AM +0200, David Hildenbrand wrote:
> > > In preparation for further changes, let's teach __free_pages_core()
> > > about the differences of memory hotplug handling.
> > >
> > > Move the memory hotplug specific handling from generic_online_page() to
> > > __free_pages_core(), use adjust_managed_page_count() on the memory
> > > hotplug path, and spell out why memory freed via memblock
> > > cannot currently use adjust_managed_page_count().
> > >
> > > Signed-off-by: David Hildenbrand <david@redhat.com>
> >
> > All looks good, but I am puzzled by something.
> >
> > > +	} else {
> > > +		/* memblock adjusts totalram_pages() ahead of time. */
> > > +		atomic_long_add(nr_pages, &page_zone(page)->managed_pages);
> > > +	}
> >
> > You say that memblock adjusts totalram_pages ahead of time, and I guess
> > you mean in memblock_free_all()
> 
> And memblock_free_late(), which uses atomic_long_inc().

Ah yes.

> Right. (It's suboptimal, but not really problematic so far. Hopefully Wei
> can clean it up and move it in here as well.)

That would be great.

> For the time being:
> 
> "/* memblock adjusts totalram_pages() manually. */"

Yes, I think that is better ;-)

Thanks!
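[Editor's note: to make the ordering under discussion concrete, here is a
simplified sketch of memblock_free_all(), reconstructed from the two calls
Oscar quotes above — an assumed shape, not verbatim kernel code for any
particular version. The pages reach the buddy first; the global counter
catches up afterwards, which is why "manually" fits better than "ahead of
time".]

	/*
	 * Simplified sketch (assumption based on the calls named in the
	 * thread, not copied from the kernel source).
	 */
	void __init memblock_free_all(void)
	{
		unsigned long pages;

		/*
		 * Hands every free memblock range to the buddy allocator;
		 * internally this ends up calling __free_pages_core().
		 */
		pages = free_low_memory_core_early();

		/*
		 * Only now is the global counter bumped -- i.e. *after*
		 * the pages already sit in the buddy, not ahead of time.
		 */
		totalram_pages_add(pages);
	}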
On 07.06.24 11:09, David Hildenbrand wrote:
> In preparation for further changes, let's teach __free_pages_core()
> about the differences of memory hotplug handling.
> 
> Move the memory hotplug specific handling from generic_online_page() to
> __free_pages_core(), use adjust_managed_page_count() on the memory
> hotplug path, and spell out why memory freed via memblock
> cannot currently use adjust_managed_page_count().
> 
> Signed-off-by: David Hildenbrand <david@redhat.com>
> ---

@Andrew, can you squash the following?

From 0a7921cf21cacf178ca7485da0138fc38a97a28e Mon Sep 17 00:00:00 2001
From: David Hildenbrand <david@redhat.com>
Date: Tue, 11 Jun 2024 12:05:09 +0200
Subject: [PATCH] fixup: mm/highmem: make nr_free_highpages() return "unsigned
 long"

Fixup the memblock comment.

Signed-off-by: David Hildenbrand <david@redhat.com>
---
 mm/page_alloc.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index e0c8a8354be36..fc53f96db58a2 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -1245,7 +1245,7 @@ void __free_pages_core(struct page *page, unsigned int order,
 		debug_pagealloc_map_pages(page, nr_pages);
 		adjust_managed_page_count(page, nr_pages);
 	} else {
-		/* memblock adjusts totalram_pages() ahead of time. */
+		/* memblock adjusts totalram_pages() manually. */
 		atomic_long_add(nr_pages, &page_zone(page)->managed_pages);
 	}
On Tue, 11 Jun 2024 12:06:56 +0200 David Hildenbrand <david@redhat.com> wrote:

> On 07.06.24 11:09, David Hildenbrand wrote:
> > In preparation for further changes, let's teach __free_pages_core()
> > about the differences of memory hotplug handling.
> >
> > Move the memory hotplug specific handling from generic_online_page() to
> > __free_pages_core(), use adjust_managed_page_count() on the memory
> > hotplug path, and spell out why memory freed via memblock
> > cannot currently use adjust_managed_page_count().
> >
> > Signed-off-by: David Hildenbrand <david@redhat.com>
> > ---
> 
> @Andrew, can you squash the following?

Sure.

I queued it against "mm: pass meminit_context to __free_pages_core()",
not against

> Subject: [PATCH] fixup: mm/highmem: make nr_free_highpages() return "unsigned
>  long"
On 11.06.24 21:19, Andrew Morton wrote:
> On Tue, 11 Jun 2024 12:06:56 +0200 David Hildenbrand <david@redhat.com> wrote:
> 
>> On 07.06.24 11:09, David Hildenbrand wrote:
>>> In preparation for further changes, let's teach __free_pages_core()
>>> about the differences of memory hotplug handling.
>>>
>>> Move the memory hotplug specific handling from generic_online_page() to
>>> __free_pages_core(), use adjust_managed_page_count() on the memory
>>> hotplug path, and spell out why memory freed via memblock
>>> cannot currently use adjust_managed_page_count().
>>>
>>> Signed-off-by: David Hildenbrand <david@redhat.com>
>>> ---
>>
>> @Andrew, can you squash the following?
> 
> Sure.
> 
> I queued it against "mm: pass meminit_context to __free_pages_core()",
> not against

Ah yes, sorry.

Thanks!
On Fri, 2024-06-07 at 11:09 +0200, David Hildenbrand wrote:
> In preparation for further changes, let's teach __free_pages_core()
> about the differences of memory hotplug handling.
> 
> Move the memory hotplug specific handling from generic_online_page() to
> __free_pages_core(), use adjust_managed_page_count() on the memory
> hotplug path, and spell out why memory freed via memblock
> cannot currently use adjust_managed_page_count().
> 
> Signed-off-by: David Hildenbrand <david@redhat.com>
> ---
>  mm/internal.h       |  3 ++-
>  mm/kmsan/init.c     |  2 +-
>  mm/memory_hotplug.c |  9 +--------
>  mm/mm_init.c        |  4 ++--
>  mm/page_alloc.c     | 17 +++++++++++++++--
>  5 files changed, 21 insertions(+), 14 deletions(-)
> 
> diff --git a/mm/internal.h b/mm/internal.h
> index 12e95fdf61e90..3fdee779205ab 100644
> --- a/mm/internal.h
> +++ b/mm/internal.h
> @@ -604,7 +604,8 @@ extern void __putback_isolated_page(struct page *page, unsigned int order,
>  				      int mt);
>  extern void memblock_free_pages(struct page *page, unsigned long pfn,
>  					unsigned int order);
> -extern void __free_pages_core(struct page *page, unsigned int order);
> +extern void __free_pages_core(struct page *page, unsigned int order,
> +			      enum meminit_context);

Shouldn't the above be

	enum meminit_context context);

> 
>  /*
>   * This will have no effect, other than possibly generating a warning, if the

Thanks.

Tim
On 11.06.24 21:41, Tim Chen wrote:
> On Fri, 2024-06-07 at 11:09 +0200, David Hildenbrand wrote:
>> In preparation for further changes, let's teach __free_pages_core()
>> about the differences of memory hotplug handling.
>>
>> Move the memory hotplug specific handling from generic_online_page() to
>> __free_pages_core(), use adjust_managed_page_count() on the memory
>> hotplug path, and spell out why memory freed via memblock
>> cannot currently use adjust_managed_page_count().
>>
>> Signed-off-by: David Hildenbrand <david@redhat.com>
>> ---
>>   mm/internal.h       |  3 ++-
>>   mm/kmsan/init.c     |  2 +-
>>   mm/memory_hotplug.c |  9 +--------
>>   mm/mm_init.c        |  4 ++--
>>   mm/page_alloc.c     | 17 +++++++++++++++--
>>   5 files changed, 21 insertions(+), 14 deletions(-)
>>
>> diff --git a/mm/internal.h b/mm/internal.h
>> index 12e95fdf61e90..3fdee779205ab 100644
>> --- a/mm/internal.h
>> +++ b/mm/internal.h
>> @@ -604,7 +604,8 @@ extern void __putback_isolated_page(struct page *page, unsigned int order,
>>   				      int mt);
>>   extern void memblock_free_pages(struct page *page, unsigned long pfn,
>>   					unsigned int order);
>> -extern void __free_pages_core(struct page *page, unsigned int order);
>> +extern void __free_pages_core(struct page *page, unsigned int order,
>> +			      enum meminit_context);
> 
> Shouldn't the above be
> 
> 	enum meminit_context context);

Although C allows parameters without names in declarations, this was
unintended.

Thanks!
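[Editor's note: for readers less familiar with the C rule David references:
parameter names are optional in declarations (prototypes) but required for
any parameter used in a definition. A minimal standalone illustration — the
enum is reduced to the two values used in this series, and struct page is
left opaque:]

	struct page;	/* opaque here; the real definition lives in the kernel */
	enum meminit_context { MEMINIT_EARLY, MEMINIT_HOTPLUG };	/* simplified */

	/* Valid C, but the reader cannot tell what the third argument means: */
	extern void __free_pages_core(struct page *page, unsigned int order,
				      enum meminit_context);

	/* Equally valid (both declare the same function), and what the
	 * fixup below switches to: */
	extern void __free_pages_core(struct page *page, unsigned int order,
				      enum meminit_context context);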
On 11.06.24 21:19, Andrew Morton wrote:
> On Tue, 11 Jun 2024 12:06:56 +0200 David Hildenbrand <david@redhat.com> wrote:
> 
>> On 07.06.24 11:09, David Hildenbrand wrote:
>>> In preparation for further changes, let's teach __free_pages_core()
>>> about the differences of memory hotplug handling.
>>>
>>> Move the memory hotplug specific handling from generic_online_page() to
>>> __free_pages_core(), use adjust_managed_page_count() on the memory
>>> hotplug path, and spell out why memory freed via memblock
>>> cannot currently use adjust_managed_page_count().
>>>
>>> Signed-off-by: David Hildenbrand <david@redhat.com>
>>> ---
>>
>> @Andrew, can you squash the following?
> 
> Sure.
> 
> I queued it against "mm: pass meminit_context to __free_pages_core()",
> not against
> 
>> Subject: [PATCH] fixup: mm/highmem: make nr_free_highpages() return "unsigned
>>  long"

Can you squash the following as well? (Hopefully the last fixup;
otherwise I might just resend a v2.)

From 53c8c5834e638b2ae5e2a34fa7d49ce0dcf25192 Mon Sep 17 00:00:00 2001
From: David Hildenbrand <david@redhat.com>
Date: Wed, 12 Jun 2024 20:31:07 +0200
Subject: [PATCH] fixup: mm: pass meminit_context to __free_pages_core()

Let's add the parameter name also in the declaration.

Signed-off-by: David Hildenbrand <david@redhat.com>
---
 mm/internal.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/mm/internal.h b/mm/internal.h
index 14bab8a41baf6..254dd907bf9a2 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -605,7 +605,7 @@ extern void __putback_isolated_page(struct page *page, unsigned int order,
 extern void memblock_free_pages(struct page *page, unsigned long pfn,
 					unsigned int order);
 extern void __free_pages_core(struct page *page, unsigned int order,
-			      enum meminit_context);
+			      enum meminit_context context);
 
 /*
  * This will have no effect, other than possibly generating a warning, if the
diff --git a/mm/internal.h b/mm/internal.h
index 12e95fdf61e90..3fdee779205ab 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -604,7 +604,8 @@ extern void __putback_isolated_page(struct page *page, unsigned int order,
 				      int mt);
 extern void memblock_free_pages(struct page *page, unsigned long pfn,
 					unsigned int order);
-extern void __free_pages_core(struct page *page, unsigned int order);
+extern void __free_pages_core(struct page *page, unsigned int order,
+			      enum meminit_context);
 
 /*
  * This will have no effect, other than possibly generating a warning, if the
diff --git a/mm/kmsan/init.c b/mm/kmsan/init.c
index 3ac3b8921d36f..ca79636f858e5 100644
--- a/mm/kmsan/init.c
+++ b/mm/kmsan/init.c
@@ -172,7 +172,7 @@ static void do_collection(void)
 		shadow = smallstack_pop(&collect);
 		origin = smallstack_pop(&collect);
 		kmsan_setup_meta(page, shadow, origin, collect.order);
-		__free_pages_core(page, collect.order);
+		__free_pages_core(page, collect.order, MEMINIT_EARLY);
 	}
 }
 
diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
index 171ad975c7cfd..27e3be75edcf7 100644
--- a/mm/memory_hotplug.c
+++ b/mm/memory_hotplug.c
@@ -630,14 +630,7 @@ EXPORT_SYMBOL_GPL(restore_online_page_callback);
 
 void generic_online_page(struct page *page, unsigned int order)
 {
-	/*
-	 * Freeing the page with debug_pagealloc enabled will try to unmap it,
-	 * so we should map it first. This is better than introducing a special
-	 * case in page freeing fast path.
-	 */
-	debug_pagealloc_map_pages(page, 1 << order);
-	__free_pages_core(page, order);
-	totalram_pages_add(1UL << order);
+	__free_pages_core(page, order, MEMINIT_HOTPLUG);
 }
 EXPORT_SYMBOL_GPL(generic_online_page);
 
diff --git a/mm/mm_init.c b/mm/mm_init.c
index 019193b0d8703..feb5b6e8c8875 100644
--- a/mm/mm_init.c
+++ b/mm/mm_init.c
@@ -1938,7 +1938,7 @@ static void __init deferred_free_range(unsigned long pfn,
 	for (i = 0; i < nr_pages; i++, page++, pfn++) {
 		if (pageblock_aligned(pfn))
 			set_pageblock_migratetype(page, MIGRATE_MOVABLE);
-		__free_pages_core(page, 0);
+		__free_pages_core(page, 0, MEMINIT_EARLY);
 	}
 }
 
@@ -2513,7 +2513,7 @@ void __init memblock_free_pages(struct page *page, unsigned long pfn,
 		}
 	}
 
-	__free_pages_core(page, order);
+	__free_pages_core(page, order, MEMINIT_EARLY);
 }
 
 DEFINE_STATIC_KEY_MAYBE(CONFIG_INIT_ON_ALLOC_DEFAULT_ON, init_on_alloc);
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 2224965ada468..e0c8a8354be36 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -1214,7 +1214,8 @@ static void __free_pages_ok(struct page *page, unsigned int order,
 	__count_vm_events(PGFREE, 1 << order);
 }
 
-void __free_pages_core(struct page *page, unsigned int order)
+void __free_pages_core(struct page *page, unsigned int order,
+		enum meminit_context context)
 {
 	unsigned int nr_pages = 1 << order;
 	struct page *p = page;
@@ -1234,7 +1235,19 @@ void __free_pages_core(struct page *page, unsigned int order)
 	__ClearPageReserved(p);
 	set_page_count(p, 0);
 
-	atomic_long_add(nr_pages, &page_zone(page)->managed_pages);
+	if (IS_ENABLED(CONFIG_MEMORY_HOTPLUG) &&
+	    unlikely(context == MEMINIT_HOTPLUG)) {
+		/*
+		 * Freeing the page with debug_pagealloc enabled will try to
+		 * unmap it; some archs don't like double-unmappings, so
+		 * map it first.
+		 */
+		debug_pagealloc_map_pages(page, nr_pages);
+		adjust_managed_page_count(page, nr_pages);
+	} else {
+		/* memblock adjusts totalram_pages() ahead of time. */
+		atomic_long_add(nr_pages, &page_zone(page)->managed_pages);
+	}
 
 	if (page_contains_unaccepted(page, order)) {
 		if (order == MAX_PAGE_ORDER && __free_unaccepted(page))
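[Editor's note: one detail of the new branch worth spelling out: the
IS_ENABLED(CONFIG_MEMORY_HOTPLUG) && unlikely(...) test costs nothing when
hotplug is compiled out, and keeps boot-time freeing on the predicted fast
path when it is compiled in. A condensed reading of the hunk above — the
comments are annotations, not from the patch:]

	if (IS_ENABLED(CONFIG_MEMORY_HOTPLUG) &&	/* constant-folds to false with hotplug off */
	    unlikely(context == MEMINIT_HOTPLUG)) {	/* rare relative to boot-time freeing */
		/*
		 * Hotplug: map for debug_pagealloc, then account via
		 * adjust_managed_page_count(), which also updates
		 * totalram_pages.
		 */
		debug_pagealloc_map_pages(page, nr_pages);
		adjust_managed_page_count(page, nr_pages);
	} else {
		/*
		 * Early boot: only managed_pages is updated here; memblock
		 * updates totalram_pages() itself (see the discussion above).
		 */
		atomic_long_add(nr_pages, &page_zone(page)->managed_pages);
	}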
In preparation for further changes, let's teach __free_pages_core()
about the differences of memory hotplug handling.

Move the memory hotplug specific handling from generic_online_page() to
__free_pages_core(), use adjust_managed_page_count() on the memory
hotplug path, and spell out why memory freed via memblock
cannot currently use adjust_managed_page_count().

Signed-off-by: David Hildenbrand <david@redhat.com>
---
 mm/internal.h       |  3 ++-
 mm/kmsan/init.c     |  2 +-
 mm/memory_hotplug.c |  9 +--------
 mm/mm_init.c        |  4 ++--
 mm/page_alloc.c     | 17 +++++++++++++++--
 5 files changed, 21 insertions(+), 14 deletions(-)