Message ID | 20211209230414.2766515-2-zi.yan@sent.com (mailing list archive) |
---|---|
State | New |
Series | Use pageblock_order for cma and alloc_contig_range alignment. |
Hi,

On 2021/12/10 07:04, Zi Yan wrote:
> From: Zi Yan <ziy@nvidia.com>
>
> This is done in addition to MIGRATE_ISOLATE pageblock merge avoidance.
> It prepares for the upcoming removal of the MAX_ORDER-1 alignment
> requirement for CMA and alloc_contig_range().
>
> MIGRATE_HIGHATOMIC should not merge with other migratetypes like
> MIGRATE_ISOLATE and MIGRATE_CMA[1], so this commit prevents that too.
> Also add MIGRATE_HIGHATOMIC to the fallbacks array for completeness.
>
> [1] https://lore.kernel.org/linux-mm/20211130100853.GP3366@techsingularity.net/
>
> Signed-off-by: Zi Yan <ziy@nvidia.com>
> ---
>  include/linux/mmzone.h |  6 ++++++
>  mm/page_alloc.c        | 28 ++++++++++++++++++----------
>  2 files changed, 24 insertions(+), 10 deletions(-)
>
> diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
> index 58e744b78c2c..b925431b0123 100644
> --- a/include/linux/mmzone.h
> +++ b/include/linux/mmzone.h
> @@ -83,6 +83,12 @@ static inline bool is_migrate_movable(int mt)
>  	return is_migrate_cma(mt) || mt == MIGRATE_MOVABLE;
>  }
>
> +/* See fallbacks[MIGRATE_TYPES][3] in page_alloc.c */
> +static inline bool migratetype_has_fallback(int mt)
> +{
> +	return mt < MIGRATE_PCPTYPES;
> +}
> +

I would suggest splitting the patch into two parts. The first part: no
functional change, just introduce migratetype_has_fallback() and replace
the open-coded checks where it applies.

>  #define for_each_migratetype_order(order, type) \
>  	for (order = 0; order < MAX_ORDER; order++) \
>  		for (type = 0; type < MIGRATE_TYPES; type++)
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index edfd6c81af82..107a5f186d3b 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -1041,6 +1041,12 @@ buddy_merge_likely(unsigned long pfn, unsigned long buddy_pfn,
>  	return page_is_buddy(higher_page, higher_buddy, order + 1);
>  }
>
> +static inline bool has_non_fallback_pageblock(struct zone *zone)
> +{
> +	return has_isolate_pageblock(zone) || zone_cma_pages(zone) != 0 ||
> +		zone->nr_reserved_highatomic != 0;

Make zone->nr_reserved_highatomic != 0 a helper, like zone_cma_pages()?

> +}
> +
>  /*
>   * Freeing function for a buddy system allocator.
>   *
> @@ -1116,14 +1122,15 @@ static inline void __free_one_page(struct page *page,
>  	}
>  	if (order < MAX_ORDER - 1) {
>  		/* If we are here, it means order is >= pageblock_order.
> -		 * We want to prevent merge between freepages on isolate
> -		 * pageblock and normal pageblock. Without this, pageblock
> -		 * isolation could cause incorrect freepage or CMA accounting.
> +		 * We want to prevent merge between freepages on pageblock
> +		 * without fallbacks and normal pageblock. Without this,
> +		 * pageblock isolation could cause incorrect freepage or CMA
> +		 * accounting or HIGHATOMIC accounting.
>  		 *
>  		 * We don't want to hit this code for the more frequent
>  		 * low-order merging.
>  		 */
> -		if (unlikely(has_isolate_pageblock(zone))) {
> +		if (unlikely(has_non_fallback_pageblock(zone))) {

I'm not familiar with the code details; just wondering if this change
could have side effects on CMA pageblock merging, since it makes the
condition stronger?
Thanks,
Eric

>  			int buddy_mt;
>
>  			buddy_pfn = __find_buddy_pfn(pfn, order);
> @@ -1131,8 +1138,8 @@ static inline void __free_one_page(struct page *page,
>  			buddy_mt = get_pageblock_migratetype(buddy);
>
>  			if (migratetype != buddy_mt
> -			    && (is_migrate_isolate(migratetype) ||
> -				is_migrate_isolate(buddy_mt)))
> +			    && (!migratetype_has_fallback(migratetype) ||
> +				!migratetype_has_fallback(buddy_mt)))
>  				goto done_merging;
>  		}
>  		max_order = order + 1;
> @@ -2483,6 +2490,7 @@ static int fallbacks[MIGRATE_TYPES][3] = {
>  	[MIGRATE_UNMOVABLE]   = { MIGRATE_RECLAIMABLE, MIGRATE_MOVABLE,   MIGRATE_TYPES },
>  	[MIGRATE_MOVABLE]     = { MIGRATE_RECLAIMABLE, MIGRATE_UNMOVABLE, MIGRATE_TYPES },
>  	[MIGRATE_RECLAIMABLE] = { MIGRATE_UNMOVABLE,   MIGRATE_MOVABLE,   MIGRATE_TYPES },
> +	[MIGRATE_HIGHATOMIC]  = { MIGRATE_TYPES }, /* Never used */
>  #ifdef CONFIG_CMA
>  	[MIGRATE_CMA]         = { MIGRATE_TYPES }, /* Never used */
>  #endif
> @@ -2794,8 +2802,8 @@ static void reserve_highatomic_pageblock(struct page *page, struct zone *zone,
>
>  	/* Yoink! */
>  	mt = get_pageblock_migratetype(page);
> -	if (!is_migrate_highatomic(mt) && !is_migrate_isolate(mt)
> -	    && !is_migrate_cma(mt)) {
> +	/* Only reserve normal pageblock */
> +	if (migratetype_has_fallback(mt)) {
>  		zone->nr_reserved_highatomic += pageblock_nr_pages;
>  		set_pageblock_migratetype(page, MIGRATE_HIGHATOMIC);
>  		move_freepages_block(zone, page, MIGRATE_HIGHATOMIC, NULL);
> @@ -3544,8 +3552,8 @@ int __isolate_free_page(struct page *page, unsigned int order)
>  		struct page *endpage = page + (1 << order) - 1;
>  		for (; page < endpage; page += pageblock_nr_pages) {
>  			int mt = get_pageblock_migratetype(page);
> -			if (!is_migrate_isolate(mt) && !is_migrate_cma(mt)
> -			    && !is_migrate_highatomic(mt))
> +			/* Only change normal pageblock */
> +			if (migratetype_has_fallback(mt))
>  				set_pageblock_migratetype(page,
>  							  MIGRATE_MOVABLE);
>  		}
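A minimal sketch of the helper Eric suggests, by analogy with zone_cma_pages();
the name zone_highatomic_pages() is hypothetical and not part of the patch:

static inline unsigned long zone_highatomic_pages(struct zone *zone)
{
	/* Hypothetical wrapper, mirroring the zone_cma_pages() pattern. */
	return zone->nr_reserved_highatomic;
}

static inline bool has_non_fallback_pageblock(struct zone *zone)
{
	return has_isolate_pageblock(zone) || zone_cma_pages(zone) != 0 ||
		zone_highatomic_pages(zone) != 0;
}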
Hi Eric,

Thanks for looking into my patch.

On 10 Dec 2021, at 2:43, Eric Ren wrote:

> Hi,
>
> On 2021/12/10 07:04, Zi Yan wrote:
>> From: Zi Yan <ziy@nvidia.com>
>>
>> This is done in addition to MIGRATE_ISOLATE pageblock merge avoidance.
>> It prepares for the upcoming removal of the MAX_ORDER-1 alignment
>> requirement for CMA and alloc_contig_range().
>>
>> MIGRATE_HIGHATOMIC should not merge with other migratetypes like
>> MIGRATE_ISOLATE and MIGRATE_CMA[1], so this commit prevents that too.
>> Also add MIGRATE_HIGHATOMIC to the fallbacks array for completeness.
>>
>> [1] https://lore.kernel.org/linux-mm/20211130100853.GP3366@techsingularity.net/
>>
>> Signed-off-by: Zi Yan <ziy@nvidia.com>
>> ---
>>  include/linux/mmzone.h |  6 ++++++
>>  mm/page_alloc.c        | 28 ++++++++++++++++++----------
>>  2 files changed, 24 insertions(+), 10 deletions(-)
>>
>> diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
>> index 58e744b78c2c..b925431b0123 100644
>> --- a/include/linux/mmzone.h
>> +++ b/include/linux/mmzone.h
>> @@ -83,6 +83,12 @@ static inline bool is_migrate_movable(int mt)
>>  	return is_migrate_cma(mt) || mt == MIGRATE_MOVABLE;
>>  }
>> +/* See fallbacks[MIGRATE_TYPES][3] in page_alloc.c */
>> +static inline bool migratetype_has_fallback(int mt)
>> +{
>> +	return mt < MIGRATE_PCPTYPES;
>> +}
>> +
>
> I would suggest splitting the patch into two parts. The first part: no
> functional change, just introduce migratetype_has_fallback() and replace
> the open-coded checks where it applies.

OK. I can do that.

>
>>  #define for_each_migratetype_order(order, type) \
>>  	for (order = 0; order < MAX_ORDER; order++) \
>>  		for (type = 0; type < MIGRATE_TYPES; type++)
>> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
>> index edfd6c81af82..107a5f186d3b 100644
>> --- a/mm/page_alloc.c
>> +++ b/mm/page_alloc.c
>> @@ -1041,6 +1041,12 @@ buddy_merge_likely(unsigned long pfn, unsigned long buddy_pfn,
>>  	return page_is_buddy(higher_page, higher_buddy, order + 1);
>>  }
>> +static inline bool has_non_fallback_pageblock(struct zone *zone)
>> +{
>> +	return has_isolate_pageblock(zone) || zone_cma_pages(zone) != 0 ||
>> +		zone->nr_reserved_highatomic != 0;
>
> Make zone->nr_reserved_highatomic != 0 a helper, like zone_cma_pages()?

I am not sure. We have zone_cma_pages() because when CMA is not enabled,
0 can be simply returned. But MIGRATE_HIGHATOMIC is always present, so a
helper function would not be that useful.

>> +}
>> +
>>  /*
>>   * Freeing function for a buddy system allocator.
>>   *
>> @@ -1116,14 +1122,15 @@ static inline void __free_one_page(struct page *page,
>>  	}
>>  	if (order < MAX_ORDER - 1) {
>>  		/* If we are here, it means order is >= pageblock_order.
>> -		 * We want to prevent merge between freepages on isolate
>> -		 * pageblock and normal pageblock. Without this, pageblock
>> -		 * isolation could cause incorrect freepage or CMA accounting.
>> +		 * We want to prevent merge between freepages on pageblock
>> +		 * without fallbacks and normal pageblock. Without this,
>> +		 * pageblock isolation could cause incorrect freepage or CMA
>> +		 * accounting or HIGHATOMIC accounting.
>>  		 *
>>  		 * We don't want to hit this code for the more frequent
>>  		 * low-order merging.
>>  		 */
>> -		if (unlikely(has_isolate_pageblock(zone))) {
>> +		if (unlikely(has_non_fallback_pageblock(zone))) {
>
> I'm not familiar with the code details; just wondering if this change
> could have side effects on CMA pageblock merging, since it makes the
> condition stronger?

No impact on CMA pageblock merging, AFAICT.
>
> Thanks,
> Eric
>
>>  			int buddy_mt;
>>
>>  			buddy_pfn = __find_buddy_pfn(pfn, order);
>> @@ -1131,8 +1138,8 @@ static inline void __free_one_page(struct page *page,
>>  			buddy_mt = get_pageblock_migratetype(buddy);
>>
>>  			if (migratetype != buddy_mt
>> -			    && (is_migrate_isolate(migratetype) ||
>> -				is_migrate_isolate(buddy_mt)))
>> +			    && (!migratetype_has_fallback(migratetype) ||
>> +				!migratetype_has_fallback(buddy_mt)))
>>  				goto done_merging;
>>  		}
>>  		max_order = order + 1;
>> @@ -2483,6 +2490,7 @@ static int fallbacks[MIGRATE_TYPES][3] = {
>>  	[MIGRATE_UNMOVABLE]   = { MIGRATE_RECLAIMABLE, MIGRATE_MOVABLE,   MIGRATE_TYPES },
>>  	[MIGRATE_MOVABLE]     = { MIGRATE_RECLAIMABLE, MIGRATE_UNMOVABLE, MIGRATE_TYPES },
>>  	[MIGRATE_RECLAIMABLE] = { MIGRATE_UNMOVABLE,   MIGRATE_MOVABLE,   MIGRATE_TYPES },
>> +	[MIGRATE_HIGHATOMIC]  = { MIGRATE_TYPES }, /* Never used */
>>  #ifdef CONFIG_CMA
>>  	[MIGRATE_CMA]         = { MIGRATE_TYPES }, /* Never used */
>>  #endif
>> @@ -2794,8 +2802,8 @@ static void reserve_highatomic_pageblock(struct page *page, struct zone *zone,
>>  	/* Yoink! */
>>  	mt = get_pageblock_migratetype(page);
>> -	if (!is_migrate_highatomic(mt) && !is_migrate_isolate(mt)
>> -	    && !is_migrate_cma(mt)) {
>> +	/* Only reserve normal pageblock */
>> +	if (migratetype_has_fallback(mt)) {
>>  		zone->nr_reserved_highatomic += pageblock_nr_pages;
>>  		set_pageblock_migratetype(page, MIGRATE_HIGHATOMIC);
>>  		move_freepages_block(zone, page, MIGRATE_HIGHATOMIC, NULL);
>> @@ -3544,8 +3552,8 @@ int __isolate_free_page(struct page *page, unsigned int order)
>>  		struct page *endpage = page + (1 << order) - 1;
>>  		for (; page < endpage; page += pageblock_nr_pages) {
>>  			int mt = get_pageblock_migratetype(page);
>> -			if (!is_migrate_isolate(mt) && !is_migrate_cma(mt)
>> -			    && !is_migrate_highatomic(mt))
>> +			/* Only change normal pageblock */
>> +			if (migratetype_has_fallback(mt))
>>  				set_pageblock_migratetype(page,
>>  							  MIGRATE_MOVABLE);
>>  		}

--
Best Regards,
Yan, Zi
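For context on both answers above: the mt < MIGRATE_PCPTYPES check relies on
the migratetype enum ordering in include/linux/mmzone.h, abbreviated here from
v5.16-era sources. MIGRATE_HIGHATOMIC aliases MIGRATE_PCPTYPES, so the check
is true exactly for the three types that appear in the fallbacks array,
regardless of whether CONFIG_CMA is enabled:

enum migratetype {
	MIGRATE_UNMOVABLE,
	MIGRATE_MOVABLE,
	MIGRATE_RECLAIMABLE,
	MIGRATE_PCPTYPES,	/* the number of types on the pcp lists */
	MIGRATE_HIGHATOMIC = MIGRATE_PCPTYPES,
#ifdef CONFIG_CMA
	MIGRATE_CMA,
#endif
#ifdef CONFIG_MEMORY_ISOLATION
	MIGRATE_ISOLATE,	/* can't allocate from here */
#endif
	MIGRATE_TYPES
};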
diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
index 58e744b78c2c..b925431b0123 100644
--- a/include/linux/mmzone.h
+++ b/include/linux/mmzone.h
@@ -83,6 +83,12 @@ static inline bool is_migrate_movable(int mt)
 	return is_migrate_cma(mt) || mt == MIGRATE_MOVABLE;
 }
 
+/* See fallbacks[MIGRATE_TYPES][3] in page_alloc.c */
+static inline bool migratetype_has_fallback(int mt)
+{
+	return mt < MIGRATE_PCPTYPES;
+}
+
 #define for_each_migratetype_order(order, type) \
 	for (order = 0; order < MAX_ORDER; order++) \
 		for (type = 0; type < MIGRATE_TYPES; type++)
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index edfd6c81af82..107a5f186d3b 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -1041,6 +1041,12 @@ buddy_merge_likely(unsigned long pfn, unsigned long buddy_pfn,
 	return page_is_buddy(higher_page, higher_buddy, order + 1);
 }
 
+static inline bool has_non_fallback_pageblock(struct zone *zone)
+{
+	return has_isolate_pageblock(zone) || zone_cma_pages(zone) != 0 ||
+		zone->nr_reserved_highatomic != 0;
+}
+
 /*
  * Freeing function for a buddy system allocator.
  *
@@ -1116,14 +1122,15 @@ static inline void __free_one_page(struct page *page,
 	}
 	if (order < MAX_ORDER - 1) {
 		/* If we are here, it means order is >= pageblock_order.
-		 * We want to prevent merge between freepages on isolate
-		 * pageblock and normal pageblock. Without this, pageblock
-		 * isolation could cause incorrect freepage or CMA accounting.
+		 * We want to prevent merge between freepages on pageblock
+		 * without fallbacks and normal pageblock. Without this,
+		 * pageblock isolation could cause incorrect freepage or CMA
+		 * accounting or HIGHATOMIC accounting.
 		 *
 		 * We don't want to hit this code for the more frequent
 		 * low-order merging.
 		 */
-		if (unlikely(has_isolate_pageblock(zone))) {
+		if (unlikely(has_non_fallback_pageblock(zone))) {
 			int buddy_mt;
 
 			buddy_pfn = __find_buddy_pfn(pfn, order);
@@ -1131,8 +1138,8 @@ static inline void __free_one_page(struct page *page,
 			buddy_mt = get_pageblock_migratetype(buddy);
 
 			if (migratetype != buddy_mt
-			    && (is_migrate_isolate(migratetype) ||
-				is_migrate_isolate(buddy_mt)))
+			    && (!migratetype_has_fallback(migratetype) ||
+				!migratetype_has_fallback(buddy_mt)))
 				goto done_merging;
 		}
 		max_order = order + 1;
@@ -2483,6 +2490,7 @@ static int fallbacks[MIGRATE_TYPES][3] = {
 	[MIGRATE_UNMOVABLE]   = { MIGRATE_RECLAIMABLE, MIGRATE_MOVABLE,   MIGRATE_TYPES },
 	[MIGRATE_MOVABLE]     = { MIGRATE_RECLAIMABLE, MIGRATE_UNMOVABLE, MIGRATE_TYPES },
 	[MIGRATE_RECLAIMABLE] = { MIGRATE_UNMOVABLE,   MIGRATE_MOVABLE,   MIGRATE_TYPES },
+	[MIGRATE_HIGHATOMIC]  = { MIGRATE_TYPES }, /* Never used */
 #ifdef CONFIG_CMA
 	[MIGRATE_CMA]         = { MIGRATE_TYPES }, /* Never used */
 #endif
@@ -2794,8 +2802,8 @@ static void reserve_highatomic_pageblock(struct page *page, struct zone *zone,
 
 	/* Yoink! */
 	mt = get_pageblock_migratetype(page);
-	if (!is_migrate_highatomic(mt) && !is_migrate_isolate(mt)
-	    && !is_migrate_cma(mt)) {
+	/* Only reserve normal pageblock */
+	if (migratetype_has_fallback(mt)) {
 		zone->nr_reserved_highatomic += pageblock_nr_pages;
 		set_pageblock_migratetype(page, MIGRATE_HIGHATOMIC);
 		move_freepages_block(zone, page, MIGRATE_HIGHATOMIC, NULL);
@@ -3544,8 +3552,8 @@ int __isolate_free_page(struct page *page, unsigned int order)
 		struct page *endpage = page + (1 << order) - 1;
 		for (; page < endpage; page += pageblock_nr_pages) {
 			int mt = get_pageblock_migratetype(page);
-			if (!is_migrate_isolate(mt) && !is_migrate_cma(mt)
-			    && !is_migrate_highatomic(mt))
+			/* Only change normal pageblock */
+			if (migratetype_has_fallback(mt))
 				set_pageblock_migratetype(page,
 							  MIGRATE_MOVABLE);
 		}
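As a sanity check on the conversions in reserve_highatomic_pageblock() and
__isolate_free_page(), the following standalone userspace mock (a sketch
assuming the enum layout shown earlier; not kernel code) verifies that
migratetype_has_fallback(mt) is equivalent to the open-coded
!is_migrate_highatomic(mt) && !is_migrate_isolate(mt) && !is_migrate_cma(mt)
tests the patch replaces:

#include <assert.h>
#include <stdbool.h>

/* Userspace mock of the kernel enum, with CMA and MEMORY_ISOLATION enabled. */
enum migratetype {
	MIGRATE_UNMOVABLE,
	MIGRATE_MOVABLE,
	MIGRATE_RECLAIMABLE,
	MIGRATE_PCPTYPES,
	MIGRATE_HIGHATOMIC = MIGRATE_PCPTYPES,
	MIGRATE_CMA,
	MIGRATE_ISOLATE,
	MIGRATE_TYPES
};

static bool migratetype_has_fallback(int mt)
{
	return mt < MIGRATE_PCPTYPES;
}

int main(void)
{
	for (int mt = 0; mt < MIGRATE_TYPES; mt++) {
		/* The open-coded condition the patch replaces. */
		bool old = mt != MIGRATE_HIGHATOMIC &&
			   mt != MIGRATE_ISOLATE &&
			   mt != MIGRATE_CMA;
		assert(migratetype_has_fallback(mt) == old);
	}
	return 0;
}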