| Message ID | 20211209230414.2766515-4-zi.yan@sent.com (mailing list archive) |
|---|---|
| State | New |
| Series | Use pageblock_order for cma and alloc_contig_range alignment. |
Hi,

On 2021/12/10 07:04, Zi Yan wrote:
> From: Zi Yan <ziy@nvidia.com>
>
> alloc_migration_target() is used by alloc_contig_range() and non-LRU
> movable compound pages can be migrated. Current code does not allocate the
> right page size for such pages. Check THP precisely using
> is_transparent_hugepage() and add allocation support for non-LRU compound
> pages.

Could you elaborate on why the current code doesn't get the right size, and how this patch fixes it?

The description sounds like it's an existing bug; if so, should the patch subject be changed to "Fixes ..."?

>
> Signed-off-by: Zi Yan <ziy@nvidia.com>
> ---
>  mm/migrate.c | 8 ++++++--
>  1 file changed, 6 insertions(+), 2 deletions(-)
>
> diff --git a/mm/migrate.c b/mm/migrate.c
> index d487a399253b..2ce3c771b1de 100644
> --- a/mm/migrate.c
> +++ b/mm/migrate.c
> @@ -1563,7 +1563,7 @@ struct page *alloc_migration_target(struct page *page, unsigned long private)
>  		return alloc_huge_page_nodemask(h, nid, mtc->nmask, gfp_mask);
>  	}
>
> -	if (PageTransHuge(page)) {
> +	if (is_transparent_hugepage(page)) {
>  		/*
>  		 * clear __GFP_RECLAIM to make the migration callback
>  		 * consistent with regular THP allocations.
> @@ -1572,13 +1572,17 @@ struct page *alloc_migration_target(struct page *page, unsigned long private)

	if (PageTransHuge(page)) {	// just give more code context
	...

>  		gfp_mask |= GFP_TRANSHUGE;
>  		order = HPAGE_PMD_ORDER;

order assigned here

>  	}
> +	if (PageCompound(page)) {
> +		gfp_mask |= __GFP_COMP;
> +		order = compound_order(page);

re-assigned again, as THP is a compound page?

Thanks,
Eric

> +	}
>  	zidx = zone_idx(page_zone(page));
>  	if (is_highmem_idx(zidx) || zidx == ZONE_MOVABLE)
>  		gfp_mask |= __GFP_HIGHMEM;
>
>  	new_page = __alloc_pages(gfp_mask, order, nid, mtc->nmask);
>
> -	if (new_page && PageTransHuge(new_page))
> +	if (new_page && is_transparent_hugepage(page))
>  		prep_transhuge_page(new_page);
>
>  	return new_page;
On 10 Dec 2021, at 2:53, Eric Ren wrote:

> Hi,
>
> On 2021/12/10 07:04, Zi Yan wrote:
>> From: Zi Yan <ziy@nvidia.com>
>>
>> alloc_migration_target() is used by alloc_contig_range() and non-LRU
>> movable compound pages can be migrated. Current code does not allocate the
>> right page size for such pages. Check THP precisely using
>> is_transparent_hugepage() and add allocation support for non-LRU compound
>> pages.
>
> Could you elaborate on why the current code doesn't get the right size, and how this patch fixes it?

The current code only checks PageHuge() and PageTransHuge(). Non-LRU compound
pages will be regarded as PageTransHuge() and an order-9 page is always
allocated regardless of the actual page order. This patch makes the exact
check for THPs using is_transparent_hugepage() instead of PageTransHuge(),
and checks PageCompound() after PageHuge() and is_transparent_hugepage() for
non-LRU compound pages.

> The description sounds like it's an existing bug; if so, should the patch subject be changed to
> "Fixes ..."?

I have not seen any related bug report.

>> Signed-off-by: Zi Yan <ziy@nvidia.com>
>> ---
>>  mm/migrate.c | 8 ++++++--
>>  1 file changed, 6 insertions(+), 2 deletions(-)
>>
>> diff --git a/mm/migrate.c b/mm/migrate.c
>> index d487a399253b..2ce3c771b1de 100644
>> --- a/mm/migrate.c
>> +++ b/mm/migrate.c
>> @@ -1563,7 +1563,7 @@ struct page *alloc_migration_target(struct page *page, unsigned long private)
>>  		return alloc_huge_page_nodemask(h, nid, mtc->nmask, gfp_mask);
>>  	}
>> -	if (PageTransHuge(page)) {
>> +	if (is_transparent_hugepage(page)) {
>>  		/*
>>  		 * clear __GFP_RECLAIM to make the migration callback
>>  		 * consistent with regular THP allocations.
>> @@ -1572,13 +1572,17 @@ struct page *alloc_migration_target(struct page *page, unsigned long private)
>
> 	if (PageTransHuge(page)) {	// just give more code context
> 	...
>>  		gfp_mask |= GFP_TRANSHUGE;
>>  		order = HPAGE_PMD_ORDER;
>
> order assigned here
>
>>  	}
>> +	if (PageCompound(page)) {
>> +		gfp_mask |= __GFP_COMP;
>> +		order = compound_order(page);
>
> re-assigned again, as THP is a compound page?

Ah, you are right. Will use else if instead. Thanks.

> Thanks,
> Eric
>
>> +	}
>>  	zidx = zone_idx(page_zone(page));
>>  	if (is_highmem_idx(zidx) || zidx == ZONE_MOVABLE)
>>  		gfp_mask |= __GFP_HIGHMEM;
>>
>>  	new_page = __alloc_pages(gfp_mask, order, nid, mtc->nmask);
>>
>> -	if (new_page && PageTransHuge(new_page))
>> +	if (new_page && is_transparent_hugepage(page))
>>  		prep_transhuge_page(new_page);
>>
>>  	return new_page;

--
Best Regards,
Yan, Zi
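The size mismatch described in the reply above can be modeled with a small userspace sketch (illustrative only: `toy_page`, `old_alloc_order`, and `new_alloc_order` are made-up stand-ins, not kernel code): treating every compound head as a PMD-sized THP always yields an order-9 target, while checking the actual compound order preserves the source page's size.

```c
#include <assert.h>
#include <stdbool.h>

#define HPAGE_PMD_ORDER 9	/* order of a PMD-sized THP: 2 MB with 4 KB base pages */

/* Toy stand-in for struct page, carrying only what the check needs. */
struct toy_page {
	bool is_thp;		/* models is_transparent_hugepage(): a real THP */
	bool is_compound;	/* models PageCompound() */
	unsigned int order;	/* models compound_order() */
};

/*
 * Old behavior: PageTransHuge() is a loose test, so a non-LRU movable
 * compound head also takes the THP branch and the migration target is
 * always allocated at order 9.
 */
static unsigned int old_alloc_order(const struct toy_page *p)
{
	if (p->is_compound)
		return HPAGE_PMD_ORDER;
	return 0;
}

/*
 * Patched behavior: exact THP check first; otherwise fall back to the
 * source page's actual compound order.
 */
static unsigned int new_alloc_order(const struct toy_page *p)
{
	if (p->is_thp)
		return HPAGE_PMD_ORDER;
	else if (p->is_compound)
		return p->order;
	return 0;
}
```

In this model, a non-LRU movable compound page of order 4 gets an order-4 target under the patched logic instead of a 2 MB one.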
On Fri, Dec 10, 2021 at 8:08 AM Zi Yan <ziy@nvidia.com> wrote:
>
> On 10 Dec 2021, at 2:53, Eric Ren wrote:
>
>> Hi,
>>
>> On 2021/12/10 07:04, Zi Yan wrote:
>>> From: Zi Yan <ziy@nvidia.com>
>>>
>>> alloc_migration_target() is used by alloc_contig_range() and non-LRU
>>> movable compound pages can be migrated. Current code does not allocate the
>>> right page size for such pages. Check THP precisely using
>>> is_transparent_hugepage() and add allocation support for non-LRU compound
>>> pages.
>>
>> Could you elaborate on why the current code doesn't get the right size, and how this patch fixes it?
>
> The current code only checks PageHuge() and PageTransHuge(). Non-LRU compound
> pages will be regarded as PageTransHuge() and an order-9 page is always
> allocated regardless of the actual page order. This patch makes the exact
> check for THPs using is_transparent_hugepage() instead of PageTransHuge(),
> and checks PageCompound() after PageHuge() and is_transparent_hugepage() for
> non-LRU compound pages.
>
>> The description sounds like it's an existing bug; if so, should the patch subject be changed to
>> "Fixes ..."?
>
> I have not seen any related bug report.
>
>>> Signed-off-by: Zi Yan <ziy@nvidia.com>
>>> ---
>>>  mm/migrate.c | 8 ++++++--
>>>  1 file changed, 6 insertions(+), 2 deletions(-)
>>>
>>> diff --git a/mm/migrate.c b/mm/migrate.c
>>> index d487a399253b..2ce3c771b1de 100644
>>> --- a/mm/migrate.c
>>> +++ b/mm/migrate.c
>>> @@ -1563,7 +1563,7 @@ struct page *alloc_migration_target(struct page *page, unsigned long private)
>>>  		return alloc_huge_page_nodemask(h, nid, mtc->nmask, gfp_mask);
>>>  	}
>>> -	if (PageTransHuge(page)) {
>>> +	if (is_transparent_hugepage(page)) {
>>>  		/*
>>>  		 * clear __GFP_RECLAIM to make the migration callback
>>>  		 * consistent with regular THP allocations.
>>> @@ -1572,13 +1572,17 @@ struct page *alloc_migration_target(struct page *page, unsigned long private)
>>
>> 	if (PageTransHuge(page)) {	// just give more code context
>> 	...
>>
>>>  		gfp_mask |= GFP_TRANSHUGE;
>>>  		order = HPAGE_PMD_ORDER;
>>
>> order assigned here
>>
>>>  	}
>>> +	if (PageCompound(page)) {
>>> +		gfp_mask |= __GFP_COMP;
>>> +		order = compound_order(page);
>>
>> re-assigned again, as THP is a compound page?
>
> Ah, you are right. Will use else if instead. Thanks.

You don't have to use else if, you could just do:

	if (PageCompound(page)) {
		gfp_mask |= __GFP_COMP;
		order = compound_order(page);
	}

	if (is_transparent_hugepage(page)) {
		/* Manipulate THP specific gfp mask */
		...
	}

>> Thanks,
>> Eric
>
>>> +	}
>>>  	zidx = zone_idx(page_zone(page));
>>>  	if (is_highmem_idx(zidx) || zidx == ZONE_MOVABLE)
>>>  		gfp_mask |= __GFP_HIGHMEM;
>>>
>>>  	new_page = __alloc_pages(gfp_mask, order, nid, mtc->nmask);
>>>
>>> -	if (new_page && PageTransHuge(new_page))
>>> +	if (new_page && is_transparent_hugepage(page))
>>>  		prep_transhuge_page(new_page);
>>>
>>>  	return new_page;
>
> --
> Best Regards,
> Yan, Zi
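The two orderings discussed in this exchange, an `else if` making the branches exclusive versus checking `PageCompound()` first and letting the THP branch override, select the same allocation order, because a PMD-mapped THP's `compound_order()` equals `HPAGE_PMD_ORDER` anyway. A minimal userspace sketch with illustrative helper names (not kernel code):

```c
#include <assert.h>
#include <stdbool.h>

#define HPAGE_PMD_ORDER 9	/* order of a PMD-sized THP */

/* Mutually exclusive branches via else-if. */
static unsigned int order_else_if(bool is_thp, bool is_compound,
				  unsigned int compound_order)
{
	unsigned int order = 0;

	if (is_thp)
		order = HPAGE_PMD_ORDER;
	else if (is_compound)
		order = compound_order;
	return order;
}

/* Compound default first; the THP branch overrides it when both apply. */
static unsigned int order_compound_first(bool is_thp, bool is_compound,
					 unsigned int compound_order)
{
	unsigned int order = 0;

	if (is_compound)
		order = compound_order;
	if (is_thp)
		order = HPAGE_PMD_ORDER;
	return order;
}
```

The remaining difference is only in the gfp bits: in the compound-first version a THP also picks up `__GFP_COMP` before `GFP_TRANSHUGE` is ORed in, which should be harmless since `GFP_TRANSHUGE` already includes `__GFP_COMP`.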
diff --git a/mm/migrate.c b/mm/migrate.c
index d487a399253b..2ce3c771b1de 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -1563,7 +1563,7 @@ struct page *alloc_migration_target(struct page *page, unsigned long private)
 		return alloc_huge_page_nodemask(h, nid, mtc->nmask, gfp_mask);
 	}
 
-	if (PageTransHuge(page)) {
+	if (is_transparent_hugepage(page)) {
 		/*
 		 * clear __GFP_RECLAIM to make the migration callback
 		 * consistent with regular THP allocations.
@@ -1572,13 +1572,17 @@ struct page *alloc_migration_target(struct page *page, unsigned long private)
 		gfp_mask |= GFP_TRANSHUGE;
 		order = HPAGE_PMD_ORDER;
 	}
+	if (PageCompound(page)) {
+		gfp_mask |= __GFP_COMP;
+		order = compound_order(page);
+	}
 	zidx = zone_idx(page_zone(page));
 	if (is_highmem_idx(zidx) || zidx == ZONE_MOVABLE)
 		gfp_mask |= __GFP_HIGHMEM;
 
 	new_page = __alloc_pages(gfp_mask, order, nid, mtc->nmask);
 
-	if (new_page && PageTransHuge(new_page))
+	if (new_page && is_transparent_hugepage(page))
 		prep_transhuge_page(new_page);
 
 	return new_page;