Message ID | 20220105214756.91065-4-zi.yan@sent.com |
---|---|
State | New |
Series | Use pageblock_order for cma and alloc_contig_range alignment. |
On 05.01.22 22:47, Zi Yan wrote:
> From: Zi Yan <ziy@nvidia.com>
>
> alloc_migration_target() is used by alloc_contig_range() and non-LRU
> movable compound pages can be migrated. Current code does not allocate the
> right page size for such pages. Check THP precisely using
> is_transparent_hugepage() and add allocation support for non-LRU compound
> pages.

IIRC, we don't have any non-LRU migratable pages that are compound
pages. Read: not used and not supported :)

Why is this required in the context of this series?
On 12 Jan 2022, at 6:04, David Hildenbrand wrote:

> On 05.01.22 22:47, Zi Yan wrote:
>> From: Zi Yan <ziy@nvidia.com>
>>
>> alloc_migration_target() is used by alloc_contig_range() and non-LRU
>> movable compound pages can be migrated. Current code does not allocate the
>> right page size for such pages. Check THP precisely using
>> is_transparent_hugepage() and add allocation support for non-LRU compound
>> pages.
>
> IIRC, we don't have any non-LRU migratable pages that are compound
> pages. Read: not used and not supported :)

OK, but nothing prevents one from writing a driver that allocates compound
pages and provides address_space->migratepage() and address_space->isolate_page().

Actually, to test this series, I wrote a kernel module that allocates
an order-10 page, gives it a fake address_space with migratepage() and
isolate_page(), calls __SetPageMovable() on it, then calls alloc_contig_range()
on the page range. Apparently, my kernel module is not supported by
the kernel, thus I added this patch.

Do you have an alternative test to my kernel module, so that I do not
even need this patch myself?

> Why is this required in the context of this series?

It might not be required. I will drop it.

--
Best Regards,
Yan, Zi
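The test module described above was not posted in this thread. What follows is a hypothetical sketch of what such a module could look like against a ~5.16 tree: the demo_* names and the fake demo_mapping are invented for illustration, real payload copying, error handling and cleanup (free_contig_range()/__free_pages()) are omitted, and alloc_contig_range() is not exported to modules in a vanilla kernel.

/* SPDX-License-Identifier: GPL-2.0 */
/*
 * Hypothetical test-module sketch: allocate an order-10 compound page,
 * mark it non-LRU movable via a fake address_space, then try to
 * reclaim its physical range with alloc_contig_range().
 */
#include <linux/module.h>
#include <linux/mm.h>
#include <linux/gfp.h>
#include <linux/fs.h>
#include <linux/pagemap.h>
#include <linux/migrate.h>
#include <linux/compaction.h>

static bool demo_isolate_page(struct page *page, isolate_mode_t mode)
{
	/* Agree to have the page migrated. */
	return true;
}

static int demo_migratepage(struct address_space *mapping,
			    struct page *newpage, struct page *page,
			    enum migrate_mode mode)
{
	/* A real driver would copy its payload and re-point its users. */
	return MIGRATEPAGE_SUCCESS;
}

static void demo_putback_page(struct page *page)
{
}

static const struct address_space_operations demo_aops = {
	.isolate_page	= demo_isolate_page,
	.migratepage	= demo_migratepage,
	.putback_page	= demo_putback_page,
};

static struct address_space demo_mapping;
static struct page *demo_page;

static int __init demo_init(void)
{
	unsigned long pfn;

	address_space_init_once(&demo_mapping);
	demo_mapping.a_ops = &demo_aops;

	demo_page = alloc_pages(GFP_KERNEL | __GFP_MOVABLE | __GFP_COMP, 10);
	if (!demo_page)
		return -ENOMEM;

	/* __SetPageMovable() requires the page to be locked. */
	lock_page(demo_page);
	__SetPageMovable(demo_page, &demo_mapping);
	unlock_page(demo_page);

	/*
	 * Force migration of the range the page sits in.  Without the
	 * patch under discussion, the migration target is allocated at
	 * order 0.  alloc_contig_range() is not exported to modules in
	 * a vanilla kernel, so this only links when built in (or with
	 * an added export).
	 */
	pfn = page_to_pfn(demo_page);
	return alloc_contig_range(pfn, pfn + (1UL << 10),
				  MIGRATE_MOVABLE, GFP_KERNEL);
}

static void __exit demo_exit(void)
{
	/* Cleanup intentionally omitted from this sketch. */
}

module_init(demo_init);
module_exit(demo_exit);
MODULE_LICENSE("GPL");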
On 13.01.22 16:46, Zi Yan wrote:
> On 12 Jan 2022, at 6:04, David Hildenbrand wrote:
>
>> On 05.01.22 22:47, Zi Yan wrote:
>>> From: Zi Yan <ziy@nvidia.com>
>>>
>>> alloc_migration_target() is used by alloc_contig_range() and non-LRU
>>> movable compound pages can be migrated. Current code does not allocate the
>>> right page size for such pages. Check THP precisely using
>>> is_transparent_hugepage() and add allocation support for non-LRU compound
>>> pages.
>>
>> IIRC, we don't have any non-LRU migratable pages that are compound
>> pages. Read: not used and not supported :)
>
> OK, but nothing prevents one from writing a driver that allocates compound
> pages and provides address_space->migratepage() and address_space->isolate_page().
>
> Actually, to test this series, I wrote a kernel module that allocates
> an order-10 page, gives it a fake address_space with migratepage() and
> isolate_page(), calls __SetPageMovable() on it, then calls alloc_contig_range()
> on the page range. Apparently, my kernel module is not supported by
> the kernel, thus I added this patch.
>
> Do you have an alternative test to my kernel module, so that I do not
> even need this patch myself?
>
>> Why is this required in the context of this series?
>
> It might not be required. I will drop it.

That's why I think it would be best to drop it. If you need it in a
different context, better to submit it in that context. Makes this
series easier to digest :)
diff --git a/mm/migrate.c b/mm/migrate.c
index c7da064b4781..b1851ffb8576 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -1546,9 +1546,7 @@ struct page *alloc_migration_target(struct page *page, unsigned long private)
 		gfp_mask = htlb_modify_alloc_mask(h, gfp_mask);
 		return alloc_huge_page_nodemask(h, nid,
 						mtc->nmask, gfp_mask);
-	}
-
-	if (PageTransHuge(page)) {
+	} else if (is_transparent_hugepage(page)) {
 		/*
 		 * clear __GFP_RECLAIM to make the migration callback
 		 * consistent with regular THP allocations.
@@ -1556,14 +1554,19 @@ struct page *alloc_migration_target(struct page *page, unsigned long private)
 		gfp_mask &= ~__GFP_RECLAIM;
 		gfp_mask |= GFP_TRANSHUGE;
 		order = HPAGE_PMD_ORDER;
+	} else if (PageCompound(page)) {
+		/* for non-LRU movable compound pages */
+		gfp_mask |= __GFP_COMP;
+		order = compound_order(page);
 	}
+
 	zidx = zone_idx(page_zone(page));
 	if (is_highmem_idx(zidx) || zidx == ZONE_MOVABLE)
 		gfp_mask |= __GFP_HIGHMEM;
 
 	new_page = __alloc_pages(gfp_mask, order, nid, mtc->nmask);
 
-	if (new_page && PageTransHuge(new_page))
+	if (new_page && is_transparent_hugepage(page))
 		prep_transhuge_page(new_page);
 
 	return new_page;
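Condensing the two hunks, the target-allocation policy after this patch can be read as the standalone sketch below. pick_migration_target() is an illustrative helper, not a kernel symbol, and the hugetlb and highmem handling of the real alloc_migration_target() is left out.

#include <linux/mm.h>
#include <linux/gfp.h>
#include <linux/huge_mm.h>

/* Illustrative condensation of the order/gfp selection shown above. */
static struct page *pick_migration_target(struct page *page, int nid,
					  gfp_t gfp_mask, nodemask_t *nmask)
{
	unsigned int order = 0;

	if (is_transparent_hugepage(page)) {
		/* THP source: allocate a THP-style target at PMD order. */
		gfp_mask &= ~__GFP_RECLAIM;
		gfp_mask |= GFP_TRANSHUGE;
		order = HPAGE_PMD_ORDER;
	} else if (PageCompound(page)) {
		/* Non-LRU movable compound source: match its order. */
		gfp_mask |= __GFP_COMP;
		order = compound_order(page);
	}

	return __alloc_pages(gfp_mask, order, nid, nmask);
}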