Message ID | 20201106005147.20113-5-rcampbell@nvidia.com (mailing list archive) |
---|---|
State | New, archived |
Series | mm/hmm/nouveau: add THP migration to migrate_vma_* |
> +#ifdef CONFIG_TRANSPARENT_HUGEPAGE
> +extern struct page *alloc_transhugepage(struct vm_area_struct *vma,
> +		unsigned long addr);

No need for the extern. And also here: do we actually need the stub,
or can the caller make sure (using IS_ENABLED and similar) that the
compiler knows the code is dead?

> +struct page *alloc_transhugepage(struct vm_area_struct *vma,
> +		unsigned long haddr)
> +{
> +	gfp_t gfp;
> +	struct page *page;
> +
> +	gfp = alloc_hugepage_direct_gfpmask(vma);
> +	page = alloc_hugepage_vma(gfp, vma, haddr, HPAGE_PMD_ORDER);
> +	if (page)
> +		prep_transhuge_page(page);
> +	return page;

I think do_huge_pmd_anonymous_page should be switched to use this
helper as well.
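For reference, the IS_ENABLED() approach Christoph suggests would let a
caller guard the call site so the compiler treats the disabled branch as
dead code, making the !CONFIG_TRANSPARENT_HUGEPAGE stub unnecessary. A
minimal sketch with a hypothetical caller (try_alloc_thp() is not from
this series):

```c
/*
 * Hypothetical caller, not from this series.  Because
 * IS_ENABLED(CONFIG_TRANSPARENT_HUGEPAGE) is a compile-time constant,
 * the disabled branch is eliminated and the reference to
 * alloc_transhugepage() is dropped; only the declaration needs to be
 * visible, so no static inline stub is required.
 */
static struct page *try_alloc_thp(struct vm_area_struct *vma,
				  unsigned long haddr)
{
	if (!IS_ENABLED(CONFIG_TRANSPARENT_HUGEPAGE))
		return NULL;
	return alloc_transhugepage(vma, haddr);
}
```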
On 11/6/20 12:01 AM, Christoph Hellwig wrote:
>> +#ifdef CONFIG_TRANSPARENT_HUGEPAGE
>> +extern struct page *alloc_transhugepage(struct vm_area_struct *vma,
>> +		unsigned long addr);
>
> No need for the extern. And also here: do we actually need the stub,
> or can the caller make sure (using IS_ENABLED and similar) that the
> compiler knows the code is dead?

Same problem as with prep_transhuge_device_private_page(), since
alloc_hugepage_direct_gfpmask() and alloc_hugepage_vma() are not
EXPORT_SYMBOL_GPL.

>> +struct page *alloc_transhugepage(struct vm_area_struct *vma,
>> +		unsigned long haddr)
>> +{
>> +	gfp_t gfp;
>> +	struct page *page;
>> +
>> +	gfp = alloc_hugepage_direct_gfpmask(vma);
>> +	page = alloc_hugepage_vma(gfp, vma, haddr, HPAGE_PMD_ORDER);
>> +	if (page)
>> +		prep_transhuge_page(page);
>> +	return page;
>
> I think do_huge_pmd_anonymous_page should be switched to use this
> helper as well.

Sure, I'll do that for v4.
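A rough sketch of the do_huge_pmd_anonymous_page() conversion agreed to
above, assuming the mainline code of the time; this illustrates the
idea only and is not the actual v4 patch:

```c
	/*
	 * Sketch only, not the actual v4 change: inside
	 * do_huge_pmd_anonymous_page(), the open-coded sequence of
	 * alloc_hugepage_direct_gfpmask() + alloc_hugepage_vma() +
	 * prep_transhuge_page() would collapse into the new helper.
	 */
	page = alloc_transhugepage(vma, haddr);
	if (unlikely(!page)) {
		count_vm_event(THP_FAULT_FALLBACK);
		return VM_FAULT_FALLBACK;
	}
```

One wrinkle: __do_huge_pmd_anonymous_page() also consumes the gfp mask,
so a real conversion would have to rearrange that as well.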
diff --git a/include/linux/gfp.h b/include/linux/gfp.h
index c603237e006c..242398c4b556 100644
--- a/include/linux/gfp.h
+++ b/include/linux/gfp.h
@@ -564,6 +564,16 @@ static inline struct page *alloc_pages(gfp_t gfp_mask, unsigned int order)
 #define alloc_page(gfp_mask) alloc_pages(gfp_mask, 0)
 #define alloc_page_vma(gfp_mask, vma, addr)			\
 	alloc_pages_vma(gfp_mask, 0, vma, addr, numa_node_id(), false)
+#ifdef CONFIG_TRANSPARENT_HUGEPAGE
+extern struct page *alloc_transhugepage(struct vm_area_struct *vma,
+		unsigned long addr);
+#else
+static inline struct page *alloc_transhugepage(struct vm_area_struct *vma,
+		unsigned long addr)
+{
+	return NULL;
+}
+#endif
 
 extern unsigned long __get_free_pages(gfp_t gfp_mask, unsigned int order);
 extern unsigned long get_zeroed_page(gfp_t gfp_mask);
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index a073e66d0ee2..c2c1d3e7c35f 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -765,6 +765,20 @@ vm_fault_t do_huge_pmd_anonymous_page(struct vm_fault *vmf)
 	return __do_huge_pmd_anonymous_page(vmf, page, gfp);
 }
 
+struct page *alloc_transhugepage(struct vm_area_struct *vma,
+		unsigned long haddr)
+{
+	gfp_t gfp;
+	struct page *page;
+
+	gfp = alloc_hugepage_direct_gfpmask(vma);
+	page = alloc_hugepage_vma(gfp, vma, haddr, HPAGE_PMD_ORDER);
+	if (page)
+		prep_transhuge_page(page);
+	return page;
+}
+EXPORT_SYMBOL_GPL(alloc_transhugepage);
+
 static void insert_pfn_pmd(struct vm_area_struct *vma, unsigned long addr,
 		pmd_t *pmd, pfn_t pfn, pgprot_t prot, bool write,
 		pgtable_t pgtable)
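The EXPORT_SYMBOL_GPL() is what makes the helper reachable from modular
drivers. A hypothetical call site in a driver's migration path might
look like the following (demo_alloc_dst_thp() is an illustrative name,
not part of this series):

```c
#include <linux/gfp.h>
#include <linux/huge_mm.h>	/* HPAGE_PMD_MASK */
#include <linux/mm.h>

/*
 * Illustrative driver-side caller, not from this series.  A loadable
 * module can call the exported helper directly, whereas the
 * non-exported alloc_hugepage_direct_gfpmask()/alloc_hugepage_vma()
 * pair is off limits to modules.
 */
static struct page *demo_alloc_dst_thp(struct vm_area_struct *vma,
					unsigned long addr)
{
	/* The helper expects the PMD-aligned start of the huge page. */
	unsigned long haddr = addr & HPAGE_PMD_MASK;

	/* A NULL return means fall back to allocating base pages. */
	return alloc_transhugepage(vma, haddr);
}
```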
Transparent huge page allocation policy is controlled by several sysfs
variables. Rather than expose these to each device driver that needs to
allocate THPs, provide a helper function.

Signed-off-by: Ralph Campbell <rcampbell@nvidia.com>
---
 include/linux/gfp.h | 10 ++++++++++
 mm/huge_memory.c    | 14 ++++++++++++++
 2 files changed, 24 insertions(+)