Message ID | 20240321220802.679544-5-peterx@redhat.com (mailing list archive)
State      | New
Series     | mm/gup: Unify hugetlb, part 2
On Thu, Mar 21, 2024 at 06:07:54PM -0400, peterx@redhat.com wrote:
> From: Peter Xu <peterx@redhat.com>
>
> Introduce per-vma begin()/end() helpers for pgtable walks. This is
> preparation work to merge hugetlb pgtable walkers with generic mm.
>
> The helpers need to be called before and after a pgtable walk, and will
> start to be needed once the pgtable walker code supports hugetlb pages.
> It's a hook point for any type of VMA, but for now only hugetlb uses it
> to stabilize the pgtable pages and keep them from going away (due to
> possible pmd unsharing).
>
> Reviewed-by: Christoph Hellwig <hch@infradead.org>
> Reviewed-by: Muchun Song <muchun.song@linux.dev>
> Signed-off-by: Peter Xu <peterx@redhat.com>
> ---
>  include/linux/mm.h |  3 +++
>  mm/memory.c        | 12 ++++++++++++
>  2 files changed, 15 insertions(+)

is_vm_hugetlb_page(vma) seems weirdly named. Regardless

Reviewed-by: Jason Gunthorpe <jgg@nvidia.com>

Jason
diff --git a/include/linux/mm.h b/include/linux/mm.h
index 8147b1302413..d10eb89f4096 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -4198,4 +4198,7 @@ static inline bool pfn_is_unaccepted_memory(unsigned long pfn)
 	return range_contains_unaccepted_memory(paddr, paddr + PAGE_SIZE);
 }
 
+void vma_pgtable_walk_begin(struct vm_area_struct *vma);
+void vma_pgtable_walk_end(struct vm_area_struct *vma);
+
 #endif /* _LINUX_MM_H */
diff --git a/mm/memory.c b/mm/memory.c
index 9bce1fa76dd7..4f2caf1c3c4d 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -6438,3 +6438,15 @@ void ptlock_free(struct ptdesc *ptdesc)
 		kmem_cache_free(page_ptl_cachep, ptdesc->ptl);
 }
 #endif
+
+void vma_pgtable_walk_begin(struct vm_area_struct *vma)
+{
+	if (is_vm_hugetlb_page(vma))
+		hugetlb_vma_lock_read(vma);
+}
+
+void vma_pgtable_walk_end(struct vm_area_struct *vma)
+{
+	if (is_vm_hugetlb_page(vma))
+		hugetlb_vma_unlock_read(vma);
+}