Message ID | 20231219075538.414708-6-peterx@redhat.com
---|---
State | Superseded
Series | mm/gup: Unify hugetlb, part 2
> On Dec 19, 2023, at 15:55, peterx@redhat.com wrote:
>
> From: Peter Xu <peterx@redhat.com>
>
> Introduce per-vma begin()/end() helpers for pgtable walks. This is
> preparation work for merging the hugetlb pgtable walkers with generic mm.
>
> The helpers need to be called before and after a pgtable walk; they will
> start to be needed once the pgtable walker code supports hugetlb pages.
> It's a hook point for any type of VMA, but for now only hugetlb uses it
> to keep the pgtable pages from going away (due to possible pmd
> unsharing).
>
> Reviewed-by: Christoph Hellwig <hch@infradead.org>
> Signed-off-by: Peter Xu <peterx@redhat.com>

Reviewed-by: Muchun Song <songmuchun@bytedance.com>

Thanks.
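For context on how the helpers are meant to be used: a generic walker brackets the walk with the pair, and for hugetlb VMAs the begin() side takes the per-vma lock for reading, which, as I understand the current hugetlb locking scheme (pmd unsharing runs under the write side of the same lock), keeps shared pmd pages from being freed mid-walk. A minimal sketch of a hypothetical caller follows; walk_vma_pgtable() and the walk body are illustrative, not part of the patch:

```c
/*
 * Illustrative caller only, not part of this patch: a generic pgtable
 * walker brackets the walk with the new helpers.  For hugetlb VMAs,
 * begin() takes the per-vma lock for reading so that concurrent pmd
 * unsharing cannot free pgtable pages under the walker; for all other
 * VMA types both calls are no-ops.
 */
static void walk_vma_pgtable(struct vm_area_struct *vma,
			     unsigned long start, unsigned long end)
{
	vma_pgtable_walk_begin(vma);
	/* ... walk the page table entries covering [start, end) ... */
	vma_pgtable_walk_end(vma);
}
```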
On Mon, Dec 25, 2023 at 02:34:48PM +0800, Muchun Song wrote:
> Reviewed-by: Muchun Song <songmuchun@bytedance.com>
You're using the old email address here. Do you want me to also use the
linux.dev one that you suggested?
> On Jan 2, 2024, at 13:39, Peter Xu <peterx@redhat.com> wrote:
>
> On Mon, Dec 25, 2023 at 02:34:48PM +0800, Muchun Song wrote:
>> Reviewed-by: Muchun Song <songmuchun@bytedance.com>
>
> You're using the old email address here. Do you want me to also use the
> linux.dev one that you suggested?

Either is OK for the RB tag.

> --
> Peter Xu
```diff
diff --git a/include/linux/mm.h b/include/linux/mm.h
index b72bf25a45cf..85e43775824b 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -4181,4 +4181,7 @@ static inline bool pfn_is_unaccepted_memory(unsigned long pfn)
 	return range_contains_unaccepted_memory(paddr, paddr + PAGE_SIZE);
 }
 
+void vma_pgtable_walk_begin(struct vm_area_struct *vma);
+void vma_pgtable_walk_end(struct vm_area_struct *vma);
+
 #endif /* _LINUX_MM_H */
diff --git a/mm/memory.c b/mm/memory.c
index 1795aba53cf5..9ac6a9db971e 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -6270,3 +6270,15 @@ void ptlock_free(struct ptdesc *ptdesc)
 	kmem_cache_free(page_ptl_cachep, ptdesc->ptl);
 }
 #endif
+
+void vma_pgtable_walk_begin(struct vm_area_struct *vma)
+{
+	if (is_vm_hugetlb_page(vma))
+		hugetlb_vma_lock_read(vma);
+}
+
+void vma_pgtable_walk_end(struct vm_area_struct *vma)
+{
+	if (is_vm_hugetlb_page(vma))
+		hugetlb_vma_unlock_read(vma);
+}
```
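A usage note on the direction this enables: a hugetlb-only lookup that open-codes hugetlb_vma_lock_read()/hugetlb_vma_unlock_read() today could be expressed through the generic helpers instead. A hedged sketch below; the function itself is illustrative, not from the patch, and assumes the existing hugetlb_walk() lookup helper, which expects the hugetlb vma lock to be held:

```c
/*
 * Illustrative only (not from this patch): a hugetlb-specific lookup
 * expressed through the generic helpers.  vma_pgtable_walk_begin()
 * satisfies hugetlb_walk()'s expectation that the hugetlb vma lock is
 * held for reading, replacing an open-coded
 * hugetlb_vma_lock_read()/hugetlb_vma_unlock_read() pair.
 */
static bool hugetlb_pte_is_present(struct vm_area_struct *vma,
				   unsigned long addr)
{
	struct hstate *h = hstate_vma(vma);
	pte_t *ptep;
	bool present = false;

	vma_pgtable_walk_begin(vma);
	ptep = hugetlb_walk(vma, addr, huge_page_size(h));
	if (ptep)
		present = pte_present(huge_ptep_get(ptep));
	vma_pgtable_walk_end(vma);

	return present;
}
```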