Message ID | 20221207203034.650899-7-peterx@redhat.com (mailing list archive)
---|---
State | New
Series | [v2,01/10] mm/hugetlb: Let vma_offset_start() to return start
On 12/7/22 12:30, Peter Xu wrote:
> Since hugetlb_follow_page_mask() walks the pgtable, it needs the vma lock
> to make sure the pgtable page will not be freed concurrently.
>
> Acked-by: David Hildenbrand <david@redhat.com>
> Reviewed-by: Mike Kravetz <mike.kravetz@oracle.com>
> Signed-off-by: Peter Xu <peterx@redhat.com>
> ---
>  mm/hugetlb.c | 5 ++++-
>  1 file changed, 4 insertions(+), 1 deletion(-)

Reviewed-by: John Hubbard <jhubbard@nvidia.com>

thanks,
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 49f73677a418..3fbbd599d015 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -6226,9 +6226,10 @@ struct page *hugetlb_follow_page_mask(struct vm_area_struct *vma,
 	if (WARN_ON_ONCE(flags & FOLL_PIN))
 		return NULL;
 
+	hugetlb_vma_lock_read(vma);
 	pte = huge_pte_offset(mm, haddr, huge_page_size(h));
 	if (!pte)
-		return NULL;
+		goto out_unlock;
 
 	ptl = huge_pte_lock(h, mm, pte);
 	entry = huge_ptep_get(pte);
@@ -6251,6 +6252,8 @@ struct page *hugetlb_follow_page_mask(struct vm_area_struct *vma,
 	}
 out:
 	spin_unlock(ptl);
+out_unlock:
+	hugetlb_vma_unlock_read(vma);
 	return page;
 }
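
For readers without the full source at hand, below is a condensed, non-compilable sketch of how the function reads with both hunks applied. It is assembled only from the hunks above: the local variable setup (mm, haddr, h, page, ptl) and the PTE-examination body between the two hunks are elided and merely hinted at in comments. The point is the lock ordering the patch establishes: take the hugetlb VMA lock for read before huge_pte_offset(), and drop it only after the page table spinlock has been released.

struct page *hugetlb_follow_page_mask(struct vm_area_struct *vma,
				      unsigned long address, unsigned int flags)
{
	/* ... locals (mm, haddr, h, page = NULL, ptl, pte, entry) elided ... */

	if (WARN_ON_ONCE(flags & FOLL_PIN))
		return NULL;

	hugetlb_vma_lock_read(vma);	/* new: pgtable page can't be freed under us */
	pte = huge_pte_offset(mm, haddr, huge_page_size(h));
	if (!pte)
		goto out_unlock;	/* new: must drop the vma lock on this path */

	ptl = huge_pte_lock(h, mm, pte);
	/* ... huge_ptep_get(pte), present/migration checks, grab page ref ... */
out:
	spin_unlock(ptl);		/* page table spinlock released first */
out_unlock:
	hugetlb_vma_unlock_read(vma);	/* new: vma lock released last */
	return page;
}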