Message ID: 20221207203034.650899-3-peterx@redhat.com (mailing list archive)
State: New
Series: [v2,01/10] mm/hugetlb: Let vma_offset_start() to return start
On 12/7/22 12:30, Peter Xu wrote:
> That's what the code does with !hugetlb pages, so we should logically do
> the same for hugetlb, so migration entry will also be treated as no page.

This reasoning makes good sense to me. I looked again at the follow_page*()
paths and double-checked that this is accurate, and it is.

> This is probably also the last piece in follow_page code that may sleep,
> the last one should be removed in cf994dd8af27 ("mm/gup: remove
> FOLL_MIGRATION", 2022-11-16).
>
> Reviewed-by: Mike Kravetz <mike.kravetz@oracle.com>
> Reviewed-by: David Hildenbrand <david@redhat.com>
> Signed-off-by: Peter Xu <peterx@redhat.com>
> ---
>  mm/hugetlb.c | 11 -----------
>  1 file changed, 11 deletions(-)

Reviewed-by: John Hubbard <jhubbard@nvidia.com>

thanks,
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 1088f2f41c88..c8a6673fe5b4 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -6232,7 +6232,6 @@ struct page *hugetlb_follow_page_mask(struct vm_area_struct *vma,
 	if (WARN_ON_ONCE(flags & FOLL_PIN))
 		return NULL;
 
-retry:
 	pte = huge_pte_offset(mm, haddr, huge_page_size(h));
 	if (!pte)
 		return NULL;
@@ -6255,16 +6254,6 @@ struct page *hugetlb_follow_page_mask(struct vm_area_struct *vma,
 			page = NULL;
 			goto out;
 		}
-	} else {
-		if (is_hugetlb_entry_migration(entry)) {
-			spin_unlock(ptl);
-			__migration_entry_wait_huge(pte, ptl);
-			goto retry;
-		}
-		/*
-		 * hwpoisoned entry is treated as no_page_table in
-		 * follow_page_mask().
-		 */
 	}
 out:
 	spin_unlock(ptl);