Message ID | e31a483c-6d73-a6bb-26c5-43c3b880a2@google.com (mailing list archive)
---|---
State | New, archived
Series | mm: page_vma_mapped_walk() cleanup and THP fixes
On Wed, Jun 09, 2021 at 11:36:36PM -0700, Hugh Dickins wrote:
> page_vma_mapped_walk() cleanup: get the hugetlbfs PageHuge case
> out of the way at the start, so no need to worry about it later.
>
> Signed-off-by: Hugh Dickins <hughd@google.com>
> Cc: <stable@vger.kernel.org>

Acked-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
On Wed, Jun 09, 2021 at 11:36:36PM -0700, Hugh Dickins wrote:
> page_vma_mapped_walk() cleanup: get the hugetlbfs PageHuge case
> out of the way at the start, so no need to worry about it later.
>
> Signed-off-by: Hugh Dickins <hughd@google.com>
> Cc: <stable@vger.kernel.org>
> ---
>  mm/page_vma_mapped.c | 12 ++++++++----
>  1 file changed, 8 insertions(+), 4 deletions(-)
>
> diff --git a/mm/page_vma_mapped.c b/mm/page_vma_mapped.c
> index a6dbf714ca15..7c0504641fb8 100644
> --- a/mm/page_vma_mapped.c
> +++ b/mm/page_vma_mapped.c
> @@ -153,10 +153,11 @@ bool page_vma_mapped_walk(struct page_vma_mapped_walk *pvmw)
>  	if (pvmw->pmd && !pvmw->pte)
>  		return not_found(pvmw);
>
> -	if (pvmw->pte)
> -		goto next_pte;
> -
>  	if (unlikely(PageHuge(page))) {
> +		/* The only possible mapping was handled on last iteration */
> +		if (pvmw->pte)
> +			return not_found(pvmw);
> +
>  		/* when pud is not present, pte will be NULL */
>  		pvmw->pte = huge_pte_offset(mm, pvmw->address, page_size(page));
>  		if (!pvmw->pte)

Would it be even nicer to move the initial check to be after PageHuge() too?

	if (pvmw->pmd && !pvmw->pte)
		return not_found(pvmw);

It looks already better, so no strong opinion.

Reviewed-by: Peter Xu <peterx@redhat.com>

Thanks,
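[One way to read that suggestion, sketched for context only (this is not part of the patch, and the hugetlb body is elided since it is unchanged from the hunk quoted above; all identifiers come from the diff):]

	if (unlikely(PageHuge(page))) {
		/* The only possible mapping was handled on last iteration */
		if (pvmw->pte)
			return not_found(pvmw);
		/* ... hugetlb pte lookup and check, as in the hunk above ... */
		return true;
	}

	/* moved below PageHuge(), per the suggestion */
	if (pvmw->pmd && !pvmw->pte)
		return not_found(pvmw);

	if (pvmw->pte)
		goto next_pte;
restart:
	...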
diff --git a/mm/page_vma_mapped.c b/mm/page_vma_mapped.c
index a6dbf714ca15..7c0504641fb8 100644
--- a/mm/page_vma_mapped.c
+++ b/mm/page_vma_mapped.c
@@ -153,10 +153,11 @@ bool page_vma_mapped_walk(struct page_vma_mapped_walk *pvmw)
 	if (pvmw->pmd && !pvmw->pte)
 		return not_found(pvmw);
 
-	if (pvmw->pte)
-		goto next_pte;
-
 	if (unlikely(PageHuge(page))) {
+		/* The only possible mapping was handled on last iteration */
+		if (pvmw->pte)
+			return not_found(pvmw);
+
 		/* when pud is not present, pte will be NULL */
 		pvmw->pte = huge_pte_offset(mm, pvmw->address, page_size(page));
 		if (!pvmw->pte)
@@ -168,6 +169,9 @@ bool page_vma_mapped_walk(struct page_vma_mapped_walk *pvmw)
 			return not_found(pvmw);
 		return true;
 	}
+
+	if (pvmw->pte)
+		goto next_pte;
 restart:
 	pgd = pgd_offset(mm, pvmw->address);
 	if (!pgd_present(*pgd))
@@ -233,7 +237,7 @@ bool page_vma_mapped_walk(struct page_vma_mapped_walk *pvmw)
 		return true;
 next_pte:
 	/* Seek to next pte only makes sense for THP */
-	if (!PageTransHuge(page) || PageHuge(page))
+	if (!PageTransHuge(page))
 		return not_found(pvmw);
 	end = vma_address_end(page, pvmw->vma);
 	do {
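[Read together, the three hunks leave the top of page_vma_mapped_walk() looking roughly like this; a condensed sketch only, with context lines the hunks do not show elided rather than reconstructed, and every identifier taken from the diff above:]

	if (pvmw->pmd && !pvmw->pte)
		return not_found(pvmw);

	if (unlikely(PageHuge(page))) {
		/* The only possible mapping was handled on last iteration */
		if (pvmw->pte)
			return not_found(pvmw);

		/* when pud is not present, pte will be NULL */
		pvmw->pte = huge_pte_offset(mm, pvmw->address, page_size(page));
		/* ... NULL-pte and pte checking between the first two hunks elided ... */
		return true;
	}

	/* hugetlb returns above, so next_pte no longer needs a PageHuge() test */
	if (pvmw->pte)
		goto next_pte;
restart:
	pgd = pgd_offset(mm, pvmw->address);
	...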
page_vma_mapped_walk() cleanup: get the hugetlbfs PageHuge case
out of the way at the start, so no need to worry about it later.

Signed-off-by: Hugh Dickins <hughd@google.com>
Cc: <stable@vger.kernel.org>
---
 mm/page_vma_mapped.c | 12 ++++++++----
 1 file changed, 8 insertions(+), 4 deletions(-)