Message ID | 378c8650-1488-2edf-9647-32a53cf2e21@google.com (mailing list archive)
---|---
State | New, archived
Series | mm: page_vma_mapped_walk() cleanup and THP fixes
On Wed, Jun 09, 2021 at 11:42:12PM -0700, Hugh Dickins wrote:
> page_vma_mapped_walk() cleanup: rearrange the !pmd_present() block to
> follow the same "return not_found, return not_found, return true" pattern
> as the block above it (note: returning not_found there is never premature,
> since existence or prior existence of huge pmd guarantees good alignment).
>
> Signed-off-by: Hugh Dickins <hughd@google.com>
> Cc: <stable@vger.kernel.org>

Acked-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
On Wed, Jun 09, 2021 at 11:42:12PM -0700, Hugh Dickins wrote:
> page_vma_mapped_walk() cleanup: rearrange the !pmd_present() block to
> follow the same "return not_found, return not_found, return true" pattern
> as the block above it (note: returning not_found there is never premature,
> since existence or prior existence of huge pmd guarantees good alignment).
>
> Signed-off-by: Hugh Dickins <hughd@google.com>
> Cc: <stable@vger.kernel.org>
> ---
>  mm/page_vma_mapped.c | 30 ++++++++++++++----------------
>  1 file changed, 14 insertions(+), 16 deletions(-)
>
> diff --git a/mm/page_vma_mapped.c b/mm/page_vma_mapped.c
> index 81000dd0b5da..b96fae568bc2 100644
> --- a/mm/page_vma_mapped.c
> +++ b/mm/page_vma_mapped.c
> @@ -201,24 +201,22 @@ bool page_vma_mapped_walk(struct page_vma_mapped_walk *pvmw)
>                          if (pmd_page(pmde) != page)
>                                  return not_found(pvmw);
>                          return true;
> -                } else if (!pmd_present(pmde)) {
> -                        if (thp_migration_supported()) {
> -                                if (!(pvmw->flags & PVMW_MIGRATION))
> -                                        return not_found(pvmw);
> -                                if (is_migration_entry(pmd_to_swp_entry(pmde))) {
> -                                        swp_entry_t entry = pmd_to_swp_entry(pmde);
> +                }
> +                if (!pmd_present(pmde)) {
> +                        swp_entry_t entry;
>
> -                                        if (migration_entry_to_page(entry) != page)
> -                                                return not_found(pvmw);
> -                                        return true;
> -                                }
> -                        }
> -                        return not_found(pvmw);
> -                } else {
> -                        /* THP pmd was split under us: handle on pte level */
> -                        spin_unlock(pvmw->ptl);
> -                        pvmw->ptl = NULL;
> +                        if (!thp_migration_supported() ||
> +                            !(pvmw->flags & PVMW_MIGRATION))
> +                                return not_found(pvmw);
> +                        entry = pmd_to_swp_entry(pmde);
> +                        if (!is_migration_entry(entry) ||
> +                            migration_entry_to_page(entry) != page)

We'll need to do s/migration_entry_to_page/pfn_swap_entry_to_page/, depending
on whether Alistair's series lands first or not I guess (as you mentioned in
the cover letter).

Thanks for the change, it does look much better.

Reviewed-by: Peter Xu <peterx@redhat.com>

> +                                return not_found(pvmw);
> +                        return true;
>                  }
> +                /* THP pmd was split under us: handle on pte level */
> +                spin_unlock(pvmw->ptl);
> +                pvmw->ptl = NULL;
>          } else if (!pmd_present(pmde)) {
>                  /*
>                   * If PVMW_SYNC, take and drop THP pmd lock so that we
> --
> 2.26.2
>
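For reference, the rename Peter mentions comes from Alistair's series
converting migration_entry_to_page() to pfn_swap_entry_to_page(); if that
series lands first, the rebased check would presumably read as follows (a
hypothetical sketch of the trivial conflict fixup, not code from either
series as posted):

                        entry = pmd_to_swp_entry(pmde);
                        if (!is_migration_entry(entry) ||
                            pfn_swap_entry_to_page(entry) != page)  /* renamed helper */
                                return not_found(pvmw);
                        return true;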
diff --git a/mm/page_vma_mapped.c b/mm/page_vma_mapped.c
index 81000dd0b5da..b96fae568bc2 100644
--- a/mm/page_vma_mapped.c
+++ b/mm/page_vma_mapped.c
@@ -201,24 +201,22 @@ bool page_vma_mapped_walk(struct page_vma_mapped_walk *pvmw)
                         if (pmd_page(pmde) != page)
                                 return not_found(pvmw);
                         return true;
-                } else if (!pmd_present(pmde)) {
-                        if (thp_migration_supported()) {
-                                if (!(pvmw->flags & PVMW_MIGRATION))
-                                        return not_found(pvmw);
-                                if (is_migration_entry(pmd_to_swp_entry(pmde))) {
-                                        swp_entry_t entry = pmd_to_swp_entry(pmde);
+                }
+                if (!pmd_present(pmde)) {
+                        swp_entry_t entry;

-                                        if (migration_entry_to_page(entry) != page)
-                                                return not_found(pvmw);
-                                        return true;
-                                }
-                        }
-                        return not_found(pvmw);
-                } else {
-                        /* THP pmd was split under us: handle on pte level */
-                        spin_unlock(pvmw->ptl);
-                        pvmw->ptl = NULL;
+                        if (!thp_migration_supported() ||
+                            !(pvmw->flags & PVMW_MIGRATION))
+                                return not_found(pvmw);
+                        entry = pmd_to_swp_entry(pmde);
+                        if (!is_migration_entry(entry) ||
+                            migration_entry_to_page(entry) != page)
+                                return not_found(pvmw);
+                        return true;
                 }
+                /* THP pmd was split under us: handle on pte level */
+                spin_unlock(pvmw->ptl);
+                pvmw->ptl = NULL;
         } else if (!pmd_present(pmde)) {
                 /*
                  * If PVMW_SYNC, take and drop THP pmd lock so that we
page_vma_mapped_walk() cleanup: rearrange the !pmd_present() block to
follow the same "return not_found, return not_found, return true" pattern
as the block above it (note: returning not_found there is never premature,
since existence or prior existence of huge pmd guarantees good alignment).

Signed-off-by: Hugh Dickins <hughd@google.com>
Cc: <stable@vger.kernel.org>
---
 mm/page_vma_mapped.c | 30 ++++++++++++++----------------
 1 file changed, 14 insertions(+), 16 deletions(-)
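Condensed, the rearranged !pmd_present() block now follows the same
early-bail shape as the pmd_trans_huge() block above it (a sketch distilled
from the diff; the trailing comments are annotations added here, not kernel
source):

                if (!pmd_present(pmde)) {
                        swp_entry_t entry;

                        if (!thp_migration_supported() ||
                            !(pvmw->flags & PVMW_MIGRATION))
                                return not_found(pvmw);   /* first not_found */
                        entry = pmd_to_swp_entry(pmde);
                        if (!is_migration_entry(entry) ||
                            migration_entry_to_page(entry) != page)
                                return not_found(pvmw);   /* second not_found */
                        return true;                      /* found */
                }
                /* THP pmd was split under us: handle on pte level */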