Message ID | 799b3f9c-2a9e-dfef-5d89-26e9f76fd97@google.com (mailing list archive)
---|---
State | New, archived
Series | mm: page_vma_mapped_walk() cleanup and THP fixes
On Wed, Jun 09, 2021 at 11:44:10PM -0700, Hugh Dickins wrote:
> page_vma_mapped_walk() cleanup: adjust the test for crossing page table
> boundary - I believe pvmw->address is always page-aligned, but nothing
> else here assumed that;

Maybe we should just get it aligned instead? (PMD_SIZE - PAGE_SIZE) is not the most obvious mask calculation.

> and remember to reset pvmw->pte to NULL after
> unmapping the page table, though I never saw any bug from that.

Okay, fair enough.
diff --git a/mm/page_vma_mapped.c b/mm/page_vma_mapped.c
index b96fae568bc2..0fe6e558d336 100644
--- a/mm/page_vma_mapped.c
+++ b/mm/page_vma_mapped.c
@@ -247,16 +247,16 @@ bool page_vma_mapped_walk(struct page_vma_mapped_walk *pvmw)
 			if (pvmw->address >= end)
 				return not_found(pvmw);
 			/* Did we cross page table boundary? */
-			if (pvmw->address % PMD_SIZE == 0) {
-				pte_unmap(pvmw->pte);
+			if ((pvmw->address & (PMD_SIZE - PAGE_SIZE)) == 0) {
 				if (pvmw->ptl) {
 					spin_unlock(pvmw->ptl);
 					pvmw->ptl = NULL;
 				}
+				pte_unmap(pvmw->pte);
+				pvmw->pte = NULL;
 				goto restart;
-			} else {
-				pvmw->pte++;
 			}
+			pvmw->pte++;
 		} while (pte_none(*pvmw->pte));
 		if (!pvmw->ptl) {
page_vma_mapped_walk() cleanup: adjust the test for crossing page table
boundary - I believe pvmw->address is always page-aligned, but nothing
else here assumed that; and remember to reset pvmw->pte to NULL after
unmapping the page table, though I never saw any bug from that.

Signed-off-by: Hugh Dickins <hughd@google.com>
Cc: <stable@vger.kernel.org>
---
 mm/page_vma_mapped.c | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)