Message ID | 20210127093349.39081-1-linmiaohe@huawei.com (mailing list archive)
---|---
State | New, archived
Series | mm/rmap: Fix potential pte_unmap on an not mapped pte
On Wed, 27 Jan 2021 04:33:49 -0500 Miaohe Lin <linmiaohe@huawei.com> wrote:

> For PMD-mapped page (usually THP), pvmw->pte is NULL. For PTE-mapped THP,
> pvmw->pte is mapped. But for HugeTLB pages, pvmw->pte is not mapped and set
> to the relevant page table entry. So in page_vma_mapped_walk_done(), we may
> do pte_unmap() for HugeTLB pte which is not mapped. Fix this by checking
> pvmw->page against PageHuge before trying to do pte_unmap().
>

What are the runtime consequences of this?  Is there a workload which
is known to trigger it?

IOW, how do we justify a -stable backport of this fix?

>
> --- a/include/linux/rmap.h
> +++ b/include/linux/rmap.h
> @@ -213,7 +213,8 @@ struct page_vma_mapped_walk {
>
>  static inline void page_vma_mapped_walk_done(struct page_vma_mapped_walk *pvmw)
>  {
> -	if (pvmw->pte)
> +	/* HugeTLB pte is set to the relevant page table entry without pte_mapped. */
> +	if (pvmw->pte && !PageHuge(pvmw->page))
>  		pte_unmap(pvmw->pte);
>  	if (pvmw->ptl)
>  		spin_unlock(pvmw->ptl);
> --
> 2.19.1
Hi:
On 2021/1/28 8:09, Andrew Morton wrote:
> On Wed, 27 Jan 2021 04:33:49 -0500 Miaohe Lin <linmiaohe@huawei.com> wrote:
>
>> For PMD-mapped page (usually THP), pvmw->pte is NULL. For PTE-mapped THP,
>> pvmw->pte is mapped. But for HugeTLB pages, pvmw->pte is not mapped and set
>> to the relevant page table entry. So in page_vma_mapped_walk_done(), we may
>> do pte_unmap() for HugeTLB pte which is not mapped. Fix this by checking
>> pvmw->page against PageHuge before trying to do pte_unmap().
>>
>
> What are the runtime consequences of this?  Is there a workload which
> is known to trigger it?
>

Not yet. This should not be backported. My bad. Sorry about it.

> IOW, how do we justify a -stable backport of this fix?
>
>>
>> --- a/include/linux/rmap.h
>> +++ b/include/linux/rmap.h
>> @@ -213,7 +213,8 @@ struct page_vma_mapped_walk {
>>
>>  static inline void page_vma_mapped_walk_done(struct page_vma_mapped_walk *pvmw)
>>  {
>> -	if (pvmw->pte)
>> +	/* HugeTLB pte is set to the relevant page table entry without pte_mapped. */
>> +	if (pvmw->pte && !PageHuge(pvmw->page))
>>  		pte_unmap(pvmw->pte);
>>  	if (pvmw->ptl)
>>  		spin_unlock(pvmw->ptl);
>> --
>> 2.19.1
> .
>
diff --git a/include/linux/rmap.h b/include/linux/rmap.h
index 70085ca1a3fc..def5c62c93b3 100644
--- a/include/linux/rmap.h
+++ b/include/linux/rmap.h
@@ -213,7 +213,8 @@ struct page_vma_mapped_walk {

 static inline void page_vma_mapped_walk_done(struct page_vma_mapped_walk *pvmw)
 {
-	if (pvmw->pte)
+	/* HugeTLB pte is set to the relevant page table entry without pte_mapped. */
+	if (pvmw->pte && !PageHuge(pvmw->page))
 		pte_unmap(pvmw->pte);
 	if (pvmw->ptl)
 		spin_unlock(pvmw->ptl);