Message ID: 20220424091105.48374-3-linmiaohe@huawei.com (mailing list archive)
State: New
Series: A few fixup patches for mm
On 24.04.22 11:11, Miaohe Lin wrote:
> This is observed by code review only but not any real report.
>
> When we turn off swapping we could have lost the bits stored in the swap
> ptes. The new rmap-exclusive bit is fine since that turned into a page
> flag, but not for soft-dirty and uffd-wp. Add them.
>
> Suggested-by: Peter Xu <peterx@redhat.com>
> Signed-off-by: Miaohe Lin <linmiaohe@huawei.com>
> ---
>  mm/swapfile.c | 10 +++++++---
>  1 file changed, 7 insertions(+), 3 deletions(-)
>
> diff --git a/mm/swapfile.c b/mm/swapfile.c
> index 95b63f69f388..522a0eb16bf1 100644
> --- a/mm/swapfile.c
> +++ b/mm/swapfile.c
> @@ -1783,7 +1783,7 @@ static int unuse_pte(struct vm_area_struct *vma, pmd_t *pmd,
>  {
>  	struct page *swapcache;
>  	spinlock_t *ptl;
> -	pte_t *pte;
> +	pte_t *pte, new_pte;
>  	int ret = 1;
>
>  	swapcache = page;
> @@ -1832,8 +1832,12 @@ static int unuse_pte(struct vm_area_struct *vma, pmd_t *pmd,
>  		page_add_new_anon_rmap(page, vma, addr);
>  		lru_cache_add_inactive_or_unevictable(page, vma);
>  	}
> -	set_pte_at(vma->vm_mm, addr, pte,
> -		   pte_mkold(mk_pte(page, vma->vm_page_prot)));
> +	new_pte = pte_mkold(mk_pte(page, vma->vm_page_prot));
> +	if (pte_swp_soft_dirty(*pte))
> +		new_pte = pte_mksoft_dirty(new_pte);
> +	if (pte_swp_uffd_wp(*pte))
> +		new_pte = pte_mkuffd_wp(new_pte);
> +	set_pte_at(vma->vm_mm, addr, pte, new_pte);
>  	swap_free(entry);
>  out:
>  	pte_unmap_unlock(pte, ptl);

Reviewed-by: David Hildenbrand <david@redhat.com>
diff --git a/mm/swapfile.c b/mm/swapfile.c
index 95b63f69f388..522a0eb16bf1 100644
--- a/mm/swapfile.c
+++ b/mm/swapfile.c
@@ -1783,7 +1783,7 @@ static int unuse_pte(struct vm_area_struct *vma, pmd_t *pmd,
 {
 	struct page *swapcache;
 	spinlock_t *ptl;
-	pte_t *pte;
+	pte_t *pte, new_pte;
 	int ret = 1;
 
 	swapcache = page;
@@ -1832,8 +1832,12 @@ static int unuse_pte(struct vm_area_struct *vma, pmd_t *pmd,
 		page_add_new_anon_rmap(page, vma, addr);
 		lru_cache_add_inactive_or_unevictable(page, vma);
 	}
-	set_pte_at(vma->vm_mm, addr, pte,
-		   pte_mkold(mk_pte(page, vma->vm_page_prot)));
+	new_pte = pte_mkold(mk_pte(page, vma->vm_page_prot));
+	if (pte_swp_soft_dirty(*pte))
+		new_pte = pte_mksoft_dirty(new_pte);
+	if (pte_swp_uffd_wp(*pte))
+		new_pte = pte_mkuffd_wp(new_pte);
+	set_pte_at(vma->vm_mm, addr, pte, new_pte);
 	swap_free(entry);
 out:
 	pte_unmap_unlock(pte, ptl);
This is observed by code review only but not any real report.

When we turn off swapping we could have lost the bits stored in the swap
ptes. The new rmap-exclusive bit is fine since that turned into a page
flag, but not for soft-dirty and uffd-wp. Add them.

Suggested-by: Peter Xu <peterx@redhat.com>
Signed-off-by: Miaohe Lin <linmiaohe@huawei.com>
---
 mm/swapfile.c | 10 +++++++---
 1 file changed, 7 insertions(+), 3 deletions(-)
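To make the pattern the patch applies easier to follow: during swapoff, unuse_pte() replaces a swap pte with a present pte for the swapped-in page, and software-tracked state (soft-dirty, uffd-wp) that was encoded in the swap pte must be carried over explicitly or it is silently dropped. The userspace C model below sketches that bit-carrying step under stated assumptions; the flag positions and helper names here are hypothetical stand-ins for illustration, not the kernel's actual pte layout or API.

```c
#include <assert.h>
#include <stdint.h>
#include <stdio.h>

/*
 * Userspace model only: flag bits and helpers are hypothetical
 * stand-ins, not the kernel's real pte encoding.
 */
typedef uint64_t pte_t;

#define SWP_SOFT_DIRTY (1ULL << 1)  /* soft-dirty as encoded in a swap pte */
#define SWP_UFFD_WP    (1ULL << 2)  /* uffd-wp as encoded in a swap pte */
#define PTE_SOFT_DIRTY (1ULL << 11) /* soft-dirty in a present pte */
#define PTE_UFFD_WP    (1ULL << 12) /* uffd-wp in a present pte */

/*
 * Build the present pte for the swapped-in page, then carry over the
 * software bits stashed in the old swap pte -- the step the patch adds
 * with pte_swp_soft_dirty()/pte_mksoft_dirty() and the uffd-wp pair.
 */
static pte_t make_present_pte(pte_t swp_pte, pte_t base_pte)
{
	pte_t new_pte = base_pte; /* stands in for pte_mkold(mk_pte(...)) */

	if (swp_pte & SWP_SOFT_DIRTY)
		new_pte |= PTE_SOFT_DIRTY;
	if (swp_pte & SWP_UFFD_WP)
		new_pte |= PTE_UFFD_WP;
	return new_pte;
}

int main(void)
{
	pte_t swp_pte = SWP_SOFT_DIRTY | SWP_UFFD_WP;
	pte_t new_pte = make_present_pte(swp_pte, 0x1000);

	/* Both software bits survive the swap-pte -> present-pte switch. */
	assert(new_pte & PTE_SOFT_DIRTY);
	assert(new_pte & PTE_UFFD_WP);
	printf("new pte: %#llx\n", (unsigned long long)new_pte);
	return 0;
}
```

Without the carry-over step, the model (like the pre-patch kernel code) would return base_pte unchanged, losing both flags; that is the bug the two new if-checks in unuse_pte() fix.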