Message ID | 20220224122614.94921-2-david@redhat.com
---|---
State | New
Series | mm: COW fixes part 2: reliable GUP pins of anonymous pages
On 2/24/22 05:26, David Hildenbrand wrote:
> In case arch_unmap_one() fails, we already did a swap_duplicate(). Let's
> undo that properly via swap_free().
>
> Fixes: ca827d55ebaa ("mm, swap: Add infrastructure for saving page metadata on swap")
> Cc: Khalid Aziz <khalid.aziz@oracle.com>
> Signed-off-by: David Hildenbrand <david@redhat.com>
> ---
>  mm/rmap.c | 1 +
>  1 file changed, 1 insertion(+)
>
> diff --git a/mm/rmap.c b/mm/rmap.c
> index 6a1e8c7f6213..f825aeef61ca 100644
> --- a/mm/rmap.c
> +++ b/mm/rmap.c
> @@ -1625,6 +1625,7 @@ static bool try_to_unmap_one(struct page *page, struct vm_area_struct *vma,
>  			break;
>  		}
>  		if (arch_unmap_one(mm, vma, address, pteval) < 0) {
> +			swap_free(entry);
>  			set_pte_at(mm, address, pvmw.pte, pteval);
>  			ret = false;
>  			page_vma_mapped_walk_done(&pvmw);

That looks reasonable.

Reviewed-by: Khalid Aziz <khalid.aziz@oracle.com>

--
Khalid
diff --git a/mm/rmap.c b/mm/rmap.c
index 6a1e8c7f6213..f825aeef61ca 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -1625,6 +1625,7 @@ static bool try_to_unmap_one(struct page *page, struct vm_area_struct *vma,
 			break;
 		}
 		if (arch_unmap_one(mm, vma, address, pteval) < 0) {
+			swap_free(entry);
 			set_pte_at(mm, address, pvmw.pte, pteval);
 			ret = false;
 			page_vma_mapped_walk_done(&pvmw);
In case arch_unmap_one() fails, we already did a swap_duplicate(). Let's
undo that properly via swap_free().

Fixes: ca827d55ebaa ("mm, swap: Add infrastructure for saving page metadata on swap")
Cc: Khalid Aziz <khalid.aziz@oracle.com>
Signed-off-by: David Hildenbrand <david@redhat.com>
---
 mm/rmap.c | 1 +
 1 file changed, 1 insertion(+)
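For readers following along, here is a minimal sketch of how the affected error path in try_to_unmap_one() reads with the one-line fix applied. Only the arch_unmap_one() branch comes from the quoted hunk; the swap entry declaration and the preceding swap_duplicate() branch are reconstructed from the mm/rmap.c of that era and may not match the posted tree exactly, and the excerpt is not standalone compilable code.

	/* Reconstructed context from try_to_unmap_one(); may differ from the exact tree. */
	swp_entry_t entry = { .val = page_private(subpage) };

	/*
	 * Take a reference on the swap entry before installing the swap PTE.
	 * If this fails, restore the original PTE and give up on this page.
	 */
	if (swap_duplicate(entry) < 0) {
		set_pte_at(mm, address, pvmw.pte, pteval);
		ret = false;
		page_vma_mapped_walk_done(&pvmw);
		break;
	}
	if (arch_unmap_one(mm, vma, address, pteval) < 0) {
		/*
		 * swap_duplicate() above already succeeded, so drop that
		 * reference again (the added swap_free()) before restoring
		 * the PTE and bailing out.
		 */
		swap_free(entry);
		set_pte_at(mm, address, pvmw.pte, pteval);
		ret = false;
		page_vma_mapped_walk_done(&pvmw);
		break;
	}

The general pattern is that anything acquired earlier in the sequence (here, the extra swap reference) has to be released on a later failure before the PTE is restored; the patch adds the release that was missing from the arch_unmap_one() branch.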