
[2/3] mm/memory-failure: Check the mapcount of the precise page

Message ID 20231218135837.3310403-3-willy@infradead.org
State New
Series Three memory-failure fixes

Commit Message

Matthew Wilcox (Oracle) Dec. 18, 2023, 1:58 p.m. UTC
A process may map only some of the pages in a folio, and might be missed
if it maps the poisoned page but not the head page.  Or it might be
unnecessarily hit if it maps the head page, but not the poisoned page.

Fixes: 7af446a841a2 ("HWPOISON, hugetlb: enable error handling path for hugepage")
Cc: stable@vger.kernel.org
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 mm/memory-failure.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

Comments

Naoya Horiguchi Dec. 22, 2023, 1:23 a.m. UTC | #1
On Mon, Dec 18, 2023 at 01:58:36PM +0000, Matthew Wilcox (Oracle) wrote:
> A process may map only some of the pages in a folio, and might be missed
> if it maps the poisoned page but not the head page.  Or it might be
> unnecessarily hit if it maps the head page, but not the poisoned page.
> 
> Fixes: 7af446a841a2 ("HWPOISON, hugetlb: enable error handling path for hugepage")
> Cc: stable@vger.kernel.org
> Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>

Acked-by: Naoya Horiguchi <naoya.horiguchi@nec.com>

Patch

diff --git a/mm/memory-failure.c b/mm/memory-failure.c
index 6953bda11e6e..82e15baabb48 100644
--- a/mm/memory-failure.c
+++ b/mm/memory-failure.c
@@ -1570,7 +1570,7 @@  static bool hwpoison_user_mappings(struct page *p, unsigned long pfn,
 	 * This check implies we don't kill processes if their pages
 	 * are in the swap cache early. Those are always late kills.
 	 */
-	if (!page_mapped(hpage))
+	if (!page_mapped(p))
 		return true;
 
 	if (PageSwapCache(p)) {
@@ -1621,10 +1621,10 @@  static bool hwpoison_user_mappings(struct page *p, unsigned long pfn,
 		try_to_unmap(folio, ttu);
 	}
 
-	unmap_success = !page_mapped(hpage);
+	unmap_success = !page_mapped(p);
 	if (!unmap_success)
 		pr_err("%#lx: failed to unmap page (mapcount=%d)\n",
-		       pfn, page_mapcount(hpage));
+		       pfn, page_mapcount(p));
 
 	/*
 	 * try_to_unmap() might put mlocked page in lru cache, so call
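To make the failure mode concrete, here is a minimal user-space sketch (not kernel code) of the distinction the patch relies on. The struct fake_page, fake_page_mapped() and the folio array are hypothetical stand-ins for struct page, page_mapped() and a folio's subpages; the real kernel helpers differ in detail. This only models the "poisoned tail page mapped, head page not mapped" case described in the commit message.

/* Minimal model: a folio is an array of subpages, each with its own mapcount. */
#include <stdbool.h>
#include <stdio.h>

#define PAGES_PER_FOLIO 4

struct fake_page {
	int mapcount;			/* number of mappings of this subpage */
};

/* Stand-in for page_mapped(): is this particular page mapped by anyone? */
static bool fake_page_mapped(const struct fake_page *page)
{
	return page->mapcount > 0;
}

int main(void)
{
	/* A process maps only subpage 2 of the folio; the head page is unmapped. */
	struct fake_page folio[PAGES_PER_FOLIO] = {
		[2] = { .mapcount = 1 },
	};
	const struct fake_page *hpage = &folio[0];	/* head page */
	const struct fake_page *p = &folio[2];		/* poisoned page */

	/* Old behaviour: test the head page -> "not mapped", the process is missed. */
	printf("check hpage -> mapped=%d\n", fake_page_mapped(hpage));

	/* Patched behaviour: test the poisoned page itself -> "mapped". */
	printf("check p     -> mapped=%d\n", fake_page_mapped(p));

	return 0;
}

The converse case (head page mapped, poisoned page not) is the "unnecessarily hit" situation from the commit message; with the per-page check neither mistake occurs.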