From patchwork Mon Dec 18 13:58:35 2023
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 13497025
From: "Matthew Wilcox (Oracle)" <willy@infradead.org>
To: Andrew Morton
Cc: "Matthew Wilcox (Oracle)", linux-mm@kvack.org, Naoya Horiguchi,
	Dan Williams, stable@vger.kernel.org
Subject: [PATCH 1/3] mm/memory-failure: Pass the folio and the page to collect_procs()
Date: Mon, 18 Dec 2023 13:58:35 +0000
Message-Id: <20231218135837.3310403-2-willy@infradead.org>
In-Reply-To: <20231218135837.3310403-1-willy@infradead.org>
References: <20231218135837.3310403-1-willy@infradead.org>
MIME-Version: 1.0

Both collect_procs_anon() and collect_procs_file() iterate over the
VMA interval trees looking for a single pgoff, so it is wrong to
look for the pgoff of the head page as is currently done.  However, it
is also wrong to look at page->mapping of the precise page as this is
invalid for tail pages.  Clear up the confusion by passing both the
folio and the precise page to collect_procs().

Fixes: 415c64c1453a ("mm/memory-failure: split thp earlier in memory error handling")
Cc: stable@vger.kernel.org
Signed-off-by: Matthew Wilcox (Oracle)
Acked-by: Naoya Horiguchi
---
 mm/memory-failure.c | 25 ++++++++++++-------------
 1 file changed, 12 insertions(+), 13 deletions(-)

diff --git a/mm/memory-failure.c b/mm/memory-failure.c
index 660c21859118..6953bda11e6e 100644
--- a/mm/memory-failure.c
+++ b/mm/memory-failure.c
@@ -595,10 +595,9 @@ struct task_struct *task_early_kill(struct task_struct *tsk, int force_early)
 /*
  * Collect processes when the error hit an anonymous page.
  */
-static void collect_procs_anon(struct page *page, struct list_head *to_kill,
-		int force_early)
+static void collect_procs_anon(struct folio *folio, struct page *page,
+		struct list_head *to_kill, int force_early)
 {
-	struct folio *folio = page_folio(page);
 	struct vm_area_struct *vma;
 	struct task_struct *tsk;
 	struct anon_vma *av;
@@ -633,12 +632,12 @@ static void collect_procs_anon(struct page *page, struct list_head *to_kill,
 /*
  * Collect processes when the error hit a file mapped page.
  */
-static void collect_procs_file(struct page *page, struct list_head *to_kill,
-		int force_early)
+static void collect_procs_file(struct folio *folio, struct page *page,
+		struct list_head *to_kill, int force_early)
 {
 	struct vm_area_struct *vma;
 	struct task_struct *tsk;
-	struct address_space *mapping = page->mapping;
+	struct address_space *mapping = folio->mapping;
 	pgoff_t pgoff;
 
 	i_mmap_lock_read(mapping);
@@ -704,17 +703,17 @@ static void collect_procs_fsdax(struct page *page,
 /*
  * Collect the processes who have the corrupted page mapped to kill.
  */
-static void collect_procs(struct page *page, struct list_head *tokill,
-		int force_early)
+static void collect_procs(struct folio *folio, struct page *page,
+		struct list_head *tokill, int force_early)
 {
-	if (!page->mapping)
+	if (!folio->mapping)
 		return;
 	if (unlikely(PageKsm(page)))
 		collect_procs_ksm(page, tokill, force_early);
 	else if (PageAnon(page))
-		collect_procs_anon(page, tokill, force_early);
+		collect_procs_anon(folio, page, tokill, force_early);
 	else
-		collect_procs_file(page, tokill, force_early);
+		collect_procs_file(folio, page, tokill, force_early);
 }
 
 struct hwpoison_walk {
@@ -1602,7 +1601,7 @@ static bool hwpoison_user_mappings(struct page *p, unsigned long pfn,
 	 * mapped in dirty form.  This has to be done before try_to_unmap,
 	 * because ttu takes the rmap data structures down.
 	 */
-	collect_procs(hpage, &tokill, flags & MF_ACTION_REQUIRED);
+	collect_procs(folio, p, &tokill, flags & MF_ACTION_REQUIRED);
 
 	if (PageHuge(hpage) && !PageAnon(hpage)) {
 		/*
@@ -1772,7 +1771,7 @@ static int mf_generic_kill_procs(unsigned long long pfn, int flags,
 	 * SIGBUS (i.e. MF_MUST_KILL)
 	 */
 	flags |= MF_ACTION_REQUIRED | MF_MUST_KILL;
-	collect_procs(&folio->page, &to_kill, true);
+	collect_procs(folio, &folio->page, &to_kill, true);
 
 	unmap_and_kill(&to_kill, pfn, folio->mapping, folio->index, flags);
 unlock: