From patchwork Tue Feb 21 08:59:05 2023
X-Patchwork-Submitter: Naoya Horiguchi
X-Patchwork-Id: 13147507
From: Naoya Horiguchi <naoya.horiguchi@linux.dev>
To: linux-mm@kvack.org
Cc: Andrew Morton, Miaohe Lin, David Hildenbrand, Matthew Wilcox,
 Vlastimil Babka, Hugh Dickins, Minchan Kim, Naoya Horiguchi,
 linux-kernel@vger.kernel.org
Subject: [PATCH v1] mm/hwpoison: convert TTU_IGNORE_HWPOISON to TTU_HWPOISON
Date: Tue, 21 Feb 2023 17:59:05 +0900
Message-Id: <20230221085905.1465385-1-naoya.horiguchi@linux.dev>

From: Naoya Horiguchi

After a memory error happens on a clean folio, a process unexpectedly
receives SIGBUS when it accesses the error page. This SIGBUS killing is
pointless and simply degrades the RAS level of the system, because the
clean folio can be dropped during memory error handling without any data
loss, as we do for clean pagecache.

When memory_failure() is called on a clean folio, try_to_unmap() is called
twice (once from split_huge_page() and once from hwpoison_user_mappings()).
The root cause of the issue is that the pte conversion to a hwpoisoned
entry is currently done in the first call of try_to_unmap(), because
PageHWPoison is already set at that point, while it is actually expected
to be done in the second call. This behavior disturbs error handling
operations such as removing the pagecache, which results in the
malfunction described above.

So convert TTU_IGNORE_HWPOISON into TTU_HWPOISON and set TTU_HWPOISON only
when we really intend to convert the pte to a hwpoison entry. This
prevents other callers of try_to_unmap() from accidentally converting to
hwpoison entries.
Fixes: a42634a6c07d ("readahead: Use a folio in read_pages()")
Signed-off-by: Naoya Horiguchi
Cc: stable@vger.kernel.org # 5.18+
---
 include/linux/rmap.h | 2 +-
 mm/memory-failure.c  | 8 ++++----
 mm/rmap.c            | 2 +-
 3 files changed, 6 insertions(+), 6 deletions(-)

diff --git a/include/linux/rmap.h b/include/linux/rmap.h
index a4570da03e58..b87d01660412 100644
--- a/include/linux/rmap.h
+++ b/include/linux/rmap.h
@@ -94,7 +94,7 @@ enum ttu_flags {
 	TTU_SPLIT_HUGE_PMD	= 0x4,	/* split huge PMD if any */
 	TTU_IGNORE_MLOCK	= 0x8,	/* ignore mlock */
 	TTU_SYNC		= 0x10,	/* avoid racy checks with PVMW_SYNC */
-	TTU_IGNORE_HWPOISON	= 0x20,	/* corrupted page is recoverable */
+	TTU_HWPOISON		= 0x20,	/* do convert pte to hwpoison entry */
 	TTU_BATCH_FLUSH		= 0x40,	/* Batch TLB flushes where possible
 					 * and caller guarantees they will
 					 * do a final flush if necessary */
diff --git a/mm/memory-failure.c b/mm/memory-failure.c
index a1ede7bdce95..fae9baf3be16 100644
--- a/mm/memory-failure.c
+++ b/mm/memory-failure.c
@@ -1069,7 +1069,7 @@ static int me_pagecache_dirty(struct page_state *ps, struct page *p)
  * cache and swap cache(ie. page is freshly swapped in). So it could be
  * referenced concurrently by 2 types of PTEs:
  * normal PTEs and swap PTEs. We try to handle them consistently by calling
- * try_to_unmap(TTU_IGNORE_HWPOISON) to convert the normal PTEs to swap PTEs,
+ * try_to_unmap(!TTU_HWPOISON) to convert the normal PTEs to swap PTEs,
  * and then
  *      - clear dirty bit to prevent IO
  *      - remove from LRU
@@ -1486,7 +1486,7 @@ static bool hwpoison_user_mappings(struct page *p, unsigned long pfn,
 				  int flags, struct page *hpage)
 {
 	struct folio *folio = page_folio(hpage);
-	enum ttu_flags ttu = TTU_IGNORE_MLOCK | TTU_SYNC;
+	enum ttu_flags ttu = TTU_IGNORE_MLOCK | TTU_SYNC | TTU_HWPOISON;
 	struct address_space *mapping;
 	LIST_HEAD(tokill);
 	bool unmap_success;
@@ -1516,7 +1516,7 @@ static bool hwpoison_user_mappings(struct page *p, unsigned long pfn,
 
 	if (PageSwapCache(p)) {
 		pr_err("%#lx: keeping poisoned page in swap cache\n", pfn);
-		ttu |= TTU_IGNORE_HWPOISON;
+		ttu &= ~TTU_HWPOISON;
 	}
 
 	/*
@@ -1531,7 +1531,7 @@ static bool hwpoison_user_mappings(struct page *p, unsigned long pfn,
 		if (page_mkclean(hpage)) {
 			SetPageDirty(hpage);
 		} else {
-			ttu |= TTU_IGNORE_HWPOISON;
+			ttu &= ~TTU_HWPOISON;
 			pr_info("%#lx: corrupted page was clean: dropped without side effects\n",
 				pfn);
 		}
diff --git a/mm/rmap.c b/mm/rmap.c
index 15ae24585fc4..8632e02661ac 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -1602,7 +1602,7 @@ static bool try_to_unmap_one(struct folio *folio, struct vm_area_struct *vma,
 		/* Update high watermark before we lower rss */
 		update_hiwater_rss(mm);
 
-		if (PageHWPoison(subpage) && !(flags & TTU_IGNORE_HWPOISON)) {
+		if (PageHWPoison(subpage) && (flags & TTU_HWPOISON)) {
 			pteval = swp_entry_to_pte(make_hwpoison_entry(subpage));
 			if (folio_test_hugetlb(folio)) {
 				hugetlb_count_sub(folio_nr_pages(folio), mm);
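
To make the flag inversion concrete, below is a minimal userspace sketch
(not kernel code): converts_to_hwpoison_entry() is a hypothetical stand-in
for the PageHWPoison check in try_to_unmap_one(), and only the TTU_* values
are taken from the patch. It shows why the first try_to_unmap() call, made
from split_huge_page() without TTU_HWPOISON, now leaves the PTEs of a clean
folio untouched, while hwpoison_user_mappings() opts in explicitly.

/*
 * Illustrative sketch only; models the opt-in TTU_HWPOISON semantics
 * described above outside the kernel.
 */
#include <stdbool.h>
#include <stdio.h>

enum ttu_flags {
	TTU_SPLIT_HUGE_PMD = 0x4,
	TTU_IGNORE_MLOCK   = 0x8,
	TTU_SYNC           = 0x10,
	TTU_HWPOISON       = 0x20,	/* opt-in: convert pte to hwpoison entry */
	TTU_BATCH_FLUSH    = 0x40,
};

/*
 * After the patch the conversion is opt-in: it happens only when the
 * caller passes TTU_HWPOISON, instead of happening by default unless
 * TTU_IGNORE_HWPOISON suppressed it.
 */
static bool converts_to_hwpoison_entry(bool page_hwpoison, enum ttu_flags flags)
{
	return page_hwpoison && (flags & TTU_HWPOISON);
}

int main(void)
{
	/*
	 * First try_to_unmap() call, via split_huge_page(): the caller does
	 * not pass TTU_HWPOISON, so a poisoned-but-clean folio keeps normal
	 * PTEs and can later be dropped like clean pagecache. Prints 0.
	 */
	printf("split path converts: %d\n",
	       converts_to_hwpoison_entry(true, TTU_SYNC));

	/*
	 * Second call, via hwpoison_user_mappings(): this caller opts in
	 * explicitly (and clears the bit again for clean pages). Prints 1.
	 */
	printf("hwpoison path converts: %d\n",
	       converts_to_hwpoison_entry(true, TTU_IGNORE_MLOCK | TTU_SYNC |
						TTU_HWPOISON));
	return 0;
}

The design consequence of the opt-in polarity is that any future caller of
try_to_unmap() gets the safe behavior by default and has to ask for
hwpoison entries deliberately, rather than having to remember to suppress
the conversion.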