From patchwork Thu Dec 1 00:59:31 2022
X-Patchwork-Submitter: Jiaqi Yan
X-Patchwork-Id: 13060865
Date: Wed, 30 Nov 2022 16:59:31 -0800
In-Reply-To: <20221201005931.3877608-1-jiaqiyan@google.com>
References: <20221201005931.3877608-1-jiaqiyan@google.com>
X-Mailer: git-send-email 2.38.1.584.g0f3c55d4c2-goog
Message-ID: <20221201005931.3877608-3-jiaqiyan@google.com>
Subject: [PATCH v8 2/2] mm/khugepaged: recover from poisoned file-backed memory
From: Jiaqi Yan <jiaqiyan@google.com>
To: kirill.shutemov@linux.intel.com, kirill@shutemov.name, shy828301@gmail.com,
 tongtiangen@huawei.com, tony.luck@intel.com
Cc: naoya.horiguchi@nec.com, linmiaohe@huawei.com, jiaqiyan@google.com,
 linux-mm@kvack.org, akpm@linux-foundation.org, osalvador@suse.de

Make collapse_file() roll back when copying pages fails. More concretely:

- extract the copying operations into a separate loop
- postpone the updates for nr_none until both scanning and copying
  succeed
- postpone joining small xarray entries until both scanning and copying
  succeed
- postpone the update operations to NR_XXX_THPS until both scanning and
  copying succeed
- for a non-SHMEM file, roll back filemap_nr_thps_inc if scanning
  succeeded but copying failed

Tested manually:
0. Enable khugepaged on the system under test. Mount tmpfs at
   /mnt/ramdisk.
1. Start a two-thread application. Each thread allocates a chunk of
   non-huge memory buffer from /mnt/ramdisk.
2. Pick 4 random buffer addresses (2 in each thread) and inject
   uncorrectable memory errors at their physical addresses.
3. Signal both threads to make their memory buffers collapsible, i.e.
   by calling madvise(MADV_HUGEPAGE).
4. Wait and then check the kernel log: khugepaged is able to recover
   from the poisoned pages by skipping them.
5. Signal both threads to inspect their buffer contents and verify
   that there is no data corruption.

A minimal userspace sketch of steps 1, 3, and 5 is shown below.
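For reference, the userspace side of the manual test can look roughly like the
sketch below. This is a hypothetical illustration, not part of the patch or the
original test program: the file name under /mnt/ramdisk, the 2 MiB buffer size,
the fill pattern, and the sleep interval are all assumptions, and step 2
(injecting uncorrectable errors at physical addresses) is omitted because it
depends on platform-specific tooling.

	/*
	 * Illustrative sketch only: back a buffer with a tmpfs file,
	 * populate it with small pages, ask khugepaged to collapse it,
	 * then re-read it to check for corruption (steps 1, 3, 5).
	 */
	#define _GNU_SOURCE
	#include <fcntl.h>
	#include <stdio.h>
	#include <string.h>
	#include <sys/mman.h>
	#include <unistd.h>

	#define BUF_SIZE (2UL << 20)	/* one PMD-sized (2 MiB) chunk, assumed */

	int main(void)
	{
		/* Step 1: allocate the buffer from the tmpfs mount. */
		int fd = open("/mnt/ramdisk/buf", O_RDWR | O_CREAT, 0600);
		if (fd < 0 || ftruncate(fd, BUF_SIZE) < 0) {
			perror("open/ftruncate");
			return 1;
		}
		char *buf = mmap(NULL, BUF_SIZE, PROT_READ | PROT_WRITE,
				 MAP_SHARED, fd, 0);
		if (buf == MAP_FAILED) {
			perror("mmap");
			return 1;
		}

		/* Touch every page so the range starts out as small pages. */
		memset(buf, 0x5a, BUF_SIZE);

		/*
		 * Step 3: mark the range collapsible so khugepaged will try to
		 * collapse it in the background. In practice the mapping also
		 * needs to cover a PMD-aligned range; alignment handling is
		 * omitted here for brevity.
		 */
		if (madvise(buf, BUF_SIZE, MADV_HUGEPAGE) < 0) {
			perror("madvise(MADV_HUGEPAGE)");
			return 1;
		}

		sleep(60);	/* give khugepaged time to attempt the collapse */

		/* Step 5: verify the contents survived unchanged. */
		for (size_t i = 0; i < BUF_SIZE; i++) {
			if (buf[i] != 0x5a) {
				fprintf(stderr, "corruption at offset %zu\n", i);
				return 1;
			}
		}
		return 0;
	}

With khugepaged enabled (step 0), the madvise(MADV_HUGEPAGE) call makes the
file-backed range a collapse candidate, which is the collapse_file() path
modified by this patch.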
Signed-off-by: Jiaqi Yan <jiaqiyan@google.com>
---
 mm/khugepaged.c | 77 ++++++++++++++++++++++++++++++-------------------
 1 file changed, 48 insertions(+), 29 deletions(-)

diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index d9ff99980daea..410458181fb16 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -1817,6 +1817,9 @@ static int collapse_file(struct mm_struct *mm, unsigned long addr,
 {
 	struct address_space *mapping = file->f_mapping;
 	struct page *hpage;
+	struct page *page;
+	struct page *tmp;
+	struct folio *folio;
 	pgoff_t index = 0, end = start + HPAGE_PMD_NR;
 	LIST_HEAD(pagelist);
 	XA_STATE_ORDER(xas, &mapping->i_pages, start, HPAGE_PMD_ORDER);
@@ -1861,8 +1864,7 @@ static int collapse_file(struct mm_struct *mm, unsigned long addr,
 
 	xas_set(&xas, start);
 	for (index = start; index < end; index++) {
-		struct page *page = xas_next(&xas);
-		struct folio *folio;
+		page = xas_next(&xas);
 
 		VM_BUG_ON(index != xas.xa_index);
 		if (is_shmem) {
@@ -2044,10 +2046,7 @@ static int collapse_file(struct mm_struct *mm, unsigned long addr,
 	}
 	nr = thp_nr_pages(hpage);
 
-	if (is_shmem)
-		__mod_lruvec_page_state(hpage, NR_SHMEM_THPS, nr);
-	else {
-		__mod_lruvec_page_state(hpage, NR_FILE_THPS, nr);
+	if (!is_shmem) {
 		filemap_nr_thps_inc(mapping);
 		/*
 		 * Paired with smp_mb() in do_dentry_open() to ensure
@@ -2058,21 +2057,10 @@ static int collapse_file(struct mm_struct *mm, unsigned long addr,
 		smp_mb();
 		if (inode_is_open_for_write(mapping->host)) {
 			result = SCAN_FAIL;
-			__mod_lruvec_page_state(hpage, NR_FILE_THPS, -nr);
 			filemap_nr_thps_dec(mapping);
 			goto xa_locked;
 		}
 	}
-
-	if (nr_none) {
-		__mod_lruvec_page_state(hpage, NR_FILE_PAGES, nr_none);
-		/* nr_none is always 0 for non-shmem. */
-		__mod_lruvec_page_state(hpage, NR_SHMEM, nr_none);
-	}
-
-	/* Join all the small entries into a single multi-index entry */
-	xas_set_order(&xas, start, HPAGE_PMD_ORDER);
-	xas_store(&xas, hpage);
 xa_locked:
 	xas_unlock_irq(&xas);
 xa_unlocked:
@@ -2085,20 +2073,35 @@ static int collapse_file(struct mm_struct *mm, unsigned long addr,
 	try_to_unmap_flush();
 
 	if (result == SCAN_SUCCEED) {
-		struct page *page, *tmp;
-
 		/*
 		 * Replacing old pages with new one has succeeded, now we
-		 * need to copy the content and free the old pages.
+		 * attempt to copy the contents.
 		 */
 		index = start;
-		list_for_each_entry_safe(page, tmp, &pagelist, lru) {
+		list_for_each_entry(page, &pagelist, lru) {
 			while (index < page->index) {
 				clear_highpage(hpage + (index % HPAGE_PMD_NR));
 				index++;
 			}
-			copy_highpage(hpage + (page->index % HPAGE_PMD_NR),
-				      page);
+			if (copy_mc_highpage(hpage + (page->index % HPAGE_PMD_NR),
+					     page) > 0) {
+				result = SCAN_COPY_MC;
+				break;
+			}
+			index++;
+		}
+		while (result == SCAN_SUCCEED && index < end) {
+			clear_highpage(hpage + (index % HPAGE_PMD_NR));
+			index++;
+		}
+	}
+
+	if (result == SCAN_SUCCEED) {
+		/*
+		 * Copying old pages to huge one has succeeded, now we
+		 * need to free the old pages.
+		 */
+		list_for_each_entry_safe(page, tmp, &pagelist, lru) {
 			list_del(&page->lru);
 			page->mapping = NULL;
 			page_ref_unfreeze(page, 1);
@@ -2106,12 +2109,23 @@ static int collapse_file(struct mm_struct *mm, unsigned long addr,
 			ClearPageUnevictable(page);
 			unlock_page(page);
 			put_page(page);
-			index++;
 		}
-		while (index < end) {
-			clear_highpage(hpage + (index % HPAGE_PMD_NR));
-			index++;
+
+		xas_lock_irq(&xas);
+		if (is_shmem)
+			__mod_lruvec_page_state(hpage, NR_SHMEM_THPS, nr);
+		else
+			__mod_lruvec_page_state(hpage, NR_FILE_THPS, nr);
+
+		if (nr_none) {
+			__mod_lruvec_page_state(hpage, NR_FILE_PAGES, nr_none);
+			/* nr_none is always 0 for non-shmem. */
+			__mod_lruvec_page_state(hpage, NR_SHMEM, nr_none);
 		}
+		/* Join all the small entries into a single multi-index entry. */
+		xas_set_order(&xas, start, HPAGE_PMD_ORDER);
+		xas_store(&xas, hpage);
+		xas_unlock_irq(&xas);
 
 		SetPageUptodate(hpage);
 		page_ref_add(hpage, HPAGE_PMD_NR - 1);
@@ -2127,8 +2141,6 @@ static int collapse_file(struct mm_struct *mm, unsigned long addr,
 		unlock_page(hpage);
 		hpage = NULL;
 	} else {
-		struct page *page;
-
 		/* Something went wrong: roll back page cache changes */
 		xas_lock_irq(&xas);
 		if (nr_none) {
@@ -2162,6 +2174,13 @@ static int collapse_file(struct mm_struct *mm, unsigned long addr,
 			xas_lock_irq(&xas);
 		}
 		VM_BUG_ON(nr_none);
+		/*
+		 * Undo the updates of filemap_nr_thps_inc for non-SHMEM file only.
+		 * This undo is not needed unless failure is due to SCAN_COPY_MC.
+		 */
+		if (!is_shmem && result == SCAN_COPY_MC)
+			filemap_nr_thps_dec(mapping);
+
 		xas_unlock_irq(&xas);
 
 		hpage->mapping = NULL;