From patchwork Fri Apr 29 00:09:46 2022
X-Patchwork-Submitter: Jiaqi Yan <jiaqiyan@google.com>
X-Patchwork-Id: 12831341
Date: Thu, 28 Apr 2022 17:09:46 -0700
In-Reply-To: <20220429000947.2172219-1-jiaqiyan@google.com>
Message-Id: <20220429000947.2172219-2-jiaqiyan@google.com>
Mime-Version: 1.0
References: <20220429000947.2172219-1-jiaqiyan@google.com>
X-Mailer: git-send-email 2.36.0.464.gb9c8b46e94-goog
Subject: [RFC v2 1/2] mm: khugepaged: recover from poisoned anonymous memory
From: Jiaqi Yan <jiaqiyan@google.com>
To: shy828301@gmail.com, tongtiangen@huawei.com
Cc: linux-mm@kvack.org, tony.luck@intel.com, naoya.horiguchi@nec.com,
    kirill.shutemov@linux.intel.com, linmiaohe@huawei.com, juew@google.com,
    jiaqiyan@google.com

Make __collapse_huge_page_copy return whether copying the anonymous pages
succeeded, and make collapse_huge_page handle the return status.

Break the existing PTE scan loop into two for-loops. The first loop copies
source pages into the target huge page, and can fail gracefully when it
runs into a memory error in a source page. When copying fails, the second
loop rolls back the page table and page states:
1) re-establish the PTEs-to-PMD connection;
2) release the source pages back to their LRU list.

Signed-off-by: Jiaqi Yan <jiaqiyan@google.com>
---
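For reviewers: the standalone userspace sketch below illustrates the
two-pass structure this patch introduces. It is not kernel code, and the
names are made-up stand-ins: try_copy_page() plays the role of
copy_highpage_mc(), NR_PAGES the role of HPAGE_PMD_NR, PAGE_SZ the page
size, and the poisoned[] flags a latent memory error.

#include <stdbool.h>
#include <stdio.h>
#include <string.h>

#define NR_PAGES 8		/* stand-in for HPAGE_PMD_NR */
#define PAGE_SZ  16		/* toy page size */

/* Stand-in for copy_highpage_mc(): returns true if the copy failed. */
static bool try_copy_page(char *dst, const char *src, bool poisoned)
{
	if (poisoned)
		return true;	/* simulate copy_mc_to_kernel() aborting on #MC */
	memcpy(dst, src, PAGE_SZ);
	return false;
}

int main(void)
{
	static char src[NR_PAGES][PAGE_SZ], dst[NR_PAGES][PAGE_SZ];
	bool poisoned[NR_PAGES] = { [5] = true };	/* page 5 carries an error */
	bool copy_succeeded = true;
	int i;

	memset(src, 'a', sizeof(src));

	/* Loop 1: copy only; fail gracefully at the first poisoned page. */
	for (i = 0; i < NR_PAGES; i++) {
		if (try_copy_page(dst[i], src[i], poisoned[i])) {
			copy_succeeded = false;
			break;
		}
	}

	if (!copy_succeeded) {
		/* Roll back: the patch re-installs the original PMD here. */
		printf("copy failed at page %d, rolling back\n", i);
	}

	/* Loop 2: clean up (success) or leave the sources intact (failure). */
	for (i = 0; i < NR_PAGES; i++) {
		if (copy_succeeded)
			memset(src[i], 0, PAGE_SZ);	/* "clear PTE, free page" */
		/* else: source pages stay usable via the restored mapping */
	}

	return copy_succeeded ? 0 : 1;
}

The property the split buys is that the first loop performs no destructive
operation on the source pages, so a machine check there leaves the original
mapping fully restorable; PTEs are cleared and source pages freed only in
the second loop, once every copy has succeeded.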
 include/linux/highmem.h |  19 ++++++
 mm/khugepaged.c         | 138 ++++++++++++++++++++++++++++++----------
 2 files changed, 124 insertions(+), 33 deletions(-)

diff --git a/include/linux/highmem.h b/include/linux/highmem.h
index 39bb9b47fa9cd..0ccb1e92c4b06 100644
--- a/include/linux/highmem.h
+++ b/include/linux/highmem.h
@@ -298,6 +298,25 @@ static inline void copy_highpage(struct page *to, struct page *from)
 
 #endif
 
+/*
+ * Machine-check-exception-handled version of copy_highpage.
+ * Returns true if copying the page content failed; otherwise false.
+ * Note that handling #MC requires arch opt-in.
+ */
+static inline bool copy_highpage_mc(struct page *to, struct page *from)
+{
+	char *vfrom, *vto;
+	unsigned long ret;
+
+	vfrom = kmap_local_page(from);
+	vto = kmap_local_page(to);
+	ret = copy_mc_to_kernel(vto, vfrom, PAGE_SIZE);
+	kunmap_local(vto);
+	kunmap_local(vfrom);
+
+	return ret > 0;
+}
+
 static inline void memcpy_page(struct page *dst_page, size_t dst_off,
 			       struct page *src_page, size_t src_off,
 			       size_t len)
diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index 131492fd1148b..8e69a0640e551 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -52,6 +52,7 @@ enum scan_result {
 	SCAN_CGROUP_CHARGE_FAIL,
 	SCAN_TRUNCATED,
 	SCAN_PAGE_HAS_PRIVATE,
+	SCAN_COPY_MC,
 };
 
 #define CREATE_TRACE_POINTS
@@ -739,44 +740,98 @@ static int __collapse_huge_page_isolate(struct vm_area_struct *vma,
 	return 0;
 }
 
-static void __collapse_huge_page_copy(pte_t *pte, struct page *page,
-				      struct vm_area_struct *vma,
-				      unsigned long address,
-				      spinlock_t *ptl,
-				      struct list_head *compound_pagelist)
+/*
+ * __collapse_huge_page_copy - attempts to copy memory contents from normal
+ * pages to a hugepage. Cleans up the normal pages if copying succeeds;
+ * otherwise restores the original PMD page table. Returns true if copying
+ * succeeds, otherwise returns false.
+ *
+ * @pte: the first of the PTEs to copy from
+ * @page: the new hugepage to copy contents to
+ * @pmd: pointer to the new hugepage's PMD
+ * @rollback: the original normal pages' PMD
+ * @address: starting address to copy
+ * @pte_ptl: lock on the normal pages' PTEs
+ * @compound_pagelist: list that stores compound pages
+ */
+static bool __collapse_huge_page_copy(pte_t *pte,
+				      struct page *page,
+				      pmd_t *pmd,
+				      pmd_t rollback,
+				      struct vm_area_struct *vma,
+				      unsigned long address,
+				      spinlock_t *pte_ptl,
+				      struct list_head *compound_pagelist)
 {
 	struct page *src_page, *tmp;
 	pte_t *_pte;
+	pte_t pteval;
+	unsigned long _address;
+	spinlock_t *pmd_ptl;
+	bool copy_succeeded = true;
 
-	for (_pte = pte; _pte < pte + HPAGE_PMD_NR;
-				_pte++, page++, address += PAGE_SIZE) {
-		pte_t pteval = *_pte;
-
-		if (pte_none(pteval) || is_zero_pfn(pte_pfn(pteval))) {
-			clear_user_highpage(page, address);
-			add_mm_counter(vma->vm_mm, MM_ANONPAGES, 1);
-			if (is_zero_pfn(pte_pfn(pteval))) {
-				/*
-				 * ptl mostly unnecessary.
-				 */
-				spin_lock(ptl);
-				ptep_clear(vma->vm_mm, address, _pte);
-				spin_unlock(ptl);
+	/*
+	 * Copying the pages' contents is subject to memory poison at any iteration.
+	 */
+	for (_pte = pte, _address = address;
+	     _pte < pte + HPAGE_PMD_NR;
+	     _pte++, page++, _address += PAGE_SIZE) {
+		pteval = *_pte;
+
+		if (pte_none(pteval) || is_zero_pfn(pte_pfn(pteval)))
+			clear_user_highpage(page, _address);
+		else {
+			src_page = pte_page(pteval);
+			if (copy_highpage_mc(page, src_page)) {
+				copy_succeeded = false;
+				break;
+			}
+		}
+	}
+
+	if (!copy_succeeded) {
+		/*
+		 * Copying failed: re-establish the regular PMD that points
+		 * to the regular page table. Since the PTEs are still
+		 * isolated and locked, acquiring anon_vma_lock is unnecessary.
+		 */
+		pmd_ptl = pmd_lock(vma->vm_mm, pmd);
+		pmd_populate(vma->vm_mm, pmd, pmd_pgtable(rollback));
+		spin_unlock(pmd_ptl);
+	}
+
+	for (_pte = pte, _address = address; _pte < pte + HPAGE_PMD_NR;
+	     _pte++, _address += PAGE_SIZE) {
+		pteval = *_pte;
+		if (pte_none(pteval) || is_zero_pfn(pte_pfn(pteval))) {
+			if (copy_succeeded) {
+				add_mm_counter(vma->vm_mm, MM_ANONPAGES, 1);
+				if (is_zero_pfn(pte_pfn(pteval))) {
+					/*
+					 * ptl mostly unnecessary.
+					 */
+					spin_lock(pte_ptl);
+					pte_clear(vma->vm_mm, _address, _pte);
+					spin_unlock(pte_ptl);
+				}
 			}
 		} else {
 			src_page = pte_page(pteval);
-			copy_user_highpage(page, src_page, address, vma);
 			if (!PageCompound(src_page))
 				release_pte_page(src_page);
-			/*
-			 * ptl mostly unnecessary, but preempt has to
-			 * be disabled to update the per-cpu stats
-			 * inside page_remove_rmap().
-			 */
-			spin_lock(ptl);
-			ptep_clear(vma->vm_mm, address, _pte);
-			page_remove_rmap(src_page, false);
-			spin_unlock(ptl);
-			free_page_and_swap_cache(src_page);
+
+			if (copy_succeeded) {
+				/*
+				 * ptl mostly unnecessary, but preempt has to
+				 * be disabled to update the per-cpu stats
+				 * inside page_remove_rmap().
+ */ + spin_lock(pte_ptl); + pte_clear(vma->vm_mm, _address, _pte); + page_remove_rmap(src_page, false); + spin_unlock(pte_ptl); + free_page_and_swap_cache(src_page); + } } } @@ -784,6 +839,8 @@ static void __collapse_huge_page_copy(pte_t *pte, struct page *page, list_del(&src_page->lru); release_pte_page(src_page); } + + return copy_succeeded; } static void khugepaged_alloc_sleep(void) @@ -1066,6 +1123,7 @@ static void collapse_huge_page(struct mm_struct *mm, struct vm_area_struct *vma; struct mmu_notifier_range range; gfp_t gfp; + bool copied = false; VM_BUG_ON(address & ~HPAGE_PMD_MASK); @@ -1177,9 +1235,13 @@ static void collapse_huge_page(struct mm_struct *mm, */ anon_vma_unlock_write(vma->anon_vma); - __collapse_huge_page_copy(pte, new_page, vma, address, pte_ptl, - &compound_pagelist); + copied = __collapse_huge_page_copy(pte, new_page, pmd, _pmd, + vma, address, pte_ptl, &compound_pagelist); pte_unmap(pte); + if (!copied) { + result = SCAN_COPY_MC; + goto out_up_write; + } /* * spin_lock() below is not the equivalent of smp_wmb(), but * the smp_wmb() inside __SetPageUptodate() can be reused to @@ -1364,9 +1426,14 @@ static int khugepaged_scan_pmd(struct mm_struct *mm, pte_unmap_unlock(pte, ptl); if (ret) { node = khugepaged_find_target_node(); - /* collapse_huge_page will return with the mmap_lock released */ - collapse_huge_page(mm, address, hpage, node, - referenced, unmapped); + /* + * collapse_huge_page will return with the mmap_r+w_lock released. + * It is uncertain if *hpage is NULL or not when collapse_huge_page + * returns, so keep ret=1 to jump to breakouterloop_mmap_lock + * in khugepaged_scan_mm_slot, then *hpage will be freed + * if collapse failed. + */ + collapse_huge_page(mm, address, hpage, node, referenced, unmapped); } out: trace_mm_khugepaged_scan_pmd(mm, page, writable, referenced, @@ -2168,6 +2235,11 @@ static unsigned int khugepaged_scan_mm_slot(unsigned int pages, khugepaged_scan_file(mm, file, pgoff, hpage); fput(file); } else { + /* + * mmap_read_lock is + * 1) released if both scan and collapse succeeded; + * 2) still held if either scan or collapse failed. + */ ret = khugepaged_scan_pmd(mm, vma, khugepaged_scan.address, hpage);