From patchwork Fri Apr 28 00:41:32 2023
X-Patchwork-Submitter: Jiaqi Yan
X-Patchwork-Id: 13225914
Date: Fri, 28 Apr 2023 00:41:32 +0000
Message-ID: <20230428004139.2899856-1-jiaqiyan@google.com>
Subject: [RFC PATCH v1 0/7] PAGE_SIZE Unmapping in Memory Failure Recovery for HugeTLB Pages
From: Jiaqi Yan
To: mike.kravetz@oracle.com, peterx@redhat.com, naoya.horiguchi@nec.com
Cc: songmuchun@bytedance.com, duenwen@google.com, axelrasmussen@google.com,
 jthoughton@google.com, rientjes@google.com, linmiaohe@huawei.com,
 shy828301@gmail.com, baolin.wang@linux.alibaba.com,
 wangkefeng.wang@huawei.com, akpm@linux-foundation.org, linux-mm@kvack.org,
 linux-kernel@vger.kernel.org, Jiaqi Yan
Goal
====

Currently, once a byte in a HugeTLB hugepage becomes HWPOISON, the whole
hugepage is unmapped from the page table, because that is the finest
granularity of the mapping. High granularity mapping (HGM) [1], the
recently proposed functionality to map memory at finer granularities
(PAGE_SIZE in the extreme case), provides the opportunity to handle
memory errors more efficiently: instead of unmapping the whole hugepage,
only the poisoned raw subpage needs to be thrown away, and all the
healthy subpages can be kept available to users.

Idea
====

Today memory failure recovery for HugeTLB pages (hugepages) is different
from that for raw and THP pages. We are only interested in in-use
hugepages, which are handled in these simplified steps:

1. Increment the refcount on the compound head of the hugepage.
2. Insert the raw HWPOISON page into the compound head's raw_hwp_list
   (_hugetlb_hwpoison) if it is not already in the list.
3. Unmap the entire hugepage from HugeTLB's page table.
4. Kill the processes that are accessing the poisoned hugepage.

HGM can greatly improve this recovery mechanism. Step #3 (unmapping the
entire hugepage) can be replaced by:

3.1 Map the entire hugepage at finer granularity, so that the exact
    HWPOISON address is mapped by a PAGE_SIZE PTE, and the rest of the
    address space is optimally mapped by either smaller P*Ds or PTEs.
    In other words, the original HugeTLB PTE is split into smaller P*Ds
    and PTEs.
3.2 Only unmap the newly mapped PTE that maps the HWPOISON address.

For shared mappings, the current HGM patches are already a solid basis
for the splitting functionality in step #3.1, and this RFC drafts a
complete solution for shared mappings. The splitting-based idea can be
applied to private mappings as well, but additional subtle complexity
needs to be dealt with; we defer the private mapping case to future
work.

Splitting HugeTLB PTEs (Step #3.1)
==================================

The general process of splitting a present leaf HugeTLB PTE is:

1. Get and clear the original HugeTLB PTE old_pte.
2. Initialize curr to the start of the address range covered by old_pte.
3. Find the optimal level to map curr at.
4. Perform an HGM walk on curr with the optimal level found in step 3,
   potentially allocating a new PTE at that level.
5. Populate the newly allocated PTE with bits from old_pte, including
   dirty, write, and UFFD_WP.
6. Advance curr by the size of the newly created PTE and repeat from
   step 3 until the entire VMA is covered.
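To make the loop above concrete, below is a rough, non-authoritative
sketch in kernel C. The struct hgm_pte descriptor and the
hgm_find_optimal_level(), hgm_walk_alloc(), and make_small_pte_from()
helpers are hypothetical stand-ins for the HGM walk and PTE construction
described in steps 3-5 and are not the APIs used by the actual patches;
only huge_ptep_get_and_clear() and set_huge_pte_at() are existing kernel
interfaces.

/*
 * Illustrative sketch of steps 1-6 only; this is not the code in the
 * series. struct hgm_pte, hgm_find_optimal_level(), hgm_walk_alloc()
 * and make_small_pte_from() are hypothetical helpers standing in for
 * the HGM walk and PTE construction described above.
 */

/* Hypothetical descriptor for a finer-granularity HugeTLB PTE. */
struct hgm_pte {
	pte_t *ptep;
	unsigned long size;
};

static int split_huge_leaf_pte_sketch(struct mm_struct *mm,
				      struct vm_area_struct *vma,
				      pte_t *ptep, unsigned long start,
				      unsigned long end)
{
	/* Step 1: get and clear the original leaf HugeTLB PTE. */
	pte_t old_pte = huge_ptep_get_and_clear(mm, start, ptep);
	unsigned long curr;

	/* Step 2: curr starts at the beginning of old_pte's range. */
	for (curr = start; curr < end; ) {
		struct hgm_pte new_pte;
		unsigned int level;
		int ret;

		/* Step 3: largest level that does not overshoot the range. */
		level = hgm_find_optimal_level(vma, curr, end);

		/* Step 4: HGM walk, allocating a PTE at that level. */
		ret = hgm_walk_alloc(mm, vma, &new_pte, curr, level);
		if (ret)
			return ret;	/* caller restores old_pte on failure */

		/* Step 5: carry dirty, write and UFFD_WP over from old_pte. */
		set_huge_pte_at(mm, curr, new_pte.ptep,
				make_small_pte_from(old_pte, curr, new_pte.size));

		/* Step 6: advance by the new PTE's size and repeat. */
		curr += new_pte.size;
	}
	return 0;
}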
Splitting is only meaningful for present leaf PTEs; mostly-none PTEs are
not worth splitting. We handle none or userfaultfd write-protect
(UFFD_WP) marker HugeTLB PTEs at page fault time, and migration and
HWPOISON PTEs are better left untouched.

Memory Failure Recovery and Unmapping (Step #3.2)
=================================================

A few changes are made in memory_failure and rmap so that only raw
HWPOISON pages are unmapped:

1. As long as HGM is enabled in the kernel config, memory_failure
   attempts to enable HGM on the VMAs containing the poisoned hugepage.
2. memory_failure attempts to split the HugeTLB PTE so that the poisoned
   address is mapped by a PAGE_SIZE PTE, for all the VMAs containing the
   poisoned hugepage.
3. get_huge_page_for_hwpoison only returns -EHWPOISON if the raw page is
   already in the compound head's raw_hwp_list. This makes unmapping
   work correctly when multiple raw pages in the same hugepage become
   HWPOISON.
4. rmap utilizes the compound head's raw_hwp_list to 1) avoid unmapping
   raw pages not in the list, and 2) keep track of whether the raw pages
   in the list have already been unmapped.
5. The page refcount check in me_huge_page is skipped.

Between mmap() and Page Fault
=============================

A memory error can occur between the time userspace maps a hugepage and
the time userspace faults in the mapped hugepage. The general idea is to
not create any raw-page-size page table entry for HWPOISON memory, while
keeping memory in healthy raw pages available to userspace (via normal
fault handling). At the time of hugetlb_no_page:

- If the hugepage doesn't contain any HWPOISON page, the normal page
  fault handler continues.
- If the memory address being faulted is within a HWPOISON raw page,
  hugetlb_no_page returns VM_FAULT_HWPOISON_LARGE (so that the page
  fault handler sends a BUS_MCEERR_AR SIGBUS to the faulting process).
- If the memory address being faulted is within a healthy raw page,
  hugetlb_no_page utilizes HGM to create a new HugeTLB PTE whose
  hugetlb_pte_size is as large as possible without mapping any HWPOISON
  address. Then the normal page fault handler continues.

Failure Handling
================

- If the kernel still fails to allocate a new raw_hwp_page after a
  retry, memory_failure returns MF_IGNORED with MF_MSG_UNKNOWN.
- For each VMA that maps the HWPOISON hugepage:
  - If the VMA is not eligible for HGM, the old behavior is taken:
    unmap the entire hugepage from that VMA.
  - If memory_failure fails to enable HGM on the VMA, or fails to split
    the HugeTLB PTE in any VMA that maps the HWPOISON page, the recovery
    returns MF_IGNORED with MF_MSG_UNMAP_FAILED.
  - If splitting the HugeTLB PTE fails for a particular VMA, the
    original PTE is restored in that VMA's page table.

Code Changes
============

The code patches in this RFC are based on the HGM patchset v2 [1] and
are composed of two parts. The first part implements the idea laid out
in this cover letter; the second part tests two major scenarios:
HWPOISON on already-faulted pages, and HWPOISON between mmap() and page
fault.

Future Changes
==============

There is a pending improvement to hugetlbfs_read_iter. If a hugepage
found in the page cache contains HWPOISON subpages, today the kernel
returns -EIO immediately. With the new splitting-then-unmap behavior,
the kernel can instead return to userspace every byte up to the first
raw HWPOISON byte; only if the read starts within a raw HWPOISON page
does the kernel have to return -EIO. This improvement and its selftest
will be done in a future patch series.
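Purely as an illustration of the intended userspace-visible semantics of
that future change (this snippet is not part of the series and assumes
the improved behavior), a read that begins in a healthy raw page would
return a short byte count ending at the first HWPOISON byte, while a
read that begins inside a poisoned raw page would still fail with EIO:

/*
 * Hypothetical illustration of the intended read() semantics after the
 * future hugetlbfs_read_iter improvement; today such a read fails with
 * EIO as soon as the cached hugepage contains any HWPOISON subpage.
 */
#include <errno.h>
#include <stdio.h>
#include <unistd.h>

static void probe_read(int fd, char *buf, size_t len, off_t offset)
{
	ssize_t n = pread(fd, buf, len, offset);

	if (n >= 0)
		printf("read %zd healthy bytes before hitting a poisoned page\n", n);
	else if (errno == EIO)
		printf("read started inside a poisoned raw page: EIO\n");
	else
		perror("pread");
}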
[1] https://lore.kernel.org/all/20230218002819.1486479-1-jthoughton@google.com/

Jiaqi Yan (7):
  hugetlb: add HugeTLB splitting functionality
  hugetlb: create PTE level mapping when possible
  mm: publish raw_hwp_page in mm.h
  mm/memory_failure: unmap raw HWPoison PTEs when possible
  hugetlb: only VM_FAULT_HWPOISON_LARGE raw page
  selftest/mm: test PAGESIZE unmapping HWPOISON pages
  selftest/mm: test PAGESIZE unmapping UFFD WP marker HWPOISON pages

 include/linux/hugetlb.h                  |  14 +
 include/linux/mm.h                       |  36 ++
 mm/hugetlb.c                             | 405 ++++++++++++++++++++++-
 mm/memory-failure.c                      | 206 ++++++++++--
 mm/rmap.c                                |  38 ++-
 tools/testing/selftests/mm/hugetlb-hgm.c | 364 ++++++++++++++++++--
 6 files changed, 1004 insertions(+), 59 deletions(-)