From patchwork Fri Jan 10 18:40:36 2025
X-Patchwork-Submitter: Brendan Jackman
X-Patchwork-Id: 13935239
Date: Fri, 10 Jan 2025 18:40:36 +0000
In-Reply-To: <20250110-asi-rfc-v2-v2-0-8419288bc805@google.com>
References: <20250110-asi-rfc-v2-v2-0-8419288bc805@google.com>
X-Mailer: b4 0.15-dev
Message-ID: <20250110-asi-rfc-v2-v2-10-8419288bc805@google.com>
Subject: [PATCH RFC v2 10/29] mm: asi: asi_exit() on PF, skip handling if address is accessible
From: Brendan Jackman
To: Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen,
	"H. Peter Anvin", Andy Lutomirski, Peter Zijlstra, Richard Henderson,
	Matt Turner, Vineet Gupta, Russell King, Catalin Marinas, Will Deacon,
	Guo Ren, Brian Cain, Huacai Chen, WANG Xuerui, Geert Uytterhoeven,
	Michal Simek, Thomas Bogendoerfer, Dinh Nguyen, Jonas Bonn,
	Stefan Kristiansson, Stafford Horne, "James E.J. Bottomley",
	Helge Deller, Michael Ellerman, Nicholas Piggin, Christophe Leroy,
	Naveen N Rao, Madhavan Srinivasan, Paul Walmsley, Palmer Dabbelt,
	Albert Ou, Heiko Carstens, Vasily Gorbik, Alexander Gordeev,
	Christian Borntraeger, Sven Schnelle, Yoshinori Sato, Rich Felker,
	John Paul Adrian Glaubitz, "David S. Miller",
Miller" , Andreas Larsson , Richard Weinberger , Anton Ivanov , Johannes Berg , Chris Zankel , Max Filippov , Arnd Bergmann , Andrew Morton , Juri Lelli , Vincent Guittot , Dietmar Eggemann , Steven Rostedt , Ben Segall , Mel Gorman , Valentin Schneider , Uladzislau Rezki , Christoph Hellwig , Masami Hiramatsu , Mathieu Desnoyers , Mike Rapoport , Arnaldo Carvalho de Melo , Namhyung Kim , Mark Rutland , Alexander Shishkin , Jiri Olsa , Ian Rogers , Adrian Hunter , Dennis Zhou , Tejun Heo , Christoph Lameter , Sean Christopherson , Paolo Bonzini , Ard Biesheuvel , Josh Poimboeuf , Pawan Gupta Cc: x86@kernel.org, linux-kernel@vger.kernel.org, linux-alpha@vger.kernel.org, linux-snps-arc@lists.infradead.org, linux-arm-kernel@lists.infradead.org, linux-csky@vger.kernel.org, linux-hexagon@vger.kernel.org, loongarch@lists.linux.dev, linux-m68k@lists.linux-m68k.org, linux-mips@vger.kernel.org, linux-openrisc@vger.kernel.org, linux-parisc@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, linux-riscv@lists.infradead.org, linux-s390@vger.kernel.org, linux-sh@vger.kernel.org, sparclinux@vger.kernel.org, linux-um@lists.infradead.org, linux-arch@vger.kernel.org, linux-mm@kvack.org, linux-trace-kernel@vger.kernel.org, linux-perf-users@vger.kernel.org, kvm@vger.kernel.org, linux-efi@vger.kernel.org, Brendan Jackman , Ofir Weisse X-Rspamd-Server: rspam05 X-Stat-Signature: nw9aooxb7eq56zznu3ybkkthwyy3r4dh X-Rspamd-Queue-Id: ED78140008 X-Rspam-User: X-HE-Tag: 1736534469-631920 X-HE-Meta: U2FsdGVkX19D57GvSfii2xcdZcNxonmP+byefmVMAzX72SNoEx26P0XGbi8iHCaW7uCA5Fd/62BsiNkZ9PsVNpCP+BwsAyTFY2ody+xtaNwVK29aIW+wDzaaVfgAuRmcBGkxWHgiJIDB+SnfQ1sfLiSVVpkrxw6T2QNBnoZINetdOG0ZNpblSmhwhBqyjgs6SzkUqYoj3u+cYSMk4ORifdKQ1NvRKWSSbuJqo3VGc7MBDQKF/YRGTAtpsirJ5aRLImkBDPPLQU/65r3TO5r3juAYJl/VQjZkv1psLifI76eW71qyAmYrHTglCFtB3yPdXJ1HR981bf5ui0WzF7K4KNgpewS6Lm2gKHBSd/R7oGs0K9n2BbWwoQ1wM3/E4P1FUUANZ6quavxPGKplPdgYWl1/Ps87+q+DVVuSBcHHdgePVb8i1576CQ7NW5+AkfO74EbR27j8NgCMF4y7sh8TTNqOBbK6e3TNUglWj0SKGz6JboIO6HYs9ilio5Z4Gn50h5SKFluRdKtaO5bgHQqtm68EiXINggSdSudti3p+t3XPsMB8eTNYeC8sd3SDXc4c9/qPaj6hqnNs8h61SRYrIr+3mm5YUw4WbmS0jmC/oDWdtwLAhCEYddPdjF0HhWHR2RLUH/EL8STtyZsKu2rWnCVvzlPoZjcCQFzoNz/xgkMHJ13mRoCxkcIxEJJ/OftwUyjdlDv53Z892NPUR/CZkE5QS4sjj8+T/CpVLeos6//eHSBRY+i4QX7Rlh1sdKjijDzOS4B2cGZopgFr9DBIaMvlHIsTF8AkRtjWw2ZDSkV83ONkL8EOo85X9OK7VYzZiLlMACfDtYjxPz7aIpggziZiypdJb9QgUVyPtNpw0vVtSYbsmIEnaPOdmOOq4Sc5zsWPhFmgY5VDfXpZ1A9K7oyH+x/01mlJ6/WP2UU5KgrcIE9IP7SLWsaOjXYUYYMRe/+NwtvMK/4u0ZQXVp6 hCFPozGY 1vaBT6lXT7Yjb47lk4CSPOwIKvs+4239OfUeedCfHD+Z2bBJabrugcSwuzGL8v9yNUWDEdeE3ldZzmEkIcpGa8aUJanuk8EC5QfeT5N00NlerjnDo7bczNkEOENBAr6d60d6j3u+dpjud6hpG+4vMvfUUWrZyPOuGEwyByaDLrqqCgTZdjXVhb0HM9uvXGIdrPf0fNsdospPu727n8jKcLDneoHgOfjkAENFNxaAN7pxdTPCjMLKMktwX6/7otZ7Ufy3LII33OOxRS68TgZ0BLMSY3YL8rDDxdFvkGWccbYD++iBvjPHfaycjytpT2tK7TlPpQUiOVuBHpuZzyq/ETSgS07N2i4GSjrsRE8SrcfvvLnzB/Az3mXS5mqiZspE4up9TZWDPBuwRP7ljy+5NOW13GaG0DQzcBKItpLMjtAH16mv5g74IDSLi842IbQbrpEUKtTu3caSeg0r9TpFXLCrqhQl4Bs1sh4vcvsH6zrKx1BVqWFfJ6Stk+DHBXziY1u9h5n+lRmYLl8JpkD1BvbM4Qkcs0lndMGclzpsWbaaLO8gmRb9R0Z8kBA6tpQIKUadD9ekoZPz1Pw+u2mDm/VPw0jBIZLntxoEnXXaYoKpYAHs= X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: List-Subscribe: List-Unsubscribe: From: Ofir Weisse On a page-fault - do asi_exit(). Then check if now after the exit the address is accessible. We do this by refactoring spurious_kernel_fault() into two parts: 1. 
1. Verify that the error code value is something that could arise from a
   lazy TLB update.

2. Walk the page table and verify permissions; in the code this becomes the
   new kernel_access_ok() helper.

We also define PTE_PRESENT() and PMD_PRESENT(), which are suitable for
checking userspace pages. For the purpose of spurious faults, pte_present()
and pmd_present() are only good for kernelspace pages, because those macros
can return true even when the present bit is 0 (which only happens for
userspace pages). (A small illustrative sketch of this distinction follows
the diff below.)

checkpatch.pl VSPRINTF_SPECIFIER_PX: the %px is in a WARN that only fires in
a debug build of the kernel when we hit a disastrous bug; it seems OK to leak
addresses there.

RFC note: A separate refactoring/prep commit should be split out of this
patch.

Checkpatch-args: --ignore=VSPRINTF_SPECIFIER_PX
Signed-off-by: Ofir Weisse
Signed-off-by: Brendan Jackman
---
 arch/x86/mm/fault.c | 118 +++++++++++++++++++++++++++++++++++++++++++++-------
 1 file changed, 103 insertions(+), 15 deletions(-)

diff --git a/arch/x86/mm/fault.c b/arch/x86/mm/fault.c
index e6c469b323ccb748de22adc7d9f0a16dd195edad..ee8f5417174e2956391d538f41e2475553ca4972 100644
--- a/arch/x86/mm/fault.c
+++ b/arch/x86/mm/fault.c
@@ -948,7 +948,7 @@ do_sigbus(struct pt_regs *regs, unsigned long error_code, unsigned long address,
 	force_sig_fault(SIGBUS, BUS_ADRERR, (void __user *)address);
 }
 
-static int spurious_kernel_fault_check(unsigned long error_code, pte_t *pte)
+static __always_inline int kernel_protection_ok(unsigned long error_code, pte_t *pte)
 {
 	if ((error_code & X86_PF_WRITE) && !pte_write(*pte))
 		return 0;
@@ -959,6 +959,8 @@ static int spurious_kernel_fault_check(unsigned long error_code, pte_t *pte)
 	return 1;
 }
 
+static int kernel_access_ok(unsigned long error_code, unsigned long address, pgd_t *pgd);
+
 /*
  * Handle a spurious fault caused by a stale TLB entry.
  *
@@ -984,11 +986,6 @@ spurious_kernel_fault(unsigned long error_code, unsigned long address)
 {
 	pgd_t *pgd;
-	p4d_t *p4d;
-	pud_t *pud;
-	pmd_t *pmd;
-	pte_t *pte;
-	int ret;
 
 	/*
 	 * Only writes to RO or instruction fetches from NX may cause
@@ -1004,6 +1001,50 @@ spurious_kernel_fault(unsigned long error_code, unsigned long address)
 		return 0;
 
 	pgd = init_mm.pgd + pgd_index(address);
+	return kernel_access_ok(error_code, address, pgd);
+}
+NOKPROBE_SYMBOL(spurious_kernel_fault);
+
+/*
+ * For kernel addresses, pte_present and pmd_present are sufficient for
+ * is_address_accessible. For user addresses these functions will return true
+ * even though the pte is not actually accessible by hardware (i.e _PAGE_PRESENT
+ * is not set). This happens in cases where the pages are physically present in
+ * memory, but they are not made accessible to hardware as they need software
+ * handling first:
+ *
+ * - ptes/pmds with _PAGE_PROTNONE need autonuma balancing (see pte_protnone(),
+ *   change_prot_numa(), and do_numa_page()).
+ *
+ * - pmds with _PAGE_PSE & !_PAGE_PRESENT are undergoing splitting (see
+ *   split_huge_page()).
+ *
+ * Here, we care about whether the hardware can actually access the page right
+ * now.
+ *
+ * These issues aren't currently present for PUD but we also have a custom
+ * PUD_PRESENT for a layer of future-proofing.
+ */
+#define PUD_PRESENT(pud) (pud_flags(pud) & _PAGE_PRESENT)
+#define PMD_PRESENT(pmd) (pmd_flags(pmd) & _PAGE_PRESENT)
+#define PTE_PRESENT(pte) (pte_flags(pte) & _PAGE_PRESENT)
+
+/*
+ * Check if an access by the kernel would cause a page fault.
+ * The access is described by a page fault error code (whether it was a
+ * write/instruction fetch) and address. This doesn't check for types of
+ * faults that are not expected to affect the kernel, e.g. PKU. The address
+ * can be user or kernel space, if user then we assume the access would
+ * happen via the uaccess API.
+ */
+static noinstr int
+kernel_access_ok(unsigned long error_code, unsigned long address, pgd_t *pgd)
+{
+	p4d_t *p4d;
+	pud_t *pud;
+	pmd_t *pmd;
+	pte_t *pte;
+	int ret;
+
 	if (!pgd_present(*pgd))
 		return 0;
 
@@ -1012,27 +1053,27 @@ spurious_kernel_fault(unsigned long error_code, unsigned long address)
 		return 0;
 
 	if (p4d_leaf(*p4d))
-		return spurious_kernel_fault_check(error_code, (pte_t *) p4d);
+		return kernel_protection_ok(error_code, (pte_t *) p4d);
 
 	pud = pud_offset(p4d, address);
-	if (!pud_present(*pud))
+	if (!PUD_PRESENT(*pud))
 		return 0;
 
 	if (pud_leaf(*pud))
-		return spurious_kernel_fault_check(error_code, (pte_t *) pud);
+		return kernel_protection_ok(error_code, (pte_t *) pud);
 
 	pmd = pmd_offset(pud, address);
-	if (!pmd_present(*pmd))
+	if (!PMD_PRESENT(*pmd))
 		return 0;
 
 	if (pmd_leaf(*pmd))
-		return spurious_kernel_fault_check(error_code, (pte_t *) pmd);
+		return kernel_protection_ok(error_code, (pte_t *) pmd);
 
 	pte = pte_offset_kernel(pmd, address);
-	if (!pte_present(*pte))
+	if (!PTE_PRESENT(*pte))
 		return 0;
 
-	ret = spurious_kernel_fault_check(error_code, pte);
+	ret = kernel_protection_ok(error_code, pte);
 	if (!ret)
 		return 0;
 
@@ -1040,12 +1081,11 @@ spurious_kernel_fault(unsigned long error_code, unsigned long address)
 	 * Make sure we have permissions in PMD.
 	 * If not, then there's a bug in the page tables:
 	 */
-	ret = spurious_kernel_fault_check(error_code, (pte_t *) pmd);
+	ret = kernel_protection_ok(error_code, (pte_t *) pmd);
 	WARN_ONCE(!ret, "PMD has incorrect permission bits\n");
 
 	return ret;
 }
-NOKPROBE_SYMBOL(spurious_kernel_fault);
 
 int show_unhandled_signals = 1;
 
@@ -1490,6 +1530,29 @@ handle_page_fault(struct pt_regs *regs, unsigned long error_code,
 	}
 }
 
+static __always_inline void warn_if_bad_asi_pf(
+	unsigned long error_code, unsigned long address)
+{
+#ifdef CONFIG_MITIGATION_ADDRESS_SPACE_ISOLATION
+	struct asi *target;
+
+	/*
+	 * It's a bug to access sensitive data from the "critical section", i.e.
+	 * on the path between asi_enter and asi_relax, where untrusted code
+	 * gets run. #PF in this state sees asi_intr_nest_depth() as 1 because
+	 * #PF increments it. We can't think of a better way to determine if
+	 * this has happened than to check the ASI pagetables, hence we can't
+	 * really have this check in non-debug builds unfortunately.
+	 */
+	VM_WARN_ONCE(
+		(target = asi_get_target(current)) != NULL &&
+		asi_intr_nest_depth() == 1 &&
+		!kernel_access_ok(error_code, address, asi_pgd(target)),
+		"ASI-sensitive data access from critical section, addr=%px error_code=%lx class=%s",
+		(void *) address, error_code, asi_class_name(target->class_id));
+#endif
+}
+
 DEFINE_IDTENTRY_RAW_ERRORCODE(exc_page_fault)
 {
 	irqentry_state_t state;
@@ -1497,6 +1560,31 @@ DEFINE_IDTENTRY_RAW_ERRORCODE(exc_page_fault)
 	address = cpu_feature_enabled(X86_FEATURE_FRED) ? fred_event_data(regs) : read_cr2();
 
+	if (static_asi_enabled() && !user_mode(regs)) {
+		pgd_t *pgd;
+
+		/* Can be a NOP even for ASI faults, because of NMIs */
+		asi_exit();
+
+		/*
+		 * handle_page_fault() might oops if we run it for a kernel
+		 * address in kernel mode. This might be the case if we got here
+		 * due to an ASI fault.
+		 * We avoid this case by checking whether the address is now,
+		 * after asi_exit(), accessible by hardware. If it is - there's
+		 * nothing to do. Note that this is a bit of a shotgun; we can
+		 * also bail early from user-address faults here that weren't
+		 * actually caused by ASI. So we might wanna move this logic
+		 * later in the handler. In particular, we might be losing some
+		 * stats here. However for now this keeps ASI page faults nice
+		 * and fast.
+		 */
+		pgd = (pgd_t *)__va(read_cr3_pa()) + pgd_index(address);
+		if (!user_mode(regs) && kernel_access_ok(error_code, address, pgd)) {
+			warn_if_bad_asi_pf(error_code, address);
+			return;
+		}
+	}
+
 	prefetchw(&current->mm->mmap_lock);
 
 	/*
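
For illustration, here is a rough standalone sketch (not part of the patch) of
the distinction the PTE_PRESENT() comment above is drawing. The pte_present()
body shown is paraphrased from arch/x86/include/asm/pgtable.h, and the helper
name asi_pf_pte_hw_present() is made up purely for this example:

/*
 * Illustrative sketch only: why pte_present() is not enough when asking
 * whether the *hardware* can access a mapping.
 *
 * x86's pte_present() deliberately also treats PROT_NONE / NUMA-hinting
 * entries as present, roughly:
 *
 *	pte_flags(pte) & (_PAGE_PRESENT | _PAGE_PROTNONE)
 *
 * Such an entry is "present" to the VM, but the CPU will still fault on it
 * because _PAGE_PRESENT is clear. The patch's PTE_PRESENT() checks only the
 * bit the hardware actually looks at:
 */
static inline bool asi_pf_pte_hw_present(pte_t pte)	/* hypothetical name */
{
	return pte_flags(pte) & _PAGE_PRESENT;
}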
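
And as a condensed view of the control flow the patch adds at the top of
exc_page_fault() (a paraphrase of the hunks above for readability, not the
literal code; the caveats about stats and user-address faults are in the
in-line comment within the patch):

	/* Sketch: early bail-out for faults that were only due to ASI. */
	if (static_asi_enabled() && !user_mode(regs)) {
		asi_exit();			/* may be a NOP (NMIs) */

		/* Walk the current, post-exit pagetables for this address. */
		pgd_t *pgd = (pgd_t *)__va(read_cr3_pa()) + pgd_index(address);

		if (kernel_access_ok(error_code, address, pgd)) {
			/* Access is fine now: nothing left to handle. */
			warn_if_bad_asi_pf(error_code, address);
			return;
		}
	}
	/* Otherwise fall through to the normal page fault handling. */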