From patchwork Fri Jan 10 18:40:27 2025
X-Patchwork-Submitter: Brendan Jackman
X-Patchwork-Id: 13935562
Date: Fri, 10 Jan 2025 18:40:27 +0000
In-Reply-To: <20250110-asi-rfc-v2-v2-0-8419288bc805@google.com>
References: <20250110-asi-rfc-v2-v2-0-8419288bc805@google.com>
X-Mailer: b4 0.15-dev
Message-ID: <20250110-asi-rfc-v2-v2-1-8419288bc805@google.com>
Subject: [PATCH RFC v2 01/29] mm: asi: Make some utility functions noinstr compatible
From: Brendan Jackman
To: Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen,
 H. Peter Anvin, Andy Lutomirski, Peter Zijlstra, Richard Henderson,
 Matt Turner, Vineet Gupta, Russell King, Catalin Marinas, Will Deacon,
 Guo Ren, Brian Cain, Huacai Chen, WANG Xuerui, Geert Uytterhoeven,
 Michal Simek, Thomas Bogendoerfer, Dinh Nguyen, Jonas Bonn,
 Stefan Kristiansson, Stafford Horne, James E.J. Bottomley, Helge Deller,
 Michael Ellerman, Nicholas Piggin, Christophe Leroy, Naveen N Rao,
 Madhavan Srinivasan, Paul Walmsley, Palmer Dabbelt, Albert Ou,
 Heiko Carstens, Vasily Gorbik, Alexander Gordeev, Christian Borntraeger,
 Sven Schnelle, Yoshinori Sato, Rich Felker, John Paul Adrian Glaubitz,
 David S. Miller, Andreas Larsson, Richard Weinberger, Anton Ivanov,
 Johannes Berg, Chris Zankel, Max Filippov, Arnd Bergmann, Andrew Morton,
 Juri Lelli, Vincent Guittot, Dietmar Eggemann, Steven Rostedt,
 Ben Segall, Mel Gorman, Valentin Schneider, Uladzislau Rezki,
 Christoph Hellwig, Masami Hiramatsu, Mathieu Desnoyers, Mike Rapoport,
 Arnaldo Carvalho de Melo, Namhyung Kim, Mark Rutland,
 Alexander Shishkin, Jiri Olsa, Ian Rogers, Adrian Hunter, Dennis Zhou,
 Tejun Heo, Christoph Lameter, Sean Christopherson, Paolo Bonzini,
 Ard Biesheuvel, Josh Poimboeuf, Pawan Gupta
Cc: x86@kernel.org, linux-kernel@vger.kernel.org,
 linux-alpha@vger.kernel.org, linux-snps-arc@lists.infradead.org,
 linux-arm-kernel@lists.infradead.org, linux-csky@vger.kernel.org,
 linux-hexagon@vger.kernel.org, loongarch@lists.linux.dev,
 linux-m68k@lists.linux-m68k.org, linux-mips@vger.kernel.org,
 linux-openrisc@vger.kernel.org, linux-parisc@vger.kernel.org,
 linuxppc-dev@lists.ozlabs.org, linux-riscv@lists.infradead.org,
 linux-s390@vger.kernel.org, linux-sh@vger.kernel.org,
 sparclinux@vger.kernel.org, linux-um@lists.infradead.org,
 linux-arch@vger.kernel.org, linux-mm@kvack.org,
 linux-trace-kernel@vger.kernel.org, linux-perf-users@vger.kernel.org,
 kvm@vger.kernel.org, linux-efi@vger.kernel.org, Brendan Jackman
Some existing utility functions would need to be called from a noinstr
context in the later patches. So mark these as either noinstr or
__always_inline.

An earlier version of this by Junaid had a macro that was intended to
tell the compiler "either inline this function, or call it in the
noinstr section", which basically boiled down to:

#define inline_or_noinstr noinline __section(".noinstr.text")

Unfortunately Thomas pointed out this will prevent the function from
being inlined at call sites in .text.

So far I haven't been able[1] to find a formulation that lets us:
1. avoid calls from .noinstr.text -> .text,
2. while also letting the compiler freely decide what to inline.

1 is a functional requirement, so here I'm just giving up on 2. At
existing call sites this code is just forced inline. For the incoming
code that needs to call it from noinstr, the calls will be out-of-line.

[1] https://lore.kernel.org/lkml/CA+i-1C1z35M8wA_4AwMq7--c1OgjNoLGTkn4+Td5gKg7QQAzWw@mail.gmail.com/

Checkpatch-args: --ignore=COMMIT_LOG_LONG_LINE
Signed-off-by: Brendan Jackman
---
 arch/x86/include/asm/processor.h     |  2 +-
 arch/x86/include/asm/special_insns.h |  8 ++++----
 arch/x86/include/asm/tlbflush.h      |  3 +++
 arch/x86/mm/tlb.c                    | 13 +++++++++----
 4 files changed, 17 insertions(+), 9 deletions(-)

diff --git a/arch/x86/include/asm/processor.h b/arch/x86/include/asm/processor.h
index 4a686f0e5dbf6d906ed38276148b186e920927b3..1a1b7ea5d7d32a47d783d9d62cd2a53672addd6f 100644
--- a/arch/x86/include/asm/processor.h
+++ b/arch/x86/include/asm/processor.h
@@ -220,7 +220,7 @@ void print_cpu_msr(struct cpuinfo_x86 *);
 /*
  * Friendlier CR3 helpers.
  */
-static inline unsigned long read_cr3_pa(void)
+static __always_inline unsigned long read_cr3_pa(void)
 {
 	return __read_cr3() & CR3_ADDR_MASK;
 }
diff --git a/arch/x86/include/asm/special_insns.h b/arch/x86/include/asm/special_insns.h
index aec6e2d3aa1d52e5c8f513e188015a45e9eeaeb2..6e103358966f6f1333aa07be97aec5f8af794120 100644
--- a/arch/x86/include/asm/special_insns.h
+++ b/arch/x86/include/asm/special_insns.h
@@ -42,14 +42,14 @@ static __always_inline void native_write_cr2(unsigned long val)
 	asm volatile("mov %0,%%cr2": : "r" (val) : "memory");
 }
 
-static inline unsigned long __native_read_cr3(void)
+static __always_inline unsigned long __native_read_cr3(void)
 {
 	unsigned long val;
 	asm volatile("mov %%cr3,%0\n\t" : "=r" (val) : __FORCE_ORDER);
 	return val;
 }
 
-static inline void native_write_cr3(unsigned long val)
+static __always_inline void native_write_cr3(unsigned long val)
 {
 	asm volatile("mov %0,%%cr3": : "r" (val) : "memory");
 }
@@ -153,12 +153,12 @@ static __always_inline void write_cr2(unsigned long x)
  * Careful! CR3 contains more than just an address. You probably want
  * read_cr3_pa() instead.
  */
-static inline unsigned long __read_cr3(void)
+static __always_inline unsigned long __read_cr3(void)
 {
 	return __native_read_cr3();
 }
 
-static inline void write_cr3(unsigned long x)
+static __always_inline void write_cr3(unsigned long x)
 {
 	native_write_cr3(x);
 }
diff --git a/arch/x86/include/asm/tlbflush.h b/arch/x86/include/asm/tlbflush.h
index 69e79fff41b800a0a138bcbf548dde9d72993105..c884174a44e119a3c027c44ada6c5cdba14d1282 100644
--- a/arch/x86/include/asm/tlbflush.h
+++ b/arch/x86/include/asm/tlbflush.h
@@ -423,4 +423,7 @@ static inline void __native_tlb_flush_global(unsigned long cr4)
 	native_write_cr4(cr4 ^ X86_CR4_PGE);
 	native_write_cr4(cr4);
 }
+
+unsigned long build_cr3_noinstr(pgd_t *pgd, u16 asid, unsigned long lam);
+
 #endif /* _ASM_X86_TLBFLUSH_H */
diff --git a/arch/x86/mm/tlb.c b/arch/x86/mm/tlb.c
index 86593d1b787d8a5b9fa4bd492356898ec8870938..f0428e5e1f1947903ee87c4c6444844ee11b45c3 100644
--- a/arch/x86/mm/tlb.c
+++ b/arch/x86/mm/tlb.c
@@ -108,7 +108,7 @@
 /*
  * Given @asid, compute kPCID
  */
-static inline u16 kern_pcid(u16 asid)
+static __always_inline u16 kern_pcid(u16 asid)
 {
 	VM_WARN_ON_ONCE(asid > MAX_ASID_AVAILABLE);
@@ -153,9 +153,9 @@ static inline u16 user_pcid(u16 asid)
 	return ret;
 }
 
-static inline unsigned long build_cr3(pgd_t *pgd, u16 asid, unsigned long lam)
+static __always_inline unsigned long build_cr3(pgd_t *pgd, u16 asid, unsigned long lam)
 {
-	unsigned long cr3 = __sme_pa(pgd) | lam;
+	unsigned long cr3 = __sme_pa_nodebug(pgd) | lam;
 
 	if (static_cpu_has(X86_FEATURE_PCID)) {
 		cr3 |= kern_pcid(asid);
@@ -166,6 +166,11 @@ static inline unsigned long build_cr3(pgd_t *pgd, u16 asid, unsigned long lam)
 	return cr3;
 }
 
+noinstr unsigned long build_cr3_noinstr(pgd_t *pgd, u16 asid, unsigned long lam)
+{
+	return build_cr3(pgd, asid, lam);
+}
+
 static inline unsigned long build_cr3_noflush(pgd_t *pgd, u16 asid, unsigned long lam)
 {
@@ -1084,7 +1089,7 @@ void flush_tlb_kernel_range(unsigned long start, unsigned long end)
  * It's intended to be used for code like KVM that sneakily changes CR3
  * and needs to restore it. It needs to be used very carefully.
  */
-unsigned long __get_current_cr3_fast(void)
+noinstr unsigned long __get_current_cr3_fast(void)
 {
 	unsigned long cr3 = build_cr3(this_cpu_read(cpu_tlbstate.loaded_mm)->pgd,

From patchwork Fri Jan 10 18:40:28 2025
X-Patchwork-Submitter: Brendan Jackman
X-Patchwork-Id: 13935564
Date: Fri, 10 Jan 2025 18:40:28 +0000
In-Reply-To: <20250110-asi-rfc-v2-v2-0-8419288bc805@google.com>
References: <20250110-asi-rfc-v2-v2-0-8419288bc805@google.com>
X-Mailer: b4 0.15-dev
Message-ID: <20250110-asi-rfc-v2-v2-2-8419288bc805@google.com>
Subject: [PATCH RFC v2 02/29] x86: Create CONFIG_MITIGATION_ADDRESS_SPACE_ISOLATION
From: Brendan Jackman
Cc: Brendan Jackman, Junaid Shahid
Currently a nop config. Keeping as a separate commit for easy review of
the boring bits. Later commits will use and enable this new config.

This config is only added for non-UML x86_64, as other architectures do
not yet have pending implementations. It also has somewhat artificial
dependencies on !PARAVIRT and !KASAN, which are explained in the Kconfig
file.

Co-developed-by: Junaid Shahid
Signed-off-by: Junaid Shahid
Signed-off-by: Brendan Jackman
---
 arch/alpha/include/asm/Kbuild      |  1 +
 arch/arc/include/asm/Kbuild        |  1 +
 arch/arm/include/asm/Kbuild        |  1 +
 arch/arm64/include/asm/Kbuild      |  1 +
 arch/csky/include/asm/Kbuild       |  1 +
 arch/hexagon/include/asm/Kbuild    |  1 +
 arch/loongarch/include/asm/Kbuild  |  3 +++
 arch/m68k/include/asm/Kbuild       |  1 +
 arch/microblaze/include/asm/Kbuild |  1 +
 arch/mips/include/asm/Kbuild       |  1 +
 arch/nios2/include/asm/Kbuild      |  1 +
 arch/openrisc/include/asm/Kbuild   |  1 +
 arch/parisc/include/asm/Kbuild     |  1 +
 arch/powerpc/include/asm/Kbuild    |  1 +
 arch/riscv/include/asm/Kbuild      |  1 +
 arch/s390/include/asm/Kbuild       |  1 +
 arch/sh/include/asm/Kbuild         |  1 +
 arch/sparc/include/asm/Kbuild      |  1 +
 arch/um/include/asm/Kbuild         |  2 +-
 arch/x86/Kconfig                   | 14 ++++++++++++++
 arch/xtensa/include/asm/Kbuild     |  1 +
 include/asm-generic/asi.h          |  5 +++++
 22 files changed, 41 insertions(+), 1 deletion(-)

diff --git a/arch/alpha/include/asm/Kbuild b/arch/alpha/include/asm/Kbuild
index 396caece6d6d99c7a428f439322a0a18452e1a42..ca72ce3baca13a32913ac9e01a8f86ef42180b1c 100644
--- a/arch/alpha/include/asm/Kbuild
+++ b/arch/alpha/include/asm/Kbuild
@@ -5,3 +5,4 @@ generic-y += agp.h
 generic-y += asm-offsets.h
 generic-y += kvm_para.h
 generic-y += mcs_spinlock.h
+generic-y += asi.h
diff --git a/arch/arc/include/asm/Kbuild b/arch/arc/include/asm/Kbuild
index 49285a3ce2398cc7442bc44172de76367dc33dda..68604480864bbcb58d896da6bdf71591006ab2f6 100644
--- a/arch/arc/include/asm/Kbuild
+++ b/arch/arc/include/asm/Kbuild
@@ -6,3 +6,4 @@ generic-y += kvm_para.h
 generic-y += mcs_spinlock.h
 generic-y += parport.h
 generic-y += user.h
+generic-y += asi.h
diff --git a/arch/arm/include/asm/Kbuild b/arch/arm/include/asm/Kbuild
index 03657ff8fbe3d202563184b8902aa181e7474a5e..1e2c3d8dbbd99bdf95dbc6b47c2c78092c68b808 100644
--- a/arch/arm/include/asm/Kbuild
+++ b/arch/arm/include/asm/Kbuild
@@ -6,3 +6,4 @@ generic-y += parport.h
 generated-y += mach-types.h
 generated-y += unistd-nr.h
+generic-y += asi.h
diff --git a/arch/arm64/include/asm/Kbuild b/arch/arm64/include/asm/Kbuild
index 4e350df9a02dd8de387b912740af69035da93e34..15f8aaaa96b80b5657b789ecf3529b1f18d16d80 100644
--- a/arch/arm64/include/asm/Kbuild
+++ b/arch/arm64/include/asm/Kbuild
@@ -14,6 +14,7 @@ generic-y += qrwlock.h
 generic-y += qspinlock.h
 generic-y += parport.h
 generic-y += user.h
+generic-y += asi.h
 generated-y += cpucap-defs.h
 generated-y += sysreg-defs.h
diff --git a/arch/csky/include/asm/Kbuild b/arch/csky/include/asm/Kbuild
index 9a9bc65b57a9d73dadc9d597700d7229f8554ddf..4f497118fb172d1f2bf0f9e472479f24227f42f4 100644
--- a/arch/csky/include/asm/Kbuild
+++ b/arch/csky/include/asm/Kbuild
@@ -11,3 +11,4 @@ generic-y += qspinlock.h
 generic-y += parport.h
 generic-y += user.h
 generic-y += vmlinux.lds.h
+generic-y += asi.h
diff --git a/arch/hexagon/include/asm/Kbuild b/arch/hexagon/include/asm/Kbuild
index 8c1a78c8f5271ebd47f1baad7b85e87220d1bbe8..b26f186bc03c2e135f8d125a4805b95a41513655 100644
--- a/arch/hexagon/include/asm/Kbuild
+++ b/arch/hexagon/include/asm/Kbuild
@@ -5,3 +5,4 @@ generic-y += extable.h
 generic-y += iomap.h
 generic-y += kvm_para.h
 generic-y += mcs_spinlock.h
+generic-y += asi.h
diff --git a/arch/loongarch/include/asm/Kbuild b/arch/loongarch/include/asm/Kbuild
index 5b5a6c90e6e20771b1074a6262230861cc51bcb4..dd3d0c6891369a9dfa35ccfb8b81c8697c2a3e90 100644
--- a/arch/loongarch/include/asm/Kbuild
+++ b/arch/loongarch/include/asm/Kbuild
@@ -11,3 +11,6 @@ generic-y += ioctl.h
 generic-y += mmzone.h
 generic-y += statfs.h
 generic-y += param.h
+generic-y += asi.h
+generic-y += posix_types.h
+generic-y += resource.h
diff --git a/arch/m68k/include/asm/Kbuild b/arch/m68k/include/asm/Kbuild
index 0dbf9c5c6faeb30eeb38bea52ab7fade99bbd44a..faf0f135df4ab946ef115f3a2fc363f370fc7491 100644
--- a/arch/m68k/include/asm/Kbuild
+++ b/arch/m68k/include/asm/Kbuild
@@ -4,3 +4,4 @@ generic-y += extable.h
 generic-y += kvm_para.h
 generic-y += mcs_spinlock.h
 generic-y += spinlock.h
+generic-y += asi.h
diff --git a/arch/microblaze/include/asm/Kbuild b/arch/microblaze/include/asm/Kbuild
index a055f5dbe00a31616592c3a848b49bbf9ead5d17..012e4bf83c13497dc296b66cd5e0fd519274306b 100644
--- a/arch/microblaze/include/asm/Kbuild
+++ b/arch/microblaze/include/asm/Kbuild
@@ -8,3 +8,4 @@ generic-y += parport.h
 generic-y += syscalls.h
 generic-y += tlb.h
 generic-y += user.h
+generic-y += asi.h
diff --git a/arch/mips/include/asm/Kbuild b/arch/mips/include/asm/Kbuild
index 7ba67a0d6c97b2879fb710aca05ae1e2d47c8ce2..3191699298d80735920481eecc64dd2d1dbd2e54 100644
--- a/arch/mips/include/asm/Kbuild
+++ b/arch/mips/include/asm/Kbuild
@@ -13,3 +13,4 @@ generic-y += parport.h
 generic-y += qrwlock.h
 generic-y += qspinlock.h
 generic-y += user.h
+generic-y += asi.h
diff --git a/arch/nios2/include/asm/Kbuild b/arch/nios2/include/asm/Kbuild
index 0d09829ed14454f2f15a32bf713fa1eb213e85ea..03a5ec74e28b3679a5ef7271606af3c07bb7a198 100644
--- a/arch/nios2/include/asm/Kbuild
+++ b/arch/nios2/include/asm/Kbuild
@@ -7,3 +7,4 @@ generic-y += kvm_para.h
 generic-y += mcs_spinlock.h
 generic-y += spinlock.h
 generic-y += user.h
+generic-y += asi.h
diff --git a/arch/openrisc/include/asm/Kbuild b/arch/openrisc/include/asm/Kbuild
index cef49d60d74c0f46f01cf46cc35e1e52404185f3..6a81a58bf59e20cafa563c422df4dfa6f9f791ec 100644
--- a/arch/openrisc/include/asm/Kbuild
+++ b/arch/openrisc/include/asm/Kbuild
@@ -9,3 +9,4 @@ generic-y += spinlock.h
 generic-y += qrwlock_types.h
 generic-y += qrwlock.h
 generic-y += user.h
+generic-y += asi.h
diff --git a/arch/parisc/include/asm/Kbuild b/arch/parisc/include/asm/Kbuild
index 4fb596d94c8932dd1e12a765a21af5b5099fbafd..3cbb4eb14712c7bd6c248dd26ab91cc41da01825 100644
--- a/arch/parisc/include/asm/Kbuild
+++ b/arch/parisc/include/asm/Kbuild
@@ -5,3 +5,4 @@ generic-y += agp.h
 generic-y += kvm_para.h
 generic-y += mcs_spinlock.h
 generic-y += user.h
+generic-y += asi.h
diff --git a/arch/powerpc/include/asm/Kbuild b/arch/powerpc/include/asm/Kbuild
index e5fdc336c9b22527f824ed30d06b5e8c0fa8a1ef..e86cc027f35564c7b301c283043bde0e5d2d3b6a 100644
--- a/arch/powerpc/include/asm/Kbuild
+++ b/arch/powerpc/include/asm/Kbuild
@@ -7,3 +7,4 @@ generic-y += kvm_types.h
 generic-y += mcs_spinlock.h
 generic-y += qrwlock.h
 generic-y += early_ioremap.h
+generic-y += asi.h
diff --git a/arch/riscv/include/asm/Kbuild b/arch/riscv/include/asm/Kbuild
index 1461af12da6e2bbbff6cf737a7babf33bd298cdd..82060ed50d9beb1ea72d3570ad236d1e08d9d8c6 100644
--- a/arch/riscv/include/asm/Kbuild
+++ b/arch/riscv/include/asm/Kbuild
@@ -13,3 +13,4 @@ generic-y += qrwlock.h
 generic-y += qrwlock_types.h
 generic-y += user.h
 generic-y += vmlinux.lds.h
+generic-y += asi.h
diff --git a/arch/s390/include/asm/Kbuild b/arch/s390/include/asm/Kbuild
index 297bf7157968907d6e4c4ff8b65deeef02dbd630..e15c2a138392b57b186633738ddda913474aa8c4 100644
--- a/arch/s390/include/asm/Kbuild
+++ b/arch/s390/include/asm/Kbuild
@@ -8,3 +8,4 @@ generic-y += asm-offsets.h
 generic-y += kvm_types.h
 generic-y += mcs_spinlock.h
 generic-y += mmzone.h
+generic-y += asi.h
diff --git a/arch/sh/include/asm/Kbuild b/arch/sh/include/asm/Kbuild
index fc44d9c88b41915a7021042eb8b462517cfdbd2c..ea19e4515828552f436d67f764607dd5d15cb19f 100644
--- a/arch/sh/include/asm/Kbuild
+++ b/arch/sh/include/asm/Kbuild
@@ -3,3 +3,4 @@ generated-y += syscall_table.h
 generic-y += kvm_para.h
 generic-y += mcs_spinlock.h
 generic-y += parport.h
+generic-y += asi.h
diff --git a/arch/sparc/include/asm/Kbuild b/arch/sparc/include/asm/Kbuild
index 43b0ae4c2c2112d4d4d3cb3c60e787b175172dea..cb9062c9be17fe276cc92d2ac99d8b165f6297bf 100644
--- a/arch/sparc/include/asm/Kbuild
+++ b/arch/sparc/include/asm/Kbuild
@@ -4,3 +4,4 @@ generated-y += syscall_table_64.h
 generic-y += agp.h
 generic-y += kvm_para.h
 generic-y += mcs_spinlock.h
+generic-y += asi.h
diff --git a/arch/um/include/asm/Kbuild b/arch/um/include/asm/Kbuild
index 18f902da8e99769da857d34af43141ea97a0ca63..6054972f1babdaebae64040b05ab48893915cb04 100644
--- a/arch/um/include/asm/Kbuild
+++ b/arch/um/include/asm/Kbuild
@@ -27,4 +27,4 @@ generic-y += trace_clock.h
 generic-y += kprobes.h
 generic-y += mm_hooks.h
 generic-y += vga.h
-generic-y += video.h
+generic-y += asi.h
diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index 7b9a7e8f39acc8e9aeb7d4213e87d71047865f5c..5a50582eb210e9d1309856a737d32b76fa1bfc85 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -2519,6 +2519,20 @@ config MITIGATION_PAGE_TABLE_ISOLATION
 	  See Documentation/arch/x86/pti.rst for more details.
 
+config MITIGATION_ADDRESS_SPACE_ISOLATION
+	bool "Allow code to run with a reduced kernel address space"
+	default n
+	depends on X86_64 && !PARAVIRT && !UML
+	help
+	  This feature provides the ability to run some kernel code
+	  with a reduced kernel address space. This can be used to
+	  mitigate some speculative execution attacks.
+
+	  The !PARAVIRT dependency is only because of lack of testing; in theory
+	  the code is written to work under paravirtualization. In practice
+	  there are likely to be unhandled cases, in particular concerning TLB
+	  flushes.
+
 config MITIGATION_RETPOLINE
	bool "Avoid speculative indirect branches in kernel"
	select OBJTOOL if HAVE_OBJTOOL
diff --git a/arch/xtensa/include/asm/Kbuild b/arch/xtensa/include/asm/Kbuild
index fa07c686cbcc2153776a478ac4093846f01eddab..07cea6902f98053be244d026ed594fe7246755a6 100644
--- a/arch/xtensa/include/asm/Kbuild
+++ b/arch/xtensa/include/asm/Kbuild
@@ -8,3 +8,4 @@ generic-y += parport.h
 generic-y += qrwlock.h
 generic-y += qspinlock.h
 generic-y += user.h
+generic-y += asi.h
diff --git a/include/asm-generic/asi.h b/include/asm-generic/asi.h
new file mode 100644
index 0000000000000000000000000000000000000000..c4d9a5ff860a96428422a15000c622aeecc2d664
--- /dev/null
+++ b/include/asm-generic/asi.h
@@ -0,0 +1,5 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+#ifndef __ASM_GENERIC_ASI_H
+#define __ASM_GENERIC_ASI_H
+
+#endif

From patchwork Fri Jan 10 18:40:29 2025
X-Patchwork-Submitter: Brendan Jackman
X-Patchwork-Id: 13935563
Date: Fri, 10 Jan 2025 18:40:29 +0000
In-Reply-To: <20250110-asi-rfc-v2-v2-0-8419288bc805@google.com>
References: <20250110-asi-rfc-v2-v2-0-8419288bc805@google.com>
Message-ID: <20250110-asi-rfc-v2-v2-3-8419288bc805@google.com>
Subject: [PATCH RFC v2 03/29] mm: asi: Introduce ASI core API
From: Brendan Jackman

Introduce core API for Address Space Isolation (ASI). Kernel address
space isolation provides the ability to run some kernel code with a
restricted kernel address space.

There can be multiple classes of such restricted kernel address spaces
(e.g. KPTI, KVM-PTI etc.). Each ASI class is identified by an index. The
ASI class can register some hooks to be called when entering/exiting the
restricted address space.

Currently, there is a fixed maximum number of ASI classes supported. In
addition, each process can have at most one restricted address space
from each ASI class. Neither of these are inherent limitations and are
merely simplifying assumptions for the time being.

To keep things simpler for the time being, we disallow context switches
within the restricted address space. In the future, we would be able to
relax this limitation for the case of context switches to different
threads within the same process (or to the idle thread and back).

Note that this doesn't really support protecting sibling VM guests
within the same VMM process from one another. From first principles it
seems unlikely that anyone who cares about VM isolation would do that,
but there could be a use-case to think about.
In that case something like the OTHER_MM logic might be needed, but
specific to intra-process guest separation.

[0]: https://lore.kernel.org/kvm/1562855138-19507-1-git-send-email-alexandre.chartre@oracle.com

Notes about RFC-quality implementation details:

- Ignoring checkpatch.pl AVOID_BUG.
- The dynamic registration of classes might be pointless complexity.
  This was kept from RFCv1 without much thought.
- The other-mm logic is also perhaps overly complex, suggestions are
  welcome for how best to tackle this (or we could just forget about it
  for the moment, and rely on asi_exit() happening in process switch).
- The taint flag definitions would probably be clearer with an enum or
  something.

Checkpatch-args: --ignore=AVOID_BUG,COMMIT_LOG_LONG_LINE,EXPORT_SYMBOL
Co-developed-by: Ofir Weisse
Signed-off-by: Ofir Weisse
Co-developed-by: Junaid Shahid
Signed-off-by: Junaid Shahid
Signed-off-by: Brendan Jackman
---
 arch/x86/include/asm/asi.h       | 208 +++++++++++++++++++++++
 arch/x86/include/asm/processor.h |   8 +
 arch/x86/mm/Makefile             |   1 +
 arch/x86/mm/asi.c                | 350 +++++++++++++++++++++++++++++++++++++++
 arch/x86/mm/init.c               |   3 +-
 arch/x86/mm/tlb.c                |   1 +
 include/asm-generic/asi.h        |  67 ++++++++
 include/linux/mm_types.h         |   7 +
 kernel/fork.c                    |   3 +
 kernel/sched/core.c              |   9 +
 mm/init-mm.c                     |   4 +
 11 files changed, 660 insertions(+), 1 deletion(-)

diff --git a/arch/x86/include/asm/asi.h b/arch/x86/include/asm/asi.h
new file mode 100644
index 0000000000000000000000000000000000000000..7cc635b6653a3970ba9dbfdc9c828a470e27bd44
--- /dev/null
+++ b/arch/x86/include/asm/asi.h
@@ -0,0 +1,208 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+#ifndef _ASM_X86_ASI_H
+#define _ASM_X86_ASI_H
+
+#include
+
+#include
+
+#include
+#include
+#include
+
+#ifdef CONFIG_MITIGATION_ADDRESS_SPACE_ISOLATION
+
+/*
+ * Overview of API usage by ASI clients:
+ *
+ * Setup: First call asi_init() to create a domain.
At present only one domain + * can be created per mm per class, but it's safe to asi_init() this domain + * multiple times. For each asi_init() call you must call asi_destroy() AFTER + * you are certain all CPUs have exited the restricted address space (by + * calling asi_exit()). + * + * Runtime usage: + * + * 1. Call asi_enter() to switch to the restricted address space. This can't be + * from an interrupt or exception handler and preemption must be disabled. + * + * 2. Execute untrusted code. + * + * 3. Call asi_relax() to inform the ASI subsystem that untrusted code execution + * is finished. This doesn't cause any address space change. This can't be + * from an interrupt or exception handler and preemption must be disabled. + * + * 4. Either: + * + * a. Go back to 1. + * + * b. Call asi_exit() before returning to userspace. This immediately + * switches to the unrestricted address space. + * + * The region between 1 and 3 is called the "ASI critical section". During the + * critical section, it is a bug to access any sensitive data, and you mustn't + * sleep. + * + * The restriction on sleeping is not really a fundamental property of ASI. + * However for performance reasons it's important that the critical section is + * absolutely as short as possible. So the ability to do sleepy things like + * taking mutexes oughtn't to confer any convenience on API users. + * + * Similarly to the issue of sleeping, the need to asi_exit in case 4b is not a + * fundamental property of the system but a limitation of the current + * implementation. With further work it is possible to context switch + * from and/or to the restricted address space, and to return to userspace + * directly from the restricted address space, or _in_ it. + * + * Note that the critical section only refers to the direct execution path from + * asi_enter to asi_relax: it's fine to access sensitive data from exceptions + * and interrupt handlers that occur during that time. 
ASI will re-enter the + * restricted address space before returning from the outermost + * exception/interrupt. + * + * Note: ASI does not modify KPTI behaviour; when ASI and KPTI run together + * there are 2+N address spaces per task: the unrestricted kernel address space, + * the user address space, and one restricted (kernel) address space for each of + * the N ASI classes. + */ + +/* + * ASI uses a per-CPU tainting model to track what mitigation actions are + * required on domain transitions. Taints exist along two dimensions: + * + * - Who touched the CPU (guest, unprotected kernel, userspace). + * + * - What kind of state might remain: "data" means there might be data owned by + * a victim domain left behind in a sidechannel. "Control" means there might + * be state controlled by an attacker domain left behind in the branch + * predictor. + * + * In principle the same domain can be both attacker and victim, thus we have + * both data and control taints for userspace, although there's no point in + * trying to protect against attacks from the kernel itself, so there's no + * ASI_TAINT_KERNEL_CONTROL. 
+ */
+#define ASI_TAINT_KERNEL_DATA		((asi_taints_t)BIT(0))
+#define ASI_TAINT_USER_DATA		((asi_taints_t)BIT(1))
+#define ASI_TAINT_GUEST_DATA		((asi_taints_t)BIT(2))
+#define ASI_TAINT_OTHER_MM_DATA		((asi_taints_t)BIT(3))
+#define ASI_TAINT_USER_CONTROL		((asi_taints_t)BIT(4))
+#define ASI_TAINT_GUEST_CONTROL		((asi_taints_t)BIT(5))
+#define ASI_TAINT_OTHER_MM_CONTROL	((asi_taints_t)BIT(6))
+#define ASI_NUM_TAINTS			7
+static_assert(BITS_PER_BYTE * sizeof(asi_taints_t) >= ASI_NUM_TAINTS);
+
+#define ASI_TAINTS_CONTROL_MASK \
+	(ASI_TAINT_USER_CONTROL | ASI_TAINT_GUEST_CONTROL | ASI_TAINT_OTHER_MM_CONTROL)
+
+#define ASI_TAINTS_DATA_MASK \
+	(ASI_TAINT_KERNEL_DATA | ASI_TAINT_USER_DATA | ASI_TAINT_GUEST_DATA | \
+	 ASI_TAINT_OTHER_MM_DATA)
+
+#define ASI_TAINTS_GUEST_MASK (ASI_TAINT_GUEST_DATA | ASI_TAINT_GUEST_CONTROL)
+#define ASI_TAINTS_USER_MASK (ASI_TAINT_USER_DATA | ASI_TAINT_USER_CONTROL)
+
+/* The taint policy tells ASI how a class interacts with the CPU taints */
+struct asi_taint_policy {
+	/*
+	 * What taints would necessitate a flush when entering the domain, to
+	 * protect it from attack by prior domains?
+	 */
+	asi_taints_t prevent_control;
+	/*
+	 * What taints would necessitate a flush when entering the domain, to
+	 * protect former domains from attack by this domain?
+	 */
+	asi_taints_t protect_data;
+	/* What taints should be set when entering the domain? */
+	asi_taints_t set;
+};
+
+/*
+ * An ASI domain (struct asi) represents a restricted address space. The
+ * unrestricted address space (and user address space under PTI) are not
+ * represented as a domain.
+ */
+struct asi {
+	pgd_t *pgd;
+	struct mm_struct *mm;
+	int64_t ref_count;
+	enum asi_class_id class_id;
+};
+
+DECLARE_PER_CPU_ALIGNED(struct asi *, curr_asi);
+
+void asi_init_mm_state(struct mm_struct *mm);
+
+int asi_init_class(enum asi_class_id class_id, struct asi_taint_policy *taint_policy);
+void asi_uninit_class(enum asi_class_id class_id);
+const char *asi_class_name(enum asi_class_id class_id);
+
+int asi_init(struct mm_struct *mm, enum asi_class_id class_id, struct asi **out_asi);
+void asi_destroy(struct asi *asi);
+
+/* Enter an ASI domain (restricted address space) and begin the critical section. */
+void asi_enter(struct asi *asi);
+
+/*
+ * Leave the "tense" state if we are in it, i.e. end the critical section. We
+ * will stay relaxed until the next asi_enter.
+ */
+void asi_relax(void);
+
+/* Immediately exit the restricted address space if in it */
+void asi_exit(void);
+
+/* The target is the domain we'll enter when returning to process context. */
+static __always_inline struct asi *asi_get_target(struct task_struct *p)
+{
+	return p->thread.asi_state.target;
+}
+
+static __always_inline void asi_set_target(struct task_struct *p,
+					   struct asi *target)
+{
+	p->thread.asi_state.target = target;
+}
+
+static __always_inline struct asi *asi_get_current(void)
+{
+	return this_cpu_read(curr_asi);
+}
+
+/* Are we currently in a restricted address space? */
+static __always_inline bool asi_is_restricted(void)
+{
+	return (bool)asi_get_current();
+}
+
+/* If we exit/have exited, can we stay that way until the next asi_enter? */
+static __always_inline bool asi_is_relaxed(void)
+{
+	return !asi_get_target(current);
+}
+
+/*
+ * Is the current task in the critical section?
+ *
+ * This is just the inverse of asi_is_relaxed(). We have both functions in
+ * order to help write intuitive client code. In particular, asi_is_tense
+ * returns false when ASI is disabled, which is judged to make user code more
+ * obvious.
+ */ +static __always_inline bool asi_is_tense(void) +{ + return !asi_is_relaxed(); +} + +static __always_inline pgd_t *asi_pgd(struct asi *asi) +{ + return asi ? asi->pgd : NULL; +} + +#define INIT_MM_ASI(init_mm) \ + .asi_init_lock = __MUTEX_INITIALIZER(init_mm.asi_init_lock), + +void asi_handle_switch_mm(void); + +#endif /* CONFIG_MITIGATION_ADDRESS_SPACE_ISOLATION */ + +#endif diff --git a/arch/x86/include/asm/processor.h b/arch/x86/include/asm/processor.h index 1a1b7ea5d7d32a47d783d9d62cd2a53672addd6f..f02220e6b4df911d87e2fee4b497eade61a27161 100644 --- a/arch/x86/include/asm/processor.h +++ b/arch/x86/include/asm/processor.h @@ -5,6 +5,7 @@ #include /* Forward declaration, a strange C thing */ +struct asi; struct task_struct; struct mm_struct; struct io_bitmap; @@ -503,6 +504,13 @@ struct thread_struct { struct thread_shstk shstk; #endif +#ifdef CONFIG_MITIGATION_ADDRESS_SPACE_ISOLATION + struct { + /* Domain to enter when returning to process context. */ + struct asi *target; + } asi_state; +#endif + /* Floating point and extended processor state */ struct fpu fpu; /* diff --git a/arch/x86/mm/Makefile b/arch/x86/mm/Makefile index 690fbf48e8538b62a176ce838820e363575b7897..89ade7363798cc20d5e5643526eba7378174baa0 100644 --- a/arch/x86/mm/Makefile +++ b/arch/x86/mm/Makefile @@ -61,6 +61,7 @@ obj-$(CONFIG_ACPI_NUMA) += srat.o obj-$(CONFIG_X86_INTEL_MEMORY_PROTECTION_KEYS) += pkeys.o obj-$(CONFIG_RANDOMIZE_MEMORY) += kaslr.o obj-$(CONFIG_MITIGATION_PAGE_TABLE_ISOLATION) += pti.o +obj-$(CONFIG_MITIGATION_ADDRESS_SPACE_ISOLATION) += asi.o obj-$(CONFIG_X86_MEM_ENCRYPT) += mem_encrypt.o obj-$(CONFIG_AMD_MEM_ENCRYPT) += mem_encrypt_amd.o diff --git a/arch/x86/mm/asi.c b/arch/x86/mm/asi.c new file mode 100644 index 0000000000000000000000000000000000000000..105cd8b43eaf5c20acc80d4916b761559fb95d74 --- /dev/null +++ b/arch/x86/mm/asi.c @@ -0,0 +1,350 @@ +// SPDX-License-Identifier: GPL-2.0 +#include +#include +#include +#include + +#include +#include +#include +#include 
+#include +#include + +static struct asi_taint_policy *taint_policies[ASI_MAX_NUM_CLASSES]; + +const char *asi_class_names[] = { +#if IS_ENABLED(CONFIG_KVM) + [ASI_CLASS_KVM] = "KVM", +#endif +}; + +DEFINE_PER_CPU_ALIGNED(struct asi *, curr_asi); +EXPORT_SYMBOL(curr_asi); + +static inline bool asi_class_id_valid(enum asi_class_id class_id) +{ + return class_id >= 0 && class_id < ASI_MAX_NUM_CLASSES; +} + +static inline bool asi_class_initialized(enum asi_class_id class_id) +{ + if (WARN_ON(!asi_class_id_valid(class_id))) + return false; + + return !!(taint_policies[class_id]); +} + +int asi_init_class(enum asi_class_id class_id, struct asi_taint_policy *taint_policy) +{ + if (asi_class_initialized(class_id)) + return -EEXIST; + + WARN_ON(!(taint_policy->prevent_control & ASI_TAINTS_CONTROL_MASK)); + WARN_ON(!(taint_policy->protect_data & ASI_TAINTS_DATA_MASK)); + + taint_policies[class_id] = taint_policy; + + return 0; +} +EXPORT_SYMBOL_GPL(asi_init_class); + +void asi_uninit_class(enum asi_class_id class_id) +{ + if (!asi_class_initialized(class_id)) + return; + + taint_policies[class_id] = NULL; +} +EXPORT_SYMBOL_GPL(asi_uninit_class); + +const char *asi_class_name(enum asi_class_id class_id) +{ + if (WARN_ON_ONCE(!asi_class_id_valid(class_id))) + return ""; + + return asi_class_names[class_id]; +} + +static void __asi_destroy(struct asi *asi) +{ + lockdep_assert_held(&asi->mm->asi_init_lock); + +} + +int asi_init(struct mm_struct *mm, enum asi_class_id class_id, struct asi **out_asi) +{ + struct asi *asi; + int err = 0; + + *out_asi = NULL; + + if (WARN_ON(!asi_class_initialized(class_id))) + return -EINVAL; + + asi = &mm->asi[class_id]; + + mutex_lock(&mm->asi_init_lock); + + if (asi->ref_count++ > 0) + goto exit_unlock; /* err is 0 */ + + BUG_ON(asi->pgd != NULL); + + /* + * For now, we allocate 2 pages to avoid any potential problems with + * KPTI code. This won't be needed once KPTI is folded into the ASI + * framework. 
+	 */
+	asi->pgd = (pgd_t *)__get_free_pages(
+		GFP_KERNEL_ACCOUNT | __GFP_ZERO, PGD_ALLOCATION_ORDER);
+	if (!asi->pgd) {
+		err = -ENOMEM;
+		goto exit_unlock;
+	}
+
+	asi->mm = mm;
+	asi->class_id = class_id;
+
+exit_unlock:
+	if (err)
+		__asi_destroy(asi);
+	else
+		*out_asi = asi;
+
+	mutex_unlock(&mm->asi_init_lock);
+
+	return err;
+}
+EXPORT_SYMBOL_GPL(asi_init);
+
+void asi_destroy(struct asi *asi)
+{
+	struct mm_struct *mm;
+
+	if (!asi)
+		return;
+
+	if (WARN_ON(!asi_class_initialized(asi->class_id)))
+		return;
+
+	mm = asi->mm;
+	/*
+	 * We would need this mutex even if the refcount was atomic as we need
+	 * to block concurrent asi_init calls.
+	 */
+	mutex_lock(&mm->asi_init_lock);
+	WARN_ON_ONCE(asi->ref_count <= 0);
+	if (--(asi->ref_count) == 0) {
+		free_pages((ulong)asi->pgd, PGD_ALLOCATION_ORDER);
+		memset(asi, 0, sizeof(struct asi));
+	}
+	mutex_unlock(&mm->asi_init_lock);
+}
+EXPORT_SYMBOL_GPL(asi_destroy);
+
+DEFINE_PER_CPU_ALIGNED(asi_taints_t, asi_taints);
+
+/*
+ * Flush out any potentially malicious speculative control flow (e.g. branch
+ * predictor) state if necessary when we are entering a new domain (which may
+ * be NULL when we are exiting to the unrestricted address space).
+ *
+ * This is "backwards-looking" mitigation, the attacker is in the past: we want
+ * this when logically transitioning from A to B and B doesn't trust A.
+ *
+ * This function must tolerate reentrancy.
+ */
+static __always_inline void maybe_flush_control(struct asi *next_asi)
+{
+	asi_taints_t taints = this_cpu_read(asi_taints);
+
+	if (next_asi) {
+		taints &= taint_policies[next_asi->class_id]->prevent_control;
+	} else {
+		/*
+		 * Going to the unrestricted address space, this has an implicit
+		 * policy of flushing all taints.
+		 */
+		taints &= ASI_TAINTS_CONTROL_MASK;
+	}
+
+	if (!taints)
+		return;
+
+	/*
+	 * This is where we'll do the actual dirty work of clearing uarch state.
+	 * For now we just pretend, and only clear the taints.
+	 */
+	this_cpu_and(asi_taints, ~ASI_TAINTS_CONTROL_MASK);
+}
+
+/*
+ * Flush out any data that might be hanging around in uarch state that can be
+ * leaked through sidechannels if necessary when we are entering a new domain.
+ *
+ * This is "forwards-looking" mitigation, the attacker is in the future: we want
+ * this when logically transitioning from A to B and A doesn't trust B.
+ *
+ * This function must tolerate reentrancy.
+ */
+static __always_inline void maybe_flush_data(struct asi *next_asi)
+{
+	asi_taints_t taints = this_cpu_read(asi_taints)
+			      & taint_policies[next_asi->class_id]->protect_data;
+
+	if (!taints)
+		return;
+
+	/*
+	 * This is where we'll do the actual dirty work of clearing uarch state.
+	 * For now we just pretend, and only clear the taints.
+	 */
+	this_cpu_and(asi_taints, ~ASI_TAINTS_DATA_MASK);
+}
+
+static noinstr void __asi_enter(void)
+{
+	u64 asi_cr3;
+	struct asi *target = asi_get_target(current);
+
+	/*
+	 * This is actually a false restriction: it should be fine to be
+	 * preemptible during the critical section. But we haven't tested it. We
+	 * will also need to disable preemption during this function itself and
+	 * perhaps elsewhere. This false restriction shouldn't create any
+	 * additional burden for ASI clients anyway: the critical section has
+	 * to be as short as possible to avoid unnecessary ASI transitions so
+	 * disabling preemption should be fine.
+ */ + this_cpu_write(curr_asi, target); + maybe_flush_control(target); + + asi_cr3 = build_cr3_noinstr(target->pgd, + this_cpu_read(cpu_tlbstate.loaded_mm_asid), + tlbstate_lam_cr3_mask()); + write_cr3(asi_cr3); + + maybe_flush_data(target); + /* + * It's fine to set the control taints late like this, since we haven't + * actually got to the untrusted code yet. Waiting until now to set the + * data taints is less obviously correct: we've mapped in the incoming + * domain's secrets now so we can't guarantee they haven't already got + * into a sidechannel. However, preemption is off so the only way we can + * reach another asi_enter() is in the return from an interrupt - in + * that case the reentrant asi_enter() call is entering the same domain + * that we're entering at the moment, it doesn't need to flush those + * secrets. + */ + this_cpu_or(asi_taints, taint_policies[target->class_id]->set); +} + +noinstr void asi_enter(struct asi *asi) +{ + VM_WARN_ON_ONCE(!asi); + + /* Should not have an asi_enter() without a prior asi_relax(). */ + VM_WARN_ON_ONCE(asi_get_target(current)); + + asi_set_target(current, asi); + barrier(); + + __asi_enter(); +} +EXPORT_SYMBOL_GPL(asi_enter); + +noinstr void asi_relax(void) +{ + barrier(); + asi_set_target(current, NULL); +} +EXPORT_SYMBOL_GPL(asi_relax); + +noinstr void asi_exit(void) +{ + u64 unrestricted_cr3; + struct asi *asi; + + preempt_disable_notrace(); + + VM_BUG_ON(this_cpu_read(cpu_tlbstate.loaded_mm) == + LOADED_MM_SWITCHING); + + asi = this_cpu_read(curr_asi); + if (asi) { + maybe_flush_control(NULL); + + unrestricted_cr3 = + build_cr3_noinstr(this_cpu_read(cpu_tlbstate.loaded_mm)->pgd, + this_cpu_read(cpu_tlbstate.loaded_mm_asid), + tlbstate_lam_cr3_mask()); + + /* Tainting first makes reentrancy easier to reason about. 
 */
+		this_cpu_or(asi_taints, ASI_TAINT_KERNEL_DATA);
+		write_cr3(unrestricted_cr3);
+		/*
+		 * Must not update curr_asi until after CR3 write, otherwise a
+		 * re-entrant call might not enter this branch. (This means we
+		 * might do unnecessary CR3 writes).
+		 */
+		this_cpu_write(curr_asi, NULL);
+	}
+
+	preempt_enable_notrace();
+}
+EXPORT_SYMBOL_GPL(asi_exit);
+
+void asi_init_mm_state(struct mm_struct *mm)
+{
+	memset(mm->asi, 0, sizeof(mm->asi));
+	mutex_init(&mm->asi_init_lock);
+}
+
+void asi_handle_switch_mm(void)
+{
+	/*
+	 * We can't handle context switching in the restricted address space yet
+	 * so this is pointless in practice (we asi_exit() in this path, which
+	 * doesn't care about the fine details of who exactly got at the branch
+	 * predictor), but just to illustrate how the tainting model is supposed
+	 * to work, here we squash the per-domain (guest/userspace) taints into
+	 * a general "other MM" taint. Other processes don't care if their peers
+	 * are attacking them from a guest or from bare metal.
+	 */
+	asi_taints_t taints = this_cpu_read(asi_taints);
+	asi_taints_t new_taints = 0;
+
+	if (taints & ASI_TAINTS_CONTROL_MASK)
+		new_taints |= ASI_TAINT_OTHER_MM_CONTROL;
+	if (taints & ASI_TAINTS_DATA_MASK)
+		new_taints |= ASI_TAINT_OTHER_MM_DATA;
+
+	/*
+	 * We can't race with asi_enter() or we'd clobber the taint it sets.
+	 * A race would be odd given that this function runs on the context
+	 * switch path, but just to be sure...
+	 */
+	lockdep_assert_preemption_disabled();
+
+	/*
+	 * Can't just use this_cpu_write() here as we could be racing with
+	 * asi_exit() (at least, in the future where this function is actually
+	 * necessary), and we mustn't clobber ASI_TAINT_KERNEL_DATA.
+	 */
+	this_cpu_or(asi_taints, new_taints);
+	this_cpu_and(asi_taints, ~(ASI_TAINTS_GUEST_MASK | ASI_TAINTS_USER_MASK));
+}
diff --git a/arch/x86/mm/init.c b/arch/x86/mm/init.c
index eb503f53c3195ca4f299593c0112dab0fb09e7dd..de4227ed5169ff84d0ce80b677caffc475198fa6 100644
--- a/arch/x86/mm/init.c
+++ b/arch/x86/mm/init.c
@@ -250,7 +250,8 @@ static void __init probe_page_size_mask(void)
 	/* By the default is everything supported: */
 	__default_kernel_pte_mask = __supported_pte_mask;
 	/* Except when with PTI where the kernel is mostly non-Global: */
-	if (cpu_feature_enabled(X86_FEATURE_PTI))
+	if (cpu_feature_enabled(X86_FEATURE_PTI) ||
+	    IS_ENABLED(CONFIG_MITIGATION_ADDRESS_SPACE_ISOLATION))
 		__default_kernel_pte_mask &= ~_PAGE_GLOBAL;
 
 	/* Enable 1 GB linear kernel mappings if available: */
diff --git a/arch/x86/mm/tlb.c b/arch/x86/mm/tlb.c
index f0428e5e1f1947903ee87c4c6444844ee11b45c3..7c2309996d1d5a7cac23bd122f7b56a869d67d6a 100644
--- a/arch/x86/mm/tlb.c
+++ b/arch/x86/mm/tlb.c
@@ -608,6 +608,7 @@ void switch_mm_irqs_off(struct mm_struct *unused, struct mm_struct *next,
 	 * Apply process to process speculation vulnerability
 	 * mitigations if applicable.
 	 */
+	asi_handle_switch_mm();
 	cond_mitigation(tsk);
 
 	/*
diff --git a/include/asm-generic/asi.h b/include/asm-generic/asi.h
index c4d9a5ff860a96428422a15000c622aeecc2d664..6b84202837605fa57e4a910318c8353b3f816f06 100644
--- a/include/asm-generic/asi.h
+++ b/include/asm-generic/asi.h
@@ -2,4 +2,71 @@
 #ifndef __ASM_GENERIC_ASI_H
 #define __ASM_GENERIC_ASI_H
 
+#include
+
+#ifndef __ASSEMBLY__
+
+/*
+ * An ASI class is a type of isolation that can be applied to a process. A
+ * process may have a domain for each class.
+ */
+enum asi_class_id {
+#if IS_ENABLED(CONFIG_KVM)
+	ASI_CLASS_KVM,
+#endif
+	ASI_MAX_NUM_CLASSES,
+};
+
+typedef u8 asi_taints_t;
+
+#ifndef CONFIG_MITIGATION_ADDRESS_SPACE_ISOLATION
+
+struct asi_hooks {};
+struct asi {};
+struct asi_taint_policy;
+
+static inline
+int asi_init_class(enum asi_class_id class_id,
+		   struct asi_taint_policy *taint_policy)
+{
+	return 0;
+}
+
+static inline void asi_uninit_class(enum asi_class_id class_id) { }
+
+struct mm_struct;
+static inline void asi_init_mm_state(struct mm_struct *mm) { }
+
+static inline int asi_init(struct mm_struct *mm, enum asi_class_id class_id,
+			   struct asi **out_asi)
+{
+	return 0;
+}
+
+static inline void asi_destroy(struct asi *asi) { }
+
+static inline void asi_enter(struct asi *asi) { }
+
+static inline void asi_relax(void) { }
+
+static inline bool asi_is_relaxed(void) { return true; }
+
+static inline bool asi_is_tense(void) { return false; }
+
+static inline void asi_exit(void) { }
+
+static inline bool asi_is_restricted(void) { return false; }
+
+static inline struct asi *asi_get_current(void) { return NULL; }
+
+struct task_struct;
+static inline struct asi *asi_get_target(struct task_struct *p) { return NULL; }
+
+static inline pgd_t *asi_pgd(struct asi *asi) { return NULL; }
+
+static inline void asi_handle_switch_mm(void) { }
+
+#endif /* !CONFIG_MITIGATION_ADDRESS_SPACE_ISOLATION */
+
+#endif /* !__ASSEMBLY__ */
+
 #endif
diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
index 6e3bdf8e38bcaee66a71f5566ac7debb94c0ee78..391e32a41ca3df84a619f3ee8ea45d3729a43023 100644
--- a/include/linux/mm_types.h
+++ b/include/linux/mm_types.h
@@ -19,8 +19,10 @@
 #include
 #include
 #include
+#include
 
 #include
+#include
 
 #ifndef AT_VECTOR_SIZE_ARCH
 #define AT_VECTOR_SIZE_ARCH 0
@@ -826,6 +828,11 @@ struct mm_struct {
 		atomic_t membarrier_state;
 #endif
 
+#ifdef CONFIG_MITIGATION_ADDRESS_SPACE_ISOLATION
+		struct asi asi[ASI_MAX_NUM_CLASSES];
+		struct mutex asi_init_lock;
+#endif
+
 		/**
 		 * @mm_users: The number of
users including userspace. * diff --git a/kernel/fork.c b/kernel/fork.c index 22f43721d031d48fd5be2606e86642334be9735f..bb73758790d08112265d398b16902ff9a4c2b8fe 100644 --- a/kernel/fork.c +++ b/kernel/fork.c @@ -112,6 +112,7 @@ #include #include #include +#include #include @@ -1296,6 +1297,8 @@ static struct mm_struct *mm_init(struct mm_struct *mm, struct task_struct *p, if (mm_alloc_pgd(mm)) goto fail_nopgd; + asi_init_mm_state(mm); + if (init_new_context(p, mm)) goto fail_nocontext; diff --git a/kernel/sched/core.c b/kernel/sched/core.c index a1c353a62c5684e3e773dd100afbddb818c480be..b1f7f73730c1e56f700cd3611a8093f177184842 100644 --- a/kernel/sched/core.c +++ b/kernel/sched/core.c @@ -78,6 +78,7 @@ #include #include #include +#include #define CREATE_TRACE_POINTS #include @@ -5272,6 +5273,14 @@ static __always_inline struct rq * context_switch(struct rq *rq, struct task_struct *prev, struct task_struct *next, struct rq_flags *rf) { + /* + * It's possible to avoid this by tweaking ASI's domain management code + * and updating code that modifies CR3 to be ASI-aware. Even without + * that, it's probably possible to get rid of this in certain cases just + * by fiddling with the context switch path itself. 
+ */ + asi_exit(); + prepare_task_switch(rq, prev, next); /* diff --git a/mm/init-mm.c b/mm/init-mm.c index 24c809379274503ac4f261fe7cfdbab3cb1ed1e7..e820e1c6edd48836a0ebe58e777046498d6a89ee 100644 --- a/mm/init-mm.c +++ b/mm/init-mm.c @@ -12,6 +12,7 @@ #include #include #include +#include #ifndef INIT_MM_CONTEXT #define INIT_MM_CONTEXT(name) @@ -44,6 +45,9 @@ struct mm_struct init_mm = { #endif .user_ns = &init_user_ns, .cpu_bitmap = CPU_BITS_NONE, +#ifdef CONFIG_MITIGATION_ADDRESS_SPACE_ISOLATION + INIT_MM_ASI(init_mm) +#endif INIT_MM_CONTEXT(init_mm) };
From patchwork Fri Jan 10 18:40:30 2025
X-Patchwork-Submitter: Brendan Jackman
X-Patchwork-Id: 13935568
Date: Fri, 10 Jan 2025 18:40:30 +0000
In-Reply-To: <20250110-asi-rfc-v2-v2-0-8419288bc805@google.com>
Mime-Version: 1.0
References: <20250110-asi-rfc-v2-v2-0-8419288bc805@google.com>
X-Mailer: b4 0.15-dev
Message-ID: <20250110-asi-rfc-v2-v2-4-8419288bc805@google.com>
Subject: [PATCH RFC v2 04/29] mm: asi: Add infrastructure for boot-time enablement
From: Brendan Jackman
To: Thomas Gleixner , Ingo Molnar , Borislav Petkov , Dave Hansen , "H.
Peter Anvin" , Andy Lutomirski , Peter Zijlstra , Richard Henderson , Matt Turner , Vineet Gupta , Russell King , Catalin Marinas , Will Deacon , Guo Ren , Brian Cain , Huacai Chen , WANG Xuerui , Geert Uytterhoeven , Michal Simek , Thomas Bogendoerfer , Dinh Nguyen , Jonas Bonn , Stefan Kristiansson , Stafford Horne , "James E.J. Bottomley" , Helge Deller , Michael Ellerman , Nicholas Piggin , Christophe Leroy , Naveen N Rao , Madhavan Srinivasan , Paul Walmsley , Palmer Dabbelt , Albert Ou , Heiko Carstens , Vasily Gorbik , Alexander Gordeev , Christian Borntraeger , Sven Schnelle , Yoshinori Sato , Rich Felker , John Paul Adrian Glaubitz , "David S. Miller" , Andreas Larsson , Richard Weinberger , Anton Ivanov , Johannes Berg , Chris Zankel , Max Filippov , Arnd Bergmann , Andrew Morton , Juri Lelli , Vincent Guittot , Dietmar Eggemann , Steven Rostedt , Ben Segall , Mel Gorman , Valentin Schneider , Uladzislau Rezki , Christoph Hellwig , Masami Hiramatsu , Mathieu Desnoyers , Mike Rapoport , Arnaldo Carvalho de Melo , Namhyung Kim , Mark Rutland , Alexander Shishkin , Jiri Olsa , Ian Rogers , Adrian Hunter , Dennis Zhou , Tejun Heo , Christoph Lameter , Sean Christopherson , Paolo Bonzini , Ard Biesheuvel , Josh Poimboeuf , Pawan Gupta Cc: x86@kernel.org, linux-kernel@vger.kernel.org, linux-alpha@vger.kernel.org, linux-snps-arc@lists.infradead.org, linux-arm-kernel@lists.infradead.org, linux-csky@vger.kernel.org, linux-hexagon@vger.kernel.org, loongarch@lists.linux.dev, linux-m68k@lists.linux-m68k.org, linux-mips@vger.kernel.org, linux-openrisc@vger.kernel.org, linux-parisc@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, linux-riscv@lists.infradead.org, linux-s390@vger.kernel.org, linux-sh@vger.kernel.org, sparclinux@vger.kernel.org, linux-um@lists.infradead.org, linux-arch@vger.kernel.org, linux-mm@kvack.org, linux-trace-kernel@vger.kernel.org, linux-perf-users@vger.kernel.org, kvm@vger.kernel.org, linux-efi@vger.kernel.org, Brendan Jackman , Junaid Shahid , 
Yosry Ahmed

Add a boot time parameter to control the newly added X86_FEATURE_ASI. "asi=on" or "asi=off" can be used in the kernel command line to enable or disable ASI at boot time. If not specified, ASI enablement depends on CONFIG_ADDRESS_SPACE_ISOLATION_DEFAULT_ON, which is off by default. asi_check_boottime_disable() is modeled after pti_check_boottime_disable(). The boot parameter is currently ignored until ASI is fully functional. Once we have a set of ASI features checked in that we have actually tested, we will stop ignoring the flag. But for now let's just add the infrastructure so we can implement the usage code. Ignoring checkpatch.pl CONFIG_DESCRIPTION because the _DEFAULT_ON Kconfig is trivial to explain.
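The enable/disable policy described above can be sketched as a small pure function. This is a hypothetical userspace model, not the kernel code: the real implementation uses cmdline_find_option() on boot_command_line and the CONFIG_ADDRESS_SPACE_ISOLATION_DEFAULT_ON Kconfig default, and the function name here is illustrative.

```c
#include <stdbool.h>
#include <string.h>

/*
 * Hypothetical model of the "asi=" boot parameter policy: the Kconfig
 * default wins unless the command line says otherwise. strstr() is a
 * simplification; the kernel's cmdline_find_option() also handles
 * quoting and word boundaries.
 */
static bool asi_enabled(const char *cmdline, bool default_on)
{
	const char *p = cmdline ? strstr(cmdline, "asi=") : NULL;

	if (!p)
		return default_on;	/* not specified: use the Kconfig default */
	p += strlen("asi=");
	if (!strncmp(p, "off", 3))
		return false;
	if (!strncmp(p, "on", 2))
		return true;
	return default_on;		/* unrecognized value: keep the default */
}
```

Note that in this RFC the computed result is then deliberately discarded (hence the "enablement ignored" pr_info() in asi_check_boottime_disable()) until the implementation is complete.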
Checkpatch-args: --ignore CONFIG_DESCRIPTION Co-developed-by: Junaid Shahid Signed-off-by: Junaid Shahid Co-developed-by: Yosry Ahmed Signed-off-by: Yosry Ahmed Signed-off-by: Brendan Jackman --- arch/x86/Kconfig | 9 +++++ arch/x86/include/asm/asi.h | 19 ++++++++-- arch/x86/include/asm/cpufeatures.h | 1 + arch/x86/include/asm/disabled-features.h | 8 ++++- arch/x86/mm/asi.c | 61 +++++++++++++++++++++++++++----- arch/x86/mm/init.c | 4 ++- include/asm-generic/asi.h | 4 +++ 7 files changed, 92 insertions(+), 14 deletions(-) diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig index 5a50582eb210e9d1309856a737d32b76fa1bfc85..1fcb52cb8cd5084ac3cef04af61b7d1653162bdb 100644 --- a/arch/x86/Kconfig +++ b/arch/x86/Kconfig @@ -2533,6 +2533,15 @@ config MITIGATION_ADDRESS_SPACE_ISOLATION there are likely to be unhandled cases, in particular concerning TLB flushes. + +config ADDRESS_SPACE_ISOLATION_DEFAULT_ON + bool "Enable address space isolation by default" + default n + depends on MITIGATION_ADDRESS_SPACE_ISOLATION + help + If selected, ASI is enabled by default at boot if the asi=on or + asi=off are not specified. + config MITIGATION_RETPOLINE bool "Avoid speculative indirect branches in kernel" select OBJTOOL if HAVE_OBJTOOL diff --git a/arch/x86/include/asm/asi.h b/arch/x86/include/asm/asi.h index 7cc635b6653a3970ba9dbfdc9c828a470e27bd44..b9671ef2dd3278adceed18507fd260e21954d574 100644 --- a/arch/x86/include/asm/asi.h +++ b/arch/x86/include/asm/asi.h @@ -8,6 +8,7 @@ #include #include +#include #include #ifdef CONFIG_MITIGATION_ADDRESS_SPACE_ISOLATION @@ -66,6 +67,8 @@ * the N ASI classes. */ +#define static_asi_enabled() cpu_feature_enabled(X86_FEATURE_ASI) + /* * ASI uses a per-CPU tainting model to track what mitigation actions are * required on domain transitions. 
Taints exist along two dimensions: @@ -131,6 +134,8 @@ struct asi { DECLARE_PER_CPU_ALIGNED(struct asi *, curr_asi); +void asi_check_boottime_disable(void); + void asi_init_mm_state(struct mm_struct *mm); int asi_init_class(enum asi_class_id class_id, struct asi_taint_policy *taint_policy); @@ -155,7 +160,9 @@ void asi_exit(void); /* The target is the domain we'll enter when returning to process context. */ static __always_inline struct asi *asi_get_target(struct task_struct *p) { - return p->thread.asi_state.target; + return static_asi_enabled() + ? p->thread.asi_state.target + : NULL; } static __always_inline void asi_set_target(struct task_struct *p, @@ -166,7 +173,9 @@ static __always_inline void asi_set_target(struct task_struct *p, static __always_inline struct asi *asi_get_current(void) { - return this_cpu_read(curr_asi); + return static_asi_enabled() + ? this_cpu_read(curr_asi) + : NULL; } /* Are we currently in a restricted address space? */ @@ -175,7 +184,11 @@ static __always_inline bool asi_is_restricted(void) return (bool)asi_get_current(); } -/* If we exit/have exited, can we stay that way until the next asi_enter? */ +/* + * If we exit/have exited, can we stay that way until the next asi_enter? + * + * When ASI is disabled, this returns true. 
+ */ static __always_inline bool asi_is_relaxed(void) { return !asi_get_target(current); diff --git a/arch/x86/include/asm/cpufeatures.h b/arch/x86/include/asm/cpufeatures.h index 913fd3a7bac6506141de65f33b9ee61c615c7d7d..d6a808d10c3b8900d190ea01c66fc248863f05e2 100644 --- a/arch/x86/include/asm/cpufeatures.h +++ b/arch/x86/include/asm/cpufeatures.h @@ -474,6 +474,7 @@ #define X86_FEATURE_CLEAR_BHB_HW (21*32+ 3) /* BHI_DIS_S HW control enabled */ #define X86_FEATURE_CLEAR_BHB_LOOP_ON_VMEXIT (21*32+ 4) /* Clear branch history at vmexit using SW loop */ #define X86_FEATURE_FAST_CPPC (21*32 + 5) /* AMD Fast CPPC */ +#define X86_FEATURE_ASI (21*32+6) /* Kernel Address Space Isolation */ /* * BUG word(s) diff --git a/arch/x86/include/asm/disabled-features.h b/arch/x86/include/asm/disabled-features.h index c492bdc97b0595ec77f89dc9b0cefe5e3e64be41..c7964ed4fef8b9441e1c0453da587787d8008d9d 100644 --- a/arch/x86/include/asm/disabled-features.h +++ b/arch/x86/include/asm/disabled-features.h @@ -50,6 +50,12 @@ # define DISABLE_PTI (1 << (X86_FEATURE_PTI & 31)) #endif +#ifdef CONFIG_MITIGATION_ADDRESS_SPACE_ISOLATION +# define DISABLE_ASI 0 +#else +# define DISABLE_ASI (1 << (X86_FEATURE_ASI & 31)) +#endif + #ifdef CONFIG_MITIGATION_RETPOLINE # define DISABLE_RETPOLINE 0 #else @@ -154,7 +160,7 @@ #define DISABLED_MASK17 0 #define DISABLED_MASK18 (DISABLE_IBT) #define DISABLED_MASK19 (DISABLE_SEV_SNP) -#define DISABLED_MASK20 0 +#define DISABLED_MASK20 (DISABLE_ASI) #define DISABLED_MASK21 0 #define DISABLED_MASK_CHECK BUILD_BUG_ON_ZERO(NCAPINTS != 22) diff --git a/arch/x86/mm/asi.c b/arch/x86/mm/asi.c index 105cd8b43eaf5c20acc80d4916b761559fb95d74..5baf563a078f5b3a6cd4b9f5e92baaf81b0774c4 100644 --- a/arch/x86/mm/asi.c +++ b/arch/x86/mm/asi.c @@ -4,6 +4,7 @@ #include #include +#include #include #include #include @@ -29,6 +30,9 @@ static inline bool asi_class_id_valid(enum asi_class_id class_id) static inline bool asi_class_initialized(enum asi_class_id class_id) { + if 
(!boot_cpu_has(X86_FEATURE_ASI)) + return 0; + if (WARN_ON(!asi_class_id_valid(class_id))) return false; @@ -51,6 +55,9 @@ EXPORT_SYMBOL_GPL(asi_init_class); void asi_uninit_class(enum asi_class_id class_id) { + if (!boot_cpu_has(X86_FEATURE_ASI)) + return; + if (!asi_class_initialized(class_id)) return; @@ -66,10 +73,36 @@ const char *asi_class_name(enum asi_class_id class_id) return asi_class_names[class_id]; } +void __init asi_check_boottime_disable(void) +{ + bool enabled = IS_ENABLED(CONFIG_MITIGATION_ADDRESS_SPACE_ISOLATION_DEFAULT_ON); + char arg[4]; + int ret; + + ret = cmdline_find_option(boot_command_line, "asi", arg, sizeof(arg)); + if (ret == 3 && !strncmp(arg, "off", 3)) { + enabled = false; + pr_info("ASI disabled through kernel command line.\n"); + } else if (ret == 2 && !strncmp(arg, "on", 2)) { + enabled = true; + pr_info("Ignoring asi=on param while ASI implementation is incomplete.\n"); + } else { + pr_info("ASI %s by default.\n", + enabled ? "enabled" : "disabled"); + } + + if (enabled) + pr_info("ASI enablement ignored due to incomplete implementation.\n"); +} + static void __asi_destroy(struct asi *asi) { - lockdep_assert_held(&asi->mm->asi_init_lock); + WARN_ON_ONCE(asi->ref_count <= 0); + if (--(asi->ref_count) > 0) + return; + free_pages((ulong)asi->pgd, PGD_ALLOCATION_ORDER); + memset(asi, 0, sizeof(struct asi)); } int asi_init(struct mm_struct *mm, enum asi_class_id class_id, struct asi **out_asi) @@ -79,6 +112,9 @@ int asi_init(struct mm_struct *mm, enum asi_class_id class_id, struct asi **out_ *out_asi = NULL; + if (!boot_cpu_has(X86_FEATURE_ASI)) + return 0; + if (WARN_ON(!asi_class_initialized(class_id))) return -EINVAL; @@ -122,7 +158,7 @@ void asi_destroy(struct asi *asi) { struct mm_struct *mm; - if (!asi) + if (!boot_cpu_has(X86_FEATURE_ASI) || !asi) return; if (WARN_ON(!asi_class_initialized(asi->class_id))) @@ -134,11 +170,7 @@ void asi_destroy(struct asi *asi) * to block concurrent asi_init calls. 
*/ mutex_lock(&mm->asi_init_lock); - WARN_ON_ONCE(asi->ref_count <= 0); - if (--(asi->ref_count) == 0) { - free_pages((ulong)asi->pgd, PGD_ALLOCATION_ORDER); - memset(asi, 0, sizeof(struct asi)); - } + __asi_destroy(asi); mutex_unlock(&mm->asi_init_lock); } EXPORT_SYMBOL_GPL(asi_destroy); @@ -255,6 +287,9 @@ static noinstr void __asi_enter(void) noinstr void asi_enter(struct asi *asi) { + if (!static_asi_enabled()) + return; + VM_WARN_ON_ONCE(!asi); /* Should not have an asi_enter() without a prior asi_relax(). */ @@ -269,8 +304,10 @@ EXPORT_SYMBOL_GPL(asi_enter); noinstr void asi_relax(void) { - barrier(); - asi_set_target(current, NULL); + if (static_asi_enabled()) { + barrier(); + asi_set_target(current, NULL); + } } EXPORT_SYMBOL_GPL(asi_relax); @@ -279,6 +316,9 @@ noinstr void asi_exit(void) u64 unrestricted_cr3; struct asi *asi; + if (!static_asi_enabled()) + return; + preempt_disable_notrace(); VM_BUG_ON(this_cpu_read(cpu_tlbstate.loaded_mm) == @@ -310,6 +350,9 @@ EXPORT_SYMBOL_GPL(asi_exit); void asi_init_mm_state(struct mm_struct *mm) { + if (!boot_cpu_has(X86_FEATURE_ASI)) + return; + memset(mm->asi, 0, sizeof(mm->asi)); mutex_init(&mm->asi_init_lock); } diff --git a/arch/x86/mm/init.c b/arch/x86/mm/init.c index de4227ed5169ff84d0ce80b677caffc475198fa6..ded3a47f2a9c1f554824d4ad19f3b48bce271274 100644 --- a/arch/x86/mm/init.c +++ b/arch/x86/mm/init.c @@ -28,6 +28,7 @@ #include #include #include +#include /* * We need to define the tracepoints somewhere, and tlb.c @@ -251,7 +252,7 @@ static void __init probe_page_size_mask(void) __default_kernel_pte_mask = __supported_pte_mask; /* Except when with PTI where the kernel is mostly non-Global: */ if (cpu_feature_enabled(X86_FEATURE_PTI) || - IS_ENABLED(CONFIG_MITIGATION_ADDRESS_SPACE_ISOLATION)) + cpu_feature_enabled(X86_FEATURE_ASI)) __default_kernel_pte_mask &= ~_PAGE_GLOBAL; /* Enable 1 GB linear kernel mappings if available: */ @@ -754,6 +755,7 @@ void __init init_mem_mapping(void) unsigned long end; 
pti_check_boottime_disable(); + asi_check_boottime_disable(); probe_page_size_mask(); setup_pcid(); diff --git a/include/asm-generic/asi.h b/include/asm-generic/asi.h index 6b84202837605fa57e4a910318c8353b3f816f06..eedc961ee916a9e1da631ca489ea4a7bc9e6089f 100644 --- a/include/asm-generic/asi.h +++ b/include/asm-generic/asi.h @@ -65,6 +65,10 @@ static inline pgd_t *asi_pgd(struct asi *asi) { return NULL; } static inline void asi_handle_switch_mm(void) { } +#define static_asi_enabled() false + +static inline void asi_check_boottime_disable(void) { } + #endif /* !CONFIG_MITIGATION_ADDRESS_SPACE_ISOLATION */ #endif /* !_ASSEMBLY_ */
From patchwork Fri Jan 10 18:40:31 2025
X-Patchwork-Submitter: Brendan Jackman
X-Patchwork-Id: 13935565
Date: Fri, 10 Jan 2025 18:40:31 +0000
In-Reply-To: <20250110-asi-rfc-v2-v2-0-8419288bc805@google.com>
Mime-Version: 1.0
References: <20250110-asi-rfc-v2-v2-0-8419288bc805@google.com>
X-Mailer: b4 0.15-dev
Message-ID: <20250110-asi-rfc-v2-v2-5-8419288bc805@google.com>
Subject: [PATCH RFC v2 05/29] mm: asi: ASI support in interrupts/exceptions
From: Brendan Jackman
Add support for potentially switching address spaces from within interrupts/exceptions/NMIs etc. An interrupt does not automatically switch to the unrestricted address space. It can switch if needed to access some memory not available in the restricted address space, using the normal asi_exit call. On return from the outermost interrupt, if the target address space was the restricted address space (e.g. we were in the critical code path between ASI Enter and VM Enter), the restricted address space will be automatically restored. Otherwise, execution will continue in the unrestricted address space until the next explicit ASI Enter. In order to keep track of when to restore the restricted address space, an interrupt/exception nesting depth counter is maintained per-task. An alternative implementation without needing this counter is also possible, but the counter unlocks an additional nice-to-have benefit by allowing detection of whether or not we are currently executing inside an exception context, which would be useful in a later patch. Note that for KVM on SVM, this is not actually necessary as NMIs are in fact maskable via CLGI. It's not clear to me if VMX has something equivalent but we will need this infrastructure in place for userspace support anyway. RFC: Once userspace ASI is implemented, this idtentry integration looks a bit heavy-handed. For example, we don't need this logic for INT 80 emulation, so having it in DEFINE_IDTENTRY_RAW is confusing.
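The nesting-depth bookkeeping described above can be modeled with ordinary integers. This is an illustrative single-threaded sketch only, with made-up names: the real code lives in asi_intr_enter()/asi_intr_exit() in the patch below, tracks state per-task via current->thread.asi_state, and needs compiler barriers that this model does not.

```c
#include <stdbool.h>

/* Illustrative model of one task's ASI state (names are not the kernel's). */
struct asi_model {
	int intr_nest_depth;	/* interrupts taken inside the critical section */
	bool target_restricted;	/* between asi_enter() and asi_relax()? */
	bool in_restricted;	/* which address space are we running in? */
};

static void model_intr_enter(struct asi_model *t)
{
	if (t->target_restricted)	/* only "tense" tasks track nesting */
		t->intr_nest_depth++;
}

/* A handler that needs unmapped memory does this (i.e. asi_exit()). */
static void model_asi_exit(struct asi_model *t)
{
	t->in_restricted = false;
}

static void model_intr_exit(struct asi_model *t)
{
	/* Only the *outermost* interrupt return re-enters the target domain. */
	if (t->target_restricted && --t->intr_nest_depth == 0)
		t->in_restricted = true;	/* i.e. __asi_enter() */
}
```

Walking the model through a nested interrupt shows the key property: an asi_exit() in a nested handler stays in effect until the outermost return, at which point the restricted address space is restored automatically.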
It could lead to a bug if the order of interrupter counter modifications and ASI transition logic gets flipped around somehow. checkpatch.pl SPACING is false positive. AVOID_BUG ignored for RFC. Checkpatch-args: --ignore=SPACING,AVOID_BUG Signed-off-by: Junaid Shahid Signed-off-by: Brendan Jackman --- arch/x86/include/asm/asi.h | 68 ++++++++++++++++++++++++++++++++++++++-- arch/x86/include/asm/idtentry.h | 50 ++++++++++++++++++++++++----- arch/x86/include/asm/processor.h | 5 +++ arch/x86/kernel/process.c | 2 ++ arch/x86/kernel/traps.c | 22 +++++++++++++ arch/x86/mm/asi.c | 7 ++++- include/asm-generic/asi.h | 10 ++++++ 7 files changed, 153 insertions(+), 11 deletions(-) diff --git a/arch/x86/include/asm/asi.h b/arch/x86/include/asm/asi.h index b9671ef2dd3278adceed18507fd260e21954d574..9a9a139518289fc65f26a4d1cd311aa52cc5357f 100644 --- a/arch/x86/include/asm/asi.h +++ b/arch/x86/include/asm/asi.h @@ -157,6 +157,11 @@ void asi_relax(void); /* Immediately exit the restricted address space if in it */ void asi_exit(void); +static inline void asi_init_thread_state(struct thread_struct *thread) +{ + thread->asi_state.intr_nest_depth = 0; +} + /* The target is the domain we'll enter when returning to process context. */ static __always_inline struct asi *asi_get_target(struct task_struct *p) { @@ -197,9 +202,10 @@ static __always_inline bool asi_is_relaxed(void) /* * Is the current task in the critical section? * - * This is just the inverse of !asi_is_relaxed(). We have both functions in order to - * help write intuitive client code. In particular, asi_is_tense returns false - * when ASI is disabled, which is judged to make user code more obvious. + * This is just the inverse of !asi_is_relaxed(). We have both functions in + * order to help write intuitive client code. In particular, asi_is_tense + * returns false when ASI is disabled, which is judged to make user code more + * obvious. 
*/ static __always_inline bool asi_is_tense(void) { @@ -211,6 +217,62 @@ static __always_inline pgd_t *asi_pgd(struct asi *asi) return asi ? asi->pgd : NULL; } +static __always_inline void asi_intr_enter(void) +{ + if (static_asi_enabled() && asi_is_tense()) { + current->thread.asi_state.intr_nest_depth++; + barrier(); + } +} + +void __asi_enter(void); + +static __always_inline void asi_intr_exit(void) +{ + if (static_asi_enabled() && asi_is_tense()) { + /* + * If an access to sensitive memory got reordered after the + * decrement, the #PF handler for that access would see a value + * of 0 for the counter and re-__asi_enter before returning to + * the faulting access, triggering an infinite PF loop. + */ + barrier(); + + if (--current->thread.asi_state.intr_nest_depth == 0) { + /* + * If the decrement got reordered after __asi_enter, an + * interrupt that came between __asi_enter and the + * decrement would always see a nonzero value for the + * counter so it wouldn't call __asi_enter again and we + * would return to process context in the wrong address + * space. + */ + barrier(); + __asi_enter(); + } + } +} + +/* + * Returns the nesting depth of interrupts/exceptions that have interrupted the + * ongoing critical section. If the current task is not in a critical section + * this is 0. + */ +static __always_inline int asi_intr_nest_depth(void) +{ + return current->thread.asi_state.intr_nest_depth; +} + +/* + * Remember that interrupts/exception don't count as the critical section. If + * you want to know if the current task is in the critical section use + * asi_is_tense(). 
+ */
+static __always_inline bool asi_in_critical_section(void)
+{
+	return asi_is_tense() && !asi_intr_nest_depth();
+}
+
 #define INIT_MM_ASI(init_mm) \
 	.asi_init_lock = __MUTEX_INITIALIZER(init_mm.asi_init_lock),
 
diff --git a/arch/x86/include/asm/idtentry.h b/arch/x86/include/asm/idtentry.h
index ad5c68f0509d4dfd0834303c0f9dabc93ef73aa4..9e00da0a3b08f83ca5e603dc2abbfd5fa3059811 100644
--- a/arch/x86/include/asm/idtentry.h
+++ b/arch/x86/include/asm/idtentry.h
@@ -12,6 +12,7 @@
 #include
 #include
+#include
 
 typedef void (*idtentry_t)(struct pt_regs *regs);
 
@@ -55,12 +56,15 @@ static __always_inline void __##func(struct pt_regs *regs); \
 \
 __visible noinstr void func(struct pt_regs *regs) \
 { \
-	irqentry_state_t state = irqentry_enter(regs); \
+	irqentry_state_t state; \
 \
+	asi_intr_enter(); \
+	state = irqentry_enter(regs); \
 	instrumentation_begin(); \
 	__##func (regs); \
 	instrumentation_end(); \
 	irqentry_exit(regs, state); \
+	asi_intr_exit(); \
 } \
 \
 static __always_inline void __##func(struct pt_regs *regs)
 
@@ -102,12 +106,15 @@ static __always_inline void __##func(struct pt_regs *regs, \
 __visible noinstr void func(struct pt_regs *regs, \
 			    unsigned long error_code) \
 { \
-	irqentry_state_t state = irqentry_enter(regs); \
+	irqentry_state_t state; \
 \
+	asi_intr_enter(); \
+	state = irqentry_enter(regs); \
 	instrumentation_begin(); \
 	__##func (regs, error_code); \
 	instrumentation_end(); \
 	irqentry_exit(regs, state); \
+	asi_intr_exit(); \
 } \
 \
 static __always_inline void __##func(struct pt_regs *regs, \
 
@@ -139,7 +146,16 @@ static __always_inline void __##func(struct pt_regs *regs, \
  * is required before the enter/exit() helpers are invoked.
  */
 #define DEFINE_IDTENTRY_RAW(func) \
-__visible noinstr void func(struct pt_regs *regs)
+static __always_inline void __##func(struct pt_regs *regs); \
+ \
+__visible noinstr void func(struct pt_regs *regs) \
+{ \
+	asi_intr_enter(); \
+	__##func (regs); \
+	asi_intr_exit(); \
+} \
+ \
+static __always_inline void __##func(struct pt_regs *regs)
 
 /**
  * DEFINE_FREDENTRY_RAW - Emit code for raw FRED entry points
@@ -178,7 +194,18 @@ noinstr void fred_##func(struct pt_regs *regs)
  * is required before the enter/exit() helpers are invoked.
  */
 #define DEFINE_IDTENTRY_RAW_ERRORCODE(func) \
-__visible noinstr void func(struct pt_regs *regs, unsigned long error_code)
+static __always_inline void __##func(struct pt_regs *regs, \
+				     unsigned long error_code); \
+ \
+__visible noinstr void func(struct pt_regs *regs, unsigned long error_code)\
+{ \
+	asi_intr_enter(); \
+	__##func (regs, error_code); \
+	asi_intr_exit(); \
+} \
+ \
+static __always_inline void __##func(struct pt_regs *regs, \
+				     unsigned long error_code)
 
 /**
  * DECLARE_IDTENTRY_IRQ - Declare functions for device interrupt IDT entry
@@ -209,14 +236,17 @@ static void __##func(struct pt_regs *regs, u32 vector); \
 __visible noinstr void func(struct pt_regs *regs, \
 			    unsigned long error_code) \
 { \
-	irqentry_state_t state = irqentry_enter(regs); \
+	irqentry_state_t state; \
 	u32 vector = (u32)(u8)error_code; \
 \
+	asi_intr_enter(); \
+	state = irqentry_enter(regs); \
 	kvm_set_cpu_l1tf_flush_l1d(); \
 	instrumentation_begin(); \
 	run_irq_on_irqstack_cond(__##func, regs, vector); \
 	instrumentation_end(); \
 	irqentry_exit(regs, state); \
+	asi_intr_exit(); \
 } \
 \
 static noinline void __##func(struct pt_regs *regs, u32 vector)
 
@@ -255,13 +285,16 @@ static __always_inline void instr_##func(struct pt_regs *regs) \
 \
 __visible noinstr void func(struct pt_regs *regs) \
 { \
-	irqentry_state_t state = irqentry_enter(regs); \
+	irqentry_state_t state; \
 \
+	asi_intr_enter(); \
+	state = irqentry_enter(regs); \
 	kvm_set_cpu_l1tf_flush_l1d(); \
 	instrumentation_begin(); \
 	instr_##func (regs); \
 	instrumentation_end(); \
 	irqentry_exit(regs, state); \
+	asi_intr_exit(); \
 } \
 \
 void fred_##func(struct pt_regs *regs) \
 
@@ -294,13 +327,16 @@ static __always_inline void instr_##func(struct pt_regs *regs) \
 \
 __visible noinstr void func(struct pt_regs *regs) \
 { \
-	irqentry_state_t state = irqentry_enter(regs); \
+	irqentry_state_t state; \
 \
+	asi_intr_enter(); \
+	state = irqentry_enter(regs); \
 	kvm_set_cpu_l1tf_flush_l1d(); \
 	instrumentation_begin(); \
 	instr_##func (regs); \
 	instrumentation_end(); \
 	irqentry_exit(regs, state); \
+	asi_intr_exit(); \
 } \
 \
 void fred_##func(struct pt_regs *regs) \
 
diff --git a/arch/x86/include/asm/processor.h b/arch/x86/include/asm/processor.h
index f02220e6b4df911d87e2fee4b497eade61a27161..a32a53405f45e4c0473fe081e216029cf5bd0cdd 100644
--- a/arch/x86/include/asm/processor.h
+++ b/arch/x86/include/asm/processor.h
@@ -508,6 +508,11 @@ struct thread_struct {
 	struct {
 		/* Domain to enter when returning to process context.
		 */
 		struct asi *target;
+		/*
+		 * The depth of interrupts/exceptions interrupting an ASI
+		 * critical section
+		 */
+		int intr_nest_depth;
 	} asi_state;
 #endif
 
diff --git a/arch/x86/kernel/process.c b/arch/x86/kernel/process.c
index f63f8fd00a91f3d1171f307b92179556ba2d716d..44abc161820153b7f68664b97267658b8e011101 100644
--- a/arch/x86/kernel/process.c
+++ b/arch/x86/kernel/process.c
@@ -96,6 +96,8 @@ int arch_dup_task_struct(struct task_struct *dst, struct task_struct *src)
 #ifdef CONFIG_VM86
 	dst->thread.vm86 = NULL;
 #endif
+	asi_init_thread_state(&dst->thread);
+
 	/* Drop the copied pointer to current's fpstate */
 	dst->thread.fpu.fpstate = NULL;
 
diff --git a/arch/x86/kernel/traps.c b/arch/x86/kernel/traps.c
index 2dbadf347b5f4f66625c4f49b76c41b412270d57..beea861da8d3e9a4e2afb3a92ed5f66f11d67bd6 100644
--- a/arch/x86/kernel/traps.c
+++ b/arch/x86/kernel/traps.c
@@ -65,6 +65,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
 #include
@@ -463,6 +464,27 @@ DEFINE_IDTENTRY_DF(exc_double_fault)
 }
 #endif
 
+	/*
+	 * Do an asi_exit() only here because a #DF usually indicates
+	 * the system is in a really bad state, and we don't want to
+	 * cause any additional issue that would prevent us from
+	 * printing a correct stack trace.
+	 *
+	 * The additional issues are not related to a possible triple
+	 * fault, which can only occur if a fault is encountered while
+	 * invoking this handler, but here we are already executing it.
+	 * Instead, an ASI-induced #PF here could potentially end up
+	 * getting another #DF, for example if there was some issue in
+	 * invoking the #PF handler. The handler for the second #DF
+	 * could then again cause an ASI-induced #PF leading back to the
+	 * same recursion.
+	 *
+	 * This is not needed in the espfix64 case above, since that
+	 * code is about turning a #DF into a #GP which is okay to
+	 * handle in the restricted domain. That's also why we don't
+	 * asi_exit() in the #GP handler.
+	 */
+	asi_exit();
 
 	irqentry_nmi_enter(regs);
 	instrumentation_begin();
 	notify_die(DIE_TRAP, str, regs, error_code, X86_TRAP_DF, SIGSEGV);
 
diff --git a/arch/x86/mm/asi.c b/arch/x86/mm/asi.c
index 5baf563a078f5b3a6cd4b9f5e92baaf81b0774c4..054315d566c082c0925a00ce3a0877624c8b9957 100644
--- a/arch/x86/mm/asi.c
+++ b/arch/x86/mm/asi.c
@@ -235,7 +235,7 @@ static __always_inline void maybe_flush_data(struct asi *next_asi)
 	this_cpu_and(asi_taints, ~ASI_TAINTS_DATA_MASK);
 }
 
-static noinstr void __asi_enter(void)
+noinstr void __asi_enter(void)
 {
 	u64 asi_cr3;
 	struct asi *target = asi_get_target(current);
@@ -250,6 +250,7 @@ static noinstr void __asi_enter(void)
 	 * disabling preemption should be fine.
 	 */
 	VM_BUG_ON(preemptible());
+	VM_BUG_ON(current->thread.asi_state.intr_nest_depth != 0);
 
 	if (!target || target == this_cpu_read(curr_asi))
 		return;
@@ -290,6 +291,7 @@ noinstr void asi_enter(struct asi *asi)
 	if (!static_asi_enabled())
 		return;
 
+	VM_WARN_ON_ONCE(asi_intr_nest_depth());
 	VM_WARN_ON_ONCE(!asi);
 
 	/* Should not have an asi_enter() without a prior asi_relax().
	 */
@@ -305,6 +307,7 @@
 EXPORT_SYMBOL_GPL(asi_enter);
 
 noinstr void asi_relax(void)
 {
 	if (static_asi_enabled()) {
+		VM_WARN_ON_ONCE(asi_intr_nest_depth());
 		barrier();
 		asi_set_target(current, NULL);
 	}
@@ -326,6 +329,8 @@ noinstr void asi_exit(void)
 	asi = this_cpu_read(curr_asi);
 	if (asi) {
+		WARN_ON_ONCE(asi_in_critical_section());
+
 		maybe_flush_control(NULL);
 
 		unrestricted_cr3 =
 
diff --git a/include/asm-generic/asi.h b/include/asm-generic/asi.h
index eedc961ee916a9e1da631ca489ea4a7bc9e6089f..7f542c59c2b8a2b74432e4edb7199f9171db8a84 100644
--- a/include/asm-generic/asi.h
+++ b/include/asm-generic/asi.h
@@ -52,6 +52,8 @@ static inline bool asi_is_relaxed(void) { return true; }
 
 static inline bool asi_is_tense(void) { return false; }
 
+static inline bool asi_in_critical_section(void) { return false; }
+
 static inline void asi_exit(void) { }
 
 static inline bool asi_is_restricted(void) { return false; }
@@ -65,6 +67,14 @@ static inline pgd_t *asi_pgd(struct asi *asi) { return NULL; }
 
 static inline void asi_handle_switch_mm(void) { }
 
+static inline void asi_init_thread_state(struct thread_struct *thread) { }
+
+static inline void asi_intr_enter(void) { }
+
+static inline int asi_intr_nest_depth(void) { return 0; }
+
+static inline void asi_intr_exit(void) { }
+
 #define static_asi_enabled() false
 
 static inline void asi_check_boottime_disable(void) { }

From patchwork Fri Jan 10 18:40:32 2025
X-Patchwork-Submitter: Brendan Jackman
X-Patchwork-Id: 13935566
Date: Fri, 10 Jan 2025 18:40:32 +0000
In-Reply-To: <20250110-asi-rfc-v2-v2-0-8419288bc805@google.com>
Mime-Version: 1.0
References: <20250110-asi-rfc-v2-v2-0-8419288bc805@google.com>
X-Mailer: b4 0.15-dev
Message-ID: <20250110-asi-rfc-v2-v2-6-8419288bc805@google.com>
Subject: [PATCH RFC v2 06/29] mm: asi: Use separate PCIDs for restricted address spaces
From: Brendan Jackman
To: Thomas Gleixner , Ingo Molnar , Borislav Petkov , Dave Hansen , "H. Peter Anvin" , Andy Lutomirski , Peter Zijlstra , Richard Henderson , Matt Turner , Vineet Gupta , Russell King , Catalin Marinas , Will Deacon , Guo Ren , Brian Cain , Huacai Chen , WANG Xuerui , Geert Uytterhoeven , Michal Simek , Thomas Bogendoerfer , Dinh Nguyen , Jonas Bonn , Stefan Kristiansson , Stafford Horne , "James E.J. Bottomley" , Helge Deller , Michael Ellerman , Nicholas Piggin , Christophe Leroy , Naveen N Rao , Madhavan Srinivasan , Paul Walmsley , Palmer Dabbelt , Albert Ou , Heiko Carstens , Vasily Gorbik , Alexander Gordeev , Christian Borntraeger , Sven Schnelle , Yoshinori Sato , Rich Felker , John Paul Adrian Glaubitz , "David S.
Miller" , Andreas Larsson , Richard Weinberger , Anton Ivanov , Johannes Berg , Chris Zankel , Max Filippov , Arnd Bergmann , Andrew Morton , Juri Lelli , Vincent Guittot , Dietmar Eggemann , Steven Rostedt , Ben Segall , Mel Gorman , Valentin Schneider , Uladzislau Rezki , Christoph Hellwig , Masami Hiramatsu , Mathieu Desnoyers , Mike Rapoport , Arnaldo Carvalho de Melo , Namhyung Kim , Mark Rutland , Alexander Shishkin , Jiri Olsa , Ian Rogers , Adrian Hunter , Dennis Zhou , Tejun Heo , Christoph Lameter , Sean Christopherson , Paolo Bonzini , Ard Biesheuvel , Josh Poimboeuf , Pawan Gupta
Cc: x86@kernel.org, linux-kernel@vger.kernel.org, linux-alpha@vger.kernel.org, linux-snps-arc@lists.infradead.org, linux-arm-kernel@lists.infradead.org, linux-csky@vger.kernel.org, linux-hexagon@vger.kernel.org, loongarch@lists.linux.dev, linux-m68k@lists.linux-m68k.org, linux-mips@vger.kernel.org, linux-openrisc@vger.kernel.org, linux-parisc@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, linux-riscv@lists.infradead.org, linux-s390@vger.kernel.org, linux-sh@vger.kernel.org, sparclinux@vger.kernel.org, linux-um@lists.infradead.org, linux-arch@vger.kernel.org, linux-mm@kvack.org, linux-trace-kernel@vger.kernel.org, linux-perf-users@vger.kernel.org, kvm@vger.kernel.org, linux-efi@vger.kernel.org, Brendan Jackman , Yosry Ahmed , Junaid Shahid

From: Yosry Ahmed

Each restricted address space is assigned a separate PCID.
Since currently only one ASI instance per-class exists for a given process, the PCID is just derived from the class index. This commit only sets the appropriate PCID when switching CR3, but does not actually use the NOFLUSH bit. That will be done by later patches. Co-developed-by: Junaid Shahid Signed-off-by: Junaid Shahid Signed-off-by: Yosry Ahmed Signed-off-by: Brendan Jackman --- arch/x86/include/asm/asi.h | 4 +-- arch/x86/include/asm/processor-flags.h | 24 +++++++++++++++++ arch/x86/include/asm/tlbflush.h | 3 +++ arch/x86/mm/asi.c | 10 +++---- arch/x86/mm/tlb.c | 49 +++++++++++++++++++++++++++++++--- include/asm-generic/asi.h | 2 ++ 6 files changed, 81 insertions(+), 11 deletions(-) diff --git a/arch/x86/include/asm/asi.h b/arch/x86/include/asm/asi.h index 9a9a139518289fc65f26a4d1cd311aa52cc5357f..a55e73f1b2bc84c41b9ab25f642a4d5f1aa6ba90 100644 --- a/arch/x86/include/asm/asi.h +++ b/arch/x86/include/asm/asi.h @@ -4,13 +4,13 @@ #include -#include - #include #include #include #include +#include + #ifdef CONFIG_MITIGATION_ADDRESS_SPACE_ISOLATION /* diff --git a/arch/x86/include/asm/processor-flags.h b/arch/x86/include/asm/processor-flags.h index e5f204b9b33dfaa92ed1b05faa6b604e50d5f2f3..42c5acb67c2d2a6b03deb548fe3dd088baa88842 100644 --- a/arch/x86/include/asm/processor-flags.h +++ b/arch/x86/include/asm/processor-flags.h @@ -55,4 +55,28 @@ # define X86_CR3_PTI_PCID_USER_BIT 11 #endif +/* + * An ASI identifier is included in the higher bits of PCID to use a different + * PCID for each restricted address space, different from the PCID of the + * unrestricted address space (see asi_pcid()). We use the bits directly after + * the bit used by PTI (if any). 
+ */ +#ifdef CONFIG_MITIGATION_ADDRESS_SPACE_ISOLATION + +#define X86_CR3_ASI_PCID_BITS 2 + +/* Use the highest available PCID bits after the PTI bit (if any) */ +#ifdef CONFIG_MITIGATION_PAGE_TABLE_ISOLATION +#define X86_CR3_ASI_PCID_END_BIT (X86_CR3_PTI_PCID_USER_BIT - 1) +#else +#define X86_CR3_ASI_PCID_END_BIT (X86_CR3_PCID_BITS - 1) +#endif + +#define X86_CR3_ASI_PCID_BITS_SHIFT (X86_CR3_ASI_PCID_END_BIT - X86_CR3_ASI_PCID_BITS + 1) +#define X86_CR3_ASI_PCID_MASK (((1UL << X86_CR3_ASI_PCID_BITS) - 1) << X86_CR3_ASI_PCID_BITS_SHIFT) + +#else +#define X86_CR3_ASI_PCID_BITS 0 +#endif + #endif /* _ASM_X86_PROCESSOR_FLAGS_H */ diff --git a/arch/x86/include/asm/tlbflush.h b/arch/x86/include/asm/tlbflush.h index c884174a44e119a3c027c44ada6c5cdba14d1282..f167feb5ebdfc7faba26b8b18ac65888cd9b0494 100644 --- a/arch/x86/include/asm/tlbflush.h +++ b/arch/x86/include/asm/tlbflush.h @@ -425,5 +425,8 @@ static inline void __native_tlb_flush_global(unsigned long cr4) } unsigned long build_cr3_noinstr(pgd_t *pgd, u16 asid, unsigned long lam); +unsigned long build_cr3_pcid_noinstr(pgd_t *pgd, u16 pcid, unsigned long lam, bool noflush); + +u16 asi_pcid(struct asi *asi, u16 asid); #endif /* _ASM_X86_TLBFLUSH_H */ diff --git a/arch/x86/mm/asi.c b/arch/x86/mm/asi.c index 054315d566c082c0925a00ce3a0877624c8b9957..8d060c633be68b508847e2c1c111761df1da92af 100644 --- a/arch/x86/mm/asi.c +++ b/arch/x86/mm/asi.c @@ -238,6 +238,7 @@ static __always_inline void maybe_flush_data(struct asi *next_asi) noinstr void __asi_enter(void) { u64 asi_cr3; + u16 pcid; struct asi *target = asi_get_target(current); /* @@ -266,9 +267,8 @@ noinstr void __asi_enter(void) this_cpu_write(curr_asi, target); maybe_flush_control(target); - asi_cr3 = build_cr3_noinstr(target->pgd, - this_cpu_read(cpu_tlbstate.loaded_mm_asid), - tlbstate_lam_cr3_mask()); + pcid = asi_pcid(target, this_cpu_read(cpu_tlbstate.loaded_mm_asid)); + asi_cr3 = build_cr3_pcid_noinstr(target->pgd, pcid, tlbstate_lam_cr3_mask(), false); 
write_cr3(asi_cr3); maybe_flush_data(target); @@ -335,8 +335,8 @@ noinstr void asi_exit(void) unrestricted_cr3 = build_cr3_noinstr(this_cpu_read(cpu_tlbstate.loaded_mm)->pgd, - this_cpu_read(cpu_tlbstate.loaded_mm_asid), - tlbstate_lam_cr3_mask()); + this_cpu_read(cpu_tlbstate.loaded_mm_asid), + tlbstate_lam_cr3_mask()); /* Tainting first makes reentrancy easier to reason about. */ this_cpu_or(asi_taints, ASI_TAINT_KERNEL_DATA); diff --git a/arch/x86/mm/tlb.c b/arch/x86/mm/tlb.c index 7c2309996d1d5a7cac23bd122f7b56a869d67d6a..2601beed83aef182d88800c09d70e4c5e95e7ed0 100644 --- a/arch/x86/mm/tlb.c +++ b/arch/x86/mm/tlb.c @@ -13,6 +13,7 @@ #include #include +#include #include #include #include @@ -96,7 +97,10 @@ # define PTI_CONSUMED_PCID_BITS 0 #endif -#define CR3_AVAIL_PCID_BITS (X86_CR3_PCID_BITS - PTI_CONSUMED_PCID_BITS) +#define CR3_AVAIL_PCID_BITS (X86_CR3_PCID_BITS - PTI_CONSUMED_PCID_BITS - \ + X86_CR3_ASI_PCID_BITS) + +static_assert(BIT(CR3_AVAIL_PCID_BITS) > TLB_NR_DYN_ASIDS); /* * ASIDs are zero-based: 0->MAX_AVAIL_ASID are valid. 
-1 below to account @@ -125,6 +129,11 @@ static __always_inline u16 kern_pcid(u16 asid) */ VM_WARN_ON_ONCE(asid & (1 << X86_CR3_PTI_PCID_USER_BIT)); #endif + +#ifdef CONFIG_MITIGATION_ADDRESS_SPACE_ISOLATION + BUILD_BUG_ON(TLB_NR_DYN_ASIDS >= (1 << X86_CR3_ASI_PCID_BITS_SHIFT)); + VM_WARN_ON_ONCE(asid & X86_CR3_ASI_PCID_MASK); +#endif /* * The dynamically-assigned ASIDs that get passed in are small (class_id + 1) << X86_CR3_ASI_PCID_BITS_SHIFT); + // return kern_pcid(asid) | ((asi->index + 1) << X86_CR3_ASI_PCID_BITS_SHIFT); +} + +#else /* CONFIG_MITIGATION_ADDRESS_SPACE_ISOLATION */ + +u16 asi_pcid(struct asi *asi, u16 asid) { return kern_pcid(asid); } + +#endif /* CONFIG_MITIGATION_ADDRESS_SPACE_ISOLATION */ + void flush_tlb_mm_range(struct mm_struct *mm, unsigned long start, unsigned long end, unsigned int stride_shift, bool freed_tables) diff --git a/include/asm-generic/asi.h b/include/asm-generic/asi.h index 7f542c59c2b8a2b74432e4edb7199f9171db8a84..f777a6cf604b0656fb39087f6eba08f980b2cb6f 100644 --- a/include/asm-generic/asi.h +++ b/include/asm-generic/asi.h @@ -2,6 +2,7 @@ #ifndef __ASM_GENERIC_ASI_H #define __ASM_GENERIC_ASI_H +#include #include #ifndef _ASSEMBLY_ @@ -16,6 +17,7 @@ enum asi_class_id { #endif ASI_MAX_NUM_CLASSES, }; +static_assert(order_base_2(X86_CR3_ASI_PCID_BITS) <= ASI_MAX_NUM_CLASSES); typedef u8 asi_taints_t;

From patchwork Fri Jan 10 18:40:33 2025
X-Patchwork-Submitter: Brendan Jackman
X-Patchwork-Id: 13935570
Date: Fri, 10 Jan 2025 18:40:33 +0000
In-Reply-To: <20250110-asi-rfc-v2-v2-0-8419288bc805@google.com>
Mime-Version: 1.0
References: <20250110-asi-rfc-v2-v2-0-8419288bc805@google.com>
X-Mailer: b4 0.15-dev
Message-ID: <20250110-asi-rfc-v2-v2-7-8419288bc805@google.com>
Subject: [PATCH RFC v2 07/29] mm: asi: Make __get_current_cr3_fast() ASI-aware
From: Brendan Jackman
From: Junaid Shahid

When ASI is active, __get_current_cr3_fast() adjusts the returned CR3 value accordingly to reflect the actual ASI CR3.
Signed-off-by: Junaid Shahid Signed-off-by: Brendan Jackman --- arch/x86/mm/tlb.c | 37 +++++++++++++++++++++++++++++++------ 1 file changed, 31 insertions(+), 6 deletions(-) diff --git a/arch/x86/mm/tlb.c b/arch/x86/mm/tlb.c index 2601beed83aef182d88800c09d70e4c5e95e7ed0..b2a13fdab0c6454c1d9d4e3338801f3402da4191 100644 --- a/arch/x86/mm/tlb.c +++ b/arch/x86/mm/tlb.c @@ -20,6 +20,7 @@ #include #include #include +#include #include #include "mm_internal.h" @@ -197,8 +198,8 @@ static inline unsigned long build_cr3_noflush(pgd_t *pgd, u16 asid, return build_cr3(pgd, asid, lam) | CR3_NOFLUSH; } -noinstr unsigned long build_cr3_pcid_noinstr(pgd_t *pgd, u16 pcid, - unsigned long lam, bool noflush) +static __always_inline unsigned long build_cr3_pcid(pgd_t *pgd, u16 pcid, + unsigned long lam, bool noflush) { u64 noflush_bit = 0; @@ -210,6 +211,12 @@ noinstr unsigned long build_cr3_pcid_noinstr(pgd_t *pgd, u16 pcid, return __build_cr3(pgd, pcid, lam) | noflush_bit; } +noinstr unsigned long build_cr3_pcid_noinstr(pgd_t *pgd, u16 pcid, + unsigned long lam, bool noflush) +{ + return build_cr3_pcid(pgd, pcid, lam, noflush); +} + /* * We get here when we do something requiring a TLB invalidation * but could not go invalidate all of the contexts. We do the @@ -1133,14 +1140,32 @@ void flush_tlb_kernel_range(unsigned long start, unsigned long end) */ noinstr unsigned long __get_current_cr3_fast(void) { - unsigned long cr3 = - build_cr3(this_cpu_read(cpu_tlbstate.loaded_mm)->pgd, - this_cpu_read(cpu_tlbstate.loaded_mm_asid), - tlbstate_lam_cr3_mask()); + unsigned long cr3; + pgd_t *pgd; + u16 asid = this_cpu_read(cpu_tlbstate.loaded_mm_asid); + struct asi *asi = asi_get_current(); + u16 pcid; + + if (asi) { + pgd = asi_pgd(asi); + pcid = asi_pcid(asi, asid); + } else { + pgd = this_cpu_read(cpu_tlbstate.loaded_mm)->pgd; + pcid = kern_pcid(asid); + } + + cr3 = build_cr3_pcid(pgd, pcid, tlbstate_lam_cr3_mask(), false); /* For now, be very restrictive about when this can be called. 
	 */
 	VM_WARN_ON(in_nmi() || preemptible());
 
+	/*
+	 * Outside of the ASI critical section, an ASI-restricted CR3 is
+	 * unstable because an interrupt (including an inner interrupt, if we're
+	 * already in one) could cause a persistent asi_exit.
+	 */
+	VM_WARN_ON_ONCE(asi && asi_in_critical_section());
+
 	VM_BUG_ON(cr3 != __read_cr3());
 	return cr3;
 }

From patchwork Fri Jan 10 18:40:34 2025
X-Patchwork-Submitter: Brendan Jackman
X-Patchwork-Id: 13935567
Date: Fri, 10 Jan 2025 18:40:34 +0000
In-Reply-To: <20250110-asi-rfc-v2-v2-0-8419288bc805@google.com>
Mime-Version: 1.0
References: <20250110-asi-rfc-v2-v2-0-8419288bc805@google.com>
X-Mailer: b4 0.15-dev
Message-ID: <20250110-asi-rfc-v2-v2-8-8419288bc805@google.com>
Subject: [PATCH RFC v2 08/29] mm: asi: Avoid warning from NMI userspace accesses in ASI context
From: Brendan Jackman
To: Thomas Gleixner , Ingo Molnar , Borislav Petkov , Dave Hansen , "H.
Peter Anvin" , Andy Lutomirski , Peter Zijlstra , Richard Henderson , Matt Turner , Vineet Gupta , Russell King , Catalin Marinas , Will Deacon , Guo Ren , Brian Cain , Huacai Chen , WANG Xuerui , Geert Uytterhoeven , Michal Simek , Thomas Bogendoerfer , Dinh Nguyen , Jonas Bonn , Stefan Kristiansson , Stafford Horne , "James E.J. Bottomley" , Helge Deller , Michael Ellerman , Nicholas Piggin , Christophe Leroy , Naveen N Rao , Madhavan Srinivasan , Paul Walmsley , Palmer Dabbelt , Albert Ou , Heiko Carstens , Vasily Gorbik , Alexander Gordeev , Christian Borntraeger , Sven Schnelle , Yoshinori Sato , Rich Felker , John Paul Adrian Glaubitz , "David S. Miller" , Andreas Larsson , Richard Weinberger , Anton Ivanov , Johannes Berg , Chris Zankel , Max Filippov , Arnd Bergmann , Andrew Morton , Juri Lelli , Vincent Guittot , Dietmar Eggemann , Steven Rostedt , Ben Segall , Mel Gorman , Valentin Schneider , Uladzislau Rezki , Christoph Hellwig , Masami Hiramatsu , Mathieu Desnoyers , Mike Rapoport , Arnaldo Carvalho de Melo , Namhyung Kim , Mark Rutland , Alexander Shishkin , Jiri Olsa , Ian Rogers , Adrian Hunter , Dennis Zhou , Tejun Heo , Christoph Lameter , Sean Christopherson , Paolo Bonzini , Ard Biesheuvel , Josh Poimboeuf , Pawan Gupta Cc: x86@kernel.org, linux-kernel@vger.kernel.org, linux-alpha@vger.kernel.org, linux-snps-arc@lists.infradead.org, linux-arm-kernel@lists.infradead.org, linux-csky@vger.kernel.org, linux-hexagon@vger.kernel.org, loongarch@lists.linux.dev, linux-m68k@lists.linux-m68k.org, linux-mips@vger.kernel.org, linux-openrisc@vger.kernel.org, linux-parisc@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, linux-riscv@lists.infradead.org, linux-s390@vger.kernel.org, linux-sh@vger.kernel.org, sparclinux@vger.kernel.org, linux-um@lists.infradead.org, linux-arch@vger.kernel.org, linux-mm@kvack.org, linux-trace-kernel@vger.kernel.org, linux-perf-users@vger.kernel.org, kvm@vger.kernel.org, linux-efi@vger.kernel.org, Brendan Jackman , Junaid Shahid , 
Yosry Ahmed

nmi_uaccess_okay() emits a warning if current CR3 != mm->pgd. Limit the
warning to cases where ASI is not active.

Co-developed-by: Junaid Shahid
Signed-off-by: Junaid Shahid
Co-developed-by: Yosry Ahmed
Signed-off-by: Yosry Ahmed
Signed-off-by: Brendan Jackman
---
 arch/x86/mm/tlb.c | 26 +++++++++++++++++++++-----
 1 file changed, 21 insertions(+), 5 deletions(-)

diff --git a/arch/x86/mm/tlb.c b/arch/x86/mm/tlb.c
index b2a13fdab0c6454c1d9d4e3338801f3402da4191..c41e083c5b5281684be79ad0391c1a5fc7b0c493 100644
--- a/arch/x86/mm/tlb.c
+++ b/arch/x86/mm/tlb.c
@@ -1340,6 +1340,22 @@ void arch_tlbbatch_flush(struct arch_tlbflush_unmap_batch *batch)
         put_cpu();
 }

+static inline bool cr3_matches_current_mm(void)
+{
+        struct asi *asi = asi_get_current();
+        pgd_t *pgd_asi = asi_pgd(asi);
+        pgd_t *pgd_cr3;
+
+        /*
+         * Prevent read_cr3_pa -> [NMI, asi_exit] -> asi_get_current,
+         * otherwise we might find CR3 pointing to the ASI PGD but not
+         * find a current ASI domain.
+         */
+        barrier();
+        pgd_cr3 = __va(read_cr3_pa());
+        return pgd_cr3 == current->mm->pgd || pgd_cr3 == pgd_asi;
+}
+
 /*
  * Blindly accessing user memory from NMI context can be dangerous
  * if we're in the middle of switching the current user task or
@@ -1355,10 +1371,10 @@ bool nmi_uaccess_okay(void)
         VM_WARN_ON_ONCE(!loaded_mm);

         /*
-         * The condition we want to check is
-         * current_mm->pgd == __va(read_cr3_pa()). This may be slow, though,
-         * if we're running in a VM with shadow paging, and nmi_uaccess_okay()
-         * is supposed to be reasonably fast.
+         * The condition we want to check is that CR3 points to either
+         * current_mm->pgd or an appropriate ASI PGD. Reading CR3 may be slow,
+         * though, if we're running in a VM with shadow paging, and
+         * nmi_uaccess_okay() is supposed to be reasonably fast.
          *
          * Instead, we check the almost equivalent but somewhat conservative
          * condition below, and we rely on the fact that switch_mm_irqs_off()
@@ -1367,7 +1383,7 @@ bool nmi_uaccess_okay(void)
         if (loaded_mm != current_mm)
                 return false;

-        VM_WARN_ON_ONCE(current_mm->pgd != __va(read_cr3_pa()));
+        VM_WARN_ON_ONCE(!cr3_matches_current_mm());
         return true;
 }
Date: Fri, 10 Jan 2025 18:40:35 +0000
In-Reply-To: <20250110-asi-rfc-v2-v2-0-8419288bc805@google.com>
Mime-Version: 1.0
References: <20250110-asi-rfc-v2-v2-0-8419288bc805@google.com>
X-Mailer: b4 0.15-dev
Message-ID: <20250110-asi-rfc-v2-v2-9-8419288bc805@google.com>
Subject: [PATCH RFC v2 09/29] mm: asi: ASI page table allocation functions
From: Brendan Jackman

From: Junaid Shahid

This adds custom allocation and free functions for ASI page tables. The
alloc functions support allocating memory using different GFP reclaim
flags, in order to be able to support non-sensitive allocations from both
standard and atomic contexts. They also install the page tables
locklessly, which makes it slightly simpler to handle non-sensitive
allocations from interrupts/exceptions.

checkpatch.pl MACRO_ARG_UNUSED,SPACING is a false positive.
COMPLEX_MACRO - I dunno, suggestions welcome.

Checkpatch-args: --ignore=MACRO_ARG_UNUSED,SPACING,COMPLEX_MACRO
Signed-off-by: Junaid Shahid
Signed-off-by: Brendan Jackman
---
 arch/x86/mm/asi.c | 59 +++++++++++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 59 insertions(+)

diff --git a/arch/x86/mm/asi.c b/arch/x86/mm/asi.c
index 8d060c633be68b508847e2c1c111761df1da92af..b15d043acedc9f459f17e86564a15061650afc3a 100644
--- a/arch/x86/mm/asi.c
+++ b/arch/x86/mm/asi.c
@@ -73,6 +73,65 @@ const char *asi_class_name(enum asi_class_id class_id)
         return asi_class_names[class_id];
 }

+#ifndef mm_inc_nr_p4ds
+#define mm_inc_nr_p4ds(mm) do {} while (false)
+#endif
+
+#ifndef mm_dec_nr_p4ds
+#define mm_dec_nr_p4ds(mm) do {} while (false)
+#endif
+
+#define pte_offset pte_offset_kernel
+
+/*
+ * asi_p4d_alloc, asi_pud_alloc, asi_pmd_alloc, asi_pte_alloc.
+ *
+ * These are like the normal xxx_alloc functions, but:
+ *
+ * - They use atomic operations instead of taking a spinlock; this allows them
+ *   to be used from interrupts. This is necessary because we use the page
+ *   allocator from interrupts and the page allocator ultimately calls this
+ *   code.
+ * - They support customizing the allocation flags.
+ *
+ * On the other hand, they do not use the normal page allocation
+ * infrastructure, which means that PTE pages do not have the PageTable type
+ * nor the PagePgtable flag and we don't increment the meminfo stat
+ * (NR_PAGETABLE) as they do.
+ */
+static_assert(!IS_ENABLED(CONFIG_PARAVIRT));
+#define DEFINE_ASI_PGTBL_ALLOC(base, level)                     \
+__maybe_unused                                                  \
+static level##_t * asi_##level##_alloc(struct asi *asi,         \
+                                       base##_t *base, ulong addr, \
+                                       gfp_t flags)             \
+{                                                               \
+        if (unlikely(base##_none(*base))) {                     \
+                ulong pgtbl = get_zeroed_page(flags);           \
+                phys_addr_t pgtbl_pa;                           \
+                                                                \
+                if (!pgtbl)                                     \
+                        return NULL;                            \
+                                                                \
+                pgtbl_pa = __pa(pgtbl);                         \
+                                                                \
+                if (cmpxchg((ulong *)base, 0,                   \
+                            pgtbl_pa | _PAGE_TABLE) != 0) {     \
+                        free_page(pgtbl);                       \
+                        goto out;                               \
+                }                                               \
+                                                                \
+                mm_inc_nr_##level##s(asi->mm);                  \
+        }                                                       \
+out:                                                            \
+        VM_BUG_ON(base##_leaf(*base));                          \
+        return level##_offset(base, addr);                      \
+}
+
+DEFINE_ASI_PGTBL_ALLOC(pgd, p4d)
+DEFINE_ASI_PGTBL_ALLOC(p4d, pud)
+DEFINE_ASI_PGTBL_ALLOC(pud, pmd)
+DEFINE_ASI_PGTBL_ALLOC(pmd, pte)
+
 void __init asi_check_boottime_disable(void)
 {
         bool enabled = IS_ENABLED(CONFIG_MITIGATION_ADDRESS_SPACE_ISOLATION_DEFAULT_ON);
Date: Fri, 10 Jan 2025 18:40:36 +0000
In-Reply-To: <20250110-asi-rfc-v2-v2-0-8419288bc805@google.com>
Mime-Version: 1.0
References: <20250110-asi-rfc-v2-v2-0-8419288bc805@google.com>
X-Mailer: b4 0.15-dev
Message-ID: <20250110-asi-rfc-v2-v2-10-8419288bc805@google.com>
Subject: [PATCH RFC v2 10/29] mm: asi: asi_exit() on PF, skip handling if address is accessible
From: Brendan Jackman

From: Ofir Weisse

On a page fault, do asi_exit(), then check whether the address is
accessible after the exit.
We do this by refactoring spurious_kernel_fault() into two parts: 1. Verify that the error code value is something that could arise from a lazy TLB update. 2. Walk the page table and verify permissions, which is now called is_address_accessible(). We also define PTE_PRESENT() and PMD_PRESENT() which are suitable for checking userspace pages. For the sake of spurious faults, pte_present() and pmd_present() are only good for kernelspace pages. This is because these macros might return true even if the present bit is 0 (only relevant for userspace). checkpatch.pl VSPRINTF_SPECIFIER_PX - it's in a WARN that only fires in a debug build of the kernel when we hit a disastrous bug, seems OK to leak addresses. RFC note: A separate refactoring/prep commit should be split out of this patch. Checkpatch-args: --ignore=VSPRINTF_SPECIFIER_PX Signed-off-by: Ofir Weisse Signed-off-by: Brendan Jackman --- arch/x86/mm/fault.c | 118 +++++++++++++++++++++++++++++++++++++++++++++------- 1 file changed, 103 insertions(+), 15 deletions(-) diff --git a/arch/x86/mm/fault.c b/arch/x86/mm/fault.c index e6c469b323ccb748de22adc7d9f0a16dd195edad..ee8f5417174e2956391d538f41e2475553ca4972 100644 --- a/arch/x86/mm/fault.c +++ b/arch/x86/mm/fault.c @@ -948,7 +948,7 @@ do_sigbus(struct pt_regs *regs, unsigned long error_code, unsigned long address, force_sig_fault(SIGBUS, BUS_ADRERR, (void __user *)address); } -static int spurious_kernel_fault_check(unsigned long error_code, pte_t *pte) +static __always_inline int kernel_protection_ok(unsigned long error_code, pte_t *pte) { if ((error_code & X86_PF_WRITE) && !pte_write(*pte)) return 0; @@ -959,6 +959,8 @@ static int spurious_kernel_fault_check(unsigned long error_code, pte_t *pte) return 1; } +static int kernel_access_ok(unsigned long error_code, unsigned long address, pgd_t *pgd); + /* * Handle a spurious fault caused by a stale TLB entry. 
* @@ -984,11 +986,6 @@ static noinline int spurious_kernel_fault(unsigned long error_code, unsigned long address) { pgd_t *pgd; - p4d_t *p4d; - pud_t *pud; - pmd_t *pmd; - pte_t *pte; - int ret; /* * Only writes to RO or instruction fetches from NX may cause @@ -1004,6 +1001,50 @@ spurious_kernel_fault(unsigned long error_code, unsigned long address) return 0; pgd = init_mm.pgd + pgd_index(address); + return kernel_access_ok(error_code, address, pgd); +} +NOKPROBE_SYMBOL(spurious_kernel_fault); + +/* + * For kernel addresses, pte_present and pmd_present are sufficient for + * is_address_accessible. For user addresses these functions will return true + * even though the pte is not actually accessible by hardware (i.e _PAGE_PRESENT + * is not set). This happens in cases where the pages are physically present in + * memory, but they are not made accessible to hardware as they need software + * handling first: + * + * - ptes/pmds with _PAGE_PROTNONE need autonuma balancing (see pte_protnone(), + * change_prot_numa(), and do_numa_page()). + * + * - pmds with _PAGE_PSE & !_PAGE_PRESENT are undergoing splitting (see + * split_huge_page()). + * + * Here, we care about whether the hardware can actually access the page right + * now. + * + * These issues aren't currently present for PUD but we also have a custom + * PUD_PRESENT for a layer of future-proofing. + */ +#define PUD_PRESENT(pud) (pud_flags(pud) & _PAGE_PRESENT) +#define PMD_PRESENT(pmd) (pmd_flags(pmd) & _PAGE_PRESENT) +#define PTE_PRESENT(pte) (pte_flags(pte) & _PAGE_PRESENT) + +/* + * Check if an access by the kernel would cause a page fault. The access is + * described by a page fault error code (whether it was a write/instruction + * fetch) and address. This doesn't check for types of faults that are not + * expected to affect the kernel, e.g. PKU. The address can be user or kernel + * space, if user then we assume the access would happen via the uaccess API. 
+ */ +static noinstr int +kernel_access_ok(unsigned long error_code, unsigned long address, pgd_t *pgd) +{ + p4d_t *p4d; + pud_t *pud; + pmd_t *pmd; + pte_t *pte; + int ret; + if (!pgd_present(*pgd)) return 0; @@ -1012,27 +1053,27 @@ spurious_kernel_fault(unsigned long error_code, unsigned long address) return 0; if (p4d_leaf(*p4d)) - return spurious_kernel_fault_check(error_code, (pte_t *) p4d); + return kernel_protection_ok(error_code, (pte_t *) p4d); pud = pud_offset(p4d, address); - if (!pud_present(*pud)) + if (!PUD_PRESENT(*pud)) return 0; if (pud_leaf(*pud)) - return spurious_kernel_fault_check(error_code, (pte_t *) pud); + return kernel_protection_ok(error_code, (pte_t *) pud); pmd = pmd_offset(pud, address); - if (!pmd_present(*pmd)) + if (!PMD_PRESENT(*pmd)) return 0; if (pmd_leaf(*pmd)) - return spurious_kernel_fault_check(error_code, (pte_t *) pmd); + return kernel_protection_ok(error_code, (pte_t *) pmd); pte = pte_offset_kernel(pmd, address); - if (!pte_present(*pte)) + if (!PTE_PRESENT(*pte)) return 0; - ret = spurious_kernel_fault_check(error_code, pte); + ret = kernel_protection_ok(error_code, pte); if (!ret) return 0; @@ -1040,12 +1081,11 @@ spurious_kernel_fault(unsigned long error_code, unsigned long address) * Make sure we have permissions in PMD. * If not, then there's a bug in the page tables: */ - ret = spurious_kernel_fault_check(error_code, (pte_t *) pmd); + ret = kernel_protection_ok(error_code, (pte_t *) pmd); WARN_ONCE(!ret, "PMD has incorrect permission bits\n"); return ret; } -NOKPROBE_SYMBOL(spurious_kernel_fault); int show_unhandled_signals = 1; @@ -1490,6 +1530,29 @@ handle_page_fault(struct pt_regs *regs, unsigned long error_code, } } +static __always_inline void warn_if_bad_asi_pf( + unsigned long error_code, unsigned long address) +{ +#ifdef CONFIG_MITIGATION_ADDRESS_SPACE_ISOLATION + struct asi *target; + + /* + * It's a bug to access sensitive data from the "critical section", i.e. 
+	 * on the path between asi_enter and asi_relax, where untrusted code
+	 * gets run. #PF in this state sees asi_intr_nest_depth() as 1 because
+	 * #PF increments it. We can't think of a better way to determine if
+	 * this has happened than to check the ASI pagetables, hence we can't
+	 * really have this check in non-debug builds unfortunately.
+	 */
+	VM_WARN_ONCE(
+		(target = asi_get_target(current)) != NULL &&
+		asi_intr_nest_depth() == 1 &&
+		!kernel_access_ok(error_code, address, asi_pgd(target)),
+		"ASI-sensitive data access from critical section, addr=%px error_code=%lx class=%s",
+		(void *) address, error_code, asi_class_name(target->class_id));
+#endif
+}
+
 DEFINE_IDTENTRY_RAW_ERRORCODE(exc_page_fault)
 {
 	irqentry_state_t state;
@@ -1497,6 +1560,31 @@ DEFINE_IDTENTRY_RAW_ERRORCODE(exc_page_fault)
 	address = cpu_feature_enabled(X86_FEATURE_FRED) ? fred_event_data(regs) : read_cr2();
 
+	if (static_asi_enabled() && !user_mode(regs)) {
+		pgd_t *pgd;
+
+		/* Can be a NOP even for ASI faults, because of NMIs */
+		asi_exit();
+
+		/*
+		 * handle_page_fault() might oops if we run it for a kernel
+		 * address in kernel mode. This might be the case if we got here
+		 * due to an ASI fault. We avoid this case by checking whether
+		 * the address is now, after asi_exit(), accessible by hardware.
+		 * If it is - there's nothing to do. Note that this is a bit of
+		 * a shotgun; we can also bail early from user-address faults
+		 * here that weren't actually caused by ASI. So we might wanna
+		 * move this logic later in the handler. In particular, we might
+		 * be losing some stats here. However for now this keeps ASI
+		 * page faults nice and fast.
+		 */
+		pgd = (pgd_t *)__va(read_cr3_pa()) + pgd_index(address);
+		if (!user_mode(regs) && kernel_access_ok(error_code, address, pgd)) {
+			warn_if_bad_asi_pf(error_code, address);
+			return;
+		}
+	}
+
 	prefetchw(&current->mm->mmap_lock);
 
 	/*

From patchwork Fri Jan 10 18:40:37 2025
X-Patchwork-Submitter: Brendan Jackman
X-Patchwork-Id: 13935573
Date: Fri, 10 Jan 2025 18:40:37 +0000
In-Reply-To: <20250110-asi-rfc-v2-v2-0-8419288bc805@google.com>
References: <20250110-asi-rfc-v2-v2-0-8419288bc805@google.com>
Message-ID: <20250110-asi-rfc-v2-v2-11-8419288bc805@google.com>
Subject: [PATCH RFC v2 11/29] mm: asi: Functions to map/unmap a memory range into ASI page tables
From: Brendan Jackman
From: Junaid Shahid

Two functions, asi_map() and asi_map_gfp(), are added to allow mapping memory
into ASI page tables. The mapping will be identical to the one for the same
virtual address in the unrestricted page tables. This is necessary to allow
switching between the page tables at any arbitrary point in the kernel.

Another function, asi_unmap(), is added to allow unmapping memory mapped via
asi_map*.

RFC Notes: Don't read too much into the implementation of this, lots of it
should probably be rewritten. It also needs to gain support for partial
unmappings.

Checkpatch-args: --ignore=MACRO_ARG_UNUSED
Signed-off-by: Junaid Shahid
Signed-off-by: Brendan Jackman
Signed-off-by: Kevin Cheng
---
 arch/x86/include/asm/asi.h |   5 +
 arch/x86/mm/asi.c          | 236 ++++++++++++++++++++++++++++++++++++++++++++-
 arch/x86/mm/tlb.c          |   5 +
 include/asm-generic/asi.h  |  11 +++
 include/linux/pgtable.h    |   3 +
 mm/internal.h              |   2 +
 mm/vmalloc.c               |  32 +++---
 7 files changed, 280 insertions(+), 14 deletions(-)

diff --git a/arch/x86/include/asm/asi.h b/arch/x86/include/asm/asi.h
index a55e73f1b2bc84c41b9ab25f642a4d5f1aa6ba90..33f18be0e268b3a6725196619cbb8d847c21e197 100644
--- a/arch/x86/include/asm/asi.h
+++ b/arch/x86/include/asm/asi.h
@@ -157,6 +157,11 @@ void asi_relax(void);
 /* Immediately exit the restricted address space if in it */
 void asi_exit(void);
 
+int asi_map_gfp(struct asi *asi, void *addr, size_t len, gfp_t gfp_flags);
+int asi_map(struct asi *asi, void *addr, size_t len);
+void asi_unmap(struct asi *asi, void *addr, size_t len);
+void asi_flush_tlb_range(struct asi *asi, void *addr, size_t len);
+
 static inline void asi_init_thread_state(struct thread_struct *thread)
 {
 	thread->asi_state.intr_nest_depth = 0;
diff --git a/arch/x86/mm/asi.c b/arch/x86/mm/asi.c
index
b15d043acedc9f459f17e86564a15061650afc3a..f2d8fbc0366c289891903e1c2ac6c59b9476d95f 100644
--- a/arch/x86/mm/asi.c
+++ b/arch/x86/mm/asi.c
@@ -11,6 +11,9 @@
 #include
 #include
 #include
+#include
+
+#include "../../../mm/internal.h"
 
 static struct asi_taint_policy *taint_policies[ASI_MAX_NUM_CLASSES];
 
@@ -100,7 +103,6 @@ const char *asi_class_name(enum asi_class_id class_id)
  */
 static_assert(!IS_ENABLED(CONFIG_PARAVIRT));
 #define DEFINE_ASI_PGTBL_ALLOC(base, level)			\
-__maybe_unused							\
 static level##_t * asi_##level##_alloc(struct asi *asi,		\
 				       base##_t *base, ulong addr,	\
 				       gfp_t flags)			\
@@ -455,3 +457,235 @@ void asi_handle_switch_mm(void)
 	this_cpu_or(asi_taints, new_taints);
 	this_cpu_and(asi_taints, ~(ASI_TAINTS_GUEST_MASK | ASI_TAINTS_USER_MASK));
 }
+
+static bool is_page_within_range(unsigned long addr, unsigned long page_size,
+				 unsigned long range_start, unsigned long range_end)
+{
+	unsigned long page_start = ALIGN_DOWN(addr, page_size);
+	unsigned long page_end = page_start + page_size;
+
+	return page_start >= range_start && page_end <= range_end;
+}
+
+static bool follow_physaddr(
+	pgd_t *pgd_table, unsigned long virt,
+	phys_addr_t *phys, unsigned long *page_size, ulong *flags)
+{
+	pgd_t *pgd;
+	p4d_t *p4d;
+	pud_t *pud;
+	pmd_t *pmd;
+	pte_t *pte;
+
+	/* RFC: This should be rewritten with lookup_address_in_*.
 */
+
+	*page_size = PGDIR_SIZE;
+	pgd = pgd_offset_pgd(pgd_table, virt);
+	if (!pgd_present(*pgd))
+		return false;
+	if (pgd_leaf(*pgd)) {
+		*phys = PFN_PHYS(pgd_pfn(*pgd)) | (virt & ~PGDIR_MASK);
+		*flags = pgd_flags(*pgd);
+		return true;
+	}
+
+	*page_size = P4D_SIZE;
+	p4d = p4d_offset(pgd, virt);
+	if (!p4d_present(*p4d))
+		return false;
+	if (p4d_leaf(*p4d)) {
+		*phys = PFN_PHYS(p4d_pfn(*p4d)) | (virt & ~P4D_MASK);
+		*flags = p4d_flags(*p4d);
+		return true;
+	}
+
+	*page_size = PUD_SIZE;
+	pud = pud_offset(p4d, virt);
+	if (!pud_present(*pud))
+		return false;
+	if (pud_leaf(*pud)) {
+		*phys = PFN_PHYS(pud_pfn(*pud)) | (virt & ~PUD_MASK);
+		*flags = pud_flags(*pud);
+		return true;
+	}
+
+	*page_size = PMD_SIZE;
+	pmd = pmd_offset(pud, virt);
+	if (!pmd_present(*pmd))
+		return false;
+	if (pmd_leaf(*pmd)) {
+		*phys = PFN_PHYS(pmd_pfn(*pmd)) | (virt & ~PMD_MASK);
+		*flags = pmd_flags(*pmd);
+		return true;
+	}
+
+	*page_size = PAGE_SIZE;
+	pte = pte_offset_map(pmd, virt);
+	if (!pte)
+		return false;
+
+	if (!pte_present(*pte)) {
+		pte_unmap(pte);
+		return false;
+	}
+
+	*phys = PFN_PHYS(pte_pfn(*pte)) | (virt & ~PAGE_MASK);
+	*flags = pte_flags(*pte);
+
+	pte_unmap(pte);
+	return true;
+}
+
+/*
+ * Map the given range into the ASI page tables. The source of the mapping is
+ * the regular unrestricted page tables. Can be used to map any kernel memory.
+ *
+ * The caller MUST ensure that the source mapping will not change during this
+ * function. For dynamic kernel memory, this is generally ensured by mapping the
+ * memory within the allocator.
+ *
+ * If this fails, it may leave partial mappings behind. You must asi_unmap them,
+ * bearing in mind asi_unmap's requirements on the calling context. Part of the
+ * reason for this is that we don't want to unexpectedly undo mappings that
+ * weren't created by the present caller.
+ *
+ * If the source mapping is a large page and the range being mapped spans the
+ * entire large page, then it will be mapped as a large page in the ASI page
+ * tables too. If the range does not span the entire huge page, then it will be
+ * mapped as smaller pages. In that case, the implementation is slightly
+ * inefficient, as it will walk the source page tables again for each small
+ * destination page, but that should be ok for now, as usually in such cases,
+ * the range would consist of a small-ish number of pages.
+ *
+ * RFC: * vmap_p4d_range supports huge mappings, we can probably use that now.
+ */
+int __must_check asi_map_gfp(struct asi *asi, void *addr, unsigned long len, gfp_t gfp_flags)
+{
+	unsigned long virt;
+	unsigned long start = (size_t)addr;
+	unsigned long end = start + len;
+	unsigned long page_size;
+
+	if (!static_asi_enabled())
+		return 0;
+
+	VM_BUG_ON(!IS_ALIGNED(start, PAGE_SIZE));
+	VM_BUG_ON(!IS_ALIGNED(len, PAGE_SIZE));
+	/* RFC: fault_in_kernel_space should be renamed.
 */
+	VM_BUG_ON(!fault_in_kernel_space(start));
+
+	gfp_flags &= GFP_RECLAIM_MASK;
+
+	if (asi->mm != &init_mm)
+		gfp_flags |= __GFP_ACCOUNT;
+
+	for (virt = start; virt < end; virt = ALIGN(virt + 1, page_size)) {
+		pgd_t *pgd;
+		p4d_t *p4d;
+		pud_t *pud;
+		pmd_t *pmd;
+		pte_t *pte;
+		phys_addr_t phys;
+		ulong flags;
+
+		if (!follow_physaddr(asi->mm->pgd, virt, &phys, &page_size, &flags))
+			continue;
+
+#define MAP_AT_LEVEL(base, BASE, level, LEVEL) {			\
+		if (base##_leaf(*base)) {				\
+			if (WARN_ON_ONCE(PHYS_PFN(phys & BASE##_MASK) !=\
+					 base##_pfn(*base)))		\
+				return -EBUSY;				\
+			continue;					\
+		}							\
+									\
+		level = asi_##level##_alloc(asi, base, virt, gfp_flags);\
+		if (!level)						\
+			return -ENOMEM;					\
+									\
+		if (page_size >= LEVEL##_SIZE &&			\
+		    (level##_none(*level) || level##_leaf(*level)) &&	\
+		    is_page_within_range(virt, LEVEL##_SIZE,		\
+					 start, end)) {			\
+			page_size = LEVEL##_SIZE;			\
+			phys &= LEVEL##_MASK;				\
+									\
+			if (!level##_none(*level)) {			\
+				if (WARN_ON_ONCE(level##_pfn(*level) !=	\
+						 PHYS_PFN(phys))) {	\
+					return -EBUSY;			\
+				}					\
+			} else {					\
+				set_##level(level,			\
+					    __##level(phys | flags));	\
+			}						\
+			continue;					\
+		}							\
+	}
+
+		pgd = pgd_offset_pgd(asi->pgd, virt);
+
+		MAP_AT_LEVEL(pgd, PGDIR, p4d, P4D);
+		MAP_AT_LEVEL(p4d, P4D, pud, PUD);
+		MAP_AT_LEVEL(pud, PUD, pmd, PMD);
+		/*
+		 * If a large page is going to be partially mapped
+		 * in 4k pages, convert the PSE/PAT bits.
+		 */
+		if (page_size >= PMD_SIZE)
+			flags = protval_large_2_4k(flags);
+		MAP_AT_LEVEL(pmd, PMD, pte, PAGE);
+
+		VM_BUG_ON(true); /* Should never reach here. */
+	}
+
+	return 0;
+#undef MAP_AT_LEVEL
+}
+
+int __must_check asi_map(struct asi *asi, void *addr, unsigned long len)
+{
+	return asi_map_gfp(asi, addr, len, GFP_KERNEL);
+}
+
+/*
+ * Unmap a kernel address range previously mapped into the ASI page tables.
+ *
+ * The area being unmapped must be a whole previously mapped region (or
+ * regions). Unmapping a partial subset of a previously mapped region is not
+ * supported.
+ * That will work, but may end up unmapping more than what was asked for, if
+ * the mapping contained huge pages. A later patch will remove this limitation
+ * by splitting the huge mapping in the ASI page table in such a case. For now,
+ * vunmap_pgd_range() will just emit a warning if this situation is detected.
+ *
+ * This might sleep, and cannot be called with interrupts disabled.
+ */
+void asi_unmap(struct asi *asi, void *addr, size_t len)
+{
+	size_t start = (size_t)addr;
+	size_t end = start + len;
+	pgtbl_mod_mask mask = 0;
+
+	if (!static_asi_enabled() || !len)
+		return;
+
+	VM_BUG_ON(start & ~PAGE_MASK);
+	VM_BUG_ON(len & ~PAGE_MASK);
+	VM_BUG_ON(!fault_in_kernel_space(start)); /* Misnamed, ignore "fault_" */
+
+	vunmap_pgd_range(asi->pgd, start, end, &mask);
+
+	/* We don't support partial unmappings. */
+	if (mask & PGTBL_P4D_MODIFIED) {
+		VM_WARN_ON(!IS_ALIGNED((ulong)addr, P4D_SIZE));
+		VM_WARN_ON(!IS_ALIGNED((ulong)len, P4D_SIZE));
+	} else if (mask & PGTBL_PUD_MODIFIED) {
+		VM_WARN_ON(!IS_ALIGNED((ulong)addr, PUD_SIZE));
+		VM_WARN_ON(!IS_ALIGNED((ulong)len, PUD_SIZE));
+	} else if (mask & PGTBL_PMD_MODIFIED) {
+		VM_WARN_ON(!IS_ALIGNED((ulong)addr, PMD_SIZE));
+		VM_WARN_ON(!IS_ALIGNED((ulong)len, PMD_SIZE));
+	}
+
+	asi_flush_tlb_range(asi, addr, len);
+}
diff --git a/arch/x86/mm/tlb.c b/arch/x86/mm/tlb.c
index c41e083c5b5281684be79ad0391c1a5fc7b0c493..c55733e144c7538ce7f97b74ea2b1b9c22497c32 100644
--- a/arch/x86/mm/tlb.c
+++ b/arch/x86/mm/tlb.c
@@ -1040,6 +1040,11 @@ noinstr u16 asi_pcid(struct asi *asi, u16 asid)
 	// return kern_pcid(asid) | ((asi->index + 1) << X86_CR3_ASI_PCID_BITS_SHIFT);
 }
 
+void asi_flush_tlb_range(struct asi *asi, void *addr, size_t len)
+{
+	flush_tlb_kernel_range((ulong)addr, (ulong)addr + len);
+}
+
 #else /* CONFIG_MITIGATION_ADDRESS_SPACE_ISOLATION */
 
 u16 asi_pcid(struct asi *asi, u16 asid) { return kern_pcid(asid); }
diff --git a/include/asm-generic/asi.h b/include/asm-generic/asi.h
index
f777a6cf604b0656fb39087f6eba08f980b2cb6f..5be8f7d657ba0bc2196e333f62b084d0c9eef7b6 100644
--- a/include/asm-generic/asi.h
+++ b/include/asm-generic/asi.h
@@ -77,6 +77,17 @@ static inline int asi_intr_nest_depth(void) { return 0; }
 
 static inline void asi_intr_exit(void) { }
 
+static inline int asi_map(struct asi *asi, void *addr, size_t len)
+{
+	return 0;
+}
+
+static inline
+void asi_unmap(struct asi *asi, void *addr, size_t len) { }
+
+static inline
+void asi_flush_tlb_range(struct asi *asi, void *addr, size_t len) { }
+
 #define static_asi_enabled() false
 
 static inline void asi_check_boottime_disable(void) { }
diff --git a/include/linux/pgtable.h b/include/linux/pgtable.h
index e8b2ac6bd2ae3b0a768734c8411f45a7d162e12d..492a9cdee7ff3d4e562c4bf508dc14fd7fa67e36 100644
--- a/include/linux/pgtable.h
+++ b/include/linux/pgtable.h
@@ -1900,6 +1900,9 @@ typedef unsigned int pgtbl_mod_mask;
 #ifndef pmd_leaf
 #define pmd_leaf(x) false
 #endif
+#ifndef pte_leaf
+#define pte_leaf(x) 1
+#endif
 
 #ifndef pgd_leaf_size
 #define pgd_leaf_size(x) (1ULL << PGDIR_SHIFT)
diff --git a/mm/internal.h b/mm/internal.h
index 64c2eb0b160e169ab9134e3ab618d8a1d552d92c..c0454fe019b9078a963b1ab3685bf31ccfd768b7 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -395,6 +395,8 @@ void unmap_page_range(struct mmu_gather *tlb,
 void page_cache_ra_order(struct readahead_control *, struct file_ra_state *,
 		unsigned int order);
 void force_page_cache_ra(struct readahead_control *, unsigned long nr);
+void vunmap_pgd_range(pgd_t *pgd_table, unsigned long addr, unsigned long end,
+		      pgtbl_mod_mask *mask);
 static inline void force_page_cache_readahead(struct address_space *mapping,
 		struct file *file, pgoff_t index, unsigned long nr_to_read)
 {
diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index 634162271c0045965eabd9bfe8b64f4a1135576c..8d260f2174fe664b54dcda054cb9759ae282bf03 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -427,6 +427,24 @@ static void vunmap_p4d_range(pgd_t *pgd, unsigned long addr, unsigned long end,
 	} while (p4d++, addr = next, addr != end);
 }
 
+void vunmap_pgd_range(pgd_t *pgd_table, unsigned long addr, unsigned long end,
+		      pgtbl_mod_mask *mask)
+{
+	unsigned long next;
+	pgd_t *pgd = pgd_offset_pgd(pgd_table, addr);
+
+	BUG_ON(addr >= end);
+
+	do {
+		next = pgd_addr_end(addr, end);
+		if (pgd_bad(*pgd))
+			*mask |= PGTBL_PGD_MODIFIED;
+		if (pgd_none_or_clear_bad(pgd))
+			continue;
+		vunmap_p4d_range(pgd, addr, next, mask);
+	} while (pgd++, addr = next, addr != end);
+}
+
 /*
  * vunmap_range_noflush is similar to vunmap_range, but does not
  * flush caches or TLBs.
@@ -441,21 +459,9 @@ static void vunmap_p4d_range(pgd_t *pgd, unsigned long addr, unsigned long end,
  */
 void __vunmap_range_noflush(unsigned long start, unsigned long end)
 {
-	unsigned long next;
-	pgd_t *pgd;
-	unsigned long addr = start;
 	pgtbl_mod_mask mask = 0;
 
-	BUG_ON(addr >= end);
-	pgd = pgd_offset_k(addr);
-	do {
-		next = pgd_addr_end(addr, end);
-		if (pgd_bad(*pgd))
-			mask |= PGTBL_PGD_MODIFIED;
-		if (pgd_none_or_clear_bad(pgd))
-			continue;
-		vunmap_p4d_range(pgd, addr, next, &mask);
-	} while (pgd++, addr = next, addr != end);
+	vunmap_pgd_range(init_mm.pgd, start, end, &mask);
 
 	if (mask & ARCH_PAGE_TABLE_SYNC_MASK)
 		arch_sync_kernel_mappings(start, end);

From patchwork Fri Jan 10 18:40:38 2025
X-Patchwork-Submitter: Brendan Jackman
X-Patchwork-Id: 13935572
Date: Fri, 10 Jan 2025 18:40:38 +0000
In-Reply-To: <20250110-asi-rfc-v2-v2-0-8419288bc805@google.com>
References: <20250110-asi-rfc-v2-v2-0-8419288bc805@google.com>
Message-ID: <20250110-asi-rfc-v2-v2-12-8419288bc805@google.com>
Subject: [PATCH RFC v2 12/29] mm: asi: Add basic infrastructure for global non-sensitive mappings
From: Brendan Jackman

From: Junaid Shahid

A pseudo-PGD is added to store global non-sensitive ASI mappings. Actual ASI
PGDs copy entries from this pseudo-PGD during asi_init().
Memory can be mapped as globally non-sensitive by calling asi_map() with
ASI_GLOBAL_NONSENSITIVE. Page tables allocated for global non-sensitive
mappings are never freed. These page tables are shared between all domains and
init_mm, so they don't need special synchronization.

RFC note: A refactoring/prep commit should be split out of this patch.

Signed-off-by: Junaid Shahid
Signed-off-by: Brendan Jackman
---
 arch/x86/include/asm/asi.h |  3 +++
 arch/x86/mm/asi.c          | 37 +++++++++++++++++++++++++++++++++++++
 arch/x86/mm/init_64.c      | 25 ++++++++++++++++---------
 arch/x86/mm/mm_internal.h  |  3 +++
 include/asm-generic/asi.h  |  2 ++
 5 files changed, 61 insertions(+), 9 deletions(-)

diff --git a/arch/x86/include/asm/asi.h b/arch/x86/include/asm/asi.h
index 33f18be0e268b3a6725196619cbb8d847c21e197..555edb5f292e4d6baba782f51d014aa48dc850b6 100644
--- a/arch/x86/include/asm/asi.h
+++ b/arch/x86/include/asm/asi.h
@@ -120,6 +120,9 @@ struct asi_taint_policy {
 	asi_taints_t set;
 };
 
+extern struct asi __asi_global_nonsensitive;
+#define ASI_GLOBAL_NONSENSITIVE (&__asi_global_nonsensitive)
+
 /*
  * An ASI domain (struct asi) represents a restricted address space.
The * unrestricted address space (and user address space under PTI) are not diff --git a/arch/x86/mm/asi.c b/arch/x86/mm/asi.c index f2d8fbc0366c289891903e1c2ac6c59b9476d95f..17391ec8b22e3c0903cd5ca29cbb03fcc4cbacce 100644 --- a/arch/x86/mm/asi.c +++ b/arch/x86/mm/asi.c @@ -13,6 +13,7 @@ #include #include +#include "mm_internal.h" #include "../../../mm/internal.h" static struct asi_taint_policy *taint_policies[ASI_MAX_NUM_CLASSES]; @@ -26,6 +27,13 @@ const char *asi_class_names[] = { DEFINE_PER_CPU_ALIGNED(struct asi *, curr_asi); EXPORT_SYMBOL(curr_asi); +static __aligned(PAGE_SIZE) pgd_t asi_global_nonsensitive_pgd[PTRS_PER_PGD]; + +struct asi __asi_global_nonsensitive = { + .pgd = asi_global_nonsensitive_pgd, + .mm = &init_mm, +}; + static inline bool asi_class_id_valid(enum asi_class_id class_id) { return class_id >= 0 && class_id < ASI_MAX_NUM_CLASSES; @@ -156,6 +164,31 @@ void __init asi_check_boottime_disable(void) pr_info("ASI enablement ignored due to incomplete implementation.\n"); } +static int __init asi_global_init(void) +{ + if (!boot_cpu_has(X86_FEATURE_ASI)) + return 0; + + /* + * Lower-level pagetables for global nonsensitive mappings are shared, + * but the PGD has to be copied into each domain during asi_init. To + * avoid needing to synchronize new mappings into pre-existing domains + * we just pre-allocate all of the relevant level N-1 entries so that + * the global nonsensitive PGD already has pointers that can be copied + * when new domains get asi_init()ed. 
+ */ + preallocate_sub_pgd_pages(asi_global_nonsensitive_pgd, + PAGE_OFFSET, + PAGE_OFFSET + PFN_PHYS(max_pfn) - 1, + "ASI Global Non-sensitive direct map"); + preallocate_sub_pgd_pages(asi_global_nonsensitive_pgd, + VMALLOC_START, VMALLOC_END, + "ASI Global Non-sensitive vmalloc"); + + return 0; +} +subsys_initcall(asi_global_init) + static void __asi_destroy(struct asi *asi) { WARN_ON_ONCE(asi->ref_count <= 0); @@ -170,6 +203,7 @@ int asi_init(struct mm_struct *mm, enum asi_class_id class_id, struct asi **out_ { struct asi *asi; int err = 0; + uint i; *out_asi = NULL; @@ -203,6 +237,9 @@ int asi_init(struct mm_struct *mm, enum asi_class_id class_id, struct asi **out_ asi->mm = mm; asi->class_id = class_id; + for (i = KERNEL_PGD_BOUNDARY; i < PTRS_PER_PGD; i++) + set_pgd(asi->pgd + i, asi_global_nonsensitive_pgd[i]); + exit_unlock: if (err) __asi_destroy(asi); diff --git a/arch/x86/mm/init_64.c b/arch/x86/mm/init_64.c index ff253648706fa9cd49169a54882014a72ad540cf..9d358a05c4e18ac6d5e115de111758ea6cdd37f2 100644 --- a/arch/x86/mm/init_64.c +++ b/arch/x86/mm/init_64.c @@ -1288,18 +1288,15 @@ static void __init register_page_bootmem_info(void) #endif } -/* - * Pre-allocates page-table pages for the vmalloc area in the kernel page-table. - * Only the level which needs to be synchronized between all page-tables is - * allocated because the synchronization can be expensive. - */ -static void __init preallocate_vmalloc_pages(void) +/* Initialize empty pagetables at the level below PGD. 
*/ +void __init preallocate_sub_pgd_pages(pgd_t *pgd_table, ulong start, + ulong end, const char *name) { unsigned long addr; const char *lvl; - for (addr = VMALLOC_START; addr <= VMEMORY_END; addr = ALIGN(addr + 1, PGDIR_SIZE)) { - pgd_t *pgd = pgd_offset_k(addr); + for (addr = start; addr <= end; addr = ALIGN(addr + 1, PGDIR_SIZE)) { + pgd_t *pgd = pgd_offset_pgd(pgd_table, addr); p4d_t *p4d; pud_t *pud; @@ -1335,7 +1332,17 @@ static void __init preallocate_vmalloc_pages(void) * The pages have to be there now or they will be missing in * process page-tables later. */ - panic("Failed to pre-allocate %s pages for vmalloc area\n", lvl); + panic("Failed to pre-allocate %s pages for %s area\n", lvl, name); +} + +/* + * Pre-allocates page-table pages for the vmalloc area in the kernel page-table. + * Only the level which needs to be synchronized between all page-tables is + * allocated because the synchronization can be expensive. + */ +static void __init preallocate_vmalloc_pages(void) +{ + preallocate_sub_pgd_pages(init_mm.pgd, VMALLOC_START, VMEMORY_END, "vmalloc"); } void __init mem_init(void) diff --git a/arch/x86/mm/mm_internal.h b/arch/x86/mm/mm_internal.h index 3f37b5c80bb32ff34656a20789449da92e853eb6..1203a977edcd523589ad88a37aab01398a10a129 100644 --- a/arch/x86/mm/mm_internal.h +++ b/arch/x86/mm/mm_internal.h @@ -25,4 +25,7 @@ void update_cache_mode_entry(unsigned entry, enum page_cache_mode cache); extern unsigned long tlb_single_page_flush_ceiling; +extern void preallocate_sub_pgd_pages(pgd_t *pgd_table, ulong start, + ulong end, const char *name); + #endif /* __X86_MM_INTERNAL_H */ diff --git a/include/asm-generic/asi.h b/include/asm-generic/asi.h index 5be8f7d657ba0bc2196e333f62b084d0c9eef7b6..7867b8c23449058a1dd06308ab5351e0d210a489 100644 --- a/include/asm-generic/asi.h +++ b/include/asm-generic/asi.h @@ -23,6 +23,8 @@ typedef u8 asi_taints_t; #ifndef CONFIG_MITIGATION_ADDRESS_SPACE_ISOLATION +#define ASI_GLOBAL_NONSENSITIVE NULL + struct asi_hooks {}; 
struct asi {};

From patchwork Fri Jan 10 18:40:39 2025
Date: Fri, 10 Jan 2025 18:40:39 +0000
In-Reply-To: <20250110-asi-rfc-v2-v2-0-8419288bc805@google.com>
References: <20250110-asi-rfc-v2-v2-0-8419288bc805@google.com>
Message-ID: <20250110-asi-rfc-v2-v2-13-8419288bc805@google.com>
Subject: [PATCH RFC v2 13/29] mm: Add __PAGEFLAG_FALSE
From: Brendan Jackman
__PAGEFLAG_FALSE is a non-atomic equivalent of PAGEFLAG_FALSE.

Checkpatch-args: --ignore=COMPLEX_MACRO

Signed-off-by: Brendan Jackman
---
 include/linux/page-flags.h | 7 +++++++
 1 file changed, 7 insertions(+)

diff --git a/include/linux/page-flags.h b/include/linux/page-flags.h
index cc839e4365c18223e68c35efd0f67e7650708e8b..7ee9a0edc6d21708fc93dfa8913dc1ae9478dee3 100644
--- a/include/linux/page-flags.h
+++ b/include/linux/page-flags.h
@@ -484,6 +484,10 @@ static inline int Page##uname(const struct page *page) { return 0; }
 	FOLIO_SET_FLAG_NOOP(lname)					\
 static inline void SetPage##uname(struct page *page) { }
 
+#define __SETPAGEFLAG_NOOP(uname, lname)				\
+static inline void __folio_set_##lname(struct folio *folio) { }	\
+static inline void __SetPage##uname(struct page *page) { }
+
 #define CLEARPAGEFLAG_NOOP(uname, lname)				\
 	FOLIO_CLEAR_FLAG_NOOP(lname)					\
 static inline void ClearPage##uname(struct page *page) { }
@@ -506,6 +510,9 @@ static inline int TestClearPage##uname(struct page *page) { return 0; }
 #define TESTSCFLAG_FALSE(uname, lname)					\
 	TESTSETFLAG_FALSE(uname, lname) TESTCLEARFLAG_FALSE(uname, lname)
 
+#define __PAGEFLAG_FALSE(uname, lname)	TESTPAGEFLAG_FALSE(uname, lname) \
+	__SETPAGEFLAG_NOOP(uname, lname) __CLEARPAGEFLAG_NOOP(uname, lname)
+
 __PAGEFLAG(Locked, locked, PF_NO_TAIL)
 FOLIO_FLAG(waiters, FOLIO_HEAD_PAGE)
 FOLIO_FLAG(referenced, FOLIO_HEAD_PAGE)

From patchwork Fri Jan 10 18:40:40 2025
Date: Fri, 10 Jan 2025 18:40:40 +0000
In-Reply-To: <20250110-asi-rfc-v2-v2-0-8419288bc805@google.com>
References: <20250110-asi-rfc-v2-v2-0-8419288bc805@google.com>
Message-ID: <20250110-asi-rfc-v2-v2-14-8419288bc805@google.com>
Subject: [PATCH RFC v2 14/29] mm: asi: Map non-user buddy allocations as nonsensitive
From: Brendan Jackman

This is just the simplest possible page_alloc patch I could come up with to
demonstrate ASI working in a "denylist" mode: we map the direct map into the
restricted address space, except pages
allocated with GFP_USER.

Pages must be asi_unmap()'d before they can be re-allocated. This requires a
TLB flush, which can't generally be done from the free path (it requires IRQs
on), so pages that need unmapping are freed via a workqueue.

This solution is not ideal:

- If the async queue gets long, we'll run out of allocatable memory.
- We don't batch the TLB flushing or worker wakeups at all.
- We drop FPI flags and skip the pcplists.

Internally at Google we've so far found that, with extra complexity, we're
able to make this solution work for the workloads we've tested. It seems
likely this won't keep working forever. So for the [PATCH] version I hope to
come up with an implementation that instead makes the allocator more deeply
aware of sensitivity; most likely this will look a bit like an extra
"dimension", like movability etc. This was discussed at LSF/MM/BPF [1]; I plan
to research this right after RFCv2. However, once that research is done we
might want to consider merging a sub-optimal solution to unblock iteration and
development.

[1] https://youtu.be/WD9-ey8LeiI

The main thing in here that is "real" and may warrant discussion is
__GFP_SENSITIVE (or at least, some sort of allocator switch to determine
sensitivity; in an "allowlist" model we would probably have the opposite, and
in future iterations we might want additional options for different "types" of
sensitivity).

I think we need this as an extension to the allocation API; the main
alternative would be to infer from the context of the allocation whether the
data should be treated as sensitive. However, I think we will have contexts
where both sensitive and nonsensitive data need to be allocatable.

If there are concerns about __GFP flags specifically, rather than just the
general problem of expanding the allocator API, we could always just provide
an API like __alloc_pages_sensitive or something, implemented with ALLOC_
flags internally.
Checkpatch-args: --ignore=SPACING,MACRO_ARG_UNUSED,COMPLEX_MACRO Signed-off-by: Brendan Jackman --- arch/x86/mm/asi.c | 33 +++++++++- include/linux/gfp.h | 5 ++ include/linux/gfp_types.h | 15 ++++- include/linux/page-flags.h | 11 ++++ include/trace/events/mmflags.h | 12 +++- mm/mm_init.c | 1 + mm/page_alloc.c | 134 ++++++++++++++++++++++++++++++++++++++++- tools/perf/builtin-kmem.c | 1 + 8 files changed, 205 insertions(+), 7 deletions(-) diff --git a/arch/x86/mm/asi.c b/arch/x86/mm/asi.c index 17391ec8b22e3c0903cd5ca29cbb03fcc4cbacce..b951f2100b8bdea5738ded16166255deb29faf57 100644 --- a/arch/x86/mm/asi.c +++ b/arch/x86/mm/asi.c @@ -5,6 +5,8 @@ #include #include +#include + #include #include #include @@ -104,10 +106,17 @@ const char *asi_class_name(enum asi_class_id class_id) * allocator from interrupts and the page allocator ultimately calls this * code. * - They support customizing the allocation flags. + * - They avoid infinite recursion when the page allocator calls back to + * asi_map * * On the other hand, they do not use the normal page allocation infrastructure, * that means that PTE pages do not have the PageTable type nor the PagePgtable * flag and we don't increment the meminfo stat (NR_PAGETABLE) as they do. + * + * As an optimisation we attempt to map the pagetables in + * ASI_GLOBAL_NONSENSITIVE, but this can fail, and for simplicity we don't do + * anything about that. This means it's invalid to access ASI pagetables from a + * critical section. 
*/ static_assert(!IS_ENABLED(CONFIG_PARAVIRT)); #define DEFINE_ASI_PGTBL_ALLOC(base, level) \ @@ -116,8 +125,11 @@ static level##_t * asi_##level##_alloc(struct asi *asi, \ gfp_t flags) \ { \ if (unlikely(base##_none(*base))) { \ - ulong pgtbl = get_zeroed_page(flags); \ + /* Stop asi_map calls causing recursive allocation */ \ + gfp_t pgtbl_gfp = flags | __GFP_SENSITIVE; \ + ulong pgtbl = get_zeroed_page(pgtbl_gfp); \ phys_addr_t pgtbl_pa; \ + int err; \ \ if (!pgtbl) \ return NULL; \ @@ -131,6 +143,16 @@ static level##_t * asi_##level##_alloc(struct asi *asi, \ } \ \ mm_inc_nr_##level##s(asi->mm); \ + \ + err = asi_map_gfp(ASI_GLOBAL_NONSENSITIVE, \ + (void *)pgtbl, PAGE_SIZE, flags); \ + if (err) \ + /* Should be rare. Spooky. */ \ + pr_warn_ratelimited("Created sensitive ASI %s (%pK, maps %luK).\n",\ + #level, (void *)pgtbl, addr); \ + else \ + __SetPageGlobalNonSensitive(virt_to_page(pgtbl));\ + \ } \ out: \ VM_BUG_ON(base##_leaf(*base)); \ @@ -586,6 +608,9 @@ static bool follow_physaddr( * reason for this is that we don't want to unexpectedly undo mappings that * weren't created by the present caller. * + * This must not be called from the critical section, as ASI's pagetables are + * not guaranteed to be mapped in the restricted address space. + * * If the source mapping is a large page and the range being mapped spans the * entire large page, then it will be mapped as a large page in the ASI page * tables too. If the range does not span the entire huge page, then it will be @@ -606,6 +631,9 @@ int __must_check asi_map_gfp(struct asi *asi, void *addr, unsigned long len, gfp if (!static_asi_enabled()) return 0; + /* ASI pagetables might be sensitive. */ + WARN_ON_ONCE(asi_in_critical_section()); + VM_BUG_ON(!IS_ALIGNED(start, PAGE_SIZE)); VM_BUG_ON(!IS_ALIGNED(len, PAGE_SIZE)); /* RFC: fault_in_kernel_space should be renamed. 
*/ @@ -706,6 +734,9 @@ void asi_unmap(struct asi *asi, void *addr, size_t len) if (!static_asi_enabled() || !len) return; + /* ASI pagetables might be sensitive. */ + WARN_ON_ONCE(asi_in_critical_section()); + VM_BUG_ON(start & ~PAGE_MASK); VM_BUG_ON(len & ~PAGE_MASK); VM_BUG_ON(!fault_in_kernel_space(start)); /* Misnamed, ignore "fault_" */ diff --git a/include/linux/gfp.h b/include/linux/gfp.h index a951de920e208991b37fb2d878d9a0e9c550548c..dd3678b5b08016ceaee2d8e1932bf4aefbc7efb0 100644 --- a/include/linux/gfp.h +++ b/include/linux/gfp.h @@ -396,6 +396,11 @@ extern void page_frag_free(void *addr); #define __free_page(page) __free_pages((page), 0) #define free_page(addr) free_pages((addr), 0) +#ifdef CONFIG_MITIGATION_ADDRESS_SPACE_ISOLATION +void page_alloc_init_asi(void); +#else +static inline void page_alloc_init_asi(void) { } +#endif void page_alloc_init_cpuhp(void); int decay_pcp_high(struct zone *zone, struct per_cpu_pages *pcp); void drain_zone_pages(struct zone *zone, struct per_cpu_pages *pcp); diff --git a/include/linux/gfp_types.h b/include/linux/gfp_types.h index 65db9349f9053c701e24bdcf1dfe6afbf1278a2d..5147dbd53eafdccc32cfd506569b04d5c082d1b2 100644 --- a/include/linux/gfp_types.h +++ b/include/linux/gfp_types.h @@ -58,6 +58,7 @@ enum { #ifdef CONFIG_SLAB_OBJ_EXT ___GFP_NO_OBJ_EXT_BIT, #endif + ___GFP_SENSITIVE_BIT, ___GFP_LAST_BIT }; @@ -103,6 +104,11 @@ enum { #else #define ___GFP_NO_OBJ_EXT 0 #endif +#ifdef CONFIG_MITIGATION_ADDRESS_SPACE_ISOLATION +#define ___GFP_SENSITIVE BIT(___GFP_SENSITIVE_BIT) +#else +#define ___GFP_SENSITIVE 0 +#endif /* * Physical address zone modifiers (see linux/mmzone.h - low four bits) @@ -299,6 +305,12 @@ enum { /* Disable lockdep for GFP context tracking */ #define __GFP_NOLOCKDEP ((__force gfp_t)___GFP_NOLOCKDEP) +/* + * Allocate sensitive memory, i.e. do not map it into ASI's restricted address + * space. 
+ */ +#define __GFP_SENSITIVE ((__force gfp_t)___GFP_SENSITIVE) + /* Room for N __GFP_FOO bits */ #define __GFP_BITS_SHIFT ___GFP_LAST_BIT #define __GFP_BITS_MASK ((__force gfp_t)((1 << __GFP_BITS_SHIFT) - 1)) @@ -380,7 +392,8 @@ enum { #define GFP_NOWAIT (__GFP_KSWAPD_RECLAIM | __GFP_NOWARN) #define GFP_NOIO (__GFP_RECLAIM) #define GFP_NOFS (__GFP_RECLAIM | __GFP_IO) -#define GFP_USER (__GFP_RECLAIM | __GFP_IO | __GFP_FS | __GFP_HARDWALL) +#define GFP_USER (__GFP_RECLAIM | __GFP_IO | __GFP_FS | \ + __GFP_HARDWALL | __GFP_SENSITIVE) #define GFP_DMA __GFP_DMA #define GFP_DMA32 __GFP_DMA32 #define GFP_HIGHUSER (GFP_USER | __GFP_HIGHMEM) diff --git a/include/linux/page-flags.h b/include/linux/page-flags.h index 7ee9a0edc6d21708fc93dfa8913dc1ae9478dee3..761b082f1885976b860196d8e69044276e8fa9ca 100644 --- a/include/linux/page-flags.h +++ b/include/linux/page-flags.h @@ -125,6 +125,9 @@ enum pageflags { #endif #ifdef CONFIG_ARCH_USES_PG_ARCH_3 PG_arch_3, +#endif +#ifdef CONFIG_MITIGATION_ADDRESS_SPACE_ISOLATION + PG_global_nonsensitive, #endif __NR_PAGEFLAGS, @@ -632,6 +635,14 @@ FOLIO_TEST_CLEAR_FLAG_FALSE(young) FOLIO_FLAG_FALSE(idle) #endif +#ifdef CONFIG_MITIGATION_ADDRESS_SPACE_ISOLATION +__PAGEFLAG(GlobalNonSensitive, global_nonsensitive, PF_ANY); +#define __PG_GLOBAL_NONSENSITIVE (1UL << PG_global_nonsensitive) +#else +__PAGEFLAG_FALSE(GlobalNonSensitive, global_nonsensitive); +#define __PG_GLOBAL_NONSENSITIVE 0 +#endif + /* * PageReported() is used to track reported free pages within the Buddy * allocator. 
We can use the non-atomic version of the test and set diff --git a/include/trace/events/mmflags.h b/include/trace/events/mmflags.h index bb8a59c6caa21971862b6f200263c74cedff3882..a511a76b4310e949fd5b40b01253cf7d262f0177 100644 --- a/include/trace/events/mmflags.h +++ b/include/trace/events/mmflags.h @@ -50,7 +50,8 @@ gfpflag_string(__GFP_RECLAIM), \ gfpflag_string(__GFP_DIRECT_RECLAIM), \ gfpflag_string(__GFP_KSWAPD_RECLAIM), \ - gfpflag_string(__GFP_ZEROTAGS) + gfpflag_string(__GFP_ZEROTAGS), \ + gfpflag_string(__GFP_SENSITIVE) #ifdef CONFIG_KASAN_HW_TAGS #define __def_gfpflag_names_kasan , \ @@ -95,6 +96,12 @@ #define IF_HAVE_PG_ARCH_3(_name) #endif +#ifdef CONFIG_MITIGATION_ADDRESS_SPACE_ISOLATION +#define IF_HAVE_ASI(_name) ,{1UL << PG_##_name, __stringify(_name)} +#else +#define IF_HAVE_ASI(_name) +#endif + #define DEF_PAGEFLAG_NAME(_name) { 1UL << PG_##_name, __stringify(_name) } #define __def_pageflag_names \ @@ -122,7 +129,8 @@ IF_HAVE_PG_HWPOISON(hwpoison) \ IF_HAVE_PG_IDLE(idle) \ IF_HAVE_PG_IDLE(young) \ IF_HAVE_PG_ARCH_2(arch_2) \ -IF_HAVE_PG_ARCH_3(arch_3) +IF_HAVE_PG_ARCH_3(arch_3) \ +IF_HAVE_ASI(global_nonsensitive) #define show_page_flags(flags) \ (flags) ? 
__print_flags(flags, "|", \ diff --git a/mm/mm_init.c b/mm/mm_init.c index 4ba5607aaf1943214c7f79f2a52e17eefac2ad79..30b84c0dd8b764e91fb64b116805ebb46526dd7e 100644 --- a/mm/mm_init.c +++ b/mm/mm_init.c @@ -2639,6 +2639,7 @@ void __init mm_core_init(void) BUILD_BUG_ON(MAX_ZONELISTS > 2); build_all_zonelists(NULL); page_alloc_init_cpuhp(); + page_alloc_init_asi(); /* * page_ext requires contiguous pages, diff --git a/mm/page_alloc.c b/mm/page_alloc.c index b6958333054d06ed910f8fef863d83a7312eca9e..3e98fdfbadddb1f7d71e9e050b63255b2008d167 100644 --- a/mm/page_alloc.c +++ b/mm/page_alloc.c @@ -1041,6 +1041,8 @@ static void kernel_init_pages(struct page *page, int numpages) kasan_enable_current(); } +static bool asi_async_free_enqueue(struct page *page, unsigned int order); + __always_inline bool free_pages_prepare(struct page *page, unsigned int order) { @@ -1049,6 +1051,11 @@ __always_inline bool free_pages_prepare(struct page *page, bool init = want_init_on_free(); bool compound = PageCompound(page); struct folio *folio = page_folio(page); + /* + * __PG_GLOBAL_NONSENSITIVE needs to be kept around for the ASI async + * free logic. 
+ */ + unsigned long flags_mask = ~PAGE_FLAGS_CHECK_AT_PREP | __PG_GLOBAL_NONSENSITIVE; VM_BUG_ON_PAGE(PageTail(page), page); @@ -1107,7 +1114,7 @@ __always_inline bool free_pages_prepare(struct page *page, continue; } } - (page + i)->flags &= ~PAGE_FLAGS_CHECK_AT_PREP; + (page + i)->flags &= flags_mask; } } if (PageMappingFlags(page)) { @@ -1123,7 +1130,7 @@ __always_inline bool free_pages_prepare(struct page *page, } page_cpupid_reset_last(page); - page->flags &= ~PAGE_FLAGS_CHECK_AT_PREP; + page->flags &= flags_mask; reset_page_owner(page, order); page_table_check_free(page, order); pgalloc_tag_sub(page, 1 << order); @@ -1164,7 +1171,7 @@ __always_inline bool free_pages_prepare(struct page *page, debug_pagealloc_unmap_pages(page, 1 << order); - return true; + return !asi_async_free_enqueue(page, order); } /* @@ -4528,6 +4535,118 @@ static inline bool prepare_alloc_pages(gfp_t gfp_mask, unsigned int order, return true; } +#ifdef CONFIG_MITIGATION_ADDRESS_SPACE_ISOLATION + +struct asi_async_free_cpu_state { + struct work_struct work; + struct list_head to_free; +}; +static DEFINE_PER_CPU(struct asi_async_free_cpu_state, asi_async_free_cpu_state); + +static void asi_async_free_work_fn(struct work_struct *work) +{ + struct asi_async_free_cpu_state *cpu_state = + container_of(work, struct asi_async_free_cpu_state, work); + struct page *page, *tmp; + struct list_head to_free = LIST_HEAD_INIT(to_free); + + local_irq_disable(); + list_splice_init(&cpu_state->to_free, &to_free); + local_irq_enable(); /* IRQs must be on for asi_unmap. 
*/ + + /* Use _safe because __free_the_page uses .lru */ + list_for_each_entry_safe(page, tmp, &to_free, lru) { + unsigned long order = page_private(page); + + asi_unmap(ASI_GLOBAL_NONSENSITIVE, page_to_virt(page), + PAGE_SIZE << order); + for (int i = 0; i < (1 << order); i++) + __ClearPageGlobalNonSensitive(page + i); + + free_one_page(page_zone(page), page, page_to_pfn(page), order, FPI_NONE); + cond_resched(); + } +} + +/* Returns true if the page was queued for asynchronous freeing. */ +static bool asi_async_free_enqueue(struct page *page, unsigned int order) +{ + struct asi_async_free_cpu_state *cpu_state; + unsigned long flags; + + if (!PageGlobalNonSensitive(page)) + return false; + + local_irq_save(flags); + cpu_state = this_cpu_ptr(&asi_async_free_cpu_state); + set_page_private(page, order); + list_add(&page->lru, &cpu_state->to_free); + if (mm_percpu_wq) + queue_work_on(smp_processor_id(), mm_percpu_wq, &cpu_state->work); + local_irq_restore(flags); + + return true; +} + +void __init page_alloc_init_asi(void) +{ + int cpu; + + if (!static_asi_enabled()) + return; + + for_each_possible_cpu(cpu) { + struct asi_async_free_cpu_state *cpu_state + = &per_cpu(asi_async_free_cpu_state, cpu); + + INIT_WORK(&cpu_state->work, asi_async_free_work_fn); + INIT_LIST_HEAD(&cpu_state->to_free); + } +} + +static int asi_map_alloced_pages(struct page *page, uint order, gfp_t gfp_mask) +{ + + if (!static_asi_enabled()) + return 0; + + if (!(gfp_mask & __GFP_SENSITIVE)) { + int err = asi_map_gfp( + ASI_GLOBAL_NONSENSITIVE, page_to_virt(page), + PAGE_SIZE * (1 << order), gfp_mask); + uint i; + + if (err) + return err; + + for (i = 0; i < (1 << order); i++) + __SetPageGlobalNonSensitive(page + i); + } + + return 0; +} + +#else /* CONFIG_MITIGATION_ADDRESS_SPACE_ISOLATION */ + +static inline +int asi_map_alloced_pages(struct page *pages, uint order, gfp_t gfp_mask) +{ + return 0; +} + +static inline +bool asi_unmap_freed_pages(struct page *page, unsigned int order) +{ + return 
true; +} + +static bool asi_async_free_enqueue(struct page *page, unsigned int order) +{ + return false; +} + +#endif + /* * __alloc_pages_bulk - Allocate a number of order-0 pages to a list or array * @gfp: GFP flags for the allocation @@ -4727,6 +4846,10 @@ struct page *__alloc_pages_noprof(gfp_t gfp, unsigned int order, if (WARN_ON_ONCE_GFP(order > MAX_PAGE_ORDER, gfp)) return NULL; + /* Clear out old (maybe sensitive) data before reallocating as nonsensitive. */ + if (!static_asi_enabled() && !(gfp & __GFP_SENSITIVE)) + gfp |= __GFP_ZERO; + gfp &= gfp_allowed_mask; /* * Apply scoped allocation constraints. This is mainly about GFP_NOFS @@ -4773,6 +4896,11 @@ struct page *__alloc_pages_noprof(gfp_t gfp, unsigned int order, trace_mm_page_alloc(page, order, alloc_gfp, ac.migratetype); kmsan_alloc_page(page, order, alloc_gfp); + if (page && unlikely(asi_map_alloced_pages(page, order, gfp))) { + __free_pages(page, order); + page = NULL; + } + return page; } EXPORT_SYMBOL(__alloc_pages_noprof); diff --git a/tools/perf/builtin-kmem.c b/tools/perf/builtin-kmem.c index a756147e2eec7a3820e1953db346fafa8fe687ba..99f4c6632155d2573f1370af131c15c3d8baa655 100644 --- a/tools/perf/builtin-kmem.c +++ b/tools/perf/builtin-kmem.c @@ -682,6 +682,7 @@ static const struct { { "__GFP_RECLAIM", "R" }, { "__GFP_DIRECT_RECLAIM", "DR" }, { "__GFP_KSWAPD_RECLAIM", "KR" }, + { "__GFP_SENSITIVE", "S" }, }; static size_t max_gfp_len;

From patchwork Fri Jan 10 18:40:41 2025
Date: Fri, 10 Jan 2025 18:40:41 +0000
In-Reply-To: <20250110-asi-rfc-v2-v2-0-8419288bc805@google.com>
References: <20250110-asi-rfc-v2-v2-0-8419288bc805@google.com>
Message-ID: <20250110-asi-rfc-v2-v2-15-8419288bc805@google.com>
Subject: [PATCH TEMP WORKAROUND RFC v2 15/29] mm: asi: Workaround missing partial-unmap support
From: Brendan Jackman

This is a hack, no need to review it carefully. asi_unmap() doesn't currently work unless it corresponds exactly to an asi_map() of the exact same region.
This is mostly harmless (it's only a functional problem if you wanna touch those pages from the ASI critical section) but it's messy. For now, working around the only practical case that appears by moving the asi_map call up the call stack in the page allocator, to the place where we know the actual size the mapping is supposed to end up at. This just removes the main case where that happens. Later, a proper solution for partial unmaps will be needed. Signed-off-by: Brendan Jackman --- mm/page_alloc.c | 40 ++++++++++++++++++++++++++-------------- 1 file changed, 26 insertions(+), 14 deletions(-) diff --git a/mm/page_alloc.c b/mm/page_alloc.c index 3e98fdfbadddb1f7d71e9e050b63255b2008d167..f96e95032450be90b6567f67915b0b941fc431d8 100644 --- a/mm/page_alloc.c +++ b/mm/page_alloc.c @@ -4604,22 +4604,20 @@ void __init page_alloc_init_asi(void) } } -static int asi_map_alloced_pages(struct page *page, uint order, gfp_t gfp_mask) +static int asi_map_alloced_pages(struct page *page, size_t size, gfp_t gfp_mask) { if (!static_asi_enabled()) return 0; if (!(gfp_mask & __GFP_SENSITIVE)) { - int err = asi_map_gfp( - ASI_GLOBAL_NONSENSITIVE, page_to_virt(page), - PAGE_SIZE * (1 << order), gfp_mask); + int err = asi_map_gfp(ASI_GLOBAL_NONSENSITIVE, page_to_virt(page), size, gfp_mask); uint i; if (err) return err; - for (i = 0; i < (1 << order); i++) + for (i = 0; i < (size >> PAGE_SHIFT); i++) __SetPageGlobalNonSensitive(page + i); } @@ -4629,7 +4627,7 @@ static int asi_map_alloced_pages(struct page *page, uint order, gfp_t gfp_mask) #else /* CONFIG_MITIGATION_ADDRESS_SPACE_ISOLATION */ static inline -int asi_map_alloced_pages(struct page *pages, uint order, gfp_t gfp_mask) +int asi_map_alloced_pages(struct page *pages, size_t size, gfp_t gfp_mask) { return 0; } @@ -4896,7 +4894,7 @@ struct page *__alloc_pages_noprof(gfp_t gfp, unsigned int order, trace_mm_page_alloc(page, order, alloc_gfp, ac.migratetype); kmsan_alloc_page(page, order, alloc_gfp); - if (page && 
unlikely(asi_map_alloced_pages(page, order, gfp))) { + if (page && unlikely(asi_map_alloced_pages(page, PAGE_SIZE << order, gfp))) { __free_pages(page, order); page = NULL; } @@ -5118,12 +5116,13 @@ void page_frag_free(void *addr) } EXPORT_SYMBOL(page_frag_free); -static void *make_alloc_exact(unsigned long addr, unsigned int order, - size_t size) +static void *finish_exact_alloc(unsigned long addr, unsigned int order, + size_t size, gfp_t gfp_mask) { if (addr) { unsigned long nr = DIV_ROUND_UP(size, PAGE_SIZE); struct page *page = virt_to_page((void *)addr); + struct page *first = page; struct page *last = page + nr; split_page_owner(page, order, 0); @@ -5132,9 +5131,22 @@ static void *make_alloc_exact(unsigned long addr, unsigned int order, while (page < --last) set_page_refcounted(last); - last = page + (1UL << order); + last = page + (1 << order); for (page += nr; page < last; page++) __free_pages_ok(page, 0, FPI_TO_TAIL); + + /* + * ASI doesn't support partially undoing calls to asi_map, so + * we can only safely free sub-allocations if they were made + * with __GFP_SENSITIVE in the first place. Users of this need + * to map with forced __GFP_SENSITIVE and then here we'll make a + * second asi_map_alloced_pages() call to do any mapping that's + * necessary, but with the exact size. 
+ */ + if (unlikely(asi_map_alloced_pages(first, nr << PAGE_SHIFT, gfp_mask))) { + free_pages_exact(first, size); + return NULL; + } } return (void *)addr; } @@ -5162,8 +5174,8 @@ void *alloc_pages_exact_noprof(size_t size, gfp_t gfp_mask) if (WARN_ON_ONCE(gfp_mask & (__GFP_COMP | __GFP_HIGHMEM))) gfp_mask &= ~(__GFP_COMP | __GFP_HIGHMEM); - addr = get_free_pages_noprof(gfp_mask, order); - return make_alloc_exact(addr, order, size); + addr = get_free_pages_noprof(gfp_mask | __GFP_SENSITIVE, order); + return finish_exact_alloc(addr, order, size, gfp_mask); } EXPORT_SYMBOL(alloc_pages_exact_noprof); @@ -5187,10 +5199,10 @@ void * __meminit alloc_pages_exact_nid_noprof(int nid, size_t size, gfp_t gfp_ma if (WARN_ON_ONCE(gfp_mask & (__GFP_COMP | __GFP_HIGHMEM))) gfp_mask &= ~(__GFP_COMP | __GFP_HIGHMEM); - p = alloc_pages_node_noprof(nid, gfp_mask, order); + p = alloc_pages_node_noprof(nid, gfp_mask | __GFP_SENSITIVE, order); if (!p) return NULL; - return make_alloc_exact((unsigned long)page_address(p), order, size); + return finish_exact_alloc((unsigned long)page_address(p), order, size, gfp_mask); } /**

From patchwork Fri Jan 10 18:40:42 2025
Date: Fri, 10 Jan 2025 18:40:42 +0000
In-Reply-To: <20250110-asi-rfc-v2-v2-0-8419288bc805@google.com>
References: <20250110-asi-rfc-v2-v2-0-8419288bc805@google.com>
Message-ID: <20250110-asi-rfc-v2-v2-16-8419288bc805@google.com>
Subject: [PATCH RFC v2 16/29] mm: asi: Map kernel text and static data as nonsensitive
From: Brendan Jackman

Basically we need to map the kernel code and all its static variables. Per-CPU variables need to be treated specially as described in the comments.
The cpu_entry_area is similar - this needs to be nonsensitive so that the CPU can access the GDT etc. when handling a page fault. Under 5-level paging, most of the kernel memory comes under a single PGD entry (see Documentation/x86/x86_64/mm.rst. Basically, the mapping for this big region is the same as under 4-level, just wrapped in an outer PGD entry). For that region, the "clone" logic is moved down one step of the paging hierarchy. Note that the p4d_alloc in asi_clone_p4d won't actually be used in practice; the relevant PGD entry will always have been populated by prior asi_map calls, so this code would "work" if we just wrote p4d_offset (but asi_clone_p4d would be broken if viewed in isolation). The vmemmap area is not under this single PGD; it has its own 2-PGD area, so we still use asi_clone_pgd for that one. Signed-off-by: Brendan Jackman --- arch/x86/mm/asi.c | 105 +++++++++++++++++++++++++++++++++++++- include/asm-generic/vmlinux.lds.h | 11 ++++ 2 files changed, 115 insertions(+), 1 deletion(-) diff --git a/arch/x86/mm/asi.c b/arch/x86/mm/asi.c index b951f2100b8bdea5738ded16166255deb29faf57..bc2cf0475a0e7344a66d81453f55034b2fc77eef 100644 --- a/arch/x86/mm/asi.c +++ b/arch/x86/mm/asi.c @@ -7,7 +7,6 @@ #include #include -#include #include #include #include @@ -186,8 +185,68 @@ void __init asi_check_boottime_disable(void) pr_info("ASI enablement ignored due to incomplete implementation.\n"); } +/* + * Map data by sharing sub-PGD pagetables with the unrestricted mapping. This is + * more efficient than asi_map, but only works when you know the whole top-level + * page needs to be mapped in the restricted tables. Note that the size of the + * mappings this creates differs between 4 and 5-level paging.
+ */ +static void asi_clone_pgd(pgd_t *dst_table, pgd_t *src_table, size_t addr) +{ + pgd_t *src = pgd_offset_pgd(src_table, addr); + pgd_t *dst = pgd_offset_pgd(dst_table, addr); + + if (!pgd_val(*dst)) + set_pgd(dst, *src); + else + WARN_ON_ONCE(pgd_val(*dst) != pgd_val(*src)); +} + +/* + * For 4-level paging this is exactly the same as asi_clone_pgd. For 5-level + * paging it clones one level lower. So this always creates a mapping of the + * same size. + */ +static void asi_clone_p4d(pgd_t *dst_table, pgd_t *src_table, size_t addr) +{ + pgd_t *src_pgd = pgd_offset_pgd(src_table, addr); + pgd_t *dst_pgd = pgd_offset_pgd(dst_table, addr); + p4d_t *src_p4d = p4d_alloc(&init_mm, src_pgd, addr); + p4d_t *dst_p4d = p4d_alloc(&init_mm, dst_pgd, addr); + + if (!p4d_val(*dst_p4d)) + set_p4d(dst_p4d, *src_p4d); + else + WARN_ON_ONCE(p4d_val(*dst_p4d) != p4d_val(*src_p4d)); +} + +/* + * percpu_addr is where the linker put the percpu variable. asi_map_percpu finds + * the place where the percpu allocator copied the data during boot. + * + * This is necessary even when the page allocator defaults to + * global-nonsensitive, because the percpu allocator uses the memblock allocator + * for early allocations. 
+ */ +static int asi_map_percpu(struct asi *asi, void *percpu_addr, size_t len) +{ + int cpu, err; + void *ptr; + + for_each_possible_cpu(cpu) { + ptr = per_cpu_ptr(percpu_addr, cpu); + err = asi_map(asi, ptr, len); + if (err) + return err; + } + + return 0; +} + static int __init asi_global_init(void) { + int err; + if (!boot_cpu_has(X86_FEATURE_ASI)) return 0; @@ -207,6 +266,46 @@ static int __init asi_global_init(void) VMALLOC_START, VMALLOC_END, "ASI Global Non-sensitive vmalloc"); + /* Map all kernel text and static data */ + err = asi_map(ASI_GLOBAL_NONSENSITIVE, (void *)__START_KERNEL, + (size_t)_end - __START_KERNEL); + if (WARN_ON(err)) + return err; + err = asi_map(ASI_GLOBAL_NONSENSITIVE, (void *)FIXADDR_START, + FIXADDR_SIZE); + if (WARN_ON(err)) + return err; + /* Map all static percpu data */ + err = asi_map_percpu( + ASI_GLOBAL_NONSENSITIVE, + __per_cpu_start, __per_cpu_end - __per_cpu_start); + if (WARN_ON(err)) + return err; + + /* + * The next areas are mapped using shared sub-P4D paging structures + * (asi_clone_p4d instead of asi_map), since we know the whole P4D will + * be mapped. + */ + asi_clone_p4d(asi_global_nonsensitive_pgd, init_mm.pgd, + CPU_ENTRY_AREA_BASE); +#ifdef CONFIG_X86_ESPFIX64 + asi_clone_p4d(asi_global_nonsensitive_pgd, init_mm.pgd, + ESPFIX_BASE_ADDR); +#endif + /* + * The vmemmap area actually _must_ be cloned via shared paging + * structures, since mappings can potentially change dynamically when + * hugetlbfs pages are created or broken down. + * + * We always clone 2 PGDs; this is a corollary of the sizes of struct + * page, a page, and the physical address space.
+ */ + WARN_ON(sizeof(struct page) * MAXMEM / PAGE_SIZE != 2 * (1UL << PGDIR_SHIFT)); + asi_clone_pgd(asi_global_nonsensitive_pgd, init_mm.pgd, VMEMMAP_START); + asi_clone_pgd(asi_global_nonsensitive_pgd, init_mm.pgd, + VMEMMAP_START + (1UL << PGDIR_SHIFT)); + return 0; } subsys_initcall(asi_global_init) @@ -599,6 +698,10 @@ static bool follow_physaddr( * Map the given range into the ASI page tables. The source of the mapping is * the regular unrestricted page tables. Can be used to map any kernel memory. * + * In contrast to some internal ASI logic (asi_clone_pgd and asi_clone_p4d) this + * never shares pagetables between restricted and unrestricted address spaces, + * instead it creates wholly new equivalent mappings. + * * The caller MUST ensure that the source mapping will not change during this * function. For dynamic kernel memory, this is generally ensured by mapping the * memory within the allocator. diff --git a/include/asm-generic/vmlinux.lds.h b/include/asm-generic/vmlinux.lds.h index eeadbaeccf88b73af40efe5221760a7cb37058d2..18f6c0448baf5dfbd0721ba9a6d89000fa86f061 100644 --- a/include/asm-generic/vmlinux.lds.h +++ b/include/asm-generic/vmlinux.lds.h @@ -1022,6 +1022,16 @@ COMMON_DISCARDS \ } +/* + * ASI maps certain sections with certain sensitivity levels, so they need to + * have a page-aligned size. + */ +#ifdef CONFIG_MITIGATION_ADDRESS_SPACE_ISOLATION +#define ASI_ALIGN() ALIGN(PAGE_SIZE) +#else +#define ASI_ALIGN() . +#endif + /** * PERCPU_INPUT - the percpu input sections * @cacheline: cacheline size @@ -1043,6 +1053,7 @@ *(.data..percpu) \ *(.data..percpu..shared_aligned) \ PERCPU_DECRYPTED_SECTION \ + . 
= ASI_ALIGN(); \ __per_cpu_end = .; /** From patchwork Fri Jan 10 18:40:43 2025 X-Patchwork-Submitter: Brendan Jackman X-Patchwork-Id: 13935577 Date: Fri, 10 Jan 2025 18:40:43 +0000 Message-ID: <20250110-asi-rfc-v2-v2-17-8419288bc805@google.com> Subject: [PATCH RFC v2 17/29] mm: asi: Map vmalloc/vmap data as nonsensitive From: Brendan Jackman 
We add new VM flags for sensitive and global-nonsensitive, parallel to the corresponding GFP flags. __get_vm_area_node and friends will default to creating global-nonsensitive VM areas, and vmap then calls asi_map as necessary. __vmalloc_node_range has additional logic to check and set defaults for the sensitivity of the underlying page allocation. It does this via an initial __set_asi_flags call - note that it then calls __get_vm_area_node, which also calls __set_asi_flags; this second call is a NOP. By default, we mark the underlying page allocation as sensitive, even if the VM area is global-nonsensitive. This is just an optimization to avoid unnecessary asi_map calls, since presumably most code has no reason to access vmalloc'd data through the direct map. There are some details of the GFP-flag/VM-flag interaction that are not really obvious, for example: what should happen when callers of __vmalloc explicitly set GFP sensitivity flags? (That function has no VM flags argument.) For the moment, let's not block on that and just focus on adding the infrastructure. At the moment, the high-level vmalloc APIs don't actually provide a way to configure sensitivity; this commit just adds the infrastructure. We'll have to decide how to expose this to allocation sites as we implement more denylist logic. vmap does already allow configuring vm flags. 
Signed-off-by: Brendan Jackman --- mm/vmalloc.c | 21 +++++++++++++++++---- 1 file changed, 17 insertions(+), 4 deletions(-) diff --git a/mm/vmalloc.c b/mm/vmalloc.c index 8d260f2174fe664b54dcda054cb9759ae282bf03..00745edf0b2c5f4c769a46bdcf0872223de5299d 100644 --- a/mm/vmalloc.c +++ b/mm/vmalloc.c @@ -3210,6 +3210,7 @@ struct vm_struct *remove_vm_area(const void *addr) { struct vmap_area *va; struct vm_struct *vm; + unsigned long vm_addr; might_sleep(); @@ -3221,6 +3222,7 @@ struct vm_struct *remove_vm_area(const void *addr) if (!va || !va->vm) return NULL; vm = va->vm; + vm_addr = (unsigned long) READ_ONCE(vm->addr); debug_check_no_locks_freed(vm->addr, get_vm_area_size(vm)); debug_check_no_obj_freed(vm->addr, get_vm_area_size(vm)); @@ -3352,6 +3354,7 @@ void vfree(const void *addr) addr); return; } + asi_unmap(ASI_GLOBAL_NONSENSITIVE, vm->addr, get_vm_area_size(vm)); if (unlikely(vm->flags & VM_FLUSH_RESET_PERMS)) vm_reset_perms(vm); @@ -3397,6 +3400,7 @@ void vunmap(const void *addr) addr); return; } + asi_unmap(ASI_GLOBAL_NONSENSITIVE, vm->addr, get_vm_area_size(vm)); kfree(vm); } EXPORT_SYMBOL(vunmap); @@ -3445,16 +3449,21 @@ void *vmap(struct page **pages, unsigned int count, addr = (unsigned long)area->addr; if (vmap_pages_range(addr, addr + size, pgprot_nx(prot), - pages, PAGE_SHIFT) < 0) { - vunmap(area->addr); - return NULL; - } + pages, PAGE_SHIFT) < 0) + goto err; + + if (asi_map(ASI_GLOBAL_NONSENSITIVE, area->addr, + get_vm_area_size(area))) + goto err; /* The necessary asi_unmap() is in vunmap. */ if (flags & VM_MAP_PUT_PAGES) { area->pages = pages; area->nr_pages = count; } return area->addr; +err: + vunmap(area->addr); + return NULL; } EXPORT_SYMBOL(vmap); @@ -3711,6 +3720,10 @@ static void *__vmalloc_area_node(struct vm_struct *area, gfp_t gfp_mask, goto fail; } + if (asi_map(ASI_GLOBAL_NONSENSITIVE, area->addr, + get_vm_area_size(area))) + goto fail; /* The necessary asi_unmap() is in vfree. 
*/ + return area->addr; fail: From patchwork Fri Jan 10 18:40:44 2025 X-Patchwork-Submitter: Brendan Jackman X-Patchwork-Id: 13935580 Date: Fri, 10 Jan 2025 18:40:44 +0000 Message-ID: <20250110-asi-rfc-v2-v2-18-8419288bc805@google.com> Subject: [PATCH RFC v2 18/29] mm: asi: Map dynamic percpu memory as nonsensitive From: Brendan Jackman 
From: Reiji Watanabe Currently, all dynamic percpu memory is implicitly (and unintentionally) treated as sensitive memory. Unconditionally map pages for dynamically allocated percpu memory as global nonsensitive memory, other than pages that are allocated for pcpu_{first,reserved}_chunk during early boot via the memblock allocator (these will be taken care of by the following patch). We don't support sensitive percpu memory allocation yet. Co-developed-by: Junaid Shahid Signed-off-by: Junaid Shahid Signed-off-by: Reiji Watanabe Signed-off-by: Brendan Jackman WIP: Drop VM_SENSITIVE checks from percpu code --- mm/percpu-vm.c | 50 ++++++++++++++++++++++++++++++++++++++++++------ mm/percpu.c | 4 ++-- 2 files changed, 46 insertions(+), 8 deletions(-) diff --git a/mm/percpu-vm.c b/mm/percpu-vm.c index cd69caf6aa8d8eded2395eb4bc4051b78ec6aa33..2935d7fbac41548819a94dcc60566bd18cde819a 100644 --- a/mm/percpu-vm.c +++ b/mm/percpu-vm.c @@ -132,11 +132,20 @@ static void pcpu_pre_unmap_flush(struct pcpu_chunk *chunk, pcpu_chunk_addr(chunk, pcpu_high_unit_cpu, page_end)); } -static void __pcpu_unmap_pages(unsigned long addr, int nr_pages) +static void ___pcpu_unmap_pages(unsigned long addr, int nr_pages) { vunmap_range_noflush(addr, addr + (nr_pages << PAGE_SHIFT)); } +static void __pcpu_unmap_pages(unsigned long addr, int nr_pages, + unsigned long vm_flags) +{ + unsigned long size = nr_pages << PAGE_SHIFT; + + asi_unmap(ASI_GLOBAL_NONSENSITIVE, (void *)addr, size); + ___pcpu_unmap_pages(addr, nr_pages); +} + /** * pcpu_unmap_pages - unmap pages out of a pcpu_chunk * @chunk: chunk of interest @@ -153,6 +162,8 @@ static void __pcpu_unmap_pages(unsigned long addr, int nr_pages) static void pcpu_unmap_pages(struct pcpu_chunk *chunk, struct page **pages, int page_start, int page_end) { + struct vm_struct **vms = 
(struct vm_struct **)chunk->data; + unsigned long vm_flags = vms ? vms[0]->flags : VM_ALLOC; unsigned int cpu; int i; @@ -165,7 +176,7 @@ static void pcpu_unmap_pages(struct pcpu_chunk *chunk, pages[pcpu_page_idx(cpu, i)] = page; } __pcpu_unmap_pages(pcpu_chunk_addr(chunk, cpu, page_start), - page_end - page_start); + page_end - page_start, vm_flags); } } @@ -190,13 +201,38 @@ static void pcpu_post_unmap_tlb_flush(struct pcpu_chunk *chunk, pcpu_chunk_addr(chunk, pcpu_high_unit_cpu, page_end)); } -static int __pcpu_map_pages(unsigned long addr, struct page **pages, - int nr_pages) +/* + * __pcpu_map_pages() should not be called during the percpu initialization, + * as asi_map() depends on the page allocator (which isn't available yet + * during percpu initialization). Instead, ___pcpu_map_pages() can be used + * during the percpu initialization. But, any pages that are mapped with + * ___pcpu_map_pages() will be treated as sensitive memory, unless + * they are explicitly mapped with asi_map() later. + */ +static int ___pcpu_map_pages(unsigned long addr, struct page **pages, + int nr_pages) { return vmap_pages_range_noflush(addr, addr + (nr_pages << PAGE_SHIFT), PAGE_KERNEL, pages, PAGE_SHIFT); } +static int __pcpu_map_pages(unsigned long addr, struct page **pages, + int nr_pages, unsigned long vm_flags) +{ + unsigned long size = nr_pages << PAGE_SHIFT; + int err; + + err = ___pcpu_map_pages(addr, pages, nr_pages); + if (err) + return err; + + /* + * If this fails, pcpu_map_pages()->__pcpu_unmap_pages() will call + * asi_unmap() and clean up any partial mappings. 
+ */ + return asi_map(ASI_GLOBAL_NONSENSITIVE, (void *)addr, size); +} + /** * pcpu_map_pages - map pages into a pcpu_chunk * @chunk: chunk of interest @@ -214,13 +250,15 @@ static int __pcpu_map_pages(unsigned long addr, struct page **pages, static int pcpu_map_pages(struct pcpu_chunk *chunk, struct page **pages, int page_start, int page_end) { + struct vm_struct **vms = (struct vm_struct **)chunk->data; + unsigned long vm_flags = vms ? vms[0]->flags : VM_ALLOC; unsigned int cpu, tcpu; int i, err; for_each_possible_cpu(cpu) { err = __pcpu_map_pages(pcpu_chunk_addr(chunk, cpu, page_start), &pages[pcpu_page_idx(cpu, page_start)], - page_end - page_start); + page_end - page_start, vm_flags); if (err < 0) goto err; @@ -232,7 +270,7 @@ static int pcpu_map_pages(struct pcpu_chunk *chunk, err: for_each_possible_cpu(tcpu) { __pcpu_unmap_pages(pcpu_chunk_addr(chunk, tcpu, page_start), - page_end - page_start); + page_end - page_start, vm_flags); if (tcpu == cpu) break; } diff --git a/mm/percpu.c b/mm/percpu.c index da21680ff294cb53dfb42bf0d3b3bbd2654d2cfa..c2d913c579bf07892957ac7f601a6a71defadc4b 100644 --- a/mm/percpu.c +++ b/mm/percpu.c @@ -3273,8 +3273,8 @@ int __init pcpu_page_first_chunk(size_t reserved_size, pcpu_fc_cpu_to_node_fn_t pcpu_populate_pte(unit_addr + (i << PAGE_SHIFT)); /* pte already populated, the following shouldn't fail */ - rc = __pcpu_map_pages(unit_addr, &pages[unit * unit_pages], - unit_pages); + rc = ___pcpu_map_pages(unit_addr, &pages[unit * unit_pages], + unit_pages); if (rc < 0) panic("failed to map percpu area, err=%d\n", rc); From patchwork Fri Jan 10 18:40:45 2025 X-Patchwork-Submitter: Brendan Jackman X-Patchwork-Id: 13935579 Date: Fri, 10 Jan 2025 18:40:45 +0000 Message-ID: <20250110-asi-rfc-v2-v2-19-8419288bc805@google.com> Subject: [PATCH RFC v2 19/29] mm: asi: Stabilize CR3 in switch_mm_irqs_off() From: Brendan Jackman An ASI-restricted CR3 is unstable as interrupts can cause ASI-exits. 
Although we already unconditionally ASI-exit during context-switch, and before returning from the VM-run path, it's still possible to reach switch_mm_irqs_off() in a restricted context, because KVM code updates static keys, which requires using a temporary mm. Signed-off-by: Brendan Jackman --- arch/x86/mm/tlb.c | 3 +++ 1 file changed, 3 insertions(+) diff --git a/arch/x86/mm/tlb.c b/arch/x86/mm/tlb.c index c55733e144c7538ce7f97b74ea2b1b9c22497c32..ce5598f96ea7a84dc0e8623022ab5bfbba401b48 100644 --- a/arch/x86/mm/tlb.c +++ b/arch/x86/mm/tlb.c @@ -546,6 +546,9 @@ void switch_mm_irqs_off(struct mm_struct *unused, struct mm_struct *next, bool need_flush; u16 new_asid; + /* Stabilize CR3, before reading or writing CR3 */ + asi_exit(); + /* We don't want flush_tlb_func() to run concurrently with us. */ if (IS_ENABLED(CONFIG_PROVE_LOCKING)) WARN_ON_ONCE(!irqs_disabled()); From patchwork Fri Jan 10 18:40:46 2025 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Brendan Jackman X-Patchwork-Id: 13935581 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 4BC15E77188 for ; Fri, 10 Jan 2025 23:20:11 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20210309; h=Sender: Content-Transfer-Encoding:Content-Type:List-Subscribe:List-Help:List-Post: List-Archive:List-Unsubscribe:List-Id:Cc:To:From:Subject:Message-ID: References:Mime-Version:In-Reply-To:Date:Reply-To:Content-ID: Content-Description:Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc :Resent-Message-ID:List-Owner; bh=BIhKWSWySy6W/MjpZfPHzJI3eJjSy93PjFGewrG7/o8=; b=v5DWQ6oFi1QeCKLSwbYbNnnqM0 
From patchwork Fri Jan 10 18:40:46 2025
Date: Fri, 10 Jan 2025 18:40:46 +0000
Message-ID: <20250110-asi-rfc-v2-v2-20-8419288bc805@google.com>
Subject: [PATCH RFC v2 20/29] mm: asi: Make TLB flushing correct under ASI
From: Brendan Jackman
This is the absolute minimum change for TLB flushing to be correct under
ASI. There are two arguably orthogonal changes in here but they feel
small enough for a single commit.

.:: CR3 stabilization

As noted in the comment, ASI can destabilize CR3, but we can stabilize
it again by calling asi_exit(); this makes it safe to read CR3 and write
it back. This is enough to be correct - we don't have to worry about
invalidating the other ASI address space (i.e. we don't need to
invalidate the restricted address space if we are currently
unrestricted, or vice versa) because we currently never set the noflush
bit in CR3 for ASI transitions.

Even without using CR3's noflush bit there are trivial optimizations
still on the table here: where invpcid_flush_single_context() is
available (i.e. with the INVPCID_SINGLE feature) we can use that in lieu
of the CR3 read/write, and avoid the extremely costly asi_exit().

.:: Invalidating kernel mappings

Before ASI, with KPTI off we always either disable PCID or use global
mappings for kernel memory. However, ASI disables global kernel mappings
regardless of those factors. So we need to invalidate other address
spaces to trigger a flush when we switch into them.
Note that there is currently a pointless write of
cpu_tlbstate.invalidate_other in the case of KPTI and !PCID. We've added
another case of that (ASI, !KPTI and !PCID). I think that's preferable
to expanding the conditional in flush_tlb_one_kernel.

Signed-off-by: Brendan Jackman
---
 arch/x86/mm/tlb.c | 27 ++++++++++++++++++++-------
 1 file changed, 20 insertions(+), 7 deletions(-)

diff --git a/arch/x86/mm/tlb.c b/arch/x86/mm/tlb.c
index ce5598f96ea7a84dc0e8623022ab5bfbba401b48..07b1657bee8e4cf17452ea57c838823e76f482c0 100644
--- a/arch/x86/mm/tlb.c
+++ b/arch/x86/mm/tlb.c
@@ -231,7 +231,7 @@ static void clear_asid_other(void)
 	 * This is only expected to be set if we have disabled
 	 * kernel _PAGE_GLOBAL pages.
 	 */
-	if (!static_cpu_has(X86_FEATURE_PTI)) {
+	if (!static_cpu_has(X86_FEATURE_PTI) && !static_asi_enabled()) {
 		WARN_ON_ONCE(1);
 		return;
 	}
@@ -1040,7 +1040,6 @@ static void put_flush_tlb_info(void)
 noinstr u16 asi_pcid(struct asi *asi, u16 asid)
 {
 	return kern_pcid(asid) | ((asi->class_id + 1) << X86_CR3_ASI_PCID_BITS_SHIFT);
-	// return kern_pcid(asid) | ((asi->index + 1) << X86_CR3_ASI_PCID_BITS_SHIFT);
 }
 
 void asi_flush_tlb_range(struct asi *asi, void *addr, size_t len)
@@ -1192,15 +1191,19 @@ void flush_tlb_one_kernel(unsigned long addr)
 	 * use PCID if we also use global PTEs for the kernel mapping, and
 	 * INVLPG flushes global translations across all address spaces.
 	 *
-	 * If PTI is on, then the kernel is mapped with non-global PTEs, and
-	 * __flush_tlb_one_user() will flush the given address for the current
-	 * kernel address space and for its usermode counterpart, but it does
-	 * not flush it for other address spaces.
+	 * If PTI or ASI is on, then the kernel is mapped with non-global PTEs,
+	 * and __flush_tlb_one_user() will flush the given address for the
+	 * current kernel address space and, if PTI is on, for its usermode
+	 * counterpart, but it does not flush it for other address spaces.
 	 */
 	flush_tlb_one_user(addr);
 
-	if (!static_cpu_has(X86_FEATURE_PTI))
+	/* Nothing more to do if PTI and ASI are completely off. */
+	if (!static_cpu_has(X86_FEATURE_PTI) && !static_asi_enabled()) {
+		VM_WARN_ON_ONCE(static_cpu_has(X86_FEATURE_PCID) &&
+				!(__default_kernel_pte_mask & _PAGE_GLOBAL));
 		return;
+	}
 
 	/*
 	 * See above. We need to propagate the flush to all other address
@@ -1289,6 +1292,16 @@ STATIC_NOPV void native_flush_tlb_local(void)
 
 	invalidate_user_asid(this_cpu_read(cpu_tlbstate.loaded_mm_asid));
 
+	/*
+	 * Restricted ASI CR3 is unstable outside of critical section, so we
+	 * couldn't flush via a CR3 read/write. asi_exit() stabilizes it.
+	 * We don't expect any flushes in a critical section.
+	 */
+	if (WARN_ON(asi_in_critical_section()))
+		native_flush_tlb_global();
+	else
+		asi_exit();
+
 	/* If current->mm == NULL then the read_cr3() "borrows" an mm */
 	native_write_cr3(__native_read_cr3());
 }
From patchwork Fri Jan 10 18:40:47 2025
Date: Fri, 10 Jan 2025 18:40:47 +0000
Message-ID: <20250110-asi-rfc-v2-v2-21-8419288bc805@google.com>
Subject: [PATCH RFC v2 21/29] KVM: x86: asi: Restricted address space for VM execution
From: Brendan Jackman

An ASI restricted address space is added for KVM. This protects the
userspace from attack by the guest, and the guest from attack by other
processes.
It doesn't attempt to prevent the guest from attack by the current
process.

This change incorporates an extra asi_exit at the end of vcpu_run. We
expect later iterations of ASI to drop that call as we gain the ability
to context switch within the ASI domain.

Signed-off-by: Brendan Jackman
---
 arch/x86/include/asm/kvm_host.h |  3 ++
 arch/x86/kvm/svm/svm.c          |  2 ++
 arch/x86/kvm/vmx/vmx.c          | 38 ++++++++++++--------
 arch/x86/kvm/x86.c              | 77 ++++++++++++++++++++++++++++++++++++++++-
 4 files changed, 105 insertions(+), 15 deletions(-)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 6d9f763a7bb9d5db422ea5625b2c28420bd14f26..00cda452dd6ca6ec57ff85ca194ee4aeb6af3be7 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -37,6 +37,7 @@
 #include
 #include
 #include
+#include
 
 #define __KVM_HAVE_ARCH_VCPU_DEBUGFS
@@ -1535,6 +1536,8 @@ struct kvm_arch {
 	 */
 #define SPLIT_DESC_CACHE_MIN_NR_OBJECTS (SPTE_ENT_PER_PAGE + 1)
 	struct kvm_mmu_memory_cache split_desc_cache;
+
+	struct asi *asi;
 };
 
 struct kvm_vm_stat {
diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index 9df3e1e5ae81a1346409632edd693cb7e0740f72..f2c3154292b4f6c960b490b0773f53bea43897bb 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -4186,6 +4186,7 @@ static noinstr void svm_vcpu_enter_exit(struct kvm_vcpu *vcpu, bool spec_ctrl_in
 	guest_state_enter_irqoff();
 
 	amd_clear_divider();
+	asi_enter(vcpu->kvm->arch.asi);
 
 	if (sev_es_guest(vcpu->kvm))
 		__svm_sev_es_vcpu_run(svm, spec_ctrl_intercepted,
@@ -4193,6 +4194,7 @@ static noinstr void svm_vcpu_enter_exit(struct kvm_vcpu *vcpu, bool spec_ctrl_in
 	else
 		__svm_vcpu_run(svm, spec_ctrl_intercepted);
 
+	asi_relax();
 	guest_state_exit_irqoff();
 }
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index d28618e9277ede83ad2edc1b1778ea44123aa797..181d230b1c057fed33f7b29b7b0e378dbdfeb174 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -49,6 +49,7 @@
 #include
 #include
 #include
+#include
 
 #include
@@ -7282,14 +7283,34 @@ static noinstr void vmx_vcpu_enter_exit(struct kvm_vcpu *vcpu,
 					unsigned int flags)
 {
 	struct vcpu_vmx *vmx = to_vmx(vcpu);
+	unsigned long cr3;
 
 	guest_state_enter_irqoff();
 
+	asi_enter(vcpu->kvm->arch.asi);
+
+	/*
+	 * Refresh vmcs.HOST_CR3 if necessary. This must be done immediately
+	 * prior to VM-Enter, as the kernel may load a new ASID (PCID) any time
+	 * it switches back to the current->mm, which can occur in KVM context
+	 * when switching to a temporary mm to patch kernel code, e.g. if KVM
+	 * toggles a static key while handling a VM-Exit.
+	 * Also, this must be done after asi_enter(), as it changes CR3
+	 * when switching address spaces.
+	 */
+	cr3 = __get_current_cr3_fast();
+	if (unlikely(cr3 != vmx->loaded_vmcs->host_state.cr3)) {
+		vmcs_writel(HOST_CR3, cr3);
+		vmx->loaded_vmcs->host_state.cr3 = cr3;
+	}
 
 	/*
 	 * L1D Flush includes CPU buffer clear to mitigate MDS, but VERW
 	 * mitigation for MDS is done late in VMentry and is still
 	 * executed in spite of L1D Flush. This is because an extra VERW
 	 * should not matter much after the big hammer L1D Flush.
+	 *
+	 * This is only after asi_enter() for performance reasons.
+	 * RFC: This also needs to be integrated with ASI's tainting model.
 	 */
 	if (static_branch_unlikely(&vmx_l1d_should_flush))
 		vmx_l1d_flush(vcpu);
@@ -7310,6 +7331,8 @@ static noinstr void vmx_vcpu_enter_exit(struct kvm_vcpu *vcpu,
 
 	vmx->idt_vectoring_info = 0;
 
+	asi_relax();
+
 	vmx_enable_fb_clear(vmx);
 
 	if (unlikely(vmx->fail)) {
@@ -7338,7 +7361,7 @@ static noinstr void vmx_vcpu_enter_exit(struct kvm_vcpu *vcpu,
 fastpath_t vmx_vcpu_run(struct kvm_vcpu *vcpu, bool force_immediate_exit)
 {
 	struct vcpu_vmx *vmx = to_vmx(vcpu);
-	unsigned long cr3, cr4;
+	unsigned long cr4;
 
 	/* Record the guest's net vcpu time for enforced NMI injections. */
 	if (unlikely(!enable_vnmi &&
@@ -7381,19 +7404,6 @@ fastpath_t vmx_vcpu_run(struct kvm_vcpu *vcpu, bool force_immediate_exit)
 		vmcs_writel(GUEST_RIP, vcpu->arch.regs[VCPU_REGS_RIP]);
 	vcpu->arch.regs_dirty = 0;
 
-	/*
-	 * Refresh vmcs.HOST_CR3 if necessary. This must be done immediately
-	 * prior to VM-Enter, as the kernel may load a new ASID (PCID) any time
-	 * it switches back to the current->mm, which can occur in KVM context
-	 * when switching to a temporary mm to patch kernel code, e.g. if KVM
-	 * toggles a static key while handling a VM-Exit.
-	 */
-	cr3 = __get_current_cr3_fast();
-	if (unlikely(cr3 != vmx->loaded_vmcs->host_state.cr3)) {
-		vmcs_writel(HOST_CR3, cr3);
-		vmx->loaded_vmcs->host_state.cr3 = cr3;
-	}
-
 	cr4 = cr4_read_shadow();
 	if (unlikely(cr4 != vmx->loaded_vmcs->host_state.cr4)) {
 		vmcs_writel(HOST_CR4, cr4);
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 83fe0a78146fc198115aba0e76ba57ecfb1dd8d9..3e0811eb510650abc601e4adce1ce4189835a730 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -85,6 +85,7 @@
 #include
 #include
 #include
+#include
 
 #define CREATE_TRACE_POINTS
 #include "trace.h"
@@ -9674,6 +9675,55 @@ static void kvm_x86_check_cpu_compat(void *ret)
 	*(int *)ret = kvm_x86_check_processor_compatibility();
 }
 
+#ifdef CONFIG_MITIGATION_ADDRESS_SPACE_ISOLATION
+static inline int kvm_x86_init_asi_class(void)
+{
+	static struct asi_taint_policy policy = {
+		/*
+		 * Prevent going to the guest with sensitive data potentially
+		 * left in sidechannels by code running in the unrestricted
+		 * address space, or another MM.
+		 */
+		.protect_data = ASI_TAINT_KERNEL_DATA | ASI_TAINT_OTHER_MM_DATA,
+		/*
+		 * Prevent going to the guest with branch predictor state
+		 * influenced by other processes. Note this bit is about
+		 * protecting the guest from other parts of the system, while
+		 * data_taints is about protecting other parts of the system
+		 * from the guest.
+		 */
+		.prevent_control = ASI_TAINT_OTHER_MM_CONTROL,
+		.set = ASI_TAINT_GUEST_DATA,
+	};
+
+	/*
+	 * Inform ASI that the guest will gain control of the branch predictor,
+	 * unless we're just unconditionally blasting it after VM Exit.
+	 *
+	 * RFC: This is a bit simplified - on some configurations we could avoid
+	 * a duplicated RSB-fill if we had a separate taint specifically for the
+	 * RSB.
+	 */
+	if (!cpu_feature_enabled(X86_FEATURE_IBPB_ON_VMEXIT) ||
+	    !IS_ENABLED(CONFIG_MITIGATION_RETPOLINE) ||
+	    !cpu_feature_enabled(X86_FEATURE_RSB_VMEXIT))
+		policy.set = ASI_TAINT_GUEST_CONTROL;
+
+	/*
+	 * And the same for data left behind by code in the userspace domain
+	 * (i.e. the VMM itself, plus kernel code serving its syscalls etc).
+	 * This should eventually be configurable: users whose VMMs contain
+	 * no secrets can disable it to avoid paying a mitigation cost on
+	 * transition between their guest and userspace.
+	 */
+	policy.protect_data |= ASI_TAINT_USER_DATA;
+
+	return asi_init_class(ASI_CLASS_KVM, &policy);
+}
+#else /* CONFIG_MITIGATION_ADDRESS_SPACE_ISOLATION */
+static inline int kvm_x86_init_asi_class(void) { return 0; }
+#endif /* CONFIG_MITIGATION_ADDRESS_SPACE_ISOLATION */
+
 int kvm_x86_vendor_init(struct kvm_x86_init_ops *ops)
 {
 	u64 host_pat;
@@ -9737,6 +9787,10 @@ int kvm_x86_vendor_init(struct kvm_x86_init_ops *ops)
 	kvm_caps.supported_vm_types = BIT(KVM_X86_DEFAULT_VM);
 	kvm_caps.supported_mce_cap = MCG_CTL_P | MCG_SER_P;
 
+	r = kvm_x86_init_asi_class();
+	if (r < 0)
+		goto out_mmu_exit;
+
 	if (boot_cpu_has(X86_FEATURE_XSAVE)) {
 		kvm_host.xcr0 = xgetbv(XCR_XFEATURE_ENABLED_MASK);
 		kvm_caps.supported_xcr0 = kvm_host.xcr0 & KVM_SUPPORTED_XCR0;
@@ -9754,7 +9808,7 @@ int kvm_x86_vendor_init(struct kvm_x86_init_ops *ops)
 
 	r = ops->hardware_setup();
 	if (r != 0)
-		goto out_mmu_exit;
+		goto out_asi_uninit;
 
 	kvm_ops_update(ops);
 
@@ -9810,6 +9864,8 @@ int kvm_x86_vendor_init(struct kvm_x86_init_ops *ops)
 out_unwind_ops:
 	kvm_x86_ops.enable_virtualization_cpu = NULL;
 	kvm_x86_call(hardware_unsetup)();
+out_asi_uninit:
+	asi_uninit_class(ASI_CLASS_KVM);
 out_mmu_exit:
 	kvm_mmu_vendor_module_exit();
 out_free_percpu:
@@ -9841,6 +9897,7 @@ void kvm_x86_vendor_exit(void)
 	cancel_work_sync(&pvclock_gtod_work);
 #endif
 	kvm_x86_call(hardware_unsetup)();
+	asi_uninit_class(ASI_CLASS_KVM);
 	kvm_mmu_vendor_module_exit();
 	free_percpu(user_return_msrs);
 	kmem_cache_destroy(x86_emulator_cache);
@@ -11574,6 +11631,13 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu)
 
 	r = vcpu_run(vcpu);
 
+	/*
+	 * At present ASI doesn't have the capability to transition directly
+	 * from the restricted address space to the user address space. So we
+	 * just return to the unrestricted address space in between.
+	 */
+	asi_exit();
+
 out:
 	kvm_put_guest_fpu(vcpu);
 	if (kvm_run->kvm_valid_regs)
@@ -12705,6 +12769,14 @@ int kvm_arch_init_vm(struct kvm *kvm, unsigned long type)
 	if (ret)
 		goto out_uninit_mmu;
 
+	ret = asi_init(kvm->mm, ASI_CLASS_KVM, &kvm->arch.asi);
+	if (ret)
+		goto out_uninit_mmu;
+
+	ret = static_call(kvm_x86_vm_init)(kvm);
+	if (ret)
+		goto out_asi_destroy;
+
 	INIT_HLIST_HEAD(&kvm->arch.mask_notifier_list);
 	atomic_set(&kvm->arch.noncoherent_dma_count, 0);
@@ -12742,6 +12814,8 @@ int kvm_arch_init_vm(struct kvm *kvm, unsigned long type)
 
 	return 0;
 
+out_asi_destroy:
+	asi_destroy(kvm->arch.asi);
 out_uninit_mmu:
 	kvm_mmu_uninit_vm(kvm);
 	kvm_page_track_cleanup(kvm);
@@ -12883,6 +12957,7 @@ void kvm_arch_destroy_vm(struct kvm *kvm)
 	kvm_destroy_vcpus(kvm);
 	kvfree(rcu_dereference_check(kvm->arch.apic_map, 1));
 	kfree(srcu_dereference_check(kvm->arch.pmu_event_filter, &kvm->srcu, 1));
+	asi_destroy(kvm->arch.asi);
 	kvm_mmu_uninit_vm(kvm);
 	kvm_page_track_cleanup(kvm);
 	kvm_xen_destroy_vm(kvm);
Date: Fri, 10 Jan 2025 18:40:48 +0000
In-Reply-To: <20250110-asi-rfc-v2-v2-0-8419288bc805@google.com>
References: <20250110-asi-rfc-v2-v2-0-8419288bc805@google.com>
Message-ID: <20250110-asi-rfc-v2-v2-22-8419288bc805@google.com>
Subject: [PATCH RFC v2 22/29] mm: asi: exit ASI before accessing CR3 from C code where appropriate
From: Brendan Jackman
To: Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen, "H. Peter Anvin", Andy Lutomirski, Peter Zijlstra, Richard Henderson, Matt Turner, Vineet Gupta, Russell King, Catalin Marinas, Will Deacon, Guo Ren, Brian Cain, Huacai Chen, WANG Xuerui, Geert Uytterhoeven, Michal Simek, Thomas Bogendoerfer, Dinh Nguyen, Jonas Bonn, Stefan Kristiansson, Stafford Horne, "James E.J. Bottomley", Helge Deller, Michael Ellerman, Nicholas Piggin, Christophe Leroy, Naveen N Rao, Madhavan Srinivasan, Paul Walmsley, Palmer Dabbelt, Albert Ou, Heiko Carstens, Vasily Gorbik, Alexander Gordeev, Christian Borntraeger, Sven Schnelle, Yoshinori Sato, Rich Felker, John Paul Adrian Glaubitz, "David S. Miller", Andreas Larsson, Richard Weinberger, Anton Ivanov, Johannes Berg, Chris Zankel, Max Filippov, Arnd Bergmann, Andrew Morton, Juri Lelli, Vincent Guittot, Dietmar Eggemann, Steven Rostedt, Ben Segall, Mel Gorman, Valentin Schneider, Uladzislau Rezki, Christoph Hellwig, Masami Hiramatsu, Mathieu Desnoyers, Mike Rapoport, Arnaldo Carvalho de Melo, Namhyung Kim, Mark Rutland, Alexander Shishkin, Jiri Olsa, Ian Rogers, Adrian Hunter, Dennis Zhou, Tejun Heo, Christoph Lameter, Sean Christopherson, Paolo Bonzini, Ard Biesheuvel, Josh Poimboeuf, Pawan Gupta
Cc: x86@kernel.org, linux-kernel@vger.kernel.org, linux-alpha@vger.kernel.org, linux-snps-arc@lists.infradead.org, linux-arm-kernel@lists.infradead.org, linux-csky@vger.kernel.org, linux-hexagon@vger.kernel.org, loongarch@lists.linux.dev, linux-m68k@lists.linux-m68k.org, linux-mips@vger.kernel.org, linux-openrisc@vger.kernel.org, linux-parisc@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, linux-riscv@lists.infradead.org, linux-s390@vger.kernel.org, linux-sh@vger.kernel.org, sparclinux@vger.kernel.org, linux-um@lists.infradead.org, linux-arch@vger.kernel.org, linux-mm@kvack.org, linux-trace-kernel@vger.kernel.org, linux-perf-users@vger.kernel.org, kvm@vger.kernel.org, linux-efi@vger.kernel.org, Brendan Jackman, Yosry Ahmed

Because asi_exit()s can be triggered by NMIs, CR3 is unstable when in the ASI restricted address space.
(Exception: code in the ASI critical section can treat it as stable, because if an asi_exit() occurs during an exception it will be undone before the critical section resumes).

Code that accesses CR3 needs to become aware of this. Most importantly: if code reads CR3 and then writes a derived value back, and a concurrent asi_exit() occurred in between, the address space switch would be undone, which would totally break ASI.

So, make sure that an asi_exit() is performed before accessing CR3. Exceptions are made for cases that need to access the current CR3 value, restricted or not, without exiting ASI.

(An alternative approach would be to enter an ASI critical section when a stable CR3 is needed. This would be worth exploring if the ASI exits introduced by this patch turned out to cause performance issues).

Add calls to asi_exit() to __native_read_cr3() and native_write_cr3(), and introduce 'raw' variants that do not perform an ASI exit. Introduce similar variants for the wrappers: __read_cr3(), read_cr3_pa(), and write_cr3(). A forward declaration of asi_exit() is added, because the one in asm-generic/asi.h is only declared when !CONFIG_ADDRESS_SPACE_ISOLATION, and arch/x86/asm/asi.h cannot be included either, as that would cause a circular dependency.

The 'raw' variants are used in the following cases:

- In __show_regs(), where the actual values of registers are dumped for debugging.

- In dump_pagetable() and show_fault_oops(), where the active page tables during a page fault are dumped for debugging.

- In switch_mm_verify_cr3() and cr3_matches_current_mm(), where the actual value of CR3 is needed for a debug check, and the code explicitly checks for an ASI-restricted CR3.

- In exc_page_fault() for ASI faults. The code is ASI-aware and explicitly performs an ASI exit before reading CR3.

- In load_new_mm_cr3(), where a new CR3 is loaded during context switching. At this point, it is guaranteed that ASI already exited. Calling asi_exit() at that point, where loaded_mm == LOADED_MM_SWITCHING, would cause the VM_BUG_ON() in asi_exit() to fire mistakenly even though loaded_mm is never accessed.

- In __get_current_cr3_fast(), as it is called from an ASI critical section and the value is only used for debug checking. In nested_vmx_check_vmentry_hw(), do an explicit asi_exit() before calling __get_current_cr3_fast(), since in that case we are not in a critical section and do need a stable CR3 value.

- In __asi_enter() and asi_exit(), for obvious reasons.

- In vmx_set_constant_host_state(), when CR3 is initialized in the VMCS with the most likely value. Preemption is enabled, so once ASI supports context switching, exiting ASI will not be reliable as rescheduling may cause re-entering ASI. It doesn't matter if the wrong value of CR3 is used in this context: before entering the guest, ASI is either explicitly entered or exited, and CR3 is updated again in the VMCS if needed.

- In efi_5level_switch(), as it is called from startup_64_mixed_mode() during boot before ASI can be entered. startup_64_mixed_mode() is under arch/x86/boot/compressed/* and cannot call asi_exit() anyway (see below).

Finally, code in arch/x86/boot/compressed/ident_map_64.c and arch/x86/boot/compressed/pgtable_64.c extensively accesses CR3 during boot. This code under arch/x86/boot/compressed/* cannot call asi_exit() due to restrictions on its compilation (it cannot use functions defined in .c files outside the directory). Instead of changing all CR3 accesses to use 'raw' variants, undefine CONFIG_MITIGATION_ADDRESS_SPACE_ISOLATION in these files. This makes the asi_exit() calls in the CR3 helpers use the noop variant defined in include/asm-generic/asi.h. This is fine because the code is executed early in boot, where asi_exit() would be a noop anyway.

With this change, the number of existing *_cr3() calls is 44, and the number of *_cr3_raw() calls is 22.
The choice was made to make the existing functions exit ASI by default and to add new variants that do not, because most call sites that use the new *_cr3_raw() variants are either ASI-aware code or debugging code. On the contrary, code that uses the existing variants is usually in important code paths (e.g. TLB flushes) and is ignorant of ASI. Hence, new code is most likely to be correct and less risky if it uses the variants that exit ASI by default.

Signed-off-by: Yosry Ahmed
Signed-off-by: Brendan Jackman
---
 arch/x86/Kconfig                        |  2 +-
 arch/x86/boot/compressed/ident_map_64.c | 10 ++++++++++
 arch/x86/boot/compressed/pgtable_64.c   | 11 +++++++++++
 arch/x86/include/asm/processor.h        |  5 +++++
 arch/x86/include/asm/special_insns.h    | 41 +++++++++++++++++++++++++++++++--
 arch/x86/kernel/process_32.c            |  2 +-
 arch/x86/kernel/process_64.c            |  2 +-
 arch/x86/kvm/vmx/nested.c               |  6 ++++++
 arch/x86/kvm/vmx/vmx.c                  |  8 ++++++-
 arch/x86/mm/asi.c                       |  4 ++--
 arch/x86/mm/fault.c                     |  8 ++++----
 arch/x86/mm/tlb.c                       | 16 ++++++++++----
 arch/x86/virt/svm/sev.c                 |  2 +-
 drivers/firmware/efi/libstub/x86-5lvl.c |  2 +-
 include/asm-generic/asi.h               |  1 +
 15 files changed, 101 insertions(+), 19 deletions(-)

diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index 1fcb52cb8cd5084ac3cef04af61b7d1653162bdb..ae31f36ce23d7c29d1e90b726c5a2e6ea5a63c8d 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -2531,7 +2531,7 @@ config MITIGATION_ADDRESS_SPACE_ISOLATION
	  The !PARAVIRT dependency is only because of lack of testing; in theory
	  the code is written to work under paravirtualization. In practice
	  there are likely to be unhandled cases, in particular concerning TLB
-	  flushes.
+	  flushes and CR3 manipulation.
 
 config ADDRESS_SPACE_ISOLATION_DEFAULT_ON

diff --git a/arch/x86/boot/compressed/ident_map_64.c b/arch/x86/boot/compressed/ident_map_64.c
index dfb9c2deb77cfc4e9986976bf2fd1652666f8f15..957b6f818aec361191432b420b61ba6ae017cf6c 100644
--- a/arch/x86/boot/compressed/ident_map_64.c
+++ b/arch/x86/boot/compressed/ident_map_64.c
@@ -11,6 +11,16 @@
 /* No MITIGATION_PAGE_TABLE_ISOLATION support needed either: */
 #undef CONFIG_MITIGATION_PAGE_TABLE_ISOLATION
 
+/*
+ * CR3 access helpers (e.g. write_cr3()) will call asi_exit() to exit the
+ * restricted address space first. We cannot call the version defined in
+ * arch/x86/mm/asi.c here, so make sure we always call the noop version in
+ * asm-generic/asi.h. It does not matter because early during boot asi_exit()
+ * would be a noop anyway. The alternative is spamming the code with *_raw()
+ * variants of the CR3 helpers.
+ */
+#undef CONFIG_MITIGATION_ADDRESS_SPACE_ISOLATION
+
 #include "error.h"
 #include "misc.h"

diff --git a/arch/x86/boot/compressed/pgtable_64.c b/arch/x86/boot/compressed/pgtable_64.c
index c882e1f67af01c50a20bfe00a32138dc771ee88c..034ad7101126c19864cfacc7c363fd31fedecd2b 100644
--- a/arch/x86/boot/compressed/pgtable_64.c
+++ b/arch/x86/boot/compressed/pgtable_64.c
@@ -1,4 +1,15 @@
 // SPDX-License-Identifier: GPL-2.0
+
+/*
+ * CR3 access helpers (e.g. write_cr3()) will call asi_exit() to exit the
+ * restricted address space first. We cannot call the version defined in
+ * arch/x86/mm/asi.c here, so make sure we always call the noop version in
+ * asm-generic/asi.h. It does not matter because early during boot asi_exit()
+ * would be a noop anyway. The alternative is spamming the code with *_raw()
+ * variants of the CR3 helpers.
+ */
+#undef CONFIG_MITIGATION_ADDRESS_SPACE_ISOLATION
+
 #include "misc.h"
 #include
 #include

diff --git a/arch/x86/include/asm/processor.h b/arch/x86/include/asm/processor.h
index a32a53405f45e4c0473fe081e216029cf5bd0cdd..9375a7f877d60e8f556dedefbe74593c1a5a6e10 100644
--- a/arch/x86/include/asm/processor.h
+++ b/arch/x86/include/asm/processor.h
@@ -226,6 +226,11 @@ static __always_inline unsigned long read_cr3_pa(void)
	return __read_cr3() & CR3_ADDR_MASK;
 }
 
+static __always_inline unsigned long read_cr3_pa_raw(void)
+{
+	return __read_cr3_raw() & CR3_ADDR_MASK;
+}
+
 static inline unsigned long native_read_cr3_pa(void)
 {
	return __native_read_cr3() & CR3_ADDR_MASK;

diff --git a/arch/x86/include/asm/special_insns.h b/arch/x86/include/asm/special_insns.h
index 6e103358966f6f1333aa07be97aec5f8af794120..1c886b3f04a56893b7408466a2c94d23f5d11857 100644
--- a/arch/x86/include/asm/special_insns.h
+++ b/arch/x86/include/asm/special_insns.h
@@ -5,6 +5,7 @@
 #ifdef __KERNEL__
 #include
 #include
+#include
 #include
 #include
 
@@ -42,18 +43,32 @@ static __always_inline void native_write_cr2(unsigned long val)
	asm volatile("mov %0,%%cr2": : "r" (val) : "memory");
 }
 
-static __always_inline unsigned long __native_read_cr3(void)
+void asi_exit(void);
+
+static __always_inline unsigned long __native_read_cr3_raw(void)
 {
	unsigned long val;
	asm volatile("mov %%cr3,%0\n\t" : "=r" (val) : __FORCE_ORDER);
	return val;
 }
 
-static __always_inline void native_write_cr3(unsigned long val)
+static __always_inline unsigned long __native_read_cr3(void)
+{
+	asi_exit();
+	return __native_read_cr3_raw();
+}
+
+static __always_inline void native_write_cr3_raw(unsigned long val)
 {
	asm volatile("mov %0,%%cr3": : "r" (val) : "memory");
 }
 
+static __always_inline void native_write_cr3(unsigned long val)
+{
+	asi_exit();
+	native_write_cr3_raw(val);
+}
+
 static inline unsigned long native_read_cr4(void)
 {
	unsigned long val;
@@ -152,17 +167,39 @@ static __always_inline void write_cr2(unsigned long x)
 
 /*
 * Careful! CR3 contains more than just an address. You probably want
 * read_cr3_pa() instead.
+ *
+ * The implementation interacts with ASI to ensure that the returned value is
+ * stable as long as preemption is disabled.
 */
 static __always_inline unsigned long __read_cr3(void)
 {
	return __native_read_cr3();
 }
 
+/*
+ * The return value of this is unstable under ASI, even with preemption off.
+ * Call __read_cr3 instead unless you have a good reason not to.
+ */
+static __always_inline unsigned long __read_cr3_raw(void)
+{
+	return __native_read_cr3_raw();
+}
+
+/* This interacts with ASI like __read_cr3. */
 static __always_inline void write_cr3(unsigned long x)
 {
	native_write_cr3(x);
 }
 
+/*
+ * Like __read_cr3_raw, this doesn't interact with ASI. It's very unlikely that
+ * this should be called from outside ASI-specific code.
+ */
+static __always_inline void write_cr3_raw(unsigned long x)
+{
+	native_write_cr3_raw(x);
+}
+
 static inline void __write_cr4(unsigned long x)
 {
	native_write_cr4(x);

diff --git a/arch/x86/kernel/process_32.c b/arch/x86/kernel/process_32.c
index 0917c7f25720be91372bacddb1a3032323b8996f..14828a867b713a50297953c5a0ccfd03d83debc0 100644
--- a/arch/x86/kernel/process_32.c
+++ b/arch/x86/kernel/process_32.c
@@ -79,7 +79,7 @@ void __show_regs(struct pt_regs *regs, enum show_regs_mode mode,
	cr0 = read_cr0();
	cr2 = read_cr2();
-	cr3 = __read_cr3();
+	cr3 = __read_cr3_raw();
	cr4 = __read_cr4();
	printk("%sCR0: %08lx CR2: %08lx CR3: %08lx CR4: %08lx\n",
	       log_lvl, cr0, cr2, cr3, cr4);

diff --git a/arch/x86/kernel/process_64.c b/arch/x86/kernel/process_64.c
index 226472332a70dd02902f81c21207d770e698aeed..ca135731b54b7f5f1123c2b8b27afdca7b868bcc 100644
--- a/arch/x86/kernel/process_64.c
+++ b/arch/x86/kernel/process_64.c
@@ -113,7 +113,7 @@ void __show_regs(struct pt_regs *regs, enum show_regs_mode mode,
	cr0 = read_cr0();
	cr2 = read_cr2();
-	cr3 = __read_cr3();
+	cr3 = __read_cr3_raw();
	cr4 = __read_cr4();
	printk("%sFS: %016lx(%04x) GS:%016lx(%04x)
knlGS:%016lx\n",

diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c
index 931a7361c30f2da28073eb832efce0b378e3b325..7eb033dabe4a606947c4d7e5b96be2e42d8f2478 100644
--- a/arch/x86/kvm/vmx/nested.c
+++ b/arch/x86/kvm/vmx/nested.c
@@ -3214,6 +3214,12 @@ static int nested_vmx_check_vmentry_hw(struct kvm_vcpu *vcpu)
	 */
	vmcs_writel(GUEST_RFLAGS, 0);
 
+	/*
+	 * Stabilize CR3 to ensure the VM Exit returns to the correct address
+	 * space. This is costly, we wouldn't do this in hot-path code.
+	 */
+	asi_exit();
+
	cr3 = __get_current_cr3_fast();
	if (unlikely(cr3 != vmx->loaded_vmcs->host_state.cr3)) {
		vmcs_writel(HOST_CR3, cr3);

diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index 181d230b1c057fed33f7b29b7b0e378dbdfeb174..0e90463f1f2183b8d716f85d5c8a8af8958fef0b 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -4323,8 +4323,14 @@ void vmx_set_constant_host_state(struct vcpu_vmx *vmx)
	/*
	 * Save the most likely value for this task's CR3 in the VMCS.
	 * We can't use __get_current_cr3_fast() because we're not atomic.
+	 *
+	 * Use __read_cr3_raw() to avoid exiting ASI if we are in the restricted
+	 * address space. Preemption is enabled, so rescheduling could make us
+	 * re-enter ASI anyway. It's okay to avoid exiting ASI here because
+	 * vmx_vcpu_enter_exit() and nested_vmx_check_vmentry_hw() will
+	 * explicitly enter or exit ASI and update CR3 in the VMCS if needed.
	 */
-	cr3 = __read_cr3();
+	cr3 = __read_cr3_raw();
	vmcs_writel(HOST_CR3, cr3);		/* 22.2.3  FIXME: shadow tables */
	vmx->loaded_vmcs->host_state.cr3 = cr3;

diff --git a/arch/x86/mm/asi.c b/arch/x86/mm/asi.c
index bc2cf0475a0e7344a66d81453f55034b2fc77eef..a9f9bfbf85eb47d16ef8d0bfbc7713f07052d3ed 100644
--- a/arch/x86/mm/asi.c
+++ b/arch/x86/mm/asi.c
@@ -488,7 +488,7 @@ noinstr void __asi_enter(void)
	pcid = asi_pcid(target, this_cpu_read(cpu_tlbstate.loaded_mm_asid));
	asi_cr3 = build_cr3_pcid_noinstr(target->pgd, pcid,
					 tlbstate_lam_cr3_mask(), false);
-	write_cr3(asi_cr3);
+	write_cr3_raw(asi_cr3);
	maybe_flush_data(target);
 
	/*
@@ -559,7 +559,7 @@ noinstr void asi_exit(void)
		/* Tainting first makes reentrancy easier to reason about. */
		this_cpu_or(asi_taints, ASI_TAINT_KERNEL_DATA);
 
-		write_cr3(unrestricted_cr3);
+		write_cr3_raw(unrestricted_cr3);
 
		/*
		 * Must not update curr_asi until after CR3 write, otherwise a
		 * re-entrant call might not enter this branch. (This means we

diff --git a/arch/x86/mm/fault.c b/arch/x86/mm/fault.c
index ee8f5417174e2956391d538f41e2475553ca4972..ca48e4f5a27be30ff93d1c3d194aad23d99ae43c 100644
--- a/arch/x86/mm/fault.c
+++ b/arch/x86/mm/fault.c
@@ -295,7 +295,7 @@ static bool low_pfn(unsigned long pfn)
 
 static void dump_pagetable(unsigned long address)
 {
-	pgd_t *base = __va(read_cr3_pa());
+	pgd_t *base = __va(read_cr3_pa_raw());
	pgd_t *pgd = &base[pgd_index(address)];
	p4d_t *p4d;
	pud_t *pud;
@@ -351,7 +351,7 @@ static int bad_address(void *p)
 
 static void dump_pagetable(unsigned long address)
 {
-	pgd_t *base = __va(read_cr3_pa());
+	pgd_t *base = __va(read_cr3_pa_raw());
	pgd_t *pgd = base + pgd_index(address);
	p4d_t *p4d;
	pud_t *pud;
@@ -519,7 +519,7 @@ show_fault_oops(struct pt_regs *regs, unsigned long error_code, unsigned long ad
	pgd_t *pgd;
	pte_t *pte;
 
-	pgd = __va(read_cr3_pa());
+	pgd = __va(read_cr3_pa_raw());
	pgd += pgd_index(address);
 
	pte = lookup_address_in_pgd_attr(pgd, address, &level, &nx, &rw);
@@ -1578,7 +1578,7 @@
DEFINE_IDTENTRY_RAW_ERRORCODE(exc_page_fault)
		 * be losing some stats here. However for now this keeps ASI
		 * page faults nice and fast.
		 */
-		pgd = (pgd_t *)__va(read_cr3_pa()) + pgd_index(address);
+		pgd = (pgd_t *)__va(read_cr3_pa_raw()) + pgd_index(address);
		if (!user_mode(regs) && kernel_access_ok(error_code, address, pgd)) {
			warn_if_bad_asi_pf(error_code, address);
			return;

diff --git a/arch/x86/mm/tlb.c b/arch/x86/mm/tlb.c
index 07b1657bee8e4cf17452ea57c838823e76f482c0..0c9f477a44a4da971cb7744d01d9101900ead1a5 100644
--- a/arch/x86/mm/tlb.c
+++ b/arch/x86/mm/tlb.c
@@ -331,8 +331,14 @@ static void load_new_mm_cr3(pgd_t *pgdir, u16 new_asid, unsigned long lam,
	 * Caution: many callers of this function expect
	 * that load_cr3() is serializing and orders TLB
	 * fills with respect to the mm_cpumask writes.
+	 *
+	 * The context switching code will explicitly exit ASI when needed, do
+	 * not use write_cr3() as it has an implicit ASI exit. Calling
+	 * asi_exit() here, where loaded_mm == LOADED_MM_SWITCHING, will cause
+	 * the VM_BUG_ON() in asi_exit() to fire mistakenly even though
+	 * loaded_mm is never accessed.
	 */
-	write_cr3(new_mm_cr3);
+	write_cr3_raw(new_mm_cr3);
 }
 
 void leave_mm(void)
@@ -559,11 +565,11 @@ void switch_mm_irqs_off(struct mm_struct *unused, struct mm_struct *next,
	 * without going through leave_mm() / switch_mm_irqs_off() or that
	 * does something like write_cr3(read_cr3_pa()).
	 *
-	 * Only do this check if CONFIG_DEBUG_VM=y because __read_cr3()
+	 * Only do this check if CONFIG_DEBUG_VM=y because __read_cr3_raw()
	 * isn't free.
	 */
 #ifdef CONFIG_DEBUG_VM
-	if (WARN_ON_ONCE(__read_cr3() != build_cr3(prev->pgd, prev_asid,
+	if (WARN_ON_ONCE(__read_cr3_raw() != build_cr3(prev->pgd, prev_asid,
						   tlbstate_lam_cr3_mask()))) {
		/*
		 * If we were to BUG here, we'd be very likely to kill
@@ -1173,7 +1179,7 @@ noinstr unsigned long __get_current_cr3_fast(void)
	 */
	VM_WARN_ON_ONCE(asi && asi_in_critical_section());
 
-	VM_BUG_ON(cr3 != __read_cr3());
+	VM_BUG_ON(cr3 != __read_cr3_raw());
	return cr3;
 }
 EXPORT_SYMBOL_GPL(__get_current_cr3_fast);
@@ -1373,7 +1379,7 @@ static inline bool cr3_matches_current_mm(void)
	 * find a current ASI domain.
	 */
	barrier();
 
-	pgd_cr3 = __va(read_cr3_pa());
+	pgd_cr3 = __va(read_cr3_pa_raw());
	return pgd_cr3 == current->mm->pgd || pgd_cr3 == pgd_asi;
 }

diff --git a/arch/x86/virt/svm/sev.c b/arch/x86/virt/svm/sev.c
index 9a6a943d8e410c0289200adb9deafe8e45d33a4b..63d391395a5c7f4ddec28116814ccd6c52bbb428 100644
--- a/arch/x86/virt/svm/sev.c
+++ b/arch/x86/virt/svm/sev.c
@@ -379,7 +379,7 @@ void snp_dump_hva_rmpentry(unsigned long hva)
	pgd_t *pgd;
	pte_t *pte;
 
-	pgd = __va(read_cr3_pa());
+	pgd = __va(read_cr3_pa_raw());
	pgd += pgd_index(hva);
	pte = lookup_address_in_pgd(pgd, hva, &level);

diff --git a/drivers/firmware/efi/libstub/x86-5lvl.c b/drivers/firmware/efi/libstub/x86-5lvl.c
index 77359e802181fd82b6a624cf74183e6a318cec9b..3b97a5aea983a109fbdc6d23a219e4a04024c512 100644
--- a/drivers/firmware/efi/libstub/x86-5lvl.c
+++ b/drivers/firmware/efi/libstub/x86-5lvl.c
@@ -66,7 +66,7 @@ void efi_5level_switch(void)
	bool have_la57 = native_read_cr4() & X86_CR4_LA57;
	bool need_toggle = want_la57 ^ have_la57;
	u64 *pgt = (void *)la57_toggle + PAGE_SIZE;
-	u64 *cr3 = (u64 *)__native_read_cr3();
+	u64 *cr3 = (u64 *)__native_read_cr3_raw();
	u64 *new_cr3;
 
	if (!la57_toggle || !need_toggle)

diff --git a/include/asm-generic/asi.h b/include/asm-generic/asi.h
index 7867b8c23449058a1dd06308ab5351e0d210a489..4f033d3ef5929707fd280f74fc800193e45143c1 100644
--- a/include/asm-generic/asi.h
+++
b/include/asm-generic/asi.h
@@ -71,6 +71,7 @@ static inline pgd_t *asi_pgd(struct asi *asi) { return NULL; }
 
 static inline void asi_handle_switch_mm(void) { }
 
+struct thread_struct;
 static inline void asi_init_thread_state(struct thread_struct *thread) { }
 
 static inline void asi_intr_enter(void) { }

From patchwork Fri Jan 10 18:40:49 2025
X-Patchwork-Submitter: Brendan Jackman
X-Patchwork-Id: 13935636
Date: Fri, 10 Jan 2025 18:40:49 +0000
In-Reply-To: <20250110-asi-rfc-v2-v2-0-8419288bc805@google.com>
References: <20250110-asi-rfc-v2-v2-0-8419288bc805@google.com>
Message-ID: <20250110-asi-rfc-v2-v2-23-8419288bc805@google.com>
Subject: [PATCH RFC v2 23/29] mm: asi: exit ASI before suspend-like operations
From: Brendan Jackman
To: Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen, "H. Peter Anvin", Andy Lutomirski, Peter Zijlstra, Richard Henderson, Matt Turner, Vineet Gupta, Russell King, Catalin Marinas, Will Deacon, Guo Ren, Brian Cain, Huacai Chen, WANG Xuerui, Geert Uytterhoeven, Michal Simek, Thomas Bogendoerfer, Dinh Nguyen, Jonas Bonn, Stefan Kristiansson, Stafford Horne, "James E.J. Bottomley", Helge Deller, Michael Ellerman, Nicholas Piggin, Christophe Leroy, Naveen N Rao, Madhavan Srinivasan, Paul Walmsley, Palmer Dabbelt, Albert Ou, Heiko Carstens, Vasily Gorbik, Alexander Gordeev, Christian Borntraeger, Sven Schnelle, Yoshinori Sato, Rich Felker, John Paul Adrian Glaubitz, "David S. Miller", Andreas Larsson, Richard Weinberger, Anton Ivanov, Johannes Berg, Chris Zankel, Max Filippov, Arnd Bergmann, Andrew Morton, Juri Lelli, Vincent Guittot, Dietmar Eggemann, Steven Rostedt, Ben Segall, Mel Gorman, Valentin Schneider, Uladzislau Rezki, Christoph Hellwig, Masami Hiramatsu, Mathieu Desnoyers, Mike Rapoport, Arnaldo Carvalho de Melo, Namhyung Kim, Mark Rutland, Alexander Shishkin, Jiri Olsa, Ian Rogers, Adrian Hunter, Dennis Zhou, Tejun Heo, Christoph Lameter, Sean Christopherson, Paolo Bonzini, Ard Biesheuvel, Josh Poimboeuf, Pawan Gupta
Cc: x86@kernel.org, linux-kernel@vger.kernel.org, linux-alpha@vger.kernel.org, linux-snps-arc@lists.infradead.org, linux-arm-kernel@lists.infradead.org, linux-csky@vger.kernel.org, linux-hexagon@vger.kernel.org, loongarch@lists.linux.dev, linux-m68k@lists.linux-m68k.org, linux-mips@vger.kernel.org, linux-openrisc@vger.kernel.org, linux-parisc@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, linux-riscv@lists.infradead.org, linux-s390@vger.kernel.org, linux-sh@vger.kernel.org, sparclinux@vger.kernel.org, linux-um@lists.infradead.org, linux-arch@vger.kernel.org, linux-mm@kvack.org, linux-trace-kernel@vger.kernel.org, linux-perf-users@vger.kernel.org, kvm@vger.kernel.org, linux-efi@vger.kernel.org, Brendan Jackman, Yosry Ahmed
From: Yosry Ahmed

During suspend-like operations (suspend, hibernate, kexec w/ preserve_context), the processor state (including CR3) is usually saved and restored later. In the kexec case, this only happens when KEXEC_PRESERVE_CONTEXT is used to jump back to the original kernel.

In relocate_kernel(), some registers including CR3 are stored in VA_CONTROL_PAGE. If preserve_context is set (passed into relocate_kernel() in RCX), after running the new kernel the code under 'virtual_mapped' restores these registers. This is similar to what happens in suspend and hibernate. Note that even when KEXEC_PRESERVE_CONTEXT is not set, relocate_kernel() still accesses CR3. It mainly reads and writes it to flush the TLB. This could be problematic and cause improper ASI enters (see below), but it is assumed to be safe because the kernel will essentially reboot in this case anyway.

Saving and restoring CR3 in this fashion can cause a problem if the suspend/hibernate/kexec is performed within an ASI domain. A restricted CR3 will be saved, and later restored after ASI has potentially already exited (e.g. from an NMI after CR3 is stored). This will cause an _improper_ ASI enter, where code starts executing in a restricted address space, yet ASI metadata (especially curr_asi) says otherwise.

Exit ASI early in all these paths by registering a syscore_suspend() callback.
syscore_suspend() is called in all the above paths (for kexec, only with KEXEC_PRESERVE_CONTEXT) after IRQs are finally disabled before the operation. This is not currently strictly required, but it is convenient because when ASI gains the ability to persist across context switching, there will be additional synchronization requirements that are simplified by this.

Note: if the CR3 accesses in relocate_kernel() when KEXEC_PRESERVE_CONTEXT is not set are concerning, they could be handled by registering a syscore_shutdown() callback to exit ASI. syscore_shutdown() is called in the kexec path where KEXEC_PRESERVE_CONTEXT is not set, starting with commit 7bb943806ff6 ("kexec: do syscore_shutdown() in kernel_kexec").

Signed-off-by: Yosry Ahmed
Signed-off-by: Brendan Jackman
---
 arch/x86/mm/asi.c | 31 +++++++++++++++++++++++++++++++
 1 file changed, 31 insertions(+)

diff --git a/arch/x86/mm/asi.c b/arch/x86/mm/asi.c
index a9f9bfbf85eb47d16ef8d0bfbc7713f07052d3ed..c5073af1a82ded1c6fc467cd7a5d29a39d676bb4 100644
--- a/arch/x86/mm/asi.c
+++ b/arch/x86/mm/asi.c
@@ -6,6 +6,7 @@
 
 #include
 #include
+#include
 #include
 #include
 
@@ -243,6 +244,32 @@ static int asi_map_percpu(struct asi *asi, void *percpu_addr, size_t len)
	return 0;
 }
 
+#ifdef CONFIG_PM_SLEEP
+static int asi_suspend(void)
+{
+	/*
+	 * Must be called after IRQs are disabled and rescheduling is no longer
+	 * possible (so that we cannot re-enter ASI before suspending).
+	 */
+	lockdep_assert_irqs_disabled();
+
+	/*
+	 * Suspend operations sometimes save CR3 as part of the saved state,
+	 * which is restored later (e.g. do_suspend_lowlevel() in the suspend
+	 * path, swsusp_arch_suspend() in the hibernate path, relocate_kernel()
+	 * in the kexec path). Saving a restricted CR3 and restoring it later
+	 * could lead to improperly entering ASI. Exit ASI before such
+	 * operations.
+	 */
+	asi_exit();
+	return 0;
+}
+
+static struct syscore_ops asi_syscore_ops = {
+	.suspend = asi_suspend,
+};
+#endif /* CONFIG_PM_SLEEP */
+
 static int __init asi_global_init(void)
 {
 	int err;
@@ -306,6 +333,10 @@ static int __init asi_global_init(void)
 	asi_clone_pgd(asi_global_nonsensitive_pgd, init_mm.pgd,
		      VMEMMAP_START + (1UL << PGDIR_SHIFT));
 
+#ifdef CONFIG_PM_SLEEP
+	register_syscore_ops(&asi_syscore_ops);
+#endif
+
 	return 0;
 }
 subsys_initcall(asi_global_init)

From patchwork Fri Jan 10 18:40:50 2025
Date: Fri, 10 Jan 2025 18:40:50 +0000
In-Reply-To: <20250110-asi-rfc-v2-v2-0-8419288bc805@google.com>
Mime-Version: 1.0
References: <20250110-asi-rfc-v2-v2-0-8419288bc805@google.com>
X-Mailer: b4 0.15-dev
Message-ID: <20250110-asi-rfc-v2-v2-24-8419288bc805@google.com>
Subject: [PATCH RFC v2 24/29] mm: asi: Add infrastructure for mapping userspace addresses
From: Brendan Jackman
To: Thomas Gleixner , Ingo Molnar , Borislav Petkov , Dave Hansen , "H.
Peter Anvin" , Andy Lutomirski , Peter Zijlstra , Richard Henderson , Matt Turner , Vineet Gupta , Russell King , Catalin Marinas , Will Deacon , Guo Ren , Brian Cain , Huacai Chen , WANG Xuerui , Geert Uytterhoeven , Michal Simek , Thomas Bogendoerfer , Dinh Nguyen , Jonas Bonn , Stefan Kristiansson , Stafford Horne , "James E.J. Bottomley" , Helge Deller , Michael Ellerman , Nicholas Piggin , Christophe Leroy , Naveen N Rao , Madhavan Srinivasan , Paul Walmsley , Palmer Dabbelt , Albert Ou , Heiko Carstens , Vasily Gorbik , Alexander Gordeev , Christian Borntraeger , Sven Schnelle , Yoshinori Sato , Rich Felker , John Paul Adrian Glaubitz , "David S. Miller" , Andreas Larsson , Richard Weinberger , Anton Ivanov , Johannes Berg , Chris Zankel , Max Filippov , Arnd Bergmann , Andrew Morton , Juri Lelli , Vincent Guittot , Dietmar Eggemann , Steven Rostedt , Ben Segall , Mel Gorman , Valentin Schneider , Uladzislau Rezki , Christoph Hellwig , Masami Hiramatsu , Mathieu Desnoyers , Mike Rapoport , Arnaldo Carvalho de Melo , Namhyung Kim , Mark Rutland , Alexander Shishkin , Jiri Olsa , Ian Rogers , Adrian Hunter , Dennis Zhou , Tejun Heo , Christoph Lameter , Sean Christopherson , Paolo Bonzini , Ard Biesheuvel , Josh Poimboeuf , Pawan Gupta Cc: x86@kernel.org, linux-kernel@vger.kernel.org, linux-alpha@vger.kernel.org, linux-snps-arc@lists.infradead.org, linux-arm-kernel@lists.infradead.org, linux-csky@vger.kernel.org, linux-hexagon@vger.kernel.org, loongarch@lists.linux.dev, linux-m68k@lists.linux-m68k.org, linux-mips@vger.kernel.org, linux-openrisc@vger.kernel.org, linux-parisc@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, linux-riscv@lists.infradead.org, linux-s390@vger.kernel.org, linux-sh@vger.kernel.org, sparclinux@vger.kernel.org, linux-um@lists.infradead.org, linux-arch@vger.kernel.org, linux-mm@kvack.org, linux-trace-kernel@vger.kernel.org, linux-perf-users@vger.kernel.org, kvm@vger.kernel.org, linux-efi@vger.kernel.org, Brendan Jackman , Junaid Shahid , 
Reiji Watanabe

In preparation for sandboxing bare-metal processes, teach ASI to map userspace addresses into the restricted address space.

Add a new policy helper to determine, based on the class, whether to do this. If the helper returns true, mirror userspace mappings into the ASI pagetables.

Later, it will be possible for users who do not have a significant security boundary between KVM guests and their VMM process to take advantage of this to reduce mitigation costs when switching between those two domains. To illustrate this idea, it is now reflected in the KVM taint policy, although the KVM class is still hard-coded not to map userspace addresses.
Co-developed-by: Junaid Shahid
Signed-off-by: Junaid Shahid
Co-developed-by: Reiji Watanabe
Signed-off-by: Reiji Watanabe
Signed-off-by: Brendan Jackman
---
 arch/x86/include/asm/asi.h        | 11 +++++
 arch/x86/include/asm/pgalloc.h    |  6 +++
 arch/x86/include/asm/pgtable_64.h |  4 ++
 arch/x86/kvm/x86.c                | 12 +++--
 arch/x86/mm/asi.c                 | 92 +++++++++++++++++++++++++++++++++++++++
 include/asm-generic/asi.h         |  4 ++
 6 files changed, 125 insertions(+), 4 deletions(-)

diff --git a/arch/x86/include/asm/asi.h b/arch/x86/include/asm/asi.h
index 555edb5f292e4d6baba782f51d014aa48dc850b6..e925d7d2cfc85bca8480c837548654e7a5a7009e 100644
--- a/arch/x86/include/asm/asi.h
+++ b/arch/x86/include/asm/asi.h
@@ -133,6 +133,7 @@ struct asi {
 	struct mm_struct *mm;
 	int64_t ref_count;
 	enum asi_class_id class_id;
+	spinlock_t pgd_lock;
 };
 
 DECLARE_PER_CPU_ALIGNED(struct asi *, curr_asi);
@@ -147,6 +148,7 @@ const char *asi_class_name(enum asi_class_id class_id);
 
 int asi_init(struct mm_struct *mm, enum asi_class_id class_id, struct asi **out_asi);
 void asi_destroy(struct asi *asi);
+void asi_clone_user_pgtbl(struct mm_struct *mm, pgd_t *pgdp);
 
 /* Enter an ASI domain (restricted address space) and begin the critical section. */
 void asi_enter(struct asi *asi);
@@ -286,6 +288,15 @@ static __always_inline bool asi_in_critical_section(void)
 
 void asi_handle_switch_mm(void);
 
+/*
+ * This function returns true when we would like to map userspace addresses
+ * in the restricted address space.
+ */
+static inline bool asi_maps_user_addr(enum asi_class_id class_id)
+{
+	return false;
+}
+
 #endif /* CONFIG_MITIGATION_ADDRESS_SPACE_ISOLATION */
 
 #endif

diff --git a/arch/x86/include/asm/pgalloc.h b/arch/x86/include/asm/pgalloc.h
index dcd836b59bebd329c3d265b98e48ef6eb4c9e6fc..edf9fe76c53369eefcd5bf14a09cbf802cf1ea21 100644
--- a/arch/x86/include/asm/pgalloc.h
+++ b/arch/x86/include/asm/pgalloc.h
@@ -114,12 +114,16 @@ static inline void p4d_populate(struct mm_struct *mm, p4d_t *p4d, pud_t *pud)
 {
 	paravirt_alloc_pud(mm, __pa(pud) >> PAGE_SHIFT);
 	set_p4d(p4d, __p4d(_PAGE_TABLE | __pa(pud)));
+	if (!pgtable_l5_enabled())
+		asi_clone_user_pgtbl(mm, (pgd_t *)p4d);
 }
 
 static inline void p4d_populate_safe(struct mm_struct *mm, p4d_t *p4d, pud_t *pud)
 {
 	paravirt_alloc_pud(mm, __pa(pud) >> PAGE_SHIFT);
 	set_p4d_safe(p4d, __p4d(_PAGE_TABLE | __pa(pud)));
+	if (!pgtable_l5_enabled())
+		asi_clone_user_pgtbl(mm, (pgd_t *)p4d);
 }
 
 extern void ___pud_free_tlb(struct mmu_gather *tlb, pud_t *pud);
@@ -137,6 +141,7 @@ static inline void pgd_populate(struct mm_struct *mm, pgd_t *pgd, p4d_t *p4d)
 		return;
 	paravirt_alloc_p4d(mm, __pa(p4d) >> PAGE_SHIFT);
 	set_pgd(pgd, __pgd(_PAGE_TABLE | __pa(p4d)));
+	asi_clone_user_pgtbl(mm, pgd);
 }
 
 static inline void pgd_populate_safe(struct mm_struct *mm, pgd_t *pgd, p4d_t *p4d)
@@ -145,6 +150,7 @@ static inline void pgd_populate_safe(struct mm_struct *mm, pgd_t *pgd, p4d_t *p4
 		return;
 	paravirt_alloc_p4d(mm, __pa(p4d) >> PAGE_SHIFT);
 	set_pgd_safe(pgd, __pgd(_PAGE_TABLE | __pa(p4d)));
+	asi_clone_user_pgtbl(mm, pgd);
 }
 
 static inline p4d_t *p4d_alloc_one(struct mm_struct *mm, unsigned long addr)

diff --git a/arch/x86/include/asm/pgtable_64.h b/arch/x86/include/asm/pgtable_64.h
index d1426b64c1b9715cd9e4d1d7451ae4feadd8b2f5..fe6d83ec632a6894527784f2ebdbd013161c6f09 100644
--- a/arch/x86/include/asm/pgtable_64.h
+++ b/arch/x86/include/asm/pgtable_64.h
@@ -157,6 +157,8 @@ static inline void native_set_p4d(p4d_t *p4dp, p4d_t p4d)
 
 static inline void
native_p4d_clear(p4d_t *p4d)
 {
 	native_set_p4d(p4d, native_make_p4d(0));
+	if (!pgtable_l5_enabled())
+		asi_clone_user_pgtbl(NULL, (pgd_t *)p4d);
 }
 
 static inline void native_set_pgd(pgd_t *pgdp, pgd_t pgd)
@@ -167,6 +169,8 @@ static inline void native_set_pgd(pgd_t *pgdp, pgd_t pgd)
 static inline void native_pgd_clear(pgd_t *pgd)
 {
 	native_set_pgd(pgd, native_make_pgd(0));
+	if (pgtable_l5_enabled())
+		asi_clone_user_pgtbl(NULL, pgd);
 }
 
 /*

diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 3e0811eb510650abc601e4adce1ce4189835a730..920475fe014f6503dd88c7bbdb6b2707c084a689 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -9712,11 +9712,15 @@ static inline int kvm_x86_init_asi_class(void)
 	/*
 	 * And the same for data left behind by code in the userspace domain
 	 * (i.e. the VMM itself, plus kernel code serving its syscalls etc).
-	 * This should eventually be configurable: users whose VMMs contain
-	 * no secrets can disable it to avoid paying a mitigation cost on
-	 * transition between their guest and userspace.
+	 *
+	 * If we decided to map userspace into the guest's restricted address
+	 * space then we don't bother with this, since we assume either no bugs
+	 * allow the guest to leak that data, or the user doesn't care about
+	 * that security boundary.
 	 */
-	policy.protect_data |= ASI_TAINT_USER_DATA;
+	if (!asi_maps_user_addr(ASI_CLASS_KVM))
+		policy.protect_data |= ASI_TAINT_USER_DATA;
 
 	return asi_init_class(ASI_CLASS_KVM, &policy);
 }

diff --git a/arch/x86/mm/asi.c b/arch/x86/mm/asi.c
index c5073af1a82ded1c6fc467cd7a5d29a39d676bb4..093103c1bc2677c81d68008aca064fab53b73a62 100644
--- a/arch/x86/mm/asi.c
+++ b/arch/x86/mm/asi.c
@@ -14,6 +14,7 @@
 #include
 #include
 #include
+#include
 
 #include "mm_internal.h"
 #include "../../../mm/internal.h"
@@ -351,6 +352,33 @@ static void __asi_destroy(struct asi *asi)
 	memset(asi, 0, sizeof(struct asi));
 }
 
+static void __asi_init_user_pgds(struct mm_struct *mm, struct asi *asi)
+{
+	int i;
+
+	if (!asi_maps_user_addr(asi->class_id))
+		return;
+
+	/*
+	 * The code below must be executed only after the given asi is
+	 * available in mm->asi[index], to ensure that at least one of this
+	 * function and asi_clone_user_pgtbl() will copy entries in the
+	 * unrestricted pgd to the restricted pgd.
+	 */
+	if (WARN_ON_ONCE(&mm->asi[asi->class_id] != asi))
+		return;
+
+	/*
+	 * See the comment in asi_clone_user_pgtbl() for why we hold the lock
+	 * here.
+	 */
+	spin_lock(&asi->pgd_lock);
+
+	for (i = 0; i < KERNEL_PGD_BOUNDARY; i++)
+		set_pgd(asi->pgd + i, READ_ONCE(*(mm->pgd + i)));
+
+	spin_unlock(&asi->pgd_lock);
+}
+
 int asi_init(struct mm_struct *mm, enum asi_class_id class_id, struct asi **out_asi)
 {
 	struct asi *asi;
@@ -388,6 +416,7 @@ int asi_init(struct mm_struct *mm, enum asi_class_id class_id, struct asi **out_
 
 	asi->mm = mm;
 	asi->class_id = class_id;
+	spin_lock_init(&asi->pgd_lock);
 
 	for (i = KERNEL_PGD_BOUNDARY; i < PTRS_PER_PGD; i++)
 		set_pgd(asi->pgd + i, asi_global_nonsensitive_pgd[i]);
@@ -398,6 +427,7 @@ int asi_init(struct mm_struct *mm, enum asi_class_id class_id, struct asi **out_
 	else
 		*out_asi = asi;
 
+	__asi_init_user_pgds(mm, asi);
 	mutex_unlock(&mm->asi_init_lock);
 
 	return err;
@@ -891,3 +921,65 @@ void asi_unmap(struct asi *asi, void *addr, size_t len)
 
 	asi_flush_tlb_range(asi, addr, len);
 }
+
+/*
+ * Copy the given unrestricted pgd entry for userspace addresses to the
+ * corresponding restricted pgd entries. This means the unrestricted pgd
+ * entry must be updated before this function is called.
+ *
+ * We map entire userspace address ranges into the restricted address spaces
+ * by copying unrestricted pgd entries to the restricted page tables, so that
+ * we don't need to maintain consistency of lower-level PTEs between the
+ * unrestricted page table and the restricted page tables.
+ */
+void asi_clone_user_pgtbl(struct mm_struct *mm, pgd_t *pgdp)
+{
+	unsigned long pgd_idx;
+	struct asi *asi;
+	int i;
+
+	if (!static_asi_enabled())
+		return;
+
+	/* We don't need to handle non-userspace mappings. */
+	if (!pgdp_maps_userspace(pgdp))
+		return;
+
+	/*
+	 * The mm will be NULL for p{4,g}d_clear(). We need to get
+	 * the owner mm for this pgd in this case. The pgd page has
+	 * a valid pt_mm only when SHARED_KERNEL_PMD == 0.
+	 */
+	BUILD_BUG_ON(SHARED_KERNEL_PMD);
+	if (!mm) {
+		mm = pgd_page_get_mm(virt_to_page(pgdp));
+		if (WARN_ON_ONCE(!mm))
+			return;
+	}
+
+	/*
+	 * Compute the PGD index of the given pgd entry. This will be the
+	 * index of the ASI PGD entry to be updated.
+	 */
+	pgd_idx = pgdp - PTR_ALIGN_DOWN(pgdp, PAGE_SIZE);
+
+	for (i = 0; i < ARRAY_SIZE(mm->asi); i++) {
+		asi = mm->asi + i;
+
+		if (!asi_pgd(asi) || !asi_maps_user_addr(asi->class_id))
+			continue;
+
+		/*
+		 * We need to synchronize concurrent callers of
+		 * asi_clone_user_pgtbl() among themselves, as well as with
+		 * __asi_init_user_pgds(). The lock makes sure that reading
+		 * the unrestricted pgd and updating the corresponding
+		 * ASI pgd are not interleaved by concurrent calls.
+		 * We cannot rely on mm->page_table_lock here because it
+		 * is not always held when pgd/p4d_clear_bad() is called.
+		 */
+		spin_lock(&asi->pgd_lock);
+		set_pgd(asi_pgd(asi) + pgd_idx, READ_ONCE(*pgdp));
+		spin_unlock(&asi->pgd_lock);
+	}
+}

diff --git a/include/asm-generic/asi.h b/include/asm-generic/asi.h
index 4f033d3ef5929707fd280f74fc800193e45143c1..d103343292fad567dcd73e45e986fb3974e59898 100644
--- a/include/asm-generic/asi.h
+++ b/include/asm-generic/asi.h
@@ -95,6 +95,10 @@ void asi_flush_tlb_range(struct asi *asi, void *addr, size_t len) { }
 
 static inline void asi_check_boottime_disable(void) { }
 
+static inline void asi_clone_user_pgtbl(struct mm_struct *mm, pgd_t *pgdp) { }
+
+static inline bool asi_maps_user_addr(enum asi_class_id class_id) { return false; }
+
 #endif /* !CONFIG_MITIGATION_ADDRESS_SPACE_ISOLATION */
 
 #endif /* !_ASSEMBLY_ */

From patchwork Fri Jan 10 18:40:51 2025
Date: Fri, 10 Jan 2025 18:40:51 +0000
In-Reply-To: <20250110-asi-rfc-v2-v2-0-8419288bc805@google.com>
Mime-Version: 1.0
References: <20250110-asi-rfc-v2-v2-0-8419288bc805@google.com>
X-Mailer: b4 0.15-dev
Message-ID: <20250110-asi-rfc-v2-v2-25-8419288bc805@google.com>
Subject: [PATCH RFC v2 25/29] mm: asi: Restricted execution for bare-metal processes
From: Brendan Jackman

Now userspace gets a restricted address space too. The critical section begins on exit to userspace and ends when it makes a system call.
Other entries from userspace just interrupt the critical section via asi_intr_enter().

The reason system calls have to actually asi_relax() (i.e. fully terminate the critical section instead of just interrupting it) is that a system call is the type of kernel entry that can lead to a transition into a _different_ ASI domain, namely the KVM one: it is not supported to transition into a different domain while a critical section exists (i.e. while asi_state.target is not NULL), even if it has been paused by asi_intr_enter() (i.e. even if asi_state.intr_nest_depth is nonzero). There must be an asi_relax() between any two asi_enter()s.

The restricted address space for bare-metal tasks naturally contains the entire userspace address region, although the task's own memory is still missing from the direct map.

This implementation creates new userspace-specific APIs for asi_init(), asi_destroy() and asi_enter(), which seems a little ugly; maybe this suggests a general rework of these APIs, given that the "generic" version only has one caller. For RFC code this seems good enough though.
Signed-off-by: Brendan Jackman
---
 arch/x86/include/asm/asi.h   |  8 ++++++--
 arch/x86/mm/asi.c            | 49 ++++++++++++++++++++++++++++++++++++++++----
 include/asm-generic/asi.h    |  9 +++++++-
 include/linux/entry-common.h | 11 ++++++++++
 init/main.c                  |  2 ++
 kernel/entry/common.c        |  1 +
 kernel/fork.c                |  4 +++-
 7 files changed, 76 insertions(+), 8 deletions(-)

diff --git a/arch/x86/include/asm/asi.h b/arch/x86/include/asm/asi.h
index e925d7d2cfc85bca8480c837548654e7a5a7009e..c3c1a57f0147ae9bd11d89c8bf7c8a4477728f51 100644
--- a/arch/x86/include/asm/asi.h
+++ b/arch/x86/include/asm/asi.h
@@ -140,19 +140,23 @@ DECLARE_PER_CPU_ALIGNED(struct asi *, curr_asi);
 
 void asi_check_boottime_disable(void);
 
-void asi_init_mm_state(struct mm_struct *mm);
+int asi_init_mm_state(struct mm_struct *mm);
 
 int asi_init_class(enum asi_class_id class_id, struct asi_taint_policy *taint_policy);
+void asi_init_userspace_class(void);
 void asi_uninit_class(enum asi_class_id class_id);
 const char *asi_class_name(enum asi_class_id class_id);
 
 int asi_init(struct mm_struct *mm, enum asi_class_id class_id, struct asi **out_asi);
 void asi_destroy(struct asi *asi);
+void asi_destroy_userspace(struct mm_struct *mm);
 void asi_clone_user_pgtbl(struct mm_struct *mm, pgd_t *pgdp);
 
 /* Enter an ASI domain (restricted address space) and begin the critical section. */
 void asi_enter(struct asi *asi);
+void asi_enter_userspace(void);
+
 /*
  * Leave the "tense" state if we are in it, i.e. end the critical section. We
  * will stay relaxed until the next asi_enter.
@@ -294,7 +298,7 @@ void asi_handle_switch_mm(void);
  */
 static inline bool asi_maps_user_addr(enum asi_class_id class_id)
 {
-	return false;
+	return class_id == ASI_CLASS_USERSPACE;
 }
 
 #endif /* CONFIG_MITIGATION_ADDRESS_SPACE_ISOLATION */

diff --git a/arch/x86/mm/asi.c b/arch/x86/mm/asi.c
index 093103c1bc2677c81d68008aca064fab53b73a62..1e9dc568e79e8686a4dbf47f765f2c2535d025ec 100644
--- a/arch/x86/mm/asi.c
+++ b/arch/x86/mm/asi.c
@@ -25,6 +25,7 @@ const char *asi_class_names[] = {
 #if IS_ENABLED(CONFIG_KVM)
 	[ASI_CLASS_KVM] = "KVM",
 #endif
+	[ASI_CLASS_USERSPACE] = "userspace",
 };
 
 DEFINE_PER_CPU_ALIGNED(struct asi *, curr_asi);
@@ -67,6 +68,32 @@ int asi_init_class(enum asi_class_id class_id, struct asi_taint_policy *taint_po
 }
 EXPORT_SYMBOL_GPL(asi_init_class);
 
+void __init asi_init_userspace_class(void)
+{
+	static struct asi_taint_policy policy = {
+		/*
+		 * Prevent going to userspace with sensitive data potentially
+		 * left in sidechannels by code running in the unrestricted
+		 * address space, or another MM. Note we don't check for guest
+		 * data here. This reflects the assumption that the guest trusts
+		 * its VMM (absent fancy HW features, which are orthogonal).
+		 */
+		.protect_data = ASI_TAINT_KERNEL_DATA | ASI_TAINT_OTHER_MM_DATA,
+		/*
+		 * Don't go into userspace with control flow state controlled by
+		 * other processes, or any KVM guest the process is running.
+		 * Note this bit is about protecting userspace from other parts
+		 * of the system, while data_taints is about protecting other
+		 * parts of the system from the guest.
+		 */
+		.prevent_control = ASI_TAINT_GUEST_CONTROL | ASI_TAINT_OTHER_MM_CONTROL,
+		.set = ASI_TAINT_USER_CONTROL | ASI_TAINT_USER_DATA,
+	};
+	int err = asi_init_class(ASI_CLASS_USERSPACE, &policy);
+
+	WARN_ON(err);
+}
+
 void asi_uninit_class(enum asi_class_id class_id)
 {
 	if (!boot_cpu_has(X86_FEATURE_ASI))
@@ -385,7 +412,8 @@ int asi_init(struct mm_struct *mm, enum asi_class_id class_id, struct asi **out_
 	int err = 0;
 	uint i;
 
-	*out_asi = NULL;
+	if (out_asi)
+		*out_asi = NULL;
 
 	if (!boot_cpu_has(X86_FEATURE_ASI))
 		return 0;
@@ -424,7 +452,7 @@ int asi_init(struct mm_struct *mm, enum asi_class_id class_id, struct asi **out_
 exit_unlock:
 	if (err)
 		__asi_destroy(asi);
-	else
+	else if (out_asi)
 		*out_asi = asi;
 
 	__asi_init_user_pgds(mm, asi);
@@ -515,6 +543,12 @@ static __always_inline void maybe_flush_data(struct asi *next_asi)
 	this_cpu_and(asi_taints, ~ASI_TAINTS_DATA_MASK);
 }
 
+void asi_destroy_userspace(struct mm_struct *mm)
+{
+	VM_BUG_ON(!asi_class_initialized(ASI_CLASS_USERSPACE));
+	asi_destroy(&mm->asi[ASI_CLASS_USERSPACE]);
+}
+
 noinstr void __asi_enter(void)
 {
 	u64 asi_cr3;
@@ -584,6 +618,11 @@ noinstr void asi_enter(struct asi *asi)
 }
 EXPORT_SYMBOL_GPL(asi_enter);
 
+noinstr void asi_enter_userspace(void)
+{
+	asi_enter(&current->mm->asi[ASI_CLASS_USERSPACE]);
+}
+
 noinstr void asi_relax(void)
 {
 	if (static_asi_enabled()) {
@@ -633,13 +672,15 @@ noinstr void asi_exit(void)
 }
 EXPORT_SYMBOL_GPL(asi_exit);
 
-void asi_init_mm_state(struct mm_struct *mm)
+int asi_init_mm_state(struct mm_struct *mm)
 {
 	if (!boot_cpu_has(X86_FEATURE_ASI))
-		return;
+		return 0;
 
 	memset(mm->asi, 0, sizeof(mm->asi));
 	mutex_init(&mm->asi_init_lock);
+
+	return asi_init(mm, ASI_CLASS_USERSPACE, NULL);
 }
 
 void asi_handle_switch_mm(void)

diff --git a/include/asm-generic/asi.h b/include/asm-generic/asi.h
index d103343292fad567dcd73e45e986fb3974e59898..c93f9e779ce1fa61e3df7835f5ab744cce7d667b 100644
--- a/include/asm-generic/asi.h
+++ b/include/asm-generic/asi.h
@@ -15,6 +15,7 @@ enum asi_class_id {
 #if
IS_ENABLED(CONFIG_KVM) ASI_CLASS_KVM, #endif + ASI_CLASS_USERSPACE, ASI_MAX_NUM_CLASSES, }; static_assert(order_base_2(X86_CR3_ASI_PCID_BITS) <= ASI_MAX_NUM_CLASSES); @@ -37,8 +38,10 @@ int asi_init_class(enum asi_class_id class_id, static inline void asi_uninit_class(enum asi_class_id class_id) { } +static inline void asi_init_userspace_class(void) { } + struct mm_struct; -static inline void asi_init_mm_state(struct mm_struct *mm) { } +static inline int asi_init_mm_state(struct mm_struct *mm) { return 0; } static inline int asi_init(struct mm_struct *mm, enum asi_class_id class_id, struct asi **out_asi) @@ -48,8 +51,12 @@ static inline int asi_init(struct mm_struct *mm, enum asi_class_id class_id, static inline void asi_destroy(struct asi *asi) { } +static inline void asi_destroy_userspace(struct mm_struct *mm) { } + static inline void asi_enter(struct asi *asi) { } +static inline void asi_enter_userspace(void) { } + static inline void asi_relax(void) { } static inline bool asi_is_relaxed(void) { return true; } diff --git a/include/linux/entry-common.h b/include/linux/entry-common.h index 1e50cdb83ae501467ecc30ee52f1379d409f962e..f04c4c038556f84ddf3bc09b6c1dd22a9dbd2f6b 100644 --- a/include/linux/entry-common.h +++ b/include/linux/entry-common.h @@ -191,6 +191,16 @@ static __always_inline long syscall_enter_from_user_mode(struct pt_regs *regs, l { long ret; + /* + * End the ASI critical section for userspace. Syscalls are the only + * place this happens - all other entry from userspace is handled via + * ASI's interrupt-tracking. The reason syscalls are special is that's + * where it's possible to switch to another ASI domain within the same + * task (i.e. KVM_RUN), so an asi_relax() is required here in case of an + * upcoming asi_enter().
+ */ + asi_relax(); + enter_from_user_mode(regs); instrumentation_begin(); @@ -355,6 +365,7 @@ static __always_inline void exit_to_user_mode_prepare(struct pt_regs *regs) */ static __always_inline void exit_to_user_mode(void) { + instrumentation_begin(); trace_hardirqs_on_prepare(); lockdep_hardirqs_on_prepare(); diff --git a/init/main.c b/init/main.c index c4778edae7972f512d5eefe8400075ac35a70d1c..d19e149d385e8321d2f3e7c28aa75802af62d09c 100644 --- a/init/main.c +++ b/init/main.c @@ -953,6 +953,8 @@ void start_kernel(void) /* Architectural and non-timekeeping rng init, before allocator init */ random_init_early(command_line); + asi_init_userspace_class(); + /* * These use large bootmem allocations and must precede * initalization of page allocator diff --git a/kernel/entry/common.c b/kernel/entry/common.c index 5b6934e23c21d36a3238dc03e391eb9e3beb4cfb..874254ed5958d62eaeaef4fe3e8c02e56deaf5ed 100644 --- a/kernel/entry/common.c +++ b/kernel/entry/common.c @@ -218,6 +218,7 @@ __visible noinstr void syscall_exit_to_user_mode(struct pt_regs *regs) __syscall_exit_to_user_mode_work(regs); instrumentation_end(); exit_to_user_mode(); + asi_enter_userspace(); } noinstr void irqentry_enter_from_user_mode(struct pt_regs *regs) diff --git a/kernel/fork.c b/kernel/fork.c index bb73758790d08112265d398b16902ff9a4c2b8fe..54068d2415939b92409ca8a45111176783c6acbd 100644 --- a/kernel/fork.c +++ b/kernel/fork.c @@ -917,6 +917,7 @@ void __mmdrop(struct mm_struct *mm) /* Ensure no CPUs are using this as their lazy tlb mm */ cleanup_lazy_tlbs(mm); + asi_destroy_userspace(mm); WARN_ON_ONCE(mm == current->active_mm); mm_free_pgd(mm); destroy_context(mm); @@ -1297,7 +1298,8 @@ static struct mm_struct *mm_init(struct mm_struct *mm, struct task_struct *p, if (mm_alloc_pgd(mm)) goto fail_nopgd; - asi_init_mm_state(mm); + if (asi_init_mm_state(mm)) + goto fail_nocontext; if (init_new_context(p, mm)) goto fail_nocontext; From patchwork Fri Jan 10 18:40:52 2025 Content-Type: text/plain; 
charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Brendan Jackman X-Patchwork-Id: 13935634
Date: Fri, 10 Jan 2025 18:40:52 +0000
In-Reply-To: <20250110-asi-rfc-v2-v2-0-8419288bc805@google.com>
Mime-Version: 1.0
References: <20250110-asi-rfc-v2-v2-0-8419288bc805@google.com>
X-Mailer: b4 0.15-dev
Message-ID: <20250110-asi-rfc-v2-v2-26-8419288bc805@google.com>
Subject: [PATCH RFC v2 26/29] x86: Create library for flushing L1D for L1TF
From: Brendan Jackman
To: Thomas Gleixner , Ingo Molnar , Borislav Petkov , Dave Hansen , "H. Peter Anvin" , Andy Lutomirski , Peter Zijlstra , Richard Henderson , Matt Turner , Vineet Gupta , Russell King , Catalin Marinas , Will Deacon , Guo Ren , Brian Cain , Huacai Chen , WANG Xuerui , Geert Uytterhoeven , Michal Simek , Thomas Bogendoerfer , Dinh Nguyen , Jonas Bonn , Stefan Kristiansson , Stafford Horne , "James E.J.
Bottomley" , Helge Deller , Michael Ellerman , Nicholas Piggin , Christophe Leroy , Naveen N Rao , Madhavan Srinivasan , Paul Walmsley , Palmer Dabbelt , Albert Ou , Heiko Carstens , Vasily Gorbik , Alexander Gordeev , Christian Borntraeger , Sven Schnelle , Yoshinori Sato , Rich Felker , John Paul Adrian Glaubitz , "David S. Miller" , Andreas Larsson , Richard Weinberger , Anton Ivanov , Johannes Berg , Chris Zankel , Max Filippov , Arnd Bergmann , Andrew Morton , Juri Lelli , Vincent Guittot , Dietmar Eggemann , Steven Rostedt , Ben Segall , Mel Gorman , Valentin Schneider , Uladzislau Rezki , Christoph Hellwig , Masami Hiramatsu , Mathieu Desnoyers , Mike Rapoport , Arnaldo Carvalho de Melo , Namhyung Kim , Mark Rutland , Alexander Shishkin , Jiri Olsa , Ian Rogers , Adrian Hunter , Dennis Zhou , Tejun Heo , Christoph Lameter , Sean Christopherson , Paolo Bonzini , Ard Biesheuvel , Josh Poimboeuf , Pawan Gupta
Cc: x86@kernel.org, linux-kernel@vger.kernel.org, linux-alpha@vger.kernel.org, linux-snps-arc@lists.infradead.org, linux-arm-kernel@lists.infradead.org, linux-csky@vger.kernel.org, linux-hexagon@vger.kernel.org, loongarch@lists.linux.dev, linux-m68k@lists.linux-m68k.org, linux-mips@vger.kernel.org, linux-openrisc@vger.kernel.org, linux-parisc@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, linux-riscv@lists.infradead.org, linux-s390@vger.kernel.org, linux-sh@vger.kernel.org, sparclinux@vger.kernel.org, linux-um@lists.infradead.org, linux-arch@vger.kernel.org, linux-mm@kvack.org, linux-trace-kernel@vger.kernel.org, linux-perf-users@vger.kernel.org, kvm@vger.kernel.org, linux-efi@vger.kernel.org, Brendan Jackman
ASI will need to use this L1D flushing logic so put it in a library where it can be used independently of KVM.

Since we're creating this library, it starts to look messy if we don't also use it in the double-opt-in (both kernel cmdline and prctl) mm-switching flush logic which is there for mitigating Snoop-Assisted L1 Data Sampling ("SAL1DS"). However, that logic doesn't use any software-based fallback for flushing on CPUs without the L1D_FLUSH command. In that case the prctl opt-in will fail. One option would be to just start using the software fallback sequence currently done by VMX code, but Linus didn't seem happy with a similar sequence being used here [1]. CPUs affected by SAL1DS are a subset of those affected by L1TF, so it wouldn't be completely insane to assume that the same sequence works for both cases, but I'll err on the side of caution and avoid risk of giving users a false impression that the kernel has really flushed L1D for them.

Instead, create this awkward library that is scoped specifically to L1TF, which will be used only by VMX and ASI, and has an annoying "only sometimes works" doc-comment. Users of the library can then infer from that comment whether they have flushed L1D.

No functional change intended.

[1] https://lore.kernel.org/linux-kernel/CAHk-=whC4PUhErcoDhCbTOdmPPy-Pj8j9ytsdcyz9TorOb4KUw@mail.gmail.com/
Checkpatch-args: --ignore=COMMIT_LOG_LONG_LINE Signed-off-by: Brendan Jackman --- arch/x86/Kconfig | 4 ++ arch/x86/include/asm/l1tf.h | 11 ++++++ arch/x86/kvm/Kconfig | 1 + arch/x86/kvm/vmx/vmx.c | 66 +++---------------------------- arch/x86/lib/Makefile | 1 + arch/x86/lib/l1tf.c | 94 +++++++++++++++++++++++++++++++++++++++++++++ 6 files changed, 117 insertions(+), 60 deletions(-) diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig index ae31f36ce23d7c29d1e90b726c5a2e6ea5a63c8d..ca984dc7ee2f2b68c3ce1bcb5055047ca4f2a65d 100644 --- a/arch/x86/Kconfig +++ b/arch/x86/Kconfig @@ -2523,6 +2523,7 @@ config MITIGATION_ADDRESS_SPACE_ISOLATION bool "Allow code to run with a reduced kernel address space" default n depends on X86_64 && !PARAVIRT && !UML + select X86_L1TF_FLUSH_LIB help This feature provides the ability to run some kernel code with a reduced kernel address space. This can be used to @@ -3201,6 +3202,9 @@ config HAVE_ATOMIC_IOMAP def_bool y depends on X86_32 +config X86_L1TF_FLUSH_LIB + def_bool n + source "arch/x86/kvm/Kconfig" source "arch/x86/Kconfig.assembler" diff --git a/arch/x86/include/asm/l1tf.h b/arch/x86/include/asm/l1tf.h new file mode 100644 index 0000000000000000000000000000000000000000..e0be19c588bb5ec5c76a1861492e48b88615b4b8 --- /dev/null +++ b/arch/x86/include/asm/l1tf.h @@ -0,0 +1,11 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +#ifndef _ASM_L1TF_FLUSH_H +#define _ASM_L1TF_FLUSH_H + +#ifdef CONFIG_X86_L1TF_FLUSH_LIB +int l1tf_flush_setup(void); +void l1tf_flush(void); +#endif /* CONFIG_X86_L1TF_FLUSH_LIB */ + +#endif + diff --git a/arch/x86/kvm/Kconfig b/arch/x86/kvm/Kconfig index f09f13c01c6bbd28fa37fdf50547abf4403658c9..81c71510e33e52447882ab7b22682199c57b492e 100644 --- a/arch/x86/kvm/Kconfig +++ b/arch/x86/kvm/Kconfig @@ -92,6 +92,7 @@ config KVM_SW_PROTECTED_VM config KVM_INTEL tristate "KVM for Intel (and compatible) processors support" depends on KVM && IA32_FEAT_CTL + select X86_L1TF_FLUSH_LIB help Provides support for KVM on processors 
equipped with Intel's VT extensions, a.k.a. Virtual Machine Extensions (VMX). diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c index 0e90463f1f2183b8d716f85d5c8a8af8958fef0b..b1a02f27b3abce0ef6ac448b66bef2c653a52eef 100644 --- a/arch/x86/kvm/vmx/vmx.c +++ b/arch/x86/kvm/vmx/vmx.c @@ -42,6 +42,7 @@ #include #include #include +#include #include #include #include @@ -250,9 +251,6 @@ static void *vmx_l1d_flush_pages; static int vmx_setup_l1d_flush(enum vmx_l1d_flush_state l1tf) { - struct page *page; - unsigned int i; - if (!boot_cpu_has_bug(X86_BUG_L1TF)) { l1tf_vmx_mitigation = VMENTER_L1D_FLUSH_NOT_REQUIRED; return 0; @@ -288,26 +286,11 @@ static int vmx_setup_l1d_flush(enum vmx_l1d_flush_state l1tf) l1tf = VMENTER_L1D_FLUSH_ALWAYS; } - if (l1tf != VMENTER_L1D_FLUSH_NEVER && !vmx_l1d_flush_pages && - !boot_cpu_has(X86_FEATURE_FLUSH_L1D)) { - /* - * This allocation for vmx_l1d_flush_pages is not tied to a VM - * lifetime and so should not be charged to a memcg. - */ - page = alloc_pages(GFP_KERNEL, L1D_CACHE_ORDER); - if (!page) - return -ENOMEM; - vmx_l1d_flush_pages = page_address(page); + if (l1tf != VMENTER_L1D_FLUSH_NEVER) { + int err = l1tf_flush_setup(); - /* - * Initialize each page with a different pattern in - * order to protect against KSM in the nested - * virtualization case. - */ - for (i = 0; i < 1u << L1D_CACHE_ORDER; ++i) { - memset(vmx_l1d_flush_pages + i * PAGE_SIZE, i + 1, - PAGE_SIZE); - } + if (err) + return err; } l1tf_vmx_mitigation = l1tf; @@ -6652,20 +6635,8 @@ int vmx_handle_exit(struct kvm_vcpu *vcpu, fastpath_t exit_fastpath) return ret; } -/* - * Software based L1D cache flush which is used when microcode providing - * the cache control MSR is not loaded. - * - * The L1D cache is 32 KiB on Nehalem and later microarchitectures, but to - * flush it is required to read in 64 KiB because the replacement algorithm - * is not exactly LRU. 
This could be sized at runtime via topology - * information but as all relevant affected CPUs have 32KiB L1D cache size - * there is no point in doing so. - */ static noinstr void vmx_l1d_flush(struct kvm_vcpu *vcpu) { - int size = PAGE_SIZE << L1D_CACHE_ORDER; - /* * This code is only executed when the flush mode is 'cond' or * 'always' @@ -6695,32 +6666,7 @@ static noinstr void vmx_l1d_flush(struct kvm_vcpu *vcpu) vcpu->stat.l1d_flush++; - if (static_cpu_has(X86_FEATURE_FLUSH_L1D)) { - native_wrmsrl(MSR_IA32_FLUSH_CMD, L1D_FLUSH); - return; - } - - asm volatile( - /* First ensure the pages are in the TLB */ - "xorl %%eax, %%eax\n" - ".Lpopulate_tlb:\n\t" - "movzbl (%[flush_pages], %%" _ASM_AX "), %%ecx\n\t" - "addl $4096, %%eax\n\t" - "cmpl %%eax, %[size]\n\t" - "jne .Lpopulate_tlb\n\t" - "xorl %%eax, %%eax\n\t" - "cpuid\n\t" - /* Now fill the cache */ - "xorl %%eax, %%eax\n" - ".Lfill_cache:\n" - "movzbl (%[flush_pages], %%" _ASM_AX "), %%ecx\n\t" - "addl $64, %%eax\n\t" - "cmpl %%eax, %[size]\n\t" - "jne .Lfill_cache\n\t" - "lfence\n" - :: [flush_pages] "r" (vmx_l1d_flush_pages), - [size] "r" (size) - : "eax", "ebx", "ecx", "edx"); + l1tf_flush(); } void vmx_update_cr8_intercept(struct kvm_vcpu *vcpu, int tpr, int irr) diff --git a/arch/x86/lib/Makefile b/arch/x86/lib/Makefile index 98583a9dbab337e09a2e58905e5200499a496a07..b0a45bd70b40743a3fccb352b9641caacac83275 100644 --- a/arch/x86/lib/Makefile +++ b/arch/x86/lib/Makefile @@ -37,6 +37,7 @@ lib-$(CONFIG_INSTRUCTION_DECODER) += insn.o inat.o insn-eval.o lib-$(CONFIG_RANDOMIZE_BASE) += kaslr.o lib-$(CONFIG_FUNCTION_ERROR_INJECTION) += error-inject.o lib-$(CONFIG_MITIGATION_RETPOLINE) += retpoline.o +lib-$(CONFIG_X86_L1TF_FLUSH_LIB) += l1tf.o obj-y += msr.o msr-reg.o msr-reg-export.o hweight.o obj-y += iomem.o diff --git a/arch/x86/lib/l1tf.c b/arch/x86/lib/l1tf.c new file mode 100644 index 0000000000000000000000000000000000000000..c474f18ae331c8dfa7a029c457dd3cf75bebf808 --- /dev/null +++ b/arch/x86/lib/l1tf.c 
@@ -0,0 +1,94 @@ +// SPDX-License-Identifier: GPL-2.0 +#include +#include +#include + +#include +#include +#include + +#define L1D_CACHE_ORDER 4 +static void *l1tf_flush_pages; + +int l1tf_flush_setup(void) +{ + struct page *page; + unsigned int i; + + if (l1tf_flush_pages || boot_cpu_has(X86_FEATURE_FLUSH_L1D)) + return 0; + + page = alloc_pages(GFP_KERNEL, L1D_CACHE_ORDER); + if (!page) + return -ENOMEM; + l1tf_flush_pages = page_address(page); + + /* + * Initialize each page with a different pattern in + * order to protect against KSM in the nested + * virtualization case. + */ + for (i = 0; i < 1u << L1D_CACHE_ORDER; ++i) { + memset(l1tf_flush_pages + i * PAGE_SIZE, i + 1, + PAGE_SIZE); + } + + return 0; +} +EXPORT_SYMBOL(l1tf_flush_setup); + +/* + * Flush L1D in a way that: + * + * - definitely works on CPUs X86_FEATURE_FLUSH_L1D (because the SDM says so). + * - almost definitely works on other CPUs with L1TF (because someone on LKML + * said someone from Intel said so). + * - may or may not work on other CPUs. + * + * Don't call unless l1tf_flush_setup() has returned successfully. + */ +noinstr void l1tf_flush(void) +{ + int size = PAGE_SIZE << L1D_CACHE_ORDER; + + if (static_cpu_has(X86_FEATURE_FLUSH_L1D)) { + native_wrmsrl(MSR_IA32_FLUSH_CMD, L1D_FLUSH); + return; + } + + if (WARN_ON(!l1tf_flush_pages)) + return; + + /* + * This sequence was provided by Intel for the purpose of mitigating + * L1TF on VMX. + * + * The L1D cache is 32 KiB on Nehalem and some later microarchitectures, + * but to flush it is required to read in 64 KiB because the replacement + * algorithm is not exactly LRU. This could be sized at runtime via + * topology information but as all relevant affected CPUs have 32KiB L1D + * cache size there is no point in doing so. 
+ */ + asm volatile( + /* First ensure the pages are in the TLB */ + "xorl %%eax, %%eax\n" + ".Lpopulate_tlb:\n\t" + "movzbl (%[flush_pages], %%" _ASM_AX "), %%ecx\n\t" + "addl $4096, %%eax\n\t" + "cmpl %%eax, %[size]\n\t" + "jne .Lpopulate_tlb\n\t" + "xorl %%eax, %%eax\n\t" + "cpuid\n\t" + /* Now fill the cache */ + "xorl %%eax, %%eax\n" + ".Lfill_cache:\n" + "movzbl (%[flush_pages], %%" _ASM_AX "), %%ecx\n\t" + "addl $64, %%eax\n\t" + "cmpl %%eax, %[size]\n\t" + "jne .Lfill_cache\n\t" + "lfence\n" + :: [flush_pages] "r" (l1tf_flush_pages), + [size] "r" (size) + : "eax", "ebx", "ecx", "edx"); +} +EXPORT_SYMBOL(l1tf_flush);

From patchwork Fri Jan 10 18:40:53 2025
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Brendan Jackman
X-Patchwork-Id: 13935586
Date: Fri, 10 Jan 2025 18:40:53 +0000
In-Reply-To: <20250110-asi-rfc-v2-v2-0-8419288bc805@google.com>
Mime-Version: 1.0
References: <20250110-asi-rfc-v2-v2-0-8419288bc805@google.com>
X-Mailer: b4 0.15-dev
Message-ID: <20250110-asi-rfc-v2-v2-27-8419288bc805@google.com>
Subject: [PATCH RFC v2 27/29] mm: asi: Add some mitigations on address space transitions
From: Brendan Jackman
To: Thomas Gleixner , Ingo Molnar , Borislav Petkov ,
Dave Hansen , "H. Peter Anvin" , Andy Lutomirski , Peter Zijlstra , Richard Henderson , Matt Turner , Vineet Gupta , Russell King , Catalin Marinas , Will Deacon , Guo Ren , Brian Cain , Huacai Chen , WANG Xuerui , Geert Uytterhoeven , Michal Simek , Thomas Bogendoerfer , Dinh Nguyen , Jonas Bonn , Stefan Kristiansson , Stafford Horne , "James E.J. Bottomley" , Helge Deller , Michael Ellerman , Nicholas Piggin , Christophe Leroy , Naveen N Rao , Madhavan Srinivasan , Paul Walmsley , Palmer Dabbelt , Albert Ou , Heiko Carstens , Vasily Gorbik , Alexander Gordeev , Christian Borntraeger , Sven Schnelle , Yoshinori Sato , Rich Felker , John Paul Adrian Glaubitz , "David S. Miller" , Andreas Larsson , Richard Weinberger , Anton Ivanov , Johannes Berg , Chris Zankel , Max Filippov , Arnd Bergmann , Andrew Morton , Juri Lelli , Vincent Guittot , Dietmar Eggemann , Steven Rostedt , Ben Segall , Mel Gorman , Valentin Schneider , Uladzislau Rezki , Christoph Hellwig , Masami Hiramatsu , Mathieu Desnoyers , Mike Rapoport , Arnaldo Carvalho de Melo , Namhyung Kim , Mark Rutland , Alexander Shishkin , Jiri Olsa , Ian Rogers , Adrian Hunter , Dennis Zhou , Tejun Heo , Christoph Lameter , Sean Christopherson , Paolo Bonzini , Ard Biesheuvel , Josh Poimboeuf , Pawan Gupta Cc: x86@kernel.org, linux-kernel@vger.kernel.org, linux-alpha@vger.kernel.org, linux-snps-arc@lists.infradead.org, linux-arm-kernel@lists.infradead.org, linux-csky@vger.kernel.org, linux-hexagon@vger.kernel.org, loongarch@lists.linux.dev, linux-m68k@lists.linux-m68k.org, linux-mips@vger.kernel.org, linux-openrisc@vger.kernel.org, linux-parisc@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, linux-riscv@lists.infradead.org, linux-s390@vger.kernel.org, linux-sh@vger.kernel.org, sparclinux@vger.kernel.org, linux-um@lists.infradead.org, linux-arch@vger.kernel.org, linux-mm@kvack.org, linux-trace-kernel@vger.kernel.org, linux-perf-users@vger.kernel.org, kvm@vger.kernel.org, linux-efi@vger.kernel.org, Brendan Jackman 
Here ASI actually starts becoming a real exploit mitigation. On CPUs with L1TF, flush L1D when the ASI data taints say so. On all CPUs, do some general branch predictor clearing whenever the control taints say so.

This policy is very much just a starting point for discussion. Primarily it's a vague gesture at the fact that there is leeway in how ASI is used: it can be used to target CPU-specific issues (as is the case for L1TF here), or it can be used as a fairly broad mitigation (asi_maybe_flush_control() mitigates several known Spectre-style attacks and very likely also some unknown ones).
Signed-off-by: Brendan Jackman
---
 arch/x86/include/asm/nospec-branch.h |  2 ++
 arch/x86/kvm/vmx/vmx.c               |  1 +
 arch/x86/lib/l1tf.c                  |  2 ++
 arch/x86/lib/retpoline.S             | 10 ++++++++++
 arch/x86/mm/asi.c                    | 29 +++++++++++++++++++++--------
 5 files changed, 36 insertions(+), 8 deletions(-)

diff --git a/arch/x86/include/asm/nospec-branch.h b/arch/x86/include/asm/nospec-branch.h
index 96b410b1d4e841eb02f53a4691ee794ceee4ad2c..4582fb1fb42f6fd226534012d969ed13085e943a 100644
--- a/arch/x86/include/asm/nospec-branch.h
+++ b/arch/x86/include/asm/nospec-branch.h
@@ -614,6 +614,8 @@ static __always_inline void mds_idle_clear_cpu_buffers(void)
 		mds_clear_cpu_buffers();
 }
 
+extern void fill_return_buffer(void);
+
 #endif /* __ASSEMBLY__ */
 
 #endif /* _ASM_X86_NOSPEC_BRANCH_H_ */

diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index b1a02f27b3abce0ef6ac448b66bef2c653a52eef..a532783caaea97291cd92a2e2cac617f74f76c7e 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -6635,6 +6635,7 @@ int vmx_handle_exit(struct kvm_vcpu *vcpu, fastpath_t exit_fastpath)
 	return ret;
 }
 
+/* Must be reentrant, for use by vmx_post_asi_enter. */
 static noinstr void vmx_l1d_flush(struct kvm_vcpu *vcpu)
 {
 	/*

diff --git a/arch/x86/lib/l1tf.c b/arch/x86/lib/l1tf.c
index c474f18ae331c8dfa7a029c457dd3cf75bebf808..ffe1c3d0ef43ff8f1781f2e446aed041f4ce3179 100644
--- a/arch/x86/lib/l1tf.c
+++ b/arch/x86/lib/l1tf.c
@@ -46,6 +46,8 @@ EXPORT_SYMBOL(l1tf_flush_setup);
  * - may or may not work on other CPUs.
 *
 * Don't call unless l1tf_flush_setup() has returned successfully.
+ *
+ * Must be reentrant, for use by ASI.
 */
 noinstr void l1tf_flush(void)
 {

diff --git a/arch/x86/lib/retpoline.S b/arch/x86/lib/retpoline.S
index 391059b2c6fbc4a571f0582c7c4654147a930cef..6d126fff6bf839889086fe21464d8af07316d7e5 100644
--- a/arch/x86/lib/retpoline.S
+++ b/arch/x86/lib/retpoline.S
@@ -396,3 +396,13 @@ SYM_CODE_END(__x86_return_thunk)
 EXPORT_SYMBOL(__x86_return_thunk)
 
 #endif /* CONFIG_MITIGATION_RETHUNK */
+
+.pushsection .noinstr.text, "ax"
+SYM_CODE_START(fill_return_buffer)
+	UNWIND_HINT_FUNC
+	ENDBR
+	__FILL_RETURN_BUFFER(%_ASM_AX,RSB_CLEAR_LOOPS)
+	RET
+SYM_CODE_END(fill_return_buffer)
+__EXPORT_THUNK(fill_return_buffer)
+.popsection

diff --git a/arch/x86/mm/asi.c b/arch/x86/mm/asi.c
index 1e9dc568e79e8686a4dbf47f765f2c2535d025ec..f10f6614b26148e5ba423d8a44f640674573ee40 100644
--- a/arch/x86/mm/asi.c
+++ b/arch/x86/mm/asi.c
@@ -10,6 +10,7 @@
 #include
 #include
+#include
 #include
 #include
 #include
@@ -38,6 +39,8 @@ struct asi __asi_global_nonsensitive = {
 	.mm = &init_mm,
 };
 
+static bool do_l1tf_flush __ro_after_init;
+
 static inline bool asi_class_id_valid(enum asi_class_id class_id)
 {
 	return class_id >= 0 && class_id < ASI_MAX_NUM_CLASSES;
@@ -361,6 +364,15 @@ static int __init asi_global_init(void)
 	asi_clone_pgd(asi_global_nonsensitive_pgd, init_mm.pgd,
 		      VMEMMAP_START + (1UL << PGDIR_SHIFT));
 
+	if (boot_cpu_has_bug(X86_BUG_L1TF)) {
+		int err = l1tf_flush_setup();
+
+		if (err)
+			pr_warn("Failed to setup L1TF flushing for ASI (%pe)", ERR_PTR(err));
+		else
+			do_l1tf_flush = true;
+	}
+
 #ifdef CONFIG_PM_SLEEP
 	register_syscore_ops(&asi_syscore_ops);
 #endif
@@ -512,10 +524,12 @@ static __always_inline void maybe_flush_control(struct asi *next_asi)
 	if (!taints)
 		return;
 
-	/*
-	 * This is where we'll do the actual dirty work of clearing uarch state.
-	 * For now we just pretend, clear the taints.
-	 */
+	/* Clear normal indirect branch predictions, if we haven't */
+	if (cpu_feature_enabled(X86_FEATURE_IBPB))
+		__wrmsr(MSR_IA32_PRED_CMD, PRED_CMD_IBPB, 0);
+
+	fill_return_buffer();
+
 	this_cpu_and(asi_taints, ~ASI_TAINTS_CONTROL_MASK);
 }
@@ -536,10 +550,9 @@ static __always_inline void maybe_flush_data(struct asi *next_asi)
 	if (!taints)
 		return;
 
-	/*
-	 * This is where we'll do the actual dirty work of clearing uarch state.
-	 * For now we just pretend, clear the taints.
-	 */
+	if (do_l1tf_flush)
+		l1tf_flush();
+
 	this_cpu_and(asi_taints, ~ASI_TAINTS_DATA_MASK);
 }
Date: Fri, 10 Jan 2025 18:40:54 +0000
In-Reply-To: <20250110-asi-rfc-v2-v2-0-8419288bc805@google.com>
References: <20250110-asi-rfc-v2-v2-0-8419288bc805@google.com>
Message-ID: <20250110-asi-rfc-v2-v2-28-8419288bc805@google.com>
Subject: [PATCH RFC v2 28/29] x86/pti: Disable PTI when ASI is on
From: Brendan Jackman
To: Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen, "H. Peter Anvin", Andy Lutomirski, Peter Zijlstra, Richard Henderson, Matt Turner, Vineet Gupta, Russell King, Catalin Marinas, Will Deacon, Guo Ren, Brian Cain, Huacai Chen, WANG Xuerui, Geert Uytterhoeven, Michal Simek, Thomas Bogendoerfer, Dinh Nguyen, Jonas Bonn, Stefan Kristiansson, Stafford Horne, "James E.J. Bottomley", Helge Deller, Michael Ellerman, Nicholas Piggin, Christophe Leroy, Naveen N Rao, Madhavan Srinivasan, Paul Walmsley, Palmer Dabbelt, Albert Ou, Heiko Carstens, Vasily Gorbik, Alexander Gordeev, Christian Borntraeger, Sven Schnelle, Yoshinori Sato, Rich Felker, John Paul Adrian Glaubitz, "David S. Miller", Andreas Larsson, Richard Weinberger, Anton Ivanov, Johannes Berg, Chris Zankel, Max Filippov, Arnd Bergmann, Andrew Morton, Juri Lelli, Vincent Guittot, Dietmar Eggemann, Steven Rostedt, Ben Segall, Mel Gorman, Valentin Schneider, Uladzislau Rezki, Christoph Hellwig, Masami Hiramatsu, Mathieu Desnoyers, Mike Rapoport, Arnaldo Carvalho de Melo, Namhyung Kim, Mark Rutland, Alexander Shishkin, Jiri Olsa, Ian Rogers, Adrian Hunter, Dennis Zhou, Tejun Heo, Christoph Lameter, Sean Christopherson, Paolo Bonzini, Ard Biesheuvel, Josh Poimboeuf, Pawan Gupta
Cc: x86@kernel.org, linux-kernel@vger.kernel.org, linux-alpha@vger.kernel.org, linux-snps-arc@lists.infradead.org, linux-arm-kernel@lists.infradead.org, linux-csky@vger.kernel.org, linux-hexagon@vger.kernel.org, loongarch@lists.linux.dev, linux-m68k@lists.linux-m68k.org, linux-mips@vger.kernel.org, linux-openrisc@vger.kernel.org, linux-parisc@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, linux-riscv@lists.infradead.org, linux-s390@vger.kernel.org, linux-sh@vger.kernel.org, sparclinux@vger.kernel.org, linux-um@lists.infradead.org, linux-arch@vger.kernel.org, linux-mm@kvack.org, linux-trace-kernel@vger.kernel.org, linux-perf-users@vger.kernel.org, kvm@vger.kernel.org, linux-efi@vger.kernel.org, Brendan Jackman

Now that ASI has support for sandboxing userspace, PTI is no longer needed: although userspace now has much more mapped than it would under KPTI, in theory none of that data is important to protect.

Note that one particular impact of this is that it makes locally defeating KASLR easier. I don't think this is a great loss given [1] etc.

Why do we pass in an argument instead of just having pti_check_boottime_disable() check boot_cpu_has(X86_FEATURE_ASI)? Just for clarity: I wanted it to be at least _sort of_ visible that it would break if you reordered asi_check_boottime_disable() afterwards.
[1]: https://gruss.cc/files/prefetch.pdf and https://dl.acm.org/doi/pdf/10.1145/3623652.3623669

Signed-off-by: Brendan Jackman
---
 arch/x86/include/asm/pti.h |  6 ++++--
 arch/x86/mm/init.c         |  2 +-
 arch/x86/mm/pti.c          | 14 +++++++++++++-
 3 files changed, 18 insertions(+), 4 deletions(-)

diff --git a/arch/x86/include/asm/pti.h b/arch/x86/include/asm/pti.h
index ab167c96b9ab474b33d778453db0bb550f42b0ac..79b9ba927db9b76ac3cc72cdda6f8b5fc413d352 100644
--- a/arch/x86/include/asm/pti.h
+++ b/arch/x86/include/asm/pti.h
@@ -3,12 +3,14 @@
 #define _ASM_X86_PTI_H
 #ifndef __ASSEMBLY__
+#include
+
 #ifdef CONFIG_MITIGATION_PAGE_TABLE_ISOLATION
 extern void pti_init(void);
-extern void pti_check_boottime_disable(void);
+extern void pti_check_boottime_disable(bool asi_enabled);
 extern void pti_finalize(void);
 #else
-static inline void pti_check_boottime_disable(void) { }
+static inline void pti_check_boottime_disable(bool asi_enabled) { }
 #endif
 #endif /* __ASSEMBLY__ */

diff --git a/arch/x86/mm/init.c b/arch/x86/mm/init.c
index ded3a47f2a9c1f554824d4ad19f3b48bce271274..4ccf6d60705652805342abefc5e71cd00c563207 100644
--- a/arch/x86/mm/init.c
+++ b/arch/x86/mm/init.c
@@ -754,8 +754,8 @@ void __init init_mem_mapping(void)
 {
 	unsigned long end;
 
-	pti_check_boottime_disable();
 	asi_check_boottime_disable();
+	pti_check_boottime_disable(boot_cpu_has(X86_FEATURE_ASI));
 
 	probe_page_size_mask();
 	setup_pcid();

diff --git a/arch/x86/mm/pti.c b/arch/x86/mm/pti.c
index 851ec8f1363a8b389ea4579cc68bf3300a4df27c..b7132080d3c9b6962a0252383190335e171bafa6 100644
--- a/arch/x86/mm/pti.c
+++ b/arch/x86/mm/pti.c
@@ -76,7 +76,7 @@ static enum pti_mode {
 	PTI_FORCE_ON
 } pti_mode;
 
-void __init pti_check_boottime_disable(void)
+void __init pti_check_boottime_disable(bool asi_enabled)
 {
 	if (hypervisor_is_type(X86_HYPER_XEN_PV)) {
 		pti_mode = PTI_FORCE_OFF;
@@ -91,6 +91,18 @@ void __init pti_check_boottime_disable(void)
 		return;
 	}
 
+	if (asi_enabled) {
+		/*
+		 * Having both ASI and PTI enabled is not a totally ridiculous
+		 * thing to do; if you want ASI but you are not confident in the
+		 * sensitivity annotations then it provides useful
+		 * defence-in-depth. But, the implementation doesn't support it.
+		 */
+		if (pti_mode != PTI_FORCE_OFF)
+			pti_print_if_insecure("disabled by ASI");
+		return;
+	}
+
 	if (pti_mode == PTI_FORCE_ON)
 		pti_print_if_secure("force enabled on command line.");
Date: Fri, 10 Jan 2025 18:40:55 +0000
In-Reply-To: <20250110-asi-rfc-v2-v2-0-8419288bc805@google.com>
References: <20250110-asi-rfc-v2-v2-0-8419288bc805@google.com>
Message-ID: <20250110-asi-rfc-v2-v2-29-8419288bc805@google.com>
Subject: [PATCH RFC v2 29/29] mm: asi: Stop ignoring asi=on cmdline flag
From: Brendan Jackman
At this point the minimum requirements are in place for the kernel to operate correctly with ASI enabled.

Signed-off-by: Brendan Jackman
---
 arch/x86/mm/asi.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/arch/x86/mm/asi.c b/arch/x86/mm/asi.c
index f10f6614b26148e5ba423d8a44f640674573ee40..3e3956326936ea8550308ad004dbbb3738546f9f 100644
--- a/arch/x86/mm/asi.c
+++ b/arch/x86/mm/asi.c
@@ -207,14 +207,14 @@ void __init asi_check_boottime_disable(void)
 		pr_info("ASI disabled through kernel command line.\n");
 	} else if (ret == 2 && !strncmp(arg, "on", 2)) {
 		enabled = true;
-		pr_info("Ignoring asi=on param while ASI implementation is incomplete.\n");
+		pr_info("ASI enabled through kernel command line.\n");
 	} else {
 		pr_info("ASI %s by default.\n",
 			enabled ? "enabled" : "disabled");
 	}
 
 	if (enabled)
-		pr_info("ASI enablement ignored due to incomplete implementation.\n");
+		setup_force_cpu_cap(X86_FEATURE_ASI);
 }
 
 /*