From patchwork Fri Feb 21 00:53:04 2025
X-Patchwork-Submitter: Rik van Riel
X-Patchwork-Id: 13984689
From: Rik van Riel
To: x86@kernel.org
Cc: linux-kernel@vger.kernel.org, bp@alien8.de, peterz@infradead.org,
    dave.hansen@linux.intel.com, zhengqi.arch@bytedance.com,
    nadav.amit@gmail.com, thomas.lendacky@amd.com, kernel-team@meta.com,
    linux-mm@kvack.org, akpm@linux-foundation.org, jackmanb@google.com,
    jannh@google.com, mhklinux@outlook.com, andrew.cooper3@citrix.com,
    Manali.Shukla@amd.com, Rik van Riel, Dave Hansen
Subject: [PATCH v12 05/16] x86/mm: add INVLPGB support code
Date: Thu, 20 Feb 2025 19:53:04 -0500
Message-ID: <20250221005345.2156760-6-riel@surriel.com>
X-Mailer: git-send-email 2.47.1
In-Reply-To: <20250221005345.2156760-1-riel@surriel.com>
References: <20250221005345.2156760-1-riel@surriel.com>

Add the helper functions and definitions needed to use broadcast TLB
invalidation on AMD EPYC 3 and newer CPUs to <asm/tlb.h>.

All the functions added here are used later in the series.

Disabling X86_FEATURE_INVLPGB at compile time when the config option is
not set allows the compiler to omit the unnecessary code.

Signed-off-by: Rik van Riel
Tested-by: Manali Shukla
Tested-by: Brendan Jackman
Tested-by: Michael Kelley
Acked-by: Dave Hansen
---
 arch/x86/include/asm/disabled-features.h |  9 ++-
 arch/x86/include/asm/tlb.h               | 92 ++++++++++++++++++++++++
 2 files changed, 100 insertions(+), 1 deletion(-)

diff --git a/arch/x86/include/asm/disabled-features.h b/arch/x86/include/asm/disabled-features.h
index c492bdc97b05..95997caf0935 100644
--- a/arch/x86/include/asm/disabled-features.h
+++ b/arch/x86/include/asm/disabled-features.h
@@ -129,6 +129,13 @@
 #define DISABLE_SEV_SNP (1 << (X86_FEATURE_SEV_SNP & 31))
 #endif
 
+#ifdef CONFIG_X86_BROADCAST_TLB_FLUSH
+#define DISABLE_INVLPGB 0
+#else
+/* Keep 32 bit kernels smaller by compiling out the INVLPGB code. */
+#define DISABLE_INVLPGB (1 << (X86_FEATURE_INVLPGB & 31))
+#endif
+
 /*
  * Make sure to add features to the correct mask
  */
@@ -146,7 +153,7 @@
 #define DISABLED_MASK11 (DISABLE_RETPOLINE|DISABLE_RETHUNK|DISABLE_UNRET| \
			  DISABLE_CALL_DEPTH_TRACKING|DISABLE_USER_SHSTK)
 #define DISABLED_MASK12 (DISABLE_FRED|DISABLE_LAM)
-#define DISABLED_MASK13 0
+#define DISABLED_MASK13 (DISABLE_INVLPGB)
 #define DISABLED_MASK14 0
 #define DISABLED_MASK15 0
 #define DISABLED_MASK16 (DISABLE_PKU|DISABLE_OSPKE|DISABLE_LA57|DISABLE_UMIP| \
diff --git a/arch/x86/include/asm/tlb.h b/arch/x86/include/asm/tlb.h
index 77f52bc1578a..b3cd521e5e2f 100644
--- a/arch/x86/include/asm/tlb.h
+++ b/arch/x86/include/asm/tlb.h
@@ -6,6 +6,9 @@
 static inline void tlb_flush(struct mmu_gather *tlb);
 
 #include <asm-generic/tlb.h>
+#include <linux/kernel.h>
+#include <vdso/bits.h>
+#include <vdso/page.h>
 
 static inline void tlb_flush(struct mmu_gather *tlb)
 {
@@ -25,4 +28,93 @@ static inline void invlpg(unsigned long addr)
 	asm volatile("invlpg (%0)" ::"r" (addr) : "memory");
 }
 
+
+/*
+ * INVLPGB does broadcast TLB invalidation across all the CPUs in the system.
+ *
+ * The INVLPGB instruction is weakly ordered, and a batch of invalidations can
+ * be done in a parallel fashion.
+ *
+ * The instruction takes the number of extra pages to invalidate, beyond
+ * the first page, while __invlpgb gets the more human readable number of
+ * pages to invalidate.
+ *
+ * TLBSYNC is used to ensure that pending INVLPGB invalidations initiated from
+ * this CPU have completed.
+ */
+static inline void __invlpgb(unsigned long asid, unsigned long pcid,
+			     unsigned long addr, u16 nr_pages,
+			     bool pmd_stride, u8 flags)
+{
+	u32 edx = (pcid << 16) | asid;
+	u32 ecx = (pmd_stride << 31) | (nr_pages - 1);
+	u64 rax = addr | flags;
+
+	/* The low bits in rax are for flags. Verify addr is clean. */
+	VM_WARN_ON_ONCE(addr & ~PAGE_MASK);
+
+	/* INVLPGB; supported in binutils >= 2.36. */
+	asm volatile(".byte 0x0f, 0x01, 0xfe" : : "a" (rax), "c" (ecx), "d" (edx));
+}
+
+/* Wait for INVLPGB originated by this CPU to complete. */
+static inline void __tlbsync(void)
+{
+	cant_migrate();
+	/* TLBSYNC: supported in binutils >= 2.36. */
+	asm volatile(".byte 0x0f, 0x01, 0xff" ::: "memory");
+}
+
+/*
+ * INVLPGB can be targeted by virtual address, PCID, ASID, or any combination
+ * of the three. For example:
+ * - INVLPGB_VA | INVLPGB_INCLUDE_GLOBAL: invalidate all TLB entries at the address
+ * - INVLPGB_PCID: invalidate all TLB entries matching the PCID
+ *
+ * The first can be used to invalidate (kernel) mappings at a particular
+ * address across all processes.
+ *
+ * The latter invalidates all TLB entries matching a PCID.
+ */
+#define INVLPGB_VA BIT(0)
+#define INVLPGB_PCID BIT(1)
+#define INVLPGB_ASID BIT(2)
+#define INVLPGB_INCLUDE_GLOBAL BIT(3)
+#define INVLPGB_FINAL_ONLY BIT(4)
+#define INVLPGB_INCLUDE_NESTED BIT(5)
+
+static inline void invlpgb_flush_user_nr_nosync(unsigned long pcid,
+						unsigned long addr,
+						u16 nr,
+						bool pmd_stride)
+{
+	__invlpgb(0, pcid, addr, nr, pmd_stride, INVLPGB_PCID | INVLPGB_VA);
+}
+
+/* Flush all mappings for a given PCID, not including globals. */
+static inline void invlpgb_flush_single_pcid_nosync(unsigned long pcid)
+{
+	__invlpgb(0, pcid, 0, 1, 0, INVLPGB_PCID);
+}
+
+/* Flush all mappings, including globals, for all PCIDs. */
+static inline void invlpgb_flush_all(void)
+{
+	__invlpgb(0, 0, 0, 1, 0, INVLPGB_INCLUDE_GLOBAL);
+	__tlbsync();
+}
+
+/* Flush addr, including globals, for all PCIDs. */
+static inline void invlpgb_flush_addr_nosync(unsigned long addr, u16 nr)
+{
+	__invlpgb(0, 0, addr, nr, 0, INVLPGB_INCLUDE_GLOBAL);
+}
+
+/* Flush all mappings for all PCIDs except globals. */
+static inline void invlpgb_flush_all_nonglobals(void)
+{
+	__invlpgb(0, 0, 0, 1, 0, 0);
+	__tlbsync();
+}
+
 #endif /* _ASM_X86_TLB_H */
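
For illustration only, and not part of the diff above: a minimal sketch of
how a caller might pair one of the nosync helpers with __tlbsync once later
patches in the series wire them up. The function name and its parameters are
hypothetical, the range is assumed to be small and page aligned, and the
caller is assumed to have already prevented migration off this CPU; only
<asm/tlb.h> as modified by this patch is assumed.

static void example_broadcast_flush_range(unsigned long pcid,
					  unsigned long start,
					  unsigned long end)
{
	/* Human readable page count, as __invlpgb expects; assumed to fit in a u16. */
	u16 nr = (end - start) >> PAGE_SHIFT;

	/*
	 * Queue a weakly ordered broadcast invalidation of the range for
	 * this PCID on all CPUs. Nothing waits here.
	 */
	invlpgb_flush_user_nr_nosync(pcid, start, nr, false);

	/* Wait for the INVLPGBs issued by this CPU to complete. */
	__tlbsync();
}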