From patchwork Wed Feb 26 03:00:42 2025
X-Patchwork-Submitter: Rik van Riel
X-Patchwork-Id: 13991449
From: Rik van Riel
To: x86@kernel.org
Cc: linux-kernel@vger.kernel.org, bp@alien8.de, peterz@infradead.org,
 dave.hansen@linux.intel.com, zhengqi.arch@bytedance.com,
 nadav.amit@gmail.com, thomas.lendacky@amd.com, kernel-team@meta.com,
 linux-mm@kvack.org, akpm@linux-foundation.org, jackmanb@google.com,
 jannh@google.com, mhklinux@outlook.com, andrew.cooper3@citrix.com,
 Manali.Shukla@amd.com, mingo@kernel.org, Rik van Riel
Subject: [PATCH v14 07/13] x86/mm: add global ASID allocation helper functions
Date: Tue, 25 Feb 2025 22:00:42 -0500
Message-ID: <20250226030129.530345-8-riel@surriel.com>
In-Reply-To: <20250226030129.530345-1-riel@surriel.com>
References: <20250226030129.530345-1-riel@surriel.com>
Add functions to manage global ASID space. Multithreaded processes that
are simultaneously active on 4 or more CPUs can get a global ASID,
resulting in the same PCID being used for that process on every CPU.

This in turn will allow the kernel to use hardware-assisted TLB flushing
through AMD INVLPGB or Intel RAR for these processes.
Signed-off-by: Rik van Riel
Tested-by: Manali Shukla
Tested-by: Brendan Jackman
Tested-by: Michael Kelley
---
 arch/x86/include/asm/mmu.h         |  11 +++
 arch/x86/include/asm/mmu_context.h |   2 +
 arch/x86/include/asm/tlbflush.h    |  43 +++++++++
 arch/x86/mm/tlb.c                  | 146 ++++++++++++++++++++++++++++-
 4 files changed, 199 insertions(+), 3 deletions(-)

diff --git a/arch/x86/include/asm/mmu.h b/arch/x86/include/asm/mmu.h
index 3b496cdcb74b..edb5942d4829 100644
--- a/arch/x86/include/asm/mmu.h
+++ b/arch/x86/include/asm/mmu.h
@@ -69,6 +69,17 @@ typedef struct {
 	u16 pkey_allocation_map;
 	s16 execute_only_pkey;
 #endif
+
+#ifdef CONFIG_X86_BROADCAST_TLB_FLUSH
+	/*
+	 * The global ASID will be a non-zero value when the process has
+	 * the same ASID across all CPUs, allowing it to make use of
+	 * hardware-assisted remote TLB invalidation like AMD INVLPGB.
+	 */
+	u16 global_asid;
+	/* The process is transitioning to a new global ASID number. */
+	bool asid_transition;
+#endif
 } mm_context_t;
 
 #define INIT_MM_CONTEXT(mm) \
diff --git a/arch/x86/include/asm/mmu_context.h b/arch/x86/include/asm/mmu_context.h
index 795fdd53bd0a..a2c70e495b1b 100644
--- a/arch/x86/include/asm/mmu_context.h
+++ b/arch/x86/include/asm/mmu_context.h
@@ -139,6 +139,8 @@ static inline void mm_reset_untag_mask(struct mm_struct *mm)
 #define enter_lazy_tlb enter_lazy_tlb
 extern void enter_lazy_tlb(struct mm_struct *mm, struct task_struct *tsk);
 
+extern void mm_free_global_asid(struct mm_struct *mm);
+
 /*
  * Init a new mm. Used on mm copies, like at fork()
  * and on mm's that are brand-new, like at execve().
diff --git a/arch/x86/include/asm/tlbflush.h b/arch/x86/include/asm/tlbflush.h
index 855c13da2045..8e7df0ed7005 100644
--- a/arch/x86/include/asm/tlbflush.h
+++ b/arch/x86/include/asm/tlbflush.h
@@ -6,6 +6,7 @@
 #include <linux/mm_types.h>
 #include <linux/sched.h>
 
+#include <asm/barrier.h>
 #include <asm/processor.h>
 #include <asm/cpufeature.h>
 #include <asm/special_insns.h>
@@ -234,6 +235,48 @@ void flush_tlb_one_kernel(unsigned long addr);
 void flush_tlb_multi(const struct cpumask *cpumask,
 		      const struct flush_tlb_info *info);
 
+static inline bool is_dyn_asid(u16 asid)
+{
+	return asid < TLB_NR_DYN_ASIDS;
+}
+
+#ifdef CONFIG_X86_BROADCAST_TLB_FLUSH
+static inline u16 mm_global_asid(struct mm_struct *mm)
+{
+	u16 asid;
+
+	if (!cpu_feature_enabled(X86_FEATURE_INVLPGB))
+		return 0;
+
+	asid = smp_load_acquire(&mm->context.global_asid);
+
+	/* mm->context.global_asid is either 0, or a global ASID */
+	VM_WARN_ON_ONCE(asid && is_dyn_asid(asid));
+
+	return asid;
+}
+
+static inline void mm_assign_global_asid(struct mm_struct *mm, u16 asid)
+{
+	/*
+	 * Notably flush_tlb_mm_range() -> broadcast_tlb_flush() ->
+	 * finish_asid_transition() needs to observe asid_transition = true
+	 * once it observes global_asid.
+	 */
+	mm->context.asid_transition = true;
+	smp_store_release(&mm->context.global_asid, asid);
+}
+#else
+static inline u16 mm_global_asid(struct mm_struct *mm)
+{
+	return 0;
+}
+
+static inline void mm_assign_global_asid(struct mm_struct *mm, u16 asid)
+{
+}
+#endif
+
 #ifdef CONFIG_PARAVIRT
 #include <asm/paravirt.h>
 #endif
diff --git a/arch/x86/mm/tlb.c b/arch/x86/mm/tlb.c
index 1cc25e83bd34..9b1652c02452 100644
--- a/arch/x86/mm/tlb.c
+++ b/arch/x86/mm/tlb.c
@@ -74,13 +74,15 @@
  * use different names for each of them:
  *
  * ASID  - [0, TLB_NR_DYN_ASIDS-1]
- *         the canonical identifier for an mm
+ *         the canonical identifier for an mm, dynamically allocated on each CPU
+ *         [TLB_NR_DYN_ASIDS, MAX_ASID_AVAILABLE-1]
+ *         the canonical, global identifier for an mm, identical across all CPUs
  *
- * kPCID - [1, TLB_NR_DYN_ASIDS]
+ * kPCID - [1, MAX_ASID_AVAILABLE]
  *         the value we write into the PCID part of CR3; corresponds to the
  *         ASID+1, because PCID 0 is special.
  *
- * uPCID - [2048 + 1, 2048 + TLB_NR_DYN_ASIDS]
+ * uPCID - [2048 + 1, 2048 + MAX_ASID_AVAILABLE]
  *         for KPTI each mm has two address spaces and thus needs two
  *         PCID values, but we can still do with a single ASID denomination
  *         for each mm. Corresponds to kPCID + 2048.
@@ -251,6 +253,144 @@ static void choose_new_asid(struct mm_struct *next, u64 next_tlb_gen,
 	*need_flush = true;
 }
 
+/*
+ * Global ASIDs are allocated for multi-threaded processes that are
+ * active on multiple CPUs simultaneously, giving each of those
+ * processes the same PCID on every CPU, for use with hardware-assisted
+ * TLB shootdown on remote CPUs, like AMD INVLPGB or Intel RAR.
+ *
+ * These global ASIDs are held for the lifetime of the process.
+ */
+static DEFINE_RAW_SPINLOCK(global_asid_lock);
+static u16 last_global_asid = MAX_ASID_AVAILABLE;
+static DECLARE_BITMAP(global_asid_used, MAX_ASID_AVAILABLE);
+static DECLARE_BITMAP(global_asid_freed, MAX_ASID_AVAILABLE);
+static int global_asid_available = MAX_ASID_AVAILABLE - TLB_NR_DYN_ASIDS - 1;
+
+/*
+ * When the search for a free ASID in the global ASID space reaches
+ * MAX_ASID_AVAILABLE, a global TLB flush guarantees that previously
+ * freed global ASIDs are safe to re-use.
+ *
+ * This way the global flush only needs to happen at ASID rollover
+ * time, and not at ASID allocation time.
+ */
+static void reset_global_asid_space(void)
+{
+	lockdep_assert_held(&global_asid_lock);
+
+	invlpgb_flush_all_nonglobals();
+
+	/*
+	 * The TLB flush above makes it safe to re-use the previously
+	 * freed global ASIDs.
+	 */
+	bitmap_andnot(global_asid_used, global_asid_used,
+			global_asid_freed, MAX_ASID_AVAILABLE);
+	bitmap_clear(global_asid_freed, 0, MAX_ASID_AVAILABLE);
+
+	/* Restart the search from the start of global ASID space. */
+	last_global_asid = TLB_NR_DYN_ASIDS;
+}
+
+static u16 allocate_global_asid(void)
+{
+	u16 asid;
+
+	lockdep_assert_held(&global_asid_lock);
+
+	/* The previous allocation hit the edge of available address space */
+	if (last_global_asid >= MAX_ASID_AVAILABLE - 1)
+		reset_global_asid_space();
+
+	asid = find_next_zero_bit(global_asid_used, MAX_ASID_AVAILABLE, last_global_asid);
+
+	if (asid >= MAX_ASID_AVAILABLE && !global_asid_available) {
+		/* This should never happen. */
+		VM_WARN_ONCE(1, "Unable to allocate global ASID despite %d available\n",
+				global_asid_available);
+		return 0;
+	}
+
+	/* Claim this global ASID. */
+	__set_bit(asid, global_asid_used);
+	last_global_asid = asid;
+	global_asid_available--;
+	return asid;
+}
+
+/*
+ * Check whether a process is currently active on more than @threshold CPUs.
+ * This is a cheap estimation on whether or not it may make sense to assign
+ * a global ASID to this process, and use broadcast TLB invalidation.
+ */
+static bool mm_active_cpus_exceeds(struct mm_struct *mm, int threshold)
+{
+	int count = 0;
+	int cpu;
+
+	/* This quick check should eliminate most single threaded programs. */
+	if (cpumask_weight(mm_cpumask(mm)) <= threshold)
+		return false;
+
+	/* Slower check to make sure. */
+	for_each_cpu(cpu, mm_cpumask(mm)) {
+		/* Skip the CPUs that aren't really running this process. */
+		if (per_cpu(cpu_tlbstate.loaded_mm, cpu) != mm)
+			continue;
+
+		if (per_cpu(cpu_tlbstate_shared.is_lazy, cpu))
+			continue;
+
+		if (++count > threshold)
+			return true;
+	}
+	return false;
+}
+
+/*
+ * Assign a global ASID to the current process, protecting against
+ * races between multiple threads in the process.
+ */
+static void use_global_asid(struct mm_struct *mm)
+{
+	u16 asid;
+
+	guard(raw_spinlock_irqsave)(&global_asid_lock);
+
+	/* This process is already using broadcast TLB invalidation. */
+	if (mm_global_asid(mm))
+		return;
+
+	/* The last global ASID was consumed while waiting for the lock. */
+	if (!global_asid_available) {
+		VM_WARN_ONCE(1, "Ran out of global ASIDs\n");
+		return;
+	}
+
+	asid = allocate_global_asid();
+	if (!asid)
+		return;
+
+	mm_assign_global_asid(mm, asid);
+}
+
+void mm_free_global_asid(struct mm_struct *mm)
+{
+	if (!mm_global_asid(mm))
+		return;
+
+	guard(raw_spinlock_irqsave)(&global_asid_lock);
+
+	/* The global ASID can be re-used only after flush at wrap-around. */
+#ifdef CONFIG_X86_BROADCAST_TLB_FLUSH
+	__set_bit(mm->context.global_asid, global_asid_freed);
+
+	mm->context.global_asid = 0;
+	global_asid_available++;
+#endif
+}
+
 /*
  * Given an ASID, flush the corresponding user ASID. We can delay this
  * until the next time we switch to it.
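The deferred-flush rollover in allocate_global_asid() / reset_global_asid_space() can be modeled in userspace, which may help reviewers sanity-check the lifecycle. This is an illustrative sketch only, not kernel code: the names mirror the patch, but plain bool arrays stand in for the kernel bitmaps, a counter stands in for invlpgb_flush_all_nonglobals(), and locking is omitted.

```c
/* Userspace model of global ASID allocation with rollover-time flushing. */
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

#define TLB_NR_DYN_ASIDS   6	/* small model values, not the real constants */
#define MAX_ASID_AVAILABLE 16

static bool asid_used[MAX_ASID_AVAILABLE];
static bool asid_freed[MAX_ASID_AVAILABLE];
static uint16_t last_global_asid = MAX_ASID_AVAILABLE;
static int flush_count;		/* stands in for invlpgb_flush_all_nonglobals() */

static void reset_global_asid_space(void)
{
	/* One global TLB flush per rollover, not one per allocation. */
	flush_count++;

	/* The flush makes previously freed ASIDs safe to re-use. */
	for (int i = 0; i < MAX_ASID_AVAILABLE; i++) {
		if (asid_freed[i])
			asid_used[i] = false;
		asid_freed[i] = false;
	}

	/* Restart the search above the dynamic ASID range. */
	last_global_asid = TLB_NR_DYN_ASIDS;
}

static uint16_t allocate_global_asid(void)
{
	/* The previous allocation hit the edge of the space. */
	if (last_global_asid >= MAX_ASID_AVAILABLE - 1)
		reset_global_asid_space();

	for (uint16_t asid = last_global_asid; asid < MAX_ASID_AVAILABLE; asid++) {
		if (!asid_used[asid]) {
			asid_used[asid] = true;
			last_global_asid = asid;
			return asid;
		}
	}
	return 0;	/* exhausted; 0 is never a valid global ASID */
}

static void free_global_asid(uint16_t asid)
{
	/* Only re-usable after the flush at the next wrap-around. */
	asid_freed[asid] = true;
}
```

The key property the model shows: a freed ASID is not handed out again until a rollover has flushed the TLBs, so stale translations tagged with it can never be hit.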
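The two-step estimate in mm_active_cpus_exceeds() (cheap cpumask weight first, then a precise per-CPU scan that discounts lazy CPUs and stale mask bits) can likewise be sketched outside the kernel. Everything here is a hypothetical model: `struct cpu_state`, `in_mask`, and `NR_CPUS` are invented stand-ins for mm_cpumask(), cpu_tlbstate.loaded_mm and cpu_tlbstate_shared.is_lazy.

```c
/* Userspace model of the "active on more than @threshold CPUs" check. */
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

#define NR_CPUS 8

struct cpu_state {
	const void *loaded_mm;	/* models per-CPU cpu_tlbstate.loaded_mm */
	bool is_lazy;		/* models cpu_tlbstate_shared.is_lazy */
	bool in_mask;		/* models the bit in mm_cpumask(mm) */
};

static bool mm_active_cpus_exceeds(const void *mm,
				   const struct cpu_state *cpus, int threshold)
{
	int weight = 0, count = 0;

	/* Quick check: mask weight eliminates most single-threaded programs. */
	for (int cpu = 0; cpu < NR_CPUS; cpu++)
		weight += cpus[cpu].in_mask;
	if (weight <= threshold)
		return false;

	/* Slower check: only count CPUs actively running this mm. */
	for (int cpu = 0; cpu < NR_CPUS; cpu++) {
		if (!cpus[cpu].in_mask)
			continue;
		if (cpus[cpu].loaded_mm != mm || cpus[cpu].is_lazy)
			continue;
		if (++count > threshold)
			return true;
	}
	return false;
}
```

Note the asymmetry this models: the cpumask can overcount (bits go stale and lazy CPUs remain set), so a cheap weight check can only rule processes *out*; ruling them *in* needs the per-CPU scan.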