From patchwork Wed Feb 26 03:00:46 2025
X-Patchwork-Submitter: Rik van Riel
X-Patchwork-Id: 13991454
From: Rik van Riel
To: x86@kernel.org
Cc: linux-kernel@vger.kernel.org, bp@alien8.de, peterz@infradead.org,
    dave.hansen@linux.intel.com, zhengqi.arch@bytedance.com,
    nadav.amit@gmail.com, thomas.lendacky@amd.com, kernel-team@meta.com,
    linux-mm@kvack.org, akpm@linux-foundation.org, jackmanb@google.com,
    jannh@google.com, mhklinux@outlook.com, andrew.cooper3@citrix.com,
    Manali.Shukla@amd.com, mingo@kernel.org, Rik van Riel
Subject: [PATCH v14 11/13] x86/mm: do targeted broadcast flushing from tlbbatch code
Date: Tue, 25 Feb 2025 22:00:46 -0500
Message-ID: <20250226030129.530345-12-riel@surriel.com>
In-Reply-To: <20250226030129.530345-1-riel@surriel.com>
References: <20250226030129.530345-1-riel@surriel.com>

Instead of doing a system-wide TLB flush from arch_tlbbatch_flush,
queue up asynchronous, targeted flushes from arch_tlbbatch_add_pending.

This also allows us to avoid adding the CPUs of processes using broadcast
flushing to the batch->cpumask, and will hopefully further reduce TLB
flushing from the reclaim and compaction paths.

Signed-off-by: Rik van Riel
Tested-by: Manali Shukla
Tested-by: Brendan Jackman
Tested-by: Michael Kelley
---
 arch/x86/include/asm/tlb.h      | 12 ++---
 arch/x86/include/asm/tlbflush.h | 34 ++++++++++----
 arch/x86/mm/tlb.c               | 79 +++++++++++++++++++++++++++++++--
 3 files changed, 107 insertions(+), 18 deletions(-)
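
As an aside for readers skimming the diff below: the core of the change is the
per-page decision made in arch_tlbbatch_add_pending(). A minimal user-space
sketch of that decision flow follows; the types and helpers here (cpumask_t as
a plain bitmask, queue_broadcast_flush(), the struct layouts) are simplified,
hypothetical stand-ins and not the kernel's real definitions.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Simplified stand-ins for the kernel types used by the batch flush code. */
typedef uint64_t cpumask_t;		/* one bit per CPU */

struct mm_struct {
	uint16_t  global_asid;		/* 0 means no global ASID assigned */
	bool      asid_transition;	/* some CPUs still on a local ASID */
	cpumask_t cpus;			/* CPUs this mm has run on */
};

struct flush_batch {
	cpumask_t cpumask;		/* CPUs that will need an IPI flush */
};

/* Models a fire-and-forget INVLPGB; a later TLBSYNC waits for completion. */
static void queue_broadcast_flush(unsigned int asid, unsigned long uaddr)
{
	printf("broadcast flush: asid=%u addr=%#lx\n", asid, uaddr);
}

/* Mirrors the shape of the new arch_tlbbatch_add_pending(). */
static void batch_add_pending(struct flush_batch *batch, struct mm_struct *mm,
			      unsigned long uaddr)
{
	uint16_t asid = mm->global_asid;

	if (asid) {
		/* Targeted, asynchronous broadcast flush for this address. */
		queue_broadcast_flush(asid, uaddr);

		/* Stragglers still using a local ASID also need IPIs. */
		if (mm->asid_transition)
			asid = 0;
	}

	if (!asid)
		batch->cpumask |= mm->cpus;	/* fall back to IPI batching */
}

int main(void)
{
	struct flush_batch batch = { 0 };
	struct mm_struct global_mm = { .global_asid = 5, .cpus = 0x3 };
	struct mm_struct local_mm  = { .global_asid = 0, .cpus = 0xc };

	batch_add_pending(&batch, &global_mm, 0x1000);	/* adds no CPUs */
	batch_add_pending(&batch, &local_mm,  0x2000);	/* adds CPUs 2 and 3 */
	printf("cpumask needing IPIs: %#lx\n", (unsigned long)batch.cpumask);
	return 0;
}

The point of the split is that an mm with a global ASID never contributes CPUs
to batch->cpumask, so the IPI-based flush in arch_tlbbatch_flush() only targets
CPUs that really need it; the asid_transition case keeps both paths correct
while CPUs are still migrating to the global ASID.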
diff --git a/arch/x86/include/asm/tlb.h b/arch/x86/include/asm/tlb.h
index 91c9a4da3ace..e645884a1877 100644
--- a/arch/x86/include/asm/tlb.h
+++ b/arch/x86/include/asm/tlb.h
@@ -89,16 +89,16 @@ static inline void __tlbsync(void)
 #define INVLPGB_FINAL_ONLY		BIT(4)
 #define INVLPGB_INCLUDE_NESTED		BIT(5)
 
-static inline void invlpgb_flush_user_nr_nosync(unsigned long pcid,
-						 unsigned long addr,
-						 u16 nr,
-						 bool pmd_stride)
+static inline void __invlpgb_flush_user_nr_nosync(unsigned long pcid,
+						   unsigned long addr,
+						   u16 nr,
+						   bool pmd_stride)
 {
 	__invlpgb(0, pcid, addr, nr, pmd_stride, INVLPGB_PCID | INVLPGB_VA);
 }
 
 /* Flush all mappings for a given PCID, not including globals. */
-static inline void invlpgb_flush_single_pcid_nosync(unsigned long pcid)
+static inline void __invlpgb_flush_single_pcid_nosync(unsigned long pcid)
 {
 	__invlpgb(0, pcid, 0, 1, 0, INVLPGB_PCID);
 }
@@ -111,7 +111,7 @@ static inline void invlpgb_flush_all(void)
 }
 
 /* Flush addr, including globals, for all PCIDs. */
-static inline void invlpgb_flush_addr_nosync(unsigned long addr, u16 nr)
+static inline void __invlpgb_flush_addr_nosync(unsigned long addr, u16 nr)
 {
 	__invlpgb(0, 0, addr, nr, 0, INVLPGB_INCLUDE_GLOBAL);
 }
diff --git a/arch/x86/include/asm/tlbflush.h b/arch/x86/include/asm/tlbflush.h
index 811dd70eb6b8..22462bd4b1ee 100644
--- a/arch/x86/include/asm/tlbflush.h
+++ b/arch/x86/include/asm/tlbflush.h
@@ -105,6 +105,9 @@ struct tlb_state {
 	 * need to be invalidated.
 	 */
 	bool invalidate_other;
+#ifdef CONFIG_X86_BROADCAST_TLB_FLUSH
+	bool need_tlbsync;
+#endif
 
 #ifdef CONFIG_ADDRESS_MASKING
 	/*
@@ -284,6 +287,16 @@ static inline bool in_asid_transition(struct mm_struct *mm)
 	return mm && READ_ONCE(mm->context.asid_transition);
 }
 
+static inline bool cpu_need_tlbsync(void)
+{
+	return this_cpu_read(cpu_tlbstate.need_tlbsync);
+}
+
+static inline void cpu_write_tlbsync(bool state)
+{
+	this_cpu_write(cpu_tlbstate.need_tlbsync, state);
+}
 #else
 static inline u16 mm_global_asid(struct mm_struct *mm)
 {
@@ -302,6 +315,15 @@ static inline bool in_asid_transition(struct mm_struct *mm)
 {
 	return false;
 }
+
+static inline bool cpu_need_tlbsync(void)
+{
+	return false;
+}
+
+static inline void cpu_write_tlbsync(bool state)
+{
+}
 #endif
 
 #ifdef CONFIG_PARAVIRT
@@ -351,21 +373,15 @@ static inline u64 inc_mm_tlb_gen(struct mm_struct *mm)
 	return atomic64_inc_return(&mm->context.tlb_gen);
 }
 
-static inline void arch_tlbbatch_add_pending(struct arch_tlbflush_unmap_batch *batch,
-					     struct mm_struct *mm,
-					     unsigned long uaddr)
-{
-	inc_mm_tlb_gen(mm);
-	cpumask_or(&batch->cpumask, &batch->cpumask, mm_cpumask(mm));
-	mmu_notifier_arch_invalidate_secondary_tlbs(mm, 0, -1UL);
-}
-
 static inline void arch_flush_tlb_batched_pending(struct mm_struct *mm)
 {
 	flush_tlb_mm(mm);
 }
 
 extern void arch_tlbbatch_flush(struct arch_tlbflush_unmap_batch *batch);
+extern void arch_tlbbatch_add_pending(struct arch_tlbflush_unmap_batch *batch,
+				      struct mm_struct *mm,
+				      unsigned long uaddr);
 
 static inline bool pte_flags_need_flush(unsigned long oldflags,
 					unsigned long newflags,
diff --git a/arch/x86/mm/tlb.c b/arch/x86/mm/tlb.c
index cd109bdf0dd9..4d56d22b9893 100644
--- a/arch/x86/mm/tlb.c
+++ b/arch/x86/mm/tlb.c
@@ -487,6 +487,37 @@ static void finish_asid_transition(struct flush_tlb_info *info)
 	clear_asid_transition(mm);
 }
 
+static inline void tlbsync(void)
+{
+	if (!cpu_need_tlbsync())
+		return;
+	__tlbsync();
+	cpu_write_tlbsync(false);
+}
+
+static inline void invlpgb_flush_user_nr_nosync(unsigned long pcid,
+						unsigned long addr,
+						u16 nr, bool pmd_stride)
+{
+	__invlpgb_flush_user_nr_nosync(pcid, addr, nr, pmd_stride);
+	if (!cpu_need_tlbsync())
+		cpu_write_tlbsync(true);
+}
+
+static inline void invlpgb_flush_single_pcid_nosync(unsigned long pcid)
+{
+	__invlpgb_flush_single_pcid_nosync(pcid);
+	if (!cpu_need_tlbsync())
+		cpu_write_tlbsync(true);
+}
+
+static inline void invlpgb_flush_addr_nosync(unsigned long addr, u16 nr)
+{
+	__invlpgb_flush_addr_nosync(addr, nr);
+	if (!cpu_need_tlbsync())
+		cpu_write_tlbsync(true);
+}
+
 static void broadcast_tlb_flush(struct flush_tlb_info *info)
 {
 	bool pmd = info->stride_shift == PMD_SHIFT;
@@ -785,6 +816,8 @@ void switch_mm_irqs_off(struct mm_struct *unused, struct mm_struct *next,
 	if (IS_ENABLED(CONFIG_PROVE_LOCKING))
 		WARN_ON_ONCE(!irqs_disabled());
 
+	tlbsync();
+
 	/*
 	 * Verify that CR3 is what we think it is. This will catch
 	 * hypothetical buggy code that directly switches to swapper_pg_dir
@@ -961,6 +994,8 @@ void switch_mm_irqs_off(struct mm_struct *unused, struct mm_struct *next,
  */
 void enter_lazy_tlb(struct mm_struct *mm, struct task_struct *tsk)
 {
+	tlbsync();
+
 	if (this_cpu_read(cpu_tlbstate.loaded_mm) == &init_mm)
 		return;
 
@@ -1624,9 +1659,7 @@ void arch_tlbbatch_flush(struct arch_tlbflush_unmap_batch *batch)
 	 * a local TLB flush is needed. Optimize this use-case by calling
 	 * flush_tlb_func_local() directly in this case.
 	 */
-	if (cpu_feature_enabled(X86_FEATURE_INVLPGB)) {
-		invlpgb_flush_all_nonglobals();
-	} else if (cpumask_any_but(&batch->cpumask, cpu) < nr_cpu_ids) {
+	if (cpumask_any_but(&batch->cpumask, cpu) < nr_cpu_ids) {
 		flush_tlb_multi(&batch->cpumask, info);
 	} else if (cpumask_test_cpu(cpu, &batch->cpumask)) {
 		lockdep_assert_irqs_enabled();
@@ -1635,12 +1668,52 @@ void arch_tlbbatch_flush(struct arch_tlbflush_unmap_batch *batch)
 		local_irq_enable();
 	}
 
+	/*
+	 * If we issued (asynchronous) INVLPGB flushes, wait for them here.
+	 * The cpumask above contains only CPUs that were running tasks
+	 * not using broadcast TLB flushing.
+	 */
+	tlbsync();
+
 	cpumask_clear(&batch->cpumask);
 
 	put_flush_tlb_info();
 	put_cpu();
 }
 
+void arch_tlbbatch_add_pending(struct arch_tlbflush_unmap_batch *batch,
+			       struct mm_struct *mm,
+			       unsigned long uaddr)
+{
+	u16 asid = mm_global_asid(mm);
+
+	if (asid) {
+		invlpgb_flush_user_nr_nosync(kern_pcid(asid), uaddr, 1, false);
+		/* Do any CPUs supporting INVLPGB need PTI? */
+		if (static_cpu_has(X86_FEATURE_PTI))
+			invlpgb_flush_user_nr_nosync(user_pcid(asid), uaddr, 1, false);
+
+		/*
+		 * Some CPUs might still be using a local ASID for this
+		 * process, and require IPIs, while others are using the
+		 * global ASID.
+		 *
+		 * In this corner case we need to do both the broadcast
+		 * TLB invalidation, and send IPIs. The IPIs will help
+		 * stragglers transition to the broadcast ASID.
+		 */
+		if (in_asid_transition(mm))
+			asid = 0;
+	}
+
+	if (!asid) {
+		inc_mm_tlb_gen(mm);
+		cpumask_or(&batch->cpumask, &batch->cpumask, mm_cpumask(mm));
+	}
+
+	mmu_notifier_arch_invalidate_secondary_tlbs(mm, 0, -1UL);
+}
+
 /*
  * Blindly accessing user memory from NMI context can be dangerous
  * if we're in the middle of switching the current user task or
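
A closing note on the need_tlbsync bookkeeping in the tlb.c hunks above: every
invlpgb_*_nosync() wrapper marks the issuing CPU as owing a TLBSYNC, and
tlbsync() is then invoked at natural serialization points (switch_mm_irqs_off(),
enter_lazy_tlb(), and the end of arch_tlbbatch_flush()). A rough user-space
model of that pairing, with hw_invlpgb() and hw_tlbsync() as hypothetical
stand-ins for the real instructions and a single flag in place of the per-CPU
cpu_tlbstate field, could look like:

#include <stdbool.h>
#include <stdio.h>

/*
 * Single flag standing in for the per-CPU cpu_tlbstate.need_tlbsync field;
 * the kernel tracks this per CPU, this model has only one "CPU".
 */
static bool need_tlbsync;

/* Hypothetical stand-in for the INVLPGB instruction (asynchronous flush). */
static void hw_invlpgb(unsigned long addr)
{
	printf("INVLPGB addr=%#lx (asynchronous)\n", addr);
}

/* Hypothetical stand-in for the TLBSYNC instruction (wait for completion). */
static void hw_tlbsync(void)
{
	printf("TLBSYNC: waiting for outstanding broadcast flushes\n");
}

/* Like the invlpgb_*_nosync() wrappers: flush, then remember a sync is owed. */
static void flush_addr_nosync(unsigned long addr)
{
	hw_invlpgb(addr);
	if (!need_tlbsync)
		need_tlbsync = true;
}

/* Like tlbsync() in tlb.c: only execute TLBSYNC if a flush was queued. */
static void tlbsync(void)
{
	if (!need_tlbsync)
		return;
	hw_tlbsync();
	need_tlbsync = false;
}

int main(void)
{
	flush_addr_nosync(0x1000);
	flush_addr_nosync(0x2000);
	tlbsync();	/* e.g. at context switch or arch_tlbbatch_flush() */
	tlbsync();	/* nothing queued, so this returns immediately */
	return 0;
}

Because the flag is only set when a broadcast flush was actually queued, the
extra tlbsync() calls on the context-switch paths are no-ops in the common case.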