From patchwork Mon Dec 30 17:53:11 2024
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Rik van Riel <riel@surriel.com>
X-Patchwork-Id: 13923394
From: Rik van Riel <riel@surriel.com>
To: x86@kernel.org
Cc: linux-kernel@vger.kernel.org, kernel-team@meta.com,
	dave.hansen@linux.intel.com, luto@kernel.org, peterz@infradead.org,
	tglx@linutronix.de, mingo@redhat.com, bp@alien8.de, hpa@zytor.com,
	akpm@linux-foundation.org, nadav.amit@gmail.com,
	zhengqi.arch@bytedance.com, linux-mm@kvack.org,
	Rik van Riel <riel@surriel.com>
Subject: [PATCH 10/12] x86,tlb: do targeted broadcast flushing from tlbbatch code
Date: Mon, 30 Dec 2024 12:53:11 -0500
Message-ID: <20241230175550.4046587-11-riel@surriel.com>
X-Mailer: git-send-email 2.47.1
In-Reply-To: <20241230175550.4046587-1-riel@surriel.com>
References: <20241230175550.4046587-1-riel@surriel.com>

Instead of doing a system-wide TLB flush from arch_tlbbatch_flush,
queue up asynchronous, targeted flushes from arch_tlbbatch_add_pending.

This also allows us to avoid adding the CPUs of processes using broadcast
flushing to the batch->cpumask, and will hopefully further reduce TLB
flushing from the reclaim and compaction paths.

Signed-off-by: Rik van Riel <riel@surriel.com>
---
 arch/x86/include/asm/tlbbatch.h |  1 +
 arch/x86/include/asm/tlbflush.h | 12 +++------
 arch/x86/mm/tlb.c               | 48 ++++++++++++++++++++++++++-------
 3 files changed, 42 insertions(+), 19 deletions(-)

diff --git a/arch/x86/include/asm/tlbbatch.h b/arch/x86/include/asm/tlbbatch.h
index 1ad56eb3e8a8..f9a17edf63ad 100644
--- a/arch/x86/include/asm/tlbbatch.h
+++ b/arch/x86/include/asm/tlbbatch.h
@@ -10,6 +10,7 @@ struct arch_tlbflush_unmap_batch {
	 * the PFNs being flushed..
	 */
	struct cpumask cpumask;
+	bool used_invlpgb;
 };
 
 #endif /* _ARCH_X86_TLBBATCH_H */
diff --git a/arch/x86/include/asm/tlbflush.h b/arch/x86/include/asm/tlbflush.h
index 5e9956af98d1..17ec1b169ebd 100644
--- a/arch/x86/include/asm/tlbflush.h
+++ b/arch/x86/include/asm/tlbflush.h
@@ -297,21 +297,15 @@ static inline u64 inc_mm_tlb_gen(struct mm_struct *mm)
 	return atomic64_inc_return(&mm->context.tlb_gen);
 }
 
-static inline void arch_tlbbatch_add_pending(struct arch_tlbflush_unmap_batch *batch,
-					     struct mm_struct *mm,
-					     unsigned long uaddr)
-{
-	inc_mm_tlb_gen(mm);
-	cpumask_or(&batch->cpumask, &batch->cpumask, mm_cpumask(mm));
-	mmu_notifier_arch_invalidate_secondary_tlbs(mm, 0, -1UL);
-}
-
 static inline void arch_flush_tlb_batched_pending(struct mm_struct *mm)
 {
 	flush_tlb_mm(mm);
 }
 
 extern void arch_tlbbatch_flush(struct arch_tlbflush_unmap_batch *batch);
+extern void arch_tlbbatch_add_pending(struct arch_tlbflush_unmap_batch *batch,
+				      struct mm_struct *mm,
+				      unsigned long uaddr);
 
 static inline bool pte_flags_need_flush(unsigned long oldflags,
 					unsigned long newflags,
diff --git a/arch/x86/mm/tlb.c b/arch/x86/mm/tlb.c
index eb83391385ce..454a370494d3 100644
--- a/arch/x86/mm/tlb.c
+++ b/arch/x86/mm/tlb.c
@@ -1602,16 +1602,7 @@ EXPORT_SYMBOL_GPL(__flush_tlb_all);
 void arch_tlbbatch_flush(struct arch_tlbflush_unmap_batch *batch)
 {
 	struct flush_tlb_info *info;
-	int cpu;
-
-	if (cpu_feature_enabled(X86_FEATURE_INVLPGB)) {
-		guard(preempt)();
-		invlpgb_flush_all_nonglobals();
-		tlbsync();
-		return;
-	}
-
-	cpu = get_cpu();
+	int cpu = get_cpu();
 
 	info = get_flush_tlb_info(NULL, 0, TLB_FLUSH_ALL, 0, false,
 				  TLB_GENERATION_INVALID);
@@ -1629,12 +1620,49 @@ void arch_tlbbatch_flush(struct arch_tlbflush_unmap_batch *batch)
 		local_irq_enable();
 	}
 
+	/*
+	 * If we issued (asynchronous) INVLPGB flushes, wait for them here.
+	 * The cpumask above contains only CPUs that were running tasks
+	 * not using broadcast TLB flushing.
+	 */
+	if (cpu_feature_enabled(X86_FEATURE_INVLPGB) && batch->used_invlpgb) {
+		tlbsync();
+		migrate_enable();
+		batch->used_invlpgb = false;
+	}
+
 	cpumask_clear(&batch->cpumask);
 
 	put_flush_tlb_info();
 	put_cpu();
 }
 
+void arch_tlbbatch_add_pending(struct arch_tlbflush_unmap_batch *batch,
+			       struct mm_struct *mm,
+			       unsigned long uaddr)
+{
+	if (static_cpu_has(X86_FEATURE_INVLPGB) && mm_broadcast_asid(mm)) {
+		u16 asid = mm_broadcast_asid(mm);
+		/*
+		 * Queue up an asynchronous invalidation. The corresponding
+		 * TLBSYNC is done in arch_tlbbatch_flush(), and must be done
+		 * on the same CPU.
+		 */
+		if (!batch->used_invlpgb) {
+			batch->used_invlpgb = true;
+			migrate_disable();
+		}
+		invlpgb_flush_user_nr(kern_pcid(asid), uaddr, 1, 0);
+		/* Do any CPUs supporting INVLPGB need PTI? */
+		if (static_cpu_has(X86_FEATURE_PTI))
+			invlpgb_flush_user_nr(user_pcid(asid), uaddr, 1, 0);
+	} else {
+		inc_mm_tlb_gen(mm);
+		cpumask_or(&batch->cpumask, &batch->cpumask, mm_cpumask(mm));
+	}
+	mmu_notifier_arch_invalidate_secondary_tlbs(mm, 0, -1UL);
+}
+
 /*
  * Blindly accessing user memory from NMI context can be dangerous
  * if we're in the middle of switching the current user task or
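
The reason for the migrate_disable()/migrate_enable() pair may not be
obvious from the diff alone: INVLPGB broadcasts are asynchronous, and
TLBSYNC only waits for the broadcasts issued from the CPU that executes
it, so the task must stay on one CPU from the first queued flush until
the sync. The stand-alone sketch below models just that contract; all
of its functions and variables are illustrative stubs, not kernel code:

/*
 * Illustrative userspace model (not kernel code) of the invariant the
 * patch depends on: TLBSYNC must execute on the same CPU that issued
 * the asynchronous INVLPGB flushes, which is why the first queued
 * broadcast flush pins the task with migrate_disable() and the pin is
 * only dropped after tlbsync() in arch_tlbbatch_flush().
 */
#include <assert.h>
#include <stdbool.h>
#include <stdio.h>

struct tlbbatch {
	bool used_invlpgb;	/* mirrors arch_tlbflush_unmap_batch */
};

static int current_cpu;			/* stand-in for smp_processor_id() */
static int migrate_disable_depth;	/* stand-in for the task's pin count */
static int invlpgb_issuing_cpu = -1;	/* CPU that queued broadcast flushes */

static void migrate_disable(void) { migrate_disable_depth++; }
static void migrate_enable(void)  { migrate_disable_depth--; }

/* Models the INVLPGB side of arch_tlbbatch_add_pending(). */
static void add_pending(struct tlbbatch *batch)
{
	if (!batch->used_invlpgb) {
		batch->used_invlpgb = true;
		/* First broadcast flush: pin, so TLBSYNC runs right here. */
		migrate_disable();
		invlpgb_issuing_cpu = current_cpu;
	}
	/* The real code issues invlpgb_flush_user_nr() here, asynchronously. */
}

/* Models the TLBSYNC side of arch_tlbbatch_flush(). */
static void flush(struct tlbbatch *batch)
{
	if (batch->used_invlpgb) {
		/* TLBSYNC only waits for flushes issued by this CPU. */
		assert(current_cpu == invlpgb_issuing_cpu);
		migrate_enable();
		batch->used_invlpgb = false;
		invlpgb_issuing_cpu = -1;
	}
}

int main(void)
{
	struct tlbbatch batch = { .used_invlpgb = false };

	add_pending(&batch);	/* pins the task to the current CPU */
	add_pending(&batch);	/* subsequent flushes reuse the pin */
	flush(&batch);		/* "TLBSYNC", then the pin is dropped */

	assert(migrate_disable_depth == 0);
	printf("migrate_disable/enable balanced across the batch\n");
	return 0;
}

Note that in the model, as in the patch, only the first queued flush
takes the pin, so an arbitrary number of add_pending() calls is paired
with exactly one migrate_enable() at sync time.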