From patchwork Fri Feb 21 00:53:11 2025
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Rik van Riel
X-Patchwork-Id: 13984682
From: Rik van Riel
To: x86@kernel.org
Cc: linux-kernel@vger.kernel.org, bp@alien8.de, peterz@infradead.org,
    dave.hansen@linux.intel.com, zhengqi.arch@bytedance.com,
    nadav.amit@gmail.com, thomas.lendacky@amd.com, kernel-team@meta.com,
    linux-mm@kvack.org, akpm@linux-foundation.org, jackmanb@google.com,
    jannh@google.com, mhklinux@outlook.com, andrew.cooper3@citrix.com,
    Manali.Shukla@amd.com, Rik van Riel
Subject: [PATCH v12 12/16] x86/mm: enable broadcast TLB invalidation for multi-threaded processes
Date: Thu, 20 Feb 2025 19:53:11 -0500
Message-ID: <20250221005345.2156760-13-riel@surriel.com>
X-Mailer: git-send-email 2.47.1
In-Reply-To: <20250221005345.2156760-1-riel@surriel.com>
References: <20250221005345.2156760-1-riel@surriel.com>

Use broadcast TLB invalidation via the INVLPGB instruction.

There is not enough room in the 12-bit ASID address space to hand out
broadcast ASIDs to every process. Only hand out broadcast ASIDs to
processes when they are observed to be simultaneously running on 4 or
more CPUs.

This also allows single-threaded processes to continue using the
cheaper, local TLB invalidation instructions like INVLPG.

Signed-off-by: Rik van Riel
Reviewed-by: Nadav Amit
Tested-by: Manali Shukla
Tested-by: Brendan Jackman
Tested-by: Michael Kelley
---
 arch/x86/mm/tlb.c | 107 +++++++++++++++++++++++++++++++++++++++++++++-
 1 file changed, 106 insertions(+), 1 deletion(-)

diff --git a/arch/x86/mm/tlb.c b/arch/x86/mm/tlb.c
index d8a04e398615..01a5edb51ebe 100644
--- a/arch/x86/mm/tlb.c
+++ b/arch/x86/mm/tlb.c
@@ -420,6 +420,108 @@ static bool needs_global_asid_reload(struct mm_struct *next, u16 prev_asid)
         return false;
 }

+/*
+ * x86 has 4k ASIDs (2k when compiled with KPTI), but the largest
+ * x86 systems have over 8k CPUs. Because of this potential ASID
+ * shortage, global ASIDs are handed out to processes that have
+ * frequent TLB flushes and are active on 4 or more CPUs simultaneously.
+ */
+static void consider_global_asid(struct mm_struct *mm)
+{
+        if (!static_cpu_has(X86_FEATURE_INVLPGB))
+                return;
+
+        /* Check every once in a while. */
+        if ((current->pid & 0x1f) != (jiffies & 0x1f))
+                return;
+
+        if (!READ_ONCE(global_asid_available))
+                return;
+
+        /*
+         * Assign a global ASID if the process is active on
+         * 4 or more CPUs simultaneously.
+         */
+        if (mm_active_cpus_exceeds(mm, 3))
+                use_global_asid(mm);
+}
+
+static void finish_asid_transition(struct flush_tlb_info *info)
+{
+        struct mm_struct *mm = info->mm;
+        int bc_asid = mm_global_asid(mm);
+        int cpu;
+
+        if (!READ_ONCE(mm->context.asid_transition))
+                return;
+
+        for_each_cpu(cpu, mm_cpumask(mm)) {
+                /*
+                 * The remote CPU is context switching. Wait for that to
+                 * finish, to catch the unlikely case of it switching to
+                 * the target mm with an out of date ASID.
+                 */
+                while (READ_ONCE(per_cpu(cpu_tlbstate.loaded_mm, cpu)) == LOADED_MM_SWITCHING)
+                        cpu_relax();
+
+                if (READ_ONCE(per_cpu(cpu_tlbstate.loaded_mm, cpu)) != mm)
+                        continue;
+
+                /*
+                 * If at least one CPU is not using the global ASID yet,
+                 * send a TLB flush IPI. The IPI should cause stragglers
+                 * to transition soon.
+                 *
+                 * This can race with the CPU switching to another task;
+                 * that results in a (harmless) extra IPI.
+                 */
+                if (READ_ONCE(per_cpu(cpu_tlbstate.loaded_mm_asid, cpu)) != bc_asid) {
+                        flush_tlb_multi(mm_cpumask(info->mm), info);
+                        return;
+                }
+        }
+
+        /* All the CPUs running this process are using the global ASID. */
+        WRITE_ONCE(mm->context.asid_transition, false);
+}
+
+static void broadcast_tlb_flush(struct flush_tlb_info *info)
+{
+        bool pmd = info->stride_shift == PMD_SHIFT;
+        unsigned long asid = info->mm->context.global_asid;
+        unsigned long addr = info->start;
+
+        /*
+         * TLB flushes with INVLPGB are kicked off asynchronously.
+         * The inc_mm_tlb_gen() guarantees page table updates are done
+         * before these TLB flushes happen.
+         */
+        if (info->end == TLB_FLUSH_ALL) {
+                invlpgb_flush_single_pcid_nosync(kern_pcid(asid));
+                /* Do any CPUs supporting INVLPGB need PTI? */
+                if (static_cpu_has(X86_FEATURE_PTI))
+                        invlpgb_flush_single_pcid_nosync(user_pcid(asid));
+        } else do {
+                unsigned long nr = 1;
+
+                if (info->stride_shift <= PMD_SHIFT) {
+                        nr = (info->end - addr) >> info->stride_shift;
+                        nr = clamp_val(nr, 1, invlpgb_count_max);
+                }
+
+                invlpgb_flush_user_nr_nosync(kern_pcid(asid), addr, nr, pmd);
+                if (static_cpu_has(X86_FEATURE_PTI))
+                        invlpgb_flush_user_nr_nosync(user_pcid(asid), addr, nr, pmd);
+
+                addr += nr << info->stride_shift;
+        } while (addr < info->end);
+
+        finish_asid_transition(info);
+
+        /* Wait for the INVLPGBs kicked off above to finish. */
+        __tlbsync();
+}
+
 /*
  * Given an ASID, flush the corresponding user ASID. We can delay this
  * until the next time we switch to it.
@@ -1250,9 +1352,12 @@ void flush_tlb_mm_range(struct mm_struct *mm, unsigned long start,
          * a local TLB flush is needed. Optimize this use-case by calling
          * flush_tlb_func_local() directly in this case.
          */
-        if (cpumask_any_but(mm_cpumask(mm), cpu) < nr_cpu_ids) {
+        if (mm_global_asid(mm)) {
+                broadcast_tlb_flush(info);
+        } else if (cpumask_any_but(mm_cpumask(mm), cpu) < nr_cpu_ids) {
                 info->trim_cpumask = should_trim_cpumask(mm);
                 flush_tlb_multi(mm_cpumask(mm), info);
+                consider_global_asid(mm);
         } else if (mm == this_cpu_read(cpu_tlbstate.loaded_mm)) {
                 lockdep_assert_irqs_enabled();
                 local_irq_disable();
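
For illustration only, a minimal user-space sketch of the chunked range-flush
loop used by broadcast_tlb_flush() above: flush [start, end) in chunks of at
most a fixed number of pages per flush operation. The names INVLPGB_COUNT_MAX
and flush_range_nosync() are hypothetical stand-ins for the kernel's
invlpgb_count_max and invlpgb_flush_user_nr_nosync(); only the clamping and
address-advance arithmetic is being demonstrated, not the real instruction.

#include <stdio.h>

#define PAGE_SHIFT        12UL
#define INVLPGB_COUNT_MAX 8UL   /* assumed per-operation page limit */

/* Stand-in for the asynchronous flush helper: just report the chunk. */
static void flush_range_nosync(unsigned long addr, unsigned long nr)
{
        printf("flush %lu page(s) starting at 0x%lx\n", nr, addr);
}

static void flush_user_range(unsigned long start, unsigned long end,
                             unsigned long stride_shift)
{
        unsigned long addr = start;

        do {
                unsigned long nr = (end - addr) >> stride_shift;

                /* Clamp to what one flush operation may cover. */
                if (nr < 1)
                        nr = 1;
                if (nr > INVLPGB_COUNT_MAX)
                        nr = INVLPGB_COUNT_MAX;

                flush_range_nosync(addr, nr);
                addr += nr << stride_shift;
        } while (addr < end);
}

int main(void)
{
        /* A 20-page range in 4 KiB strides is flushed as chunks of 8, 8, 4. */
        flush_user_range(0x100000, 0x100000 + (20UL << PAGE_SHIFT), PAGE_SHIFT);
        return 0;
}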