From patchwork Sat Jan 8 16:44:07 2022
X-Patchwork-Submitter: Andy Lutomirski
X-Patchwork-Id: 12707551
From: Andy Lutomirski
To: Andrew Morton, Linux-MM
Cc: Nicholas Piggin, Anton Blanchard, Benjamin Herrenschmidt,
    Paul Mackerras, Randy Dunlap, linux-arch, x86@kernel.org,
    Rik van Riel, Dave Hansen, Peter Zijlstra, Nadav Amit,
    Mathieu Desnoyers, Andy Lutomirski
Subject: [PATCH 22/23] x86/mm: Optimize for_each_possible_lazymm_cpu()
Date: Sat, 8 Jan 2022 08:44:07 -0800
Message-Id: <13849aa0218e0f32ac16b82950c682395a8fb5c7.1641659630.git.luto@kernel.org>
X-Mailer: git-send-email 2.33.1
In-Reply-To:
References:

Now that x86 no longer switches away from a lazy mm behind the scheduler's
back, and thus no longer clears a CPU from mm_cpumask() while the scheduler
still considers that CPU lazy, x86 can use mm_cpumask() to optimize
for_each_possible_lazymm_cpu().

Signed-off-by: Andy Lutomirski
---
 arch/x86/include/asm/mmu.h | 4 ++++
 arch/x86/mm/tlb.c          | 4 +++-
 2 files changed, 7 insertions(+), 1 deletion(-)

diff --git a/arch/x86/include/asm/mmu.h b/arch/x86/include/asm/mmu.h
index 03ba71420ff3..da55f768e68c 100644
--- a/arch/x86/include/asm/mmu.h
+++ b/arch/x86/include/asm/mmu.h
@@ -63,5 +63,9 @@ typedef struct {
 		.lock = __MUTEX_INITIALIZER(mm.context.lock),	\
 	}
 
+/* On x86, mm_cpumask(mm) contains all CPUs that might be lazily using mm */
+#define for_each_possible_lazymm_cpu(cpu, mm) \
+	for_each_cpu((cpu), mm_cpumask((mm)))
+
 #endif /* _ASM_X86_MMU_H */

diff --git a/arch/x86/mm/tlb.c b/arch/x86/mm/tlb.c
index 225b407812c7..04eb43e96e23 100644
--- a/arch/x86/mm/tlb.c
+++ b/arch/x86/mm/tlb.c
@@ -706,7 +706,9 @@ temp_mm_state_t use_temporary_mm(struct mm_struct *mm)
 	/*
 	 * Make sure not to be in TLB lazy mode, as otherwise we'll end up
 	 * with a stale address space WITHOUT being in lazy mode after
-	 * restoring the previous mm.
+	 * restoring the previous mm. Additionally, once we switch mms,
+	 * for_each_possible_lazymm_cpu() will no longer report this CPU,
+	 * so a lazymm pin wouldn't work.
 	 */
 	if (this_cpu_read(cpu_tlbstate_shared.is_lazy))
 		unlazy_mm_irqs_off();
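
[Editor's note] For readers unfamiliar with the hook, below is a minimal
sketch of how a caller might walk the candidate lazy CPUs. The generic
fallback under #ifndef and the count_possible_lazymm_cpus() helper are
illustrative assumptions, not part of this series; the point is only that
the x86 definition above narrows the walk from every online CPU to the CPUs
set in mm_cpumask(mm).

#include <linux/cpumask.h>
#include <linux/mm_types.h>

/*
 * Assumed generic fallback (not from this patch): without architecture
 * support, every online CPU has to be treated as a possible lazy user
 * of @mm.
 */
#ifndef for_each_possible_lazymm_cpu
#define for_each_possible_lazymm_cpu(cpu, mm) \
	for_each_online_cpu((cpu))
#endif

/*
 * Hypothetical helper: count the CPUs that might still be using @mm
 * lazily.  With the x86 definition this iterates only mm_cpumask(mm).
 */
static unsigned int count_possible_lazymm_cpus(struct mm_struct *mm)
{
	unsigned int cpu, count = 0;

	for_each_possible_lazymm_cpu(cpu, mm)
		count++;

	return count;
}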