From patchwork Mon Mar  3 14:15:37 2025
X-Patchwork-Submitter: Ryan Roberts
X-Patchwork-Id: 13998905
From: Ryan Roberts
To: Andrew Morton, "David S. Miller", Andreas Larsson, Juergen Gross,
    Boris Ostrovsky, Thomas Gleixner, Ingo Molnar, Borislav Petkov,
    Dave Hansen, "H. Peter Anvin", "Matthew Wilcox (Oracle)",
    Catalin Marinas
Cc: Ryan Roberts, linux-mm@kvack.org, sparclinux@vger.kernel.org,
    xen-devel@lists.xenproject.org, linux-kernel@vger.kernel.org,
    stable@vger.kernel.org, David Hildenbrand
Subject: [PATCH v2 3/5] sparc/mm: Disable preemption in lazy mmu mode
Date: Mon, 3 Mar 2025 14:15:37 +0000
Message-ID: <20250303141542.3371656-4-ryan.roberts@arm.com>
In-Reply-To: <20250303141542.3371656-1-ryan.roberts@arm.com>
References: <20250303141542.3371656-1-ryan.roberts@arm.com>

Since commit 38e0edb15bd0 ("mm/apply_to_range: call pte function with
lazy updates") it's been possible for arch_[enter|leave]_lazy_mmu_mode()
to be called without holding a page table lock (for the kernel mappings
case), and therefore it is possible that preemption may occur while in
the lazy mmu mode. The Sparc lazy mmu implementation is not robust to
preemption since it stores the lazy mode state in a per-cpu structure
and does not attempt to manage that state on task switch.

Powerpc had the same issue and fixed it by explicitly disabling
preemption in arch_enter_lazy_mmu_mode() and re-enabling it in
arch_leave_lazy_mmu_mode(). See commit b9ef323ea168 ("powerpc/64s:
Disable preemption in hash lazy mmu mode").

Given Sparc's lazy mmu mode is based on powerpc's, let's fix it in the
same way here.

Cc:
Fixes: 38e0edb15bd0 ("mm/apply_to_range: call pte function with lazy updates")
Acked-by: David Hildenbrand
Acked-by: Andreas Larsson
Signed-off-by: Ryan Roberts
---
 arch/sparc/mm/tlb.c | 5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/arch/sparc/mm/tlb.c b/arch/sparc/mm/tlb.c
index 8648a50afe88..a35ddcca5e76 100644
--- a/arch/sparc/mm/tlb.c
+++ b/arch/sparc/mm/tlb.c
@@ -52,8 +52,10 @@ void flush_tlb_pending(void)
 
 void arch_enter_lazy_mmu_mode(void)
 {
-	struct tlb_batch *tb = this_cpu_ptr(&tlb_batch);
+	struct tlb_batch *tb;
 
+	preempt_disable();
+	tb = this_cpu_ptr(&tlb_batch);
 	tb->active = 1;
 }
 
@@ -64,6 +66,7 @@ void arch_leave_lazy_mmu_mode(void)
 	if (tb->tlb_nr)
 		flush_tlb_pending();
 	tb->active = 0;
+	preempt_enable();
 }
 
 static void tlb_batch_add_one(struct mm_struct *mm, unsigned long vaddr,