From patchwork Fri Feb 28 18:24:02 2025
X-Patchwork-Submitter: Mikołaj Lenczewski
X-Patchwork-Id: 13996924
From: Mikołaj Lenczewski
To: ryan.roberts@arm.com, suzuki.poulose@arm.com,
    yang@os.amperecomputing.com, catalin.marinas@arm.com, will@kernel.org,
    joro@8bytes.org, jean-philippe@linaro.org, mark.rutland@arm.com,
    joey.gouly@arm.com, oliver.upton@linux.dev, james.morse@arm.com,
    broonie@kernel.org, maz@kernel.org, david@redhat.com,
    akpm@linux-foundation.org, jgg@ziepe.ca, nicolinc@nvidia.com,
    mshavit@google.com, jsnitsel@redhat.com, smostafa@google.com,
    linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org,
    iommu@lists.linux.dev
Cc: Mikołaj Lenczewski
Subject: [PATCH v2 2/4] arm64/mm: Delay tlbi in contpte_convert() under BBML2
Date: Fri, 28 Feb 2025 18:24:02 +0000
Message-ID: <20250228182403.6269-4-miko.lenczewski@arm.com>
In-Reply-To: <20250228182403.6269-2-miko.lenczewski@arm.com>
References: <20250228182403.6269-2-miko.lenczewski@arm.com>
When converting a region via contpte_convert() for use with mTHP, we
have two goals: we must mark each entry as contiguous, and we would
like to smear the dirty and young (access) bits across all entries in
the contiguous block.

Currently we do this by first accumulating the dirty and young bits of
the block, using an atomic __ptep_get_and_clear() and the relevant
pte_{dirty,young}() calls, then performing a tlbi, and finally smearing
the accumulated bits across the block using __set_ptes().

This approach works fine for BBM level 0, but with support for BBM
level 2 we are allowed to reorder the tlbi to after setting the
pagetable entries. With this reordering, other threads no longer see an
invalid pagetable entry; instead they keep operating on stale (but
still valid) translations until we have performed our smearing and
issued the invalidation. Avoiding the invalid entry reduces faults in
other threads, and thus improves performance marginally (more so when
there are more threads).

Signed-off-by: Mikołaj Lenczewski <miko.lenczewski@arm.com>
Reviewed-by: Ryan Roberts <ryan.roberts@arm.com>
---
 arch/arm64/mm/contpte.c | 6 +++++-
 1 file changed, 5 insertions(+), 1 deletion(-)

diff --git a/arch/arm64/mm/contpte.c b/arch/arm64/mm/contpte.c
index 55107d27d3f8..145530f706a9 100644
--- a/arch/arm64/mm/contpte.c
+++ b/arch/arm64/mm/contpte.c
@@ -68,9 +68,13 @@ static void contpte_convert(struct mm_struct *mm, unsigned long addr,
 		pte = pte_mkyoung(pte);
 	}
 
-	__flush_tlb_range(&vma, start_addr, addr, PAGE_SIZE, true, 3);
+	if (!system_supports_bbml2_noabort())
+		__flush_tlb_range(&vma, start_addr, addr, PAGE_SIZE, true, 3);
 
 	__set_ptes(mm, start_addr, start_ptep, pte, CONT_PTES);
+
+	if (system_supports_bbml2_noabort())
+		__flush_tlb_range(&vma, start_addr, addr, PAGE_SIZE, true, 3);
 }
 
 void __contpte_try_fold(struct mm_struct *mm, unsigned long addr,
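
For readers who want the surrounding context that the hunk above
elides, this is roughly what contpte_convert() looks like with the
patch applied. The setup and accumulation code outside the hunk is a
sketch paraphrased from mainline arch/arm64/mm/contpte.c (helpers such
as contpte_align_down() and TLB_FLUSH_VMA() as in mainline) and may
differ slightly in the exact tree this series applies to:

	static void contpte_convert(struct mm_struct *mm, unsigned long addr,
				    pte_t *ptep, pte_t pte)
	{
		struct vm_area_struct vma = TLB_FLUSH_VMA(mm, 0);
		unsigned long start_addr;
		pte_t *start_ptep;
		int i;

		start_ptep = ptep = contpte_align_down(ptep);
		start_addr = addr = ALIGN_DOWN(addr, CONT_PTE_SIZE);
		pte = pfn_pte(ALIGN_DOWN(pte_pfn(pte), CONT_PTES), pte_pgprot(pte));

		/* Clear each entry, accumulating its dirty/young bits. */
		for (i = 0; i < CONT_PTES; i++, ptep++, addr += PAGE_SIZE) {
			pte_t ptent = __ptep_get_and_clear(mm, addr, ptep);

			if (pte_dirty(ptent))
				pte = pte_mkdirty(pte);

			if (pte_young(ptent))
				pte = pte_mkyoung(pte);
		}

		/* BBM level 0: must invalidate before writing the new entries. */
		if (!system_supports_bbml2_noabort())
			__flush_tlb_range(&vma, start_addr, addr, PAGE_SIZE, true, 3);

		/* Smear the accumulated bits across the contiguous block. */
		__set_ptes(mm, start_addr, start_ptep, pte, CONT_PTES);

		/* BBML2-noabort: defer invalidation until the new entries are live. */
		if (system_supports_bbml2_noabort())
			__flush_tlb_range(&vma, start_addr, addr, PAGE_SIZE, true, 3);
	}

Note that __ptep_get_and_clear() still transiently zeroes each entry in
both cases; what changes under BBML2-noabort is that no invalidation is
issued during that window, so other threads keep translating through
their stale (still valid) TLB entries instead of faulting on a cleared
PTE.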