From patchwork Wed Dec 11 16:01:40 2024
X-Patchwork-Submitter: Mikołaj Lenczewski
X-Patchwork-Id: 13903716
From: Mikołaj Lenczewski <miko.lenczewski@arm.com>
To: ryan.roberts@arm.com, catalin.marinas@arm.com, will@kernel.org,
 corbet@lwn.net, maz@kernel.org, oliver.upton@linux.dev,
 joey.gouly@arm.com, suzuki.poulose@arm.com, yuzenghui@huawei.com
Cc: Mikołaj Lenczewski <miko.lenczewski@arm.com>,
 linux-arm-kernel@lists.infradead.org, linux-doc@vger.kernel.org,
 linux-kernel@vger.kernel.org, kvmarm@lists.linux.dev
Subject: [RESEND RFC PATCH v1 4/5] arm64/mm: Delay tlbi in contpte_convert()
 under BBML2
Date: Wed, 11 Dec 2024 16:01:40 +0000
Message-ID: <20241211160218.41404-5-miko.lenczewski@arm.com>
In-Reply-To: <20241211160218.41404-1-miko.lenczewski@arm.com>
References: <20241211160218.41404-1-miko.lenczewski@arm.com>

When converting a region via contpte_convert() to use mTHP, we have two
different goals.
We have to mark each entry as contiguous, and we would like to smear the
dirty and young (access) bits across all entries in the contiguous block.

Currently we do this by first accumulating the dirty and young bits across
the block, using an atomic __ptep_get_and_clear() and the relevant
pte_{dirty,young}() calls, then performing a tlbi, and finally smearing
the accumulated bits across the block using __set_ptes().

This approach works fine for BBM level 0, but with support for BBM level 2
we are allowed to reorder the tlbi to after setting the pagetable entries.
With this reordering, other threads never see an invalid pagetable entry;
instead they keep operating on the stale (but still valid) old entries
until we have performed the smearing and issued the invalidation. Avoiding
the invalid entry reduces faults in other threads, and thus improves
performance marginally (more so as the number of threads grows).

Signed-off-by: Mikołaj Lenczewski <miko.lenczewski@arm.com>
---
 arch/arm64/mm/contpte.c | 6 +++++-
 1 file changed, 5 insertions(+), 1 deletion(-)

diff --git a/arch/arm64/mm/contpte.c b/arch/arm64/mm/contpte.c
index 55107d27d3f8..fc927be800ee 100644
--- a/arch/arm64/mm/contpte.c
+++ b/arch/arm64/mm/contpte.c
@@ -68,9 +68,13 @@ static void contpte_convert(struct mm_struct *mm, unsigned long addr,
 		pte = pte_mkyoung(pte);
 	}
 
-	__flush_tlb_range(&vma, start_addr, addr, PAGE_SIZE, true, 3);
+	if (!system_supports_bbml2())
+		__flush_tlb_range(&vma, start_addr, addr, PAGE_SIZE, true, 3);
 
 	__set_ptes(mm, start_addr, start_ptep, pte, CONT_PTES);
+
+	if (system_supports_bbml2())
+		__flush_tlb_range(&vma, start_addr, addr, PAGE_SIZE, true, 3);
 }
 
 void __contpte_try_fold(struct mm_struct *mm, unsigned long addr,
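
For reference, a simplified sketch of the contpte_convert() body as it
looks after this patch. The identifiers match the real function in
arch/arm64/mm/contpte.c, but the setup of vma, start_addr and start_ptep
is elided here, so treat this as an illustration of the ordering rather
than the verbatim kernel source:

	/* Atomically clear each entry, accumulating the dirty/young bits. */
	for (i = 0; i < CONT_PTES; i++, ptep++, addr += PAGE_SIZE) {
		pte_t ptent = __ptep_get_and_clear(mm, addr, ptep);

		if (pte_dirty(ptent))
			pte = pte_mkdirty(pte);
		if (pte_young(ptent))
			pte = pte_mkyoung(pte);
	}

	/* BBM level 0: must invalidate before the new entries are written. */
	if (!system_supports_bbml2())
		__flush_tlb_range(&vma, start_addr, addr, PAGE_SIZE, true, 3);

	/* Smear the accumulated bits across the whole contiguous block. */
	__set_ptes(mm, start_addr, start_ptep, pte, CONT_PTES);

	/* BBM level 2: other threads saw stale but valid entries until now. */
	if (system_supports_bbml2())
		__flush_tlb_range(&vma, start_addr, addr, PAGE_SIZE, true, 3);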