From patchwork Wed Dec 11 15:45:05 2024
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-Patchwork-Submitter: Mikołaj Lenczewski
X-Patchwork-Id: 13903703
From: Mikołaj Lenczewski <miko.lenczewski@arm.com>
To: catalin.marinas@arm.com, will@kernel.org, corbet@lwn.net,
	maz@kernel.org, oliver.upton@linux.dev, joey.gouly@arm.com,
	suzuki.poulose@arm.com, yuzenghui@huawei.com
Cc: Mikołaj Lenczewski <miko.lenczewski@arm.com>,
	linux-arm-kernel@lists.infradead.org, linux-doc@vger.kernel.org,
	linux-kernel@vger.kernel.org, kvmarm@vger.kernel.org
Subject: [RFC PATCH v1 4/5] arm64/mm: Delay tlbi in contpte_convert() under BBML2
Date: Wed, 11 Dec 2024 15:45:05 +0000
Message-ID: <20241211154611.40395-5-miko.lenczewski@arm.com>
X-Mailer: git-send-email 2.45.2
In-Reply-To: <20241211154611.40395-1-miko.lenczewski@arm.com>
References: <20241211154611.40395-1-miko.lenczewski@arm.com>

When converting a region via contpte_convert() to use mTHP, we have two
different goals.
We have to mark each entry as contiguous, and we would like to smear
the dirty and young (access) bits across all entries in the contiguous
block. Currently, we do this by first accumulating the dirty and young
bits across the block, using an atomic __ptep_get_and_clear() and the
relevant pte_{dirty,young}() calls, then performing a tlbi, and finally
smearing the accumulated bits across the block using __set_ptes().

This approach works fine for BBM level 0, but with support for BBM
level 2 we are allowed to reorder the tlbi to after setting the
pagetable entries. With this reordering, other threads do not observe
the invalid (cleared) pagetable entries; instead they keep operating on
stale (but still valid) translations until we have performed our
smearing and issued the invalidation. Avoiding the invalid entries
reduces faults in other threads, and thus improves performance
marginally (more so when there are more threads).

Signed-off-by: Mikołaj Lenczewski <miko.lenczewski@arm.com>
---
 arch/arm64/mm/contpte.c | 6 +++++-
 1 file changed, 5 insertions(+), 1 deletion(-)

diff --git a/arch/arm64/mm/contpte.c b/arch/arm64/mm/contpte.c
index 55107d27d3f8..fc927be800ee 100644
--- a/arch/arm64/mm/contpte.c
+++ b/arch/arm64/mm/contpte.c
@@ -68,9 +68,13 @@ static void contpte_convert(struct mm_struct *mm, unsigned long addr,
 		pte = pte_mkyoung(pte);
 	}
 
-	__flush_tlb_range(&vma, start_addr, addr, PAGE_SIZE, true, 3);
+	if (!system_supports_bbml2())
+		__flush_tlb_range(&vma, start_addr, addr, PAGE_SIZE, true, 3);
 
 	__set_ptes(mm, start_addr, start_ptep, pte, CONT_PTES);
+
+	if (system_supports_bbml2())
+		__flush_tlb_range(&vma, start_addr, addr, PAGE_SIZE, true, 3);
 }
 
 void __contpte_try_fold(struct mm_struct *mm, unsigned long addr,
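
For reviewers who want the surrounding context, here is a sketch (not
part of the patch) of how contpte_convert() would read with this change
applied, reconstructed from the existing arch/arm64/mm/contpte.c;
system_supports_bbml2() is assumed to be the detection helper
introduced earlier in this series:

static void contpte_convert(struct mm_struct *mm, unsigned long addr,
			    pte_t *ptep, pte_t pte)
{
	struct vm_area_struct vma = TLB_FLUSH_VMA(mm, 0);
	unsigned long start_addr;
	pte_t *start_ptep;
	int i;

	start_ptep = ptep = contpte_align_down(ptep);
	start_addr = addr = ALIGN_DOWN(addr, CONT_PTE_SIZE);
	pte = pfn_pte(ALIGN_DOWN(pte_pfn(pte), CONT_PTES), pte_pgprot(pte));

	/* Accumulate dirty/young bits while atomically clearing each PTE. */
	for (i = 0; i < CONT_PTES; i++, ptep++, addr += PAGE_SIZE) {
		pte_t ptent = __ptep_get_and_clear(mm, addr, ptep);

		if (pte_dirty(ptent))
			pte = pte_mkdirty(pte);

		if (pte_young(ptent))
			pte = pte_mkyoung(pte);
	}

	/*
	 * Without BBML2, invalidate before installing the new entries
	 * (classic break-before-make).
	 */
	if (!system_supports_bbml2())
		__flush_tlb_range(&vma, start_addr, addr, PAGE_SIZE, true, 3);

	/* Smear the accumulated dirty/young bits across the block. */
	__set_ptes(mm, start_addr, start_ptep, pte, CONT_PTES);

	/*
	 * With BBML2, the tlbi may be deferred until after the new
	 * contiguous entries are installed.
	 */
	if (system_supports_bbml2())
		__flush_tlb_range(&vma, start_addr, addr, PAGE_SIZE, true, 3);
}

The loop body and helpers (contpte_align_down(), TLB_FLUSH_VMA()) are
taken from the existing upstream function; only the placement of the
two conditional flushes reflects this patch.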