From: Baolin Wang <baolin.wang@linux.alibaba.com>
To: catalin.marinas@arm.com, will@kernel.org
Cc: akpm@linux-foundation.org, v-songbaohua@oppo.com, yuzhao@google.com,
    baolin.wang@linux.alibaba.com, linux-mm@kvack.org,
    linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org
Subject: [PATCH] arm64: mm: drop tlb flush operation when clearing the access bit
Date: Tue, 24 Oct 2023 20:56:35 +0800

Nowadays ptep_clear_flush_young() is only called by folio_referenced()
to check whether a folio was referenced, and on ARM64 it currently
issues a TLB flush. However, that TLB flush can be expensive on ARM64
servers, especially on systems with a large number of CPUs. For
context, the call site in question is sketched below.
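This is a heavily abridged sketch of folio_referenced_one() from
mm/rmap.c (the page-table walk, locking and huge-page handling are
elided, so details may differ from your tree); it is shown only to
illustrate where the flush is paid, and is not part of this patch:

static bool folio_referenced_one(struct folio *folio,
				 struct vm_area_struct *vma,
				 unsigned long address, void *arg)
{
	/* ... page-table walk and locking elided ... */

	/*
	 * ptep_clear_flush_young_notify() expands to
	 * ptep_clear_flush_young() plus the MMU-notifier hook, so
	 * every folio_referenced() pass pays for the arch-level TLB
	 * flush issued there.
	 */
	if (ptep_clear_flush_young_notify(vma, address, pvmw.pte))
		referenced++;

	/* ... accumulate into the folio_referenced() result ... */
	return true;
}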
Similar to the x86 architecture, the comment below applies equally to
ARM64, so we can drop the TLB flush operation in
ptep_clear_flush_young() on ARM64 to improve performance:

"
/*
 * Clearing the accessed bit without a TLB flush
 * doesn't cause data corruption. [ It could cause incorrect
 * page aging and the (mistaken) reclaim of hot pages, but the
 * chance of that should be relatively low. ]
 *
 * So as a performance optimization don't flush the TLB when
 * clearing the accessed bit, it will eventually be flushed by
 * a context switch or a VM operation anyway. [ In the rare
 * event of it not getting flushed for a long time the delay
 * shouldn't really matter because there's no real memory
 * pressure for swapout to react to. ]
 */
"

Running thpscale shows clear improvements in compaction latency with
this patch:

                                     base                  patched
Amean     fault-both-1       1093.19 (   0.00%)     1084.57 *   0.79%*
Amean     fault-both-3       2566.22 (   0.00%)     2228.45 *  13.16%*
Amean     fault-both-5       3591.22 (   0.00%)     3146.73 *  12.38%*
Amean     fault-both-7       4157.26 (   0.00%)     4113.67 *   1.05%*
Amean     fault-both-12      6184.79 (   0.00%)     5218.70 *  15.62%*
Amean     fault-both-18      9103.70 (   0.00%)     7739.71 *  14.98%*
Amean     fault-both-24     12341.73 (   0.00%)    10684.23 *  13.43%*
Amean     fault-both-30     15519.00 (   0.00%)    13695.14 *  11.75%*
Amean     fault-both-32     16189.15 (   0.00%)    14365.73 *  11.26%*

                         base      patched
Duration User          167.78       161.03
Duration System       1836.66      1673.01
Duration Elapsed      2074.58      2059.75

Barry Song submitted a similar patch [1] before, which replaces
ptep_clear_flush_young_notify() with ptep_clear_young_notify() in
folio_referenced_one(). However, I'm not sure whether removing the TLB
flush is applicable to every architecture in the kernel, so dropping
it only on ARM64 seems the safer change.

Note: I am fine with either approach. If someone can confirm that no
architecture needs the TLB flush when clearing the accessed bit, then
I also think Barry's patch is better (I hope Barry can resend it). A
rough sketch of that alternative is appended after the diff below.

[1] https://lore.kernel.org/lkml/20220617070555.344368-1-21cnbao@gmail.com/

Signed-off-by: Baolin Wang <baolin.wang@linux.alibaba.com>
---
 arch/arm64/include/asm/pgtable.h | 31 ++++++++++++++++---------------
 1 file changed, 16 insertions(+), 15 deletions(-)
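For comparison, the x86 side already implements
ptep_clear_flush_young() without a flush; a minimal sketch of that
counterpart (abridged from arch/x86/mm/pgtable.c, with the comment
quoted above elided) looks like:

int ptep_clear_flush_young(struct vm_area_struct *vma,
			   unsigned long address, pte_t *ptep)
{
	/* See the "Clearing the accessed bit ..." comment quoted above. */
	return ptep_test_and_clear_young(vma, address, ptep);
}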
diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
index 0bd18de9fd97..2979d796ba9d 100644
--- a/arch/arm64/include/asm/pgtable.h
+++ b/arch/arm64/include/asm/pgtable.h
@@ -905,21 +905,22 @@ static inline int ptep_test_and_clear_young(struct vm_area_struct *vma,
 static inline int ptep_clear_flush_young(struct vm_area_struct *vma,
 					 unsigned long address, pte_t *ptep)
 {
-	int young = ptep_test_and_clear_young(vma, address, ptep);
-
-	if (young) {
-		/*
-		 * We can elide the trailing DSB here since the worst that can
-		 * happen is that a CPU continues to use the young entry in its
-		 * TLB and we mistakenly reclaim the associated page. The
-		 * window for such an event is bounded by the next
-		 * context-switch, which provides a DSB to complete the TLB
-		 * invalidation.
-		 */
-		flush_tlb_page_nosync(vma, address);
-	}
-
-	return young;
+	/*
+	 * This comment is borrowed from x86, but applies equally to ARM64:
+	 *
+	 * Clearing the accessed bit without a TLB flush doesn't cause
+	 * data corruption. [ It could cause incorrect page aging and
+	 * the (mistaken) reclaim of hot pages, but the chance of that
+	 * should be relatively low. ]
+	 *
+	 * So as a performance optimization don't flush the TLB when
+	 * clearing the accessed bit, it will eventually be flushed by
+	 * a context switch or a VM operation anyway. [ In the rare
+	 * event of it not getting flushed for a long time the delay
+	 * shouldn't really matter because there's no real memory
+	 * pressure for swapout to react to. ]
+	 */
+	return ptep_test_and_clear_young(vma, address, ptep);
 }
 
 #ifdef CONFIG_TRANSPARENT_HUGEPAGE
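P.S. For reference, the alternative in [1] would instead be a generic
mm/ change along these lines (a rough reconstruction from the
description above, not Barry's actual posted patch, so the context may
not match your tree):

--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ static bool folio_referenced_one(struct folio *folio, ... @@
-		if (ptep_clear_flush_young_notify(vma, address, pvmw.pte))
+		if (ptep_clear_young_notify(vma, address, pvmw.pte))
 			referenced++;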