From patchwork Thu Jan 25 17:35:34 2024
X-Patchwork-Submitter: Ryan Roberts
X-Patchwork-Id: 13531481
From: Ryan Roberts
To: Catalin Marinas, Will Deacon, Andrew Morton, David Hildenbrand
Cc: Ryan Roberts, linux-arm-kernel@lists.infradead.org
Subject: [PATCH v2] arm64/mm: Make set_ptes() robust when OAs cross 48-bit boundary
Date: Thu, 25 Jan 2024 17:35:34 +0000
Message-Id: <20240125173534.1659317-1-ryan.roberts@arm.com>

Since the high bits [51:48] of an OA are not stored contiguously in the
PTE, there is a theoretical bug in set_ptes(), which just adds PAGE_SIZE
to the pte to get the pte with the next pfn. This works until the pfn
crosses the 48-bit boundary, at which point we overflow into the upper
attributes.
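To make the failure mode concrete, below is a minimal user-space sketch
(an illustration only, not kernel code): it models just the 64K-granule
52-bit OA layout, in which PA[51:48] live in pte bits [15:12], it
ignores all attribute bits, and the pa_to_pte()/pte_to_pa() helpers are
hypothetical names invented for the example.

#include <stdint.h>
#include <stdio.h>

#define PAGE_SIZE	(1ULL << 16)	/* 64K pages, where 52-bit OAs apply */

/* Hypothetical helper: PA[47:16] -> pte[47:16], PA[51:48] -> pte[15:12] */
static uint64_t pa_to_pte(uint64_t pa)
{
	return (pa & 0x0000ffffffff0000ULL) | ((pa >> 36) & 0xf000ULL);
}

/* Hypothetical inverse: reassemble the 52-bit PA from the split fields */
static uint64_t pte_to_pa(uint64_t pte)
{
	return (pte & 0x0000ffffffff0000ULL) | ((pte & 0xf000ULL) << 36);
}

int main(void)
{
	uint64_t pa = (1ULL << 48) - PAGE_SIZE;	/* last page below 2^48 */
	uint64_t pte = pa_to_pte(pa);

	/* Old set_ptes() behaviour: the carry escapes out of pte bit 47. */
	uint64_t naive = pte + PAGE_SIZE;
	/* Fixed behaviour: increment the pfn through the split field. */
	uint64_t fixed = pa_to_pte(pte_to_pa(pte) + PAGE_SIZE);

	printf("naive next pa: 0x%013llx\n", (unsigned long long)pte_to_pa(naive));
	printf("fixed next pa: 0x%013llx\n", (unsigned long long)pte_to_pa(fixed));
	return 0;
}

The naive version prints pa 0 because the carry lands in pte bit 48,
which is not an address bit in this model; in the real descriptor format
that stray bit corrupts the upper attributes instead, which is exactly
the overflow described above.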
Of course one could argue (and Matthew Wilcox has :) that we will never
see a folio cross this boundary, because we only allow naturally aligned
power-of-2 allocations, so crossing it would require a half-petabyte
folio. So it's only a theoretical bug. But it's better that the code is
robust regardless.

I've implemented pte_next_pfn() as part of the fix; it is an opt-in
core-mm interface, so it is now available to the core-mm, which will be
needed shortly to support the forthcoming fork()-batching optimizations.

Fixes: 4a169d61c2ed ("arm64: implement the new page table range API")
Closes: https://lore.kernel.org/linux-mm/fdaeb9a5-d890-499a-92c8-d171df43ad01@arm.com/
Signed-off-by: Ryan Roberts
Reviewed-by: Catalin Marinas
Reviewed-by: David Hildenbrand
---

Hi All,

This applies on top of v6.8-rc1. It's a dependency for David's
fork-batching work, so once Catalin has acked it, it will go through
mm-unstable attached to David's series. A paraphrased sketch of the
generic pte_next_pfn() fallback that this patch opts out of follows
the diff.

Changes since v1:
  - Removed the pte_pgprot() optimization to aid integration with Ard's
    LPA2 series, per Catalin's request.

Thanks,
Ryan

 arch/arm64/include/asm/pgtable.h | 28 +++++++++++++++++-----------
 1 file changed, 17 insertions(+), 11 deletions(-)

-- 
2.25.1

diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
index 79ce70fbb751..52d0b0a763f1 100644
--- a/arch/arm64/include/asm/pgtable.h
+++ b/arch/arm64/include/asm/pgtable.h
@@ -341,6 +341,22 @@ static inline void __sync_cache_and_tags(pte_t pte, unsigned int nr_pages)
 		mte_sync_tags(pte, nr_pages);
 }
 
+/*
+ * Select all bits except the pfn
+ */
+static inline pgprot_t pte_pgprot(pte_t pte)
+{
+	unsigned long pfn = pte_pfn(pte);
+
+	return __pgprot(pte_val(pfn_pte(pfn, __pgprot(0))) ^ pte_val(pte));
+}
+
+#define pte_next_pfn pte_next_pfn
+static inline pte_t pte_next_pfn(pte_t pte)
+{
+	return pfn_pte(pte_pfn(pte) + 1, pte_pgprot(pte));
+}
+
 static inline void set_ptes(struct mm_struct *mm,
 			    unsigned long __always_unused addr,
 			    pte_t *ptep, pte_t pte, unsigned int nr)
@@ -354,7 +370,7 @@ static inline void set_ptes(struct mm_struct *mm,
 		if (--nr == 0)
 			break;
 		ptep++;
-		pte_val(pte) += PAGE_SIZE;
+		pte = pte_next_pfn(pte);
 	}
 }
 #define set_ptes set_ptes
@@ -433,16 +449,6 @@ static inline pte_t pte_swp_clear_exclusive(pte_t pte)
 	return clear_pte_bit(pte, __pgprot(PTE_SWP_EXCLUSIVE));
 }
 
-/*
- * Select all bits except the pfn
- */
-static inline pgprot_t pte_pgprot(pte_t pte)
-{
-	unsigned long pfn = pte_pfn(pte);
-
-	return __pgprot(pte_val(pfn_pte(pfn, __pgprot(0))) ^ pte_val(pte));
-}
-
 #ifdef CONFIG_NUMA_BALANCING
 /*
  * See the comment in include/linux/pgtable.h
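For reference, the "#define pte_next_pfn pte_next_pfn" above is the
opt-in that tells core-mm an arch override exists. The corresponding
generic fallback in include/linux/pgtable.h is expected to look roughly
like the following (a paraphrased sketch, not verbatim upstream code;
the exact form depends on the tree this lands in). It is only correct
when the pfn occupies a single contiguous field in the pte, which is
precisely the assumption arm64 with 52-bit OAs violates:

#ifndef pte_next_pfn
static inline pte_t pte_next_pfn(pte_t pte)
{
	/* Assumes the pfn bits are contiguous, starting at PFN_PTE_SHIFT. */
	return __pte(pte_val(pte) + (1UL << PFN_PTE_SHIFT));
}
#endif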