From patchwork Wed Nov 15 16:30:06 2023
X-Patchwork-Submitter: Ryan Roberts
X-Patchwork-Id: 13457056
From: Ryan Roberts <ryan.roberts@arm.com>
To: Catalin Marinas, Will Deacon, Ard Biesheuvel, Marc Zyngier,
 Oliver Upton, James Morse, Suzuki K Poulose, Zenghui Yu,
 Andrey Ryabinin, Alexander Potapenko, Andrey Konovalov,
 Dmitry Vyukov, Vincenzo Frascino, Andrew Morton,
 Anshuman Khandual, Matthew Wilcox, Yu Zhao, Mark Rutland,
 David Hildenbrand, Kefeng Wang, John Hubbard, Zi Yan
Cc: Ryan Roberts, linux-arm-kernel@lists.infradead.org,
 linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: [PATCH v2 02/14] arm64/mm: set_pte(): New layer to manage contig bit
Date: Wed, 15 Nov 2023 16:30:06 +0000
Message-Id: <20231115163018.1303287-3-ryan.roberts@arm.com>
In-Reply-To: <20231115163018.1303287-1-ryan.roberts@arm.com>
References: <20231115163018.1303287-1-ryan.roberts@arm.com>
Create a new layer for the in-table PTE manipulation APIs. For now, the
existing API is prefixed with double underscore to become the
arch-private API, and the public API is just a simple wrapper that calls
the private API. The public API implementation will subsequently be used
to transparently manipulate the contiguous bit where appropriate.
But since there are already some contig-aware users (e.g. hugetlb,
kernel mapper), we must first ensure those users use the private API
directly so that the future contig-bit manipulations in the public API
do not interfere with those existing uses.

Signed-off-by: Ryan Roberts <ryan.roberts@arm.com>
---
 arch/arm64/include/asm/pgtable.h | 12 ++++++++----
 arch/arm64/kernel/efi.c          |  2 +-
 arch/arm64/mm/fixmap.c           |  2 +-
 arch/arm64/mm/kasan_init.c       |  4 ++--
 arch/arm64/mm/mmu.c              |  2 +-
 arch/arm64/mm/pageattr.c         |  2 +-
 arch/arm64/mm/trans_pgd.c        |  4 ++--
 7 files changed, 16 insertions(+), 12 deletions(-)

diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
index b19a8aee684c..650d4f4bb6dc 100644
--- a/arch/arm64/include/asm/pgtable.h
+++ b/arch/arm64/include/asm/pgtable.h
@@ -93,7 +93,8 @@ static inline pteval_t __phys_to_pte_val(phys_addr_t phys)
 	__pte(__phys_to_pte_val((phys_addr_t)(pfn) << PAGE_SHIFT) | pgprot_val(prot))
 
 #define pte_none(pte)		(!pte_val(pte))
-#define pte_clear(mm,addr,ptep)	set_pte(ptep, __pte(0))
+#define pte_clear(mm, addr, ptep) \
+				__set_pte(ptep, __pte(0))
 #define pte_page(pte)		(pfn_to_page(pte_pfn(pte)))
 
 /*
@@ -261,7 +262,7 @@ static inline pte_t pte_mkdevmap(pte_t pte)
 	return set_pte_bit(pte, __pgprot(PTE_DEVMAP | PTE_SPECIAL));
 }
 
-static inline void set_pte(pte_t *ptep, pte_t pte)
+static inline void __set_pte(pte_t *ptep, pte_t pte)
 {
 	WRITE_ONCE(*ptep, pte);
 
@@ -350,7 +351,7 @@ static inline void set_ptes(struct mm_struct *mm,
 
 	for (;;) {
 		__check_safe_pte_update(mm, ptep, pte);
-		set_pte(ptep, pte);
+		__set_pte(ptep, pte);
 		if (--nr == 0)
 			break;
 		ptep++;
@@ -534,7 +535,7 @@ static inline void __set_pte_at(struct mm_struct *mm,
 {
 	__sync_cache_and_tags(pte, nr);
 	__check_safe_pte_update(mm, ptep, pte);
-	set_pte(ptep, pte);
+	__set_pte(ptep, pte);
 }
 
 static inline void set_pmd_at(struct mm_struct *mm, unsigned long addr,
@@ -1118,6 +1119,9 @@ extern pte_t ptep_modify_prot_start(struct vm_area_struct *vma,
 extern void ptep_modify_prot_commit(struct vm_area_struct *vma,
 				    unsigned long addr, pte_t *ptep,
 				    pte_t old_pte, pte_t new_pte);
+
+#define set_pte				__set_pte
+
 #endif /* !__ASSEMBLY__ */
 
 #endif /* __ASM_PGTABLE_H */
diff --git a/arch/arm64/kernel/efi.c b/arch/arm64/kernel/efi.c
index 0228001347be..44288a12fc6c 100644
--- a/arch/arm64/kernel/efi.c
+++ b/arch/arm64/kernel/efi.c
@@ -111,7 +111,7 @@ static int __init set_permissions(pte_t *ptep, unsigned long addr, void *data)
 		pte = set_pte_bit(pte, __pgprot(PTE_PXN));
 	else if (system_supports_bti_kernel() && spd->has_bti)
 		pte = set_pte_bit(pte, __pgprot(PTE_GP));
-	set_pte(ptep, pte);
+	__set_pte(ptep, pte);
 	return 0;
 }
 
diff --git a/arch/arm64/mm/fixmap.c b/arch/arm64/mm/fixmap.c
index c0a3301203bd..51cd4501816d 100644
--- a/arch/arm64/mm/fixmap.c
+++ b/arch/arm64/mm/fixmap.c
@@ -121,7 +121,7 @@ void __set_fixmap(enum fixed_addresses idx,
 	ptep = fixmap_pte(addr);
 
 	if (pgprot_val(flags)) {
-		set_pte(ptep, pfn_pte(phys >> PAGE_SHIFT, flags));
+		__set_pte(ptep, pfn_pte(phys >> PAGE_SHIFT, flags));
 	} else {
 		pte_clear(&init_mm, addr, ptep);
 		flush_tlb_kernel_range(addr, addr+PAGE_SIZE);
diff --git a/arch/arm64/mm/kasan_init.c b/arch/arm64/mm/kasan_init.c
index 555285ebd5af..5eade712e9e5 100644
--- a/arch/arm64/mm/kasan_init.c
+++ b/arch/arm64/mm/kasan_init.c
@@ -112,7 +112,7 @@ static void __init kasan_pte_populate(pmd_t *pmdp, unsigned long addr,
 		if (!early)
 			memset(__va(page_phys), KASAN_SHADOW_INIT, PAGE_SIZE);
 		next = addr + PAGE_SIZE;
-		set_pte(ptep, pfn_pte(__phys_to_pfn(page_phys), PAGE_KERNEL));
+		__set_pte(ptep, pfn_pte(__phys_to_pfn(page_phys), PAGE_KERNEL));
 	} while (ptep++, addr = next, addr != end && pte_none(READ_ONCE(*ptep)));
 }
 
@@ -266,7 +266,7 @@ static void __init kasan_init_shadow(void)
 	 * so we should make sure that it maps the zero page read-only.
 	 */
 	for (i = 0; i < PTRS_PER_PTE; i++)
-		set_pte(&kasan_early_shadow_pte[i],
+		__set_pte(&kasan_early_shadow_pte[i],
 			pfn_pte(sym_to_pfn(kasan_early_shadow_page),
 				PAGE_KERNEL_RO));
 
diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
index 15f6347d23b6..e884279b268e 100644
--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
@@ -178,7 +178,7 @@ static void init_pte(pmd_t *pmdp, unsigned long addr, unsigned long end,
 	do {
 		pte_t old_pte = READ_ONCE(*ptep);
 
-		set_pte(ptep, pfn_pte(__phys_to_pfn(phys), prot));
+		__set_pte(ptep, pfn_pte(__phys_to_pfn(phys), prot));
 
 		/*
 		 * After the PTE entry has been populated once, we
diff --git a/arch/arm64/mm/pageattr.c b/arch/arm64/mm/pageattr.c
index 8e2017ba5f1b..057097acf9e0 100644
--- a/arch/arm64/mm/pageattr.c
+++ b/arch/arm64/mm/pageattr.c
@@ -41,7 +41,7 @@ static int change_page_range(pte_t *ptep, unsigned long addr, void *data)
 	pte = clear_pte_bit(pte, cdata->clear_mask);
 	pte = set_pte_bit(pte, cdata->set_mask);
 
-	set_pte(ptep, pte);
+	__set_pte(ptep, pte);
 	return 0;
 }
 
diff --git a/arch/arm64/mm/trans_pgd.c b/arch/arm64/mm/trans_pgd.c
index 7b14df3c6477..230b607cf881 100644
--- a/arch/arm64/mm/trans_pgd.c
+++ b/arch/arm64/mm/trans_pgd.c
@@ -41,7 +41,7 @@ static void _copy_pte(pte_t *dst_ptep, pte_t *src_ptep, unsigned long addr)
 		 * read only (code, rodata). Clear the RDONLY bit from
 		 * the temporary mappings we use during restore.
 		 */
-		set_pte(dst_ptep, pte_mkwrite_novma(pte));
+		__set_pte(dst_ptep, pte_mkwrite_novma(pte));
 	} else if ((debug_pagealloc_enabled() ||
 		   is_kfence_address((void *)addr)) && !pte_none(pte)) {
 		/*
@@ -55,7 +55,7 @@ static void _copy_pte(pte_t *dst_ptep, pte_t *src_ptep, unsigned long addr)
 		 */
 		BUG_ON(!pfn_valid(pte_pfn(pte)));
 
-		set_pte(dst_ptep, pte_mkpresent(pte_mkwrite_novma(pte)));
+		__set_pte(dst_ptep, pte_mkpresent(pte_mkwrite_novma(pte)));
 	}
 }