From patchwork Thu Jun 22 14:41:56 2023
X-Patchwork-Submitter: Ryan Roberts
X-Patchwork-Id: 13289249
From: Ryan Roberts <ryan.roberts@arm.com>
To: Catalin Marinas, Will Deacon, Ard Biesheuvel, Marc Zyngier,
 Oliver Upton, James Morse, Suzuki K Poulose, Zenghui Yu,
 Andrey Ryabinin, Alexander Potapenko, Andrey Konovalov,
 Dmitry Vyukov, Vincenzo Frascino, Andrew Morton,
 Anshuman Khandual, Matthew Wilcox, Yu Zhao, Mark Rutland
Cc: Ryan Roberts, linux-arm-kernel@lists.infradead.org,
 linux-kernel@vger.kernel.org,
 linux-mm@kvack.org
Subject: [PATCH v1 01/14] arm64/mm: set_pte(): New layer to manage contig bit
Date: Thu, 22 Jun 2023 15:41:56 +0100
Message-Id: <20230622144210.2623299-2-ryan.roberts@arm.com>
In-Reply-To: <20230622144210.2623299-1-ryan.roberts@arm.com>
References: <20230622144210.2623299-1-ryan.roberts@arm.com>

Create a new layer for the in-table PTE manipulation APIs. For now, the
existing API is prefixed with a double underscore to become the
arch-private API, and the public API is just a simple wrapper that calls
the private API.

The public API implementation will subsequently be used to transparently
manipulate the contiguous bit where appropriate. But since there are
already some contig-aware users (e.g. hugetlb, kernel mapper), we must
first ensure those users call the private API directly, so that future
contig-bit manipulations in the public API do not interfere with those
existing uses.

No behavioural changes intended.
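As a minimal sketch of the layering this patch introduces (taken from the
diff below; the transparent contig-bit handling mentioned in the comment
is expected from later patches in this series and is an assumption here,
not part of this patch):

	/* Arch-private primitive: performs the actual PTE store. */
	static inline void __set_pte(pte_t *ptep, pte_t pte)
	{
		WRITE_ONCE(*ptep, pte);
		...
	}

	/* Public API seen by core-mm: a plain pass-through for now;
	 * later it becomes the place where the contiguous bit is
	 * applied transparently. */
	static inline void set_pte(pte_t *ptep, pte_t pte)
	{
		__set_pte(ptep, pte);
	}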
Signed-off-by: Ryan Roberts <ryan.roberts@arm.com>
---
 arch/arm64/include/asm/pgtable.h | 23 ++++++++++++++++++++---
 arch/arm64/kernel/efi.c          |  2 +-
 arch/arm64/mm/fixmap.c           |  2 +-
 arch/arm64/mm/kasan_init.c       |  4 ++--
 arch/arm64/mm/mmu.c              |  2 +-
 arch/arm64/mm/pageattr.c         |  2 +-
 arch/arm64/mm/trans_pgd.c        |  4 ++--
 7 files changed, 28 insertions(+), 11 deletions(-)

diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
index 6fd012663a01..7f5ce5687466 100644
--- a/arch/arm64/include/asm/pgtable.h
+++ b/arch/arm64/include/asm/pgtable.h
@@ -93,7 +93,8 @@ static inline pteval_t __phys_to_pte_val(phys_addr_t phys)
 	__pte(__phys_to_pte_val((phys_addr_t)(pfn) << PAGE_SHIFT) | pgprot_val(prot))
 
 #define pte_none(pte)		(!pte_val(pte))
-#define pte_clear(mm,addr,ptep)	set_pte(ptep, __pte(0))
+#define pte_clear(mm, addr, ptep) \
+				__set_pte(ptep, __pte(0))
 #define pte_page(pte)		(pfn_to_page(pte_pfn(pte)))
 
 /*
@@ -260,7 +261,7 @@ static inline pte_t pte_mkdevmap(pte_t pte)
 	return set_pte_bit(pte, __pgprot(PTE_DEVMAP | PTE_SPECIAL));
 }
 
-static inline void set_pte(pte_t *ptep, pte_t pte)
+static inline void __set_pte(pte_t *ptep, pte_t pte)
 {
 	WRITE_ONCE(*ptep, pte);
 
@@ -352,7 +353,7 @@ static inline void __set_pte_at(struct mm_struct *mm, unsigned long addr,
 
 	__check_safe_pte_update(mm, ptep, pte);
 
-	set_pte(ptep, pte);
+	__set_pte(ptep, pte);
 }
 
 static inline void set_ptes(struct mm_struct *mm, unsigned long addr,
@@ -1117,6 +1118,22 @@ extern pte_t ptep_modify_prot_start(struct vm_area_struct *vma,
 extern void ptep_modify_prot_commit(struct vm_area_struct *vma,
 				    unsigned long addr, pte_t *ptep,
 				    pte_t old_pte, pte_t new_pte);
+
+/*
+ * The below functions constitute the public API that arm64 presents to the
+ * core-mm to manipulate PTE entries within their page tables (or at least
+ * this is the subset of the API that arm64 needs to implement). These public
+ * versions will automatically and transparently apply the contiguous bit where
+ * it makes sense to do so. Therefore any users that are contig-aware (e.g.
+ * hugetlb, kernel mapper) should NOT use these APIs, but instead use the
+ * private versions, which are prefixed with double underscore.
+ */
+
+static inline void set_pte(pte_t *ptep, pte_t pte)
+{
+	__set_pte(ptep, pte);
+}
+
 #endif /* !__ASSEMBLY__ */
 
 #endif /* __ASM_PGTABLE_H */
diff --git a/arch/arm64/kernel/efi.c b/arch/arm64/kernel/efi.c
index baab8dd3ead3..7a28b6a08a82 100644
--- a/arch/arm64/kernel/efi.c
+++ b/arch/arm64/kernel/efi.c
@@ -115,7 +115,7 @@ static int __init set_permissions(pte_t *ptep, unsigned long addr, void *data)
 	else if (IS_ENABLED(CONFIG_ARM64_BTI_KERNEL) &&
 		 system_supports_bti() && spd->has_bti)
 		pte = set_pte_bit(pte, __pgprot(PTE_GP));
-	set_pte(ptep, pte);
+	__set_pte(ptep, pte);
 	return 0;
 }
 
diff --git a/arch/arm64/mm/fixmap.c b/arch/arm64/mm/fixmap.c
index c0a3301203bd..51cd4501816d 100644
--- a/arch/arm64/mm/fixmap.c
+++ b/arch/arm64/mm/fixmap.c
@@ -121,7 +121,7 @@ void __set_fixmap(enum fixed_addresses idx,
 	ptep = fixmap_pte(addr);
 
 	if (pgprot_val(flags)) {
-		set_pte(ptep, pfn_pte(phys >> PAGE_SHIFT, flags));
+		__set_pte(ptep, pfn_pte(phys >> PAGE_SHIFT, flags));
 	} else {
 		pte_clear(&init_mm, addr, ptep);
 		flush_tlb_kernel_range(addr, addr+PAGE_SIZE);
diff --git a/arch/arm64/mm/kasan_init.c b/arch/arm64/mm/kasan_init.c
index e969e68de005..40125b217195 100644
--- a/arch/arm64/mm/kasan_init.c
+++ b/arch/arm64/mm/kasan_init.c
@@ -112,7 +112,7 @@ static void __init kasan_pte_populate(pmd_t *pmdp, unsigned long addr,
 		if (!early)
 			memset(__va(page_phys), KASAN_SHADOW_INIT, PAGE_SIZE);
 		next = addr + PAGE_SIZE;
-		set_pte(ptep, pfn_pte(__phys_to_pfn(page_phys), PAGE_KERNEL));
+		__set_pte(ptep, pfn_pte(__phys_to_pfn(page_phys), PAGE_KERNEL));
 	} while (ptep++, addr = next, addr != end && pte_none(READ_ONCE(*ptep)));
 }
 
@@ -275,7 +275,7 @@ static void __init kasan_init_shadow(void)
 	 * so we should make sure that it maps the zero page read-only.
 	 */
 	for (i = 0; i < PTRS_PER_PTE; i++)
-		set_pte(&kasan_early_shadow_pte[i],
+		__set_pte(&kasan_early_shadow_pte[i],
 			pfn_pte(sym_to_pfn(kasan_early_shadow_page),
 				PAGE_KERNEL_RO));
 
diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
index af6bc8403ee4..c84dc87d08b9 100644
--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
@@ -178,7 +178,7 @@ static void init_pte(pmd_t *pmdp, unsigned long addr, unsigned long end,
 	do {
 		pte_t old_pte = READ_ONCE(*ptep);
 
-		set_pte(ptep, pfn_pte(__phys_to_pfn(phys), prot));
+		__set_pte(ptep, pfn_pte(__phys_to_pfn(phys), prot));
 
 		/*
 		 * After the PTE entry has been populated once, we
diff --git a/arch/arm64/mm/pageattr.c b/arch/arm64/mm/pageattr.c
index 8e2017ba5f1b..057097acf9e0 100644
--- a/arch/arm64/mm/pageattr.c
+++ b/arch/arm64/mm/pageattr.c
@@ -41,7 +41,7 @@ static int change_page_range(pte_t *ptep, unsigned long addr, void *data)
 	pte = clear_pte_bit(pte, cdata->clear_mask);
 	pte = set_pte_bit(pte, cdata->set_mask);
 
-	set_pte(ptep, pte);
+	__set_pte(ptep, pte);
 	return 0;
 }
 
diff --git a/arch/arm64/mm/trans_pgd.c b/arch/arm64/mm/trans_pgd.c
index 4ea2eefbc053..f9997b226614 100644
--- a/arch/arm64/mm/trans_pgd.c
+++ b/arch/arm64/mm/trans_pgd.c
@@ -40,7 +40,7 @@ static void _copy_pte(pte_t *dst_ptep, pte_t *src_ptep, unsigned long addr)
		 * read only (code, rodata). Clear the RDONLY bit from
		 * the temporary mappings we use during restore.
		 */
-		set_pte(dst_ptep, pte_mkwrite(pte));
+		__set_pte(dst_ptep, pte_mkwrite(pte));
 	} else if (debug_pagealloc_enabled() && !pte_none(pte)) {
 		/*
 		 * debug_pagealloc will removed the PTE_VALID bit if
@@ -53,7 +53,7 @@ static void _copy_pte(pte_t *dst_ptep, pte_t *src_ptep, unsigned long addr)
 		 */
 		BUG_ON(!pfn_valid(pte_pfn(pte)));
 
-		set_pte(dst_ptep, pte_mkpresent(pte_mkwrite(pte)));
+		__set_pte(dst_ptep, pte_mkpresent(pte_mkwrite(pte)));
 	}
 }
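
---

For reviewers, a hedged sketch of the calling convention this patch
establishes; the surrounding pfn/prot values are illustrative
assumptions, not lines from the patch:

	/* Contig-aware callers (e.g. hugetlb, the kernel mapper)
	 * use the arch-private API directly, so future contig-bit
	 * logic in the public wrapper cannot interfere with their
	 * mappings. */
	__set_pte(ptep, pfn_pte(pfn, PAGE_KERNEL));

	/* Core-mm-facing paths keep using the public API, which may
	 * later apply the contiguous bit transparently. */
	set_pte(ptep, pfn_pte(pfn, PAGE_KERNEL));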