From patchwork Wed Mar 13 12:56:59 2024
X-Patchwork-Submitter: Pingfan Liu
X-Patchwork-Id: 13591385
From: Pingfan Liu
To: linux-arm-kernel@lists.infradead.org
Cc: Pingfan Liu, Ard Biesheuvel, Catalin Marinas, Will Deacon, Mark Rutland
Subject: [PATCH 01/10] arm64: mm: Split out routines for code reuse
Date: Wed, 13 Mar 2024 20:56:59 +0800
Message-ID: <20240313125711.20651-2-piliu@redhat.com>
In-Reply-To: <20240313125711.20651-1-piliu@redhat.com> References: <20240313125711.20651-1-piliu@redhat.com> MIME-Version: 1.0 X-Scanned-By: MIMEDefang 3.4.1 on 10.11.54.5 X-Mimecast-Spam-Score: 0 X-Mimecast-Originator: redhat.com X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20240313_055921_293280_B617FEAF X-CRM114-Status: GOOD ( 24.30 ) X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org The split out routines will have a dedicated file scope and not interfere with each other. Signed-off-by: Pingfan Liu Cc: Ard Biesheuvel Cc: Catalin Marinas Cc: Will Deacon Cc: Mark Rutland To: linux-arm-kernel@lists.infradead.org --- arch/arm64/mm/mmu.c | 253 +-------------------------------------- arch/arm64/mm/mmu_inc.c | 255 ++++++++++++++++++++++++++++++++++++++++ 2 files changed, 256 insertions(+), 252 deletions(-) create mode 100644 arch/arm64/mm/mmu_inc.c diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c index 15f6347d23b6..870be374f458 100644 --- a/arch/arm64/mm/mmu.c +++ b/arch/arm64/mm/mmu.c @@ -169,230 +169,7 @@ bool pgattr_change_is_safe(u64 old, u64 new) return ((old ^ new) & ~mask) == 0; } -static void init_pte(pmd_t *pmdp, unsigned long addr, unsigned long end, - phys_addr_t phys, pgprot_t prot) -{ - pte_t *ptep; - - ptep = pte_set_fixmap_offset(pmdp, addr); - do { - pte_t old_pte = READ_ONCE(*ptep); - - set_pte(ptep, pfn_pte(__phys_to_pfn(phys), prot)); - - /* - * After the PTE entry has been populated once, we - * only allow updates to the permission attributes. - */ - BUG_ON(!pgattr_change_is_safe(pte_val(old_pte), - READ_ONCE(pte_val(*ptep)))); - - phys += PAGE_SIZE; - } while (ptep++, addr += PAGE_SIZE, addr != end); - - pte_clear_fixmap(); -} - -static void alloc_init_cont_pte(pmd_t *pmdp, unsigned long addr, - unsigned long end, phys_addr_t phys, - pgprot_t prot, - phys_addr_t (*pgtable_alloc)(int), - int flags) -{ - unsigned long next; - pmd_t pmd = READ_ONCE(*pmdp); - - BUG_ON(pmd_sect(pmd)); - if (pmd_none(pmd)) { - pmdval_t pmdval = PMD_TYPE_TABLE | PMD_TABLE_UXN; - phys_addr_t pte_phys; - - if (flags & NO_EXEC_MAPPINGS) - pmdval |= PMD_TABLE_PXN; - BUG_ON(!pgtable_alloc); - pte_phys = pgtable_alloc(PAGE_SHIFT); - __pmd_populate(pmdp, pte_phys, pmdval); - pmd = READ_ONCE(*pmdp); - } - BUG_ON(pmd_bad(pmd)); - - do { - pgprot_t __prot = prot; - - next = pte_cont_addr_end(addr, end); - - /* use a contiguous mapping if the range is suitably aligned */ - if ((((addr | next | phys) & ~CONT_PTE_MASK) == 0) && - (flags & NO_CONT_MAPPINGS) == 0) - __prot = __pgprot(pgprot_val(prot) | PTE_CONT); - - init_pte(pmdp, addr, next, phys, __prot); - - phys += next - addr; - } while (addr = next, addr != end); -} - -static void init_pmd(pud_t *pudp, unsigned long addr, unsigned long end, - phys_addr_t phys, pgprot_t prot, - phys_addr_t (*pgtable_alloc)(int), int flags) -{ - unsigned long next; - pmd_t *pmdp; - - pmdp = pmd_set_fixmap_offset(pudp, addr); - do { - pmd_t old_pmd = READ_ONCE(*pmdp); - - next = pmd_addr_end(addr, end); - - /* try section mapping first */ - if (((addr | next | phys) & ~PMD_MASK) == 0 && - (flags & NO_BLOCK_MAPPINGS) == 0) { - pmd_set_huge(pmdp, phys, prot); - - /* - * After the PMD entry has been populated once, we - * only allow updates to the permission attributes. 
- */ - BUG_ON(!pgattr_change_is_safe(pmd_val(old_pmd), - READ_ONCE(pmd_val(*pmdp)))); - } else { - alloc_init_cont_pte(pmdp, addr, next, phys, prot, - pgtable_alloc, flags); - - BUG_ON(pmd_val(old_pmd) != 0 && - pmd_val(old_pmd) != READ_ONCE(pmd_val(*pmdp))); - } - phys += next - addr; - } while (pmdp++, addr = next, addr != end); - - pmd_clear_fixmap(); -} - -static void alloc_init_cont_pmd(pud_t *pudp, unsigned long addr, - unsigned long end, phys_addr_t phys, - pgprot_t prot, - phys_addr_t (*pgtable_alloc)(int), int flags) -{ - unsigned long next; - pud_t pud = READ_ONCE(*pudp); - - /* - * Check for initial section mappings in the pgd/pud. - */ - BUG_ON(pud_sect(pud)); - if (pud_none(pud)) { - pudval_t pudval = PUD_TYPE_TABLE | PUD_TABLE_UXN; - phys_addr_t pmd_phys; - - if (flags & NO_EXEC_MAPPINGS) - pudval |= PUD_TABLE_PXN; - BUG_ON(!pgtable_alloc); - pmd_phys = pgtable_alloc(PMD_SHIFT); - __pud_populate(pudp, pmd_phys, pudval); - pud = READ_ONCE(*pudp); - } - BUG_ON(pud_bad(pud)); - - do { - pgprot_t __prot = prot; - - next = pmd_cont_addr_end(addr, end); - - /* use a contiguous mapping if the range is suitably aligned */ - if ((((addr | next | phys) & ~CONT_PMD_MASK) == 0) && - (flags & NO_CONT_MAPPINGS) == 0) - __prot = __pgprot(pgprot_val(prot) | PTE_CONT); - - init_pmd(pudp, addr, next, phys, __prot, pgtable_alloc, flags); - - phys += next - addr; - } while (addr = next, addr != end); -} - -static void alloc_init_pud(pgd_t *pgdp, unsigned long addr, unsigned long end, - phys_addr_t phys, pgprot_t prot, - phys_addr_t (*pgtable_alloc)(int), - int flags) -{ - unsigned long next; - pud_t *pudp; - p4d_t *p4dp = p4d_offset(pgdp, addr); - p4d_t p4d = READ_ONCE(*p4dp); - - if (p4d_none(p4d)) { - p4dval_t p4dval = P4D_TYPE_TABLE | P4D_TABLE_UXN; - phys_addr_t pud_phys; - - if (flags & NO_EXEC_MAPPINGS) - p4dval |= P4D_TABLE_PXN; - BUG_ON(!pgtable_alloc); - pud_phys = pgtable_alloc(PUD_SHIFT); - __p4d_populate(p4dp, pud_phys, p4dval); - p4d = READ_ONCE(*p4dp); - } - BUG_ON(p4d_bad(p4d)); - - pudp = pud_set_fixmap_offset(p4dp, addr); - do { - pud_t old_pud = READ_ONCE(*pudp); - - next = pud_addr_end(addr, end); - - /* - * For 4K granule only, attempt to put down a 1GB block - */ - if (pud_sect_supported() && - ((addr | next | phys) & ~PUD_MASK) == 0 && - (flags & NO_BLOCK_MAPPINGS) == 0) { - pud_set_huge(pudp, phys, prot); - - /* - * After the PUD entry has been populated once, we - * only allow updates to the permission attributes. - */ - BUG_ON(!pgattr_change_is_safe(pud_val(old_pud), - READ_ONCE(pud_val(*pudp)))); - } else { - alloc_init_cont_pmd(pudp, addr, next, phys, prot, - pgtable_alloc, flags); - - BUG_ON(pud_val(old_pud) != 0 && - pud_val(old_pud) != READ_ONCE(pud_val(*pudp))); - } - phys += next - addr; - } while (pudp++, addr = next, addr != end); - - pud_clear_fixmap(); -} - -static void __create_pgd_mapping_locked(pgd_t *pgdir, phys_addr_t phys, - unsigned long virt, phys_addr_t size, - pgprot_t prot, - phys_addr_t (*pgtable_alloc)(int), - int flags) -{ - unsigned long addr, end, next; - pgd_t *pgdp = pgd_offset_pgd(pgdir, virt); - - /* - * If the virtual and physical address don't have the same offset - * within a page, we cannot map the region as the caller expects. 
- */ - if (WARN_ON((phys ^ virt) & ~PAGE_MASK)) - return; - - phys &= PAGE_MASK; - addr = virt & PAGE_MASK; - end = PAGE_ALIGN(virt + size); - - do { - next = pgd_addr_end(addr, end); - alloc_init_pud(pgdp, addr, next, phys, prot, pgtable_alloc, - flags); - phys += next - addr; - } while (pgdp++, addr = next, addr != end); -} +#include "mmu_inc.c" static void __create_pgd_mapping(pgd_t *pgdir, phys_addr_t phys, unsigned long virt, phys_addr_t size, @@ -1168,34 +945,6 @@ void vmemmap_free(unsigned long start, unsigned long end, } #endif /* CONFIG_MEMORY_HOTPLUG */ -int pud_set_huge(pud_t *pudp, phys_addr_t phys, pgprot_t prot) -{ - pud_t new_pud = pfn_pud(__phys_to_pfn(phys), mk_pud_sect_prot(prot)); - - /* Only allow permission changes for now */ - if (!pgattr_change_is_safe(READ_ONCE(pud_val(*pudp)), - pud_val(new_pud))) - return 0; - - VM_BUG_ON(phys & ~PUD_MASK); - set_pud(pudp, new_pud); - return 1; -} - -int pmd_set_huge(pmd_t *pmdp, phys_addr_t phys, pgprot_t prot) -{ - pmd_t new_pmd = pfn_pmd(__phys_to_pfn(phys), mk_pmd_sect_prot(prot)); - - /* Only allow permission changes for now */ - if (!pgattr_change_is_safe(READ_ONCE(pmd_val(*pmdp)), - pmd_val(new_pmd))) - return 0; - - VM_BUG_ON(phys & ~PMD_MASK); - set_pmd(pmdp, new_pmd); - return 1; -} - int pud_clear_huge(pud_t *pudp) { if (!pud_sect(READ_ONCE(*pudp))) diff --git a/arch/arm64/mm/mmu_inc.c b/arch/arm64/mm/mmu_inc.c new file mode 100644 index 000000000000..dcd97eea0726 --- /dev/null +++ b/arch/arm64/mm/mmu_inc.c @@ -0,0 +1,255 @@ +// SPDX-License-Identifier: GPL-2.0-only + +int pud_set_huge(pud_t *pudp, phys_addr_t phys, pgprot_t prot) +{ + pud_t new_pud = pfn_pud(__phys_to_pfn(phys), mk_pud_sect_prot(prot)); + + /* Only allow permission changes for now */ + if (!pgattr_change_is_safe(READ_ONCE(pud_val(*pudp)), + pud_val(new_pud))) + return 0; + + VM_BUG_ON(phys & ~PUD_MASK); + set_pud(pudp, new_pud); + return 1; +} + +int pmd_set_huge(pmd_t *pmdp, phys_addr_t phys, pgprot_t prot) +{ + pmd_t new_pmd = pfn_pmd(__phys_to_pfn(phys), mk_pmd_sect_prot(prot)); + + /* Only allow permission changes for now */ + if (!pgattr_change_is_safe(READ_ONCE(pmd_val(*pmdp)), + pmd_val(new_pmd))) + return 0; + + VM_BUG_ON(phys & ~PMD_MASK); + set_pmd(pmdp, new_pmd); + return 1; +} + +static void init_pte(pmd_t *pmdp, unsigned long addr, unsigned long end, + phys_addr_t phys, pgprot_t prot) +{ + pte_t *ptep; + + ptep = pte_set_fixmap_offset(pmdp, addr); + do { + pte_t old_pte = READ_ONCE(*ptep); + + set_pte(ptep, pfn_pte(__phys_to_pfn(phys), prot)); + + /* + * After the PTE entry has been populated once, we + * only allow updates to the permission attributes. 
+ */ + BUG_ON(!pgattr_change_is_safe(pte_val(old_pte), + READ_ONCE(pte_val(*ptep)))); + + phys += PAGE_SIZE; + } while (ptep++, addr += PAGE_SIZE, addr != end); + + pte_clear_fixmap(); +} + +static void alloc_init_cont_pte(pmd_t *pmdp, unsigned long addr, + unsigned long end, phys_addr_t phys, + pgprot_t prot, + phys_addr_t (*pgtable_alloc)(int), + int flags) +{ + unsigned long next; + pmd_t pmd = READ_ONCE(*pmdp); + + BUG_ON(pmd_sect(pmd)); + if (pmd_none(pmd)) { + pmdval_t pmdval = PMD_TYPE_TABLE | PMD_TABLE_UXN; + phys_addr_t pte_phys; + + if (flags & NO_EXEC_MAPPINGS) + pmdval |= PMD_TABLE_PXN; + BUG_ON(!pgtable_alloc); + pte_phys = pgtable_alloc(PAGE_SHIFT); + __pmd_populate(pmdp, pte_phys, pmdval); + pmd = READ_ONCE(*pmdp); + } + BUG_ON(pmd_bad(pmd)); + + do { + pgprot_t __prot = prot; + + next = pte_cont_addr_end(addr, end); + + /* use a contiguous mapping if the range is suitably aligned */ + if ((((addr | next | phys) & ~CONT_PTE_MASK) == 0) && + (flags & NO_CONT_MAPPINGS) == 0) + __prot = __pgprot(pgprot_val(prot) | PTE_CONT); + + init_pte(pmdp, addr, next, phys, __prot); + + phys += next - addr; + } while (addr = next, addr != end); +} + +static void init_pmd(pud_t *pudp, unsigned long addr, unsigned long end, + phys_addr_t phys, pgprot_t prot, + phys_addr_t (*pgtable_alloc)(int), int flags) +{ + unsigned long next; + pmd_t *pmdp; + + pmdp = pmd_set_fixmap_offset(pudp, addr); + do { + pmd_t old_pmd = READ_ONCE(*pmdp); + + next = pmd_addr_end(addr, end); + + /* try section mapping first */ + if (((addr | next | phys) & ~PMD_MASK) == 0 && + (flags & NO_BLOCK_MAPPINGS) == 0) { + pmd_set_huge(pmdp, phys, prot); + + /* + * After the PMD entry has been populated once, we + * only allow updates to the permission attributes. + */ + BUG_ON(!pgattr_change_is_safe(pmd_val(old_pmd), + READ_ONCE(pmd_val(*pmdp)))); + } else { + alloc_init_cont_pte(pmdp, addr, next, phys, prot, + pgtable_alloc, flags); + + BUG_ON(pmd_val(old_pmd) != 0 && + pmd_val(old_pmd) != READ_ONCE(pmd_val(*pmdp))); + } + phys += next - addr; + } while (pmdp++, addr = next, addr != end); + + pmd_clear_fixmap(); +} + +static void alloc_init_cont_pmd(pud_t *pudp, unsigned long addr, + unsigned long end, phys_addr_t phys, + pgprot_t prot, + phys_addr_t (*pgtable_alloc)(int), int flags) +{ + unsigned long next; + pud_t pud = READ_ONCE(*pudp); + + /* + * Check for initial section mappings in the pgd/pud. 
+ */ + BUG_ON(pud_sect(pud)); + if (pud_none(pud)) { + pudval_t pudval = PUD_TYPE_TABLE | PUD_TABLE_UXN; + phys_addr_t pmd_phys; + + if (flags & NO_EXEC_MAPPINGS) + pudval |= PUD_TABLE_PXN; + BUG_ON(!pgtable_alloc); + pmd_phys = pgtable_alloc(PMD_SHIFT); + __pud_populate(pudp, pmd_phys, pudval); + pud = READ_ONCE(*pudp); + } + BUG_ON(pud_bad(pud)); + + do { + pgprot_t __prot = prot; + + next = pmd_cont_addr_end(addr, end); + + /* use a contiguous mapping if the range is suitably aligned */ + if ((((addr | next | phys) & ~CONT_PMD_MASK) == 0) && + (flags & NO_CONT_MAPPINGS) == 0) + __prot = __pgprot(pgprot_val(prot) | PTE_CONT); + + init_pmd(pudp, addr, next, phys, __prot, pgtable_alloc, flags); + + phys += next - addr; + } while (addr = next, addr != end); +} + +static void alloc_init_pud(pgd_t *pgdp, unsigned long addr, unsigned long end, + phys_addr_t phys, pgprot_t prot, + phys_addr_t (*pgtable_alloc)(int), + int flags) +{ + unsigned long next; + pud_t *pudp; + p4d_t *p4dp = p4d_offset(pgdp, addr); + p4d_t p4d = READ_ONCE(*p4dp); + + if (p4d_none(p4d)) { + p4dval_t p4dval = P4D_TYPE_TABLE | P4D_TABLE_UXN; + phys_addr_t pud_phys; + + if (flags & NO_EXEC_MAPPINGS) + p4dval |= P4D_TABLE_PXN; + BUG_ON(!pgtable_alloc); + pud_phys = pgtable_alloc(PUD_SHIFT); + __p4d_populate(p4dp, pud_phys, p4dval); + p4d = READ_ONCE(*p4dp); + } + BUG_ON(p4d_bad(p4d)); + + pudp = pud_set_fixmap_offset(p4dp, addr); + do { + pud_t old_pud = READ_ONCE(*pudp); + + next = pud_addr_end(addr, end); + + /* + * For 4K granule only, attempt to put down a 1GB block + */ + if (pud_sect_supported() && + ((addr | next | phys) & ~PUD_MASK) == 0 && + (flags & NO_BLOCK_MAPPINGS) == 0) { + pud_set_huge(pudp, phys, prot); + + /* + * After the PUD entry has been populated once, we + * only allow updates to the permission attributes. + */ + BUG_ON(!pgattr_change_is_safe(pud_val(old_pud), + READ_ONCE(pud_val(*pudp)))); + } else { + alloc_init_cont_pmd(pudp, addr, next, phys, prot, + pgtable_alloc, flags); + + BUG_ON(pud_val(old_pud) != 0 && + pud_val(old_pud) != READ_ONCE(pud_val(*pudp))); + } + phys += next - addr; + } while (pudp++, addr = next, addr != end); + + pud_clear_fixmap(); +} + +static void __create_pgd_mapping_locked(pgd_t *pgdir, phys_addr_t phys, + unsigned long virt, phys_addr_t size, + pgprot_t prot, + phys_addr_t (*pgtable_alloc)(int), + int flags) +{ + unsigned long addr, end, next; + pgd_t *pgdp = pgd_offset_pgd(pgdir, virt); + + /* + * If the virtual and physical address don't have the same offset + * within a page, we cannot map the region as the caller expects. 
+ */ + if (WARN_ON((phys ^ virt) & ~PAGE_MASK)) + return; + + phys &= PAGE_MASK; + addr = virt & PAGE_MASK; + end = PAGE_ALIGN(virt + size); + + do { + next = pgd_addr_end(addr, end); + alloc_init_pud(pgdp, addr, next, phys, prot, pgtable_alloc, + flags); + phys += next - addr; + } while (pgdp++, addr = next, addr != end); +} + From patchwork Wed Mar 13 12:57:00 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Pingfan Liu X-Patchwork-Id: 13591393 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 9F816C54791 for ; Wed, 13 Mar 2024 13:02:56 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20210309; h=Sender: Content-Transfer-Encoding:Content-Type:List-Subscribe:List-Help:List-Post: List-Archive:List-Unsubscribe:List-Id:MIME-Version:References:In-Reply-To: Message-ID:Date:Subject:Cc:To:From:Reply-To:Content-ID:Content-Description: Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID: List-Owner; bh=b5OelcbRDC53NpiWnxwRyWaGhCqez2bYZPra7e83GPI=; b=YZ9NW0Q935Mm42 VfwwNaA4qA546laWk+cLypA04Mcil+JcC9LSmueNdaFtK2x6FcQPm8rNZwoRZAceZURtXAqzITuIk GFE9Ua9kt4x2UMZ5nc9AlbYqeHU0zOf7VOMpj5qFK3VTEHIvk04+aiNNW4/crP1s4jrDFrV/9iEyo jluAl8AqHIQU8mVioudkttmzbmfShhEmUq0H9czRHp2S/V+y5gB9xHIt0RwhG4Fqpa/sIyJNVGaYs f5HH90AGm4zJdLN844RG3AYbG+liiYFnLBV5oAJBoE+FyN6nXn6uKPsW7Uq5TP4II6jvw8sNimBaC 2rLOfK4kF6j2/HNjkwag==; Received: from localhost ([::1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.97.1 #2 (Red Hat Linux)) id 1rkOFj-0000000AA0q-1qhG; Wed, 13 Mar 2024 13:02:39 +0000 Received: from us-smtp-delivery-124.mimecast.com ([170.10.133.124]) by bombadil.infradead.org with esmtps (Exim 4.97.1 #2 (Red Hat Linux)) id 1rkOFg-0000000AA05-356f for linux-arm-kernel@lists.infradead.org; Wed, 13 Mar 2024 13:02:38 +0000 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1710334955; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version:content-type:content-type: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=fYLe0aT8sUm59p0tKQcJHRPXsTmiAWxpPyjGCxUwStM=; b=JbkdhOpwIR5Bv45s4S0k3/D4H4z0/WTpre7UsPCYN7hk2qGjwSDnHLu9VQzhO9EAgQr0UV vELRtI9TdD4icsMfWYEt6/j2A+daEFoS/i/QaC/YzHYDl2CooPDat+DsR4yyhKiRXa+K1N //xQr2hoV7dytMhYKXQkZe7TChOl6lc= Received: from mimecast-mx02.redhat.com (mx-ext.redhat.com [66.187.233.73]) by relay.mimecast.com with ESMTP with STARTTLS (version=TLSv1.3, cipher=TLS_AES_256_GCM_SHA384) id us-mta-563-W-JmPx16PR6XbEdj-9vJFg-1; Wed, 13 Mar 2024 08:57:33 -0400 X-MC-Unique: W-JmPx16PR6XbEdj-9vJFg-1 Received: from smtp.corp.redhat.com (int-mx05.intmail.prod.int.rdu2.redhat.com [10.11.54.5]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (2048 bits) server-digest SHA256) (No client certificate requested) by mimecast-mx02.redhat.com (Postfix) with ESMTPS id 4465A3802123; Wed, 13 Mar 2024 12:57:33 +0000 (UTC) Received: from fedora.redhat.com (unknown [10.72.112.95]) by smtp.corp.redhat.com (Postfix) with ESMTP id C2B8810E47; Wed, 13 Mar 2024 
12:57:30 +0000 (UTC) From: Pingfan Liu To: linux-arm-kernel@lists.infradead.org Cc: Pingfan Liu , Ard Biesheuvel , Catalin Marinas , Will Deacon , Mark Rutland Subject: [PATCH 02/10] arm64: mm: Introduce mmu_head routines without instrumentation Date: Wed, 13 Mar 2024 20:57:00 +0800 Message-ID: <20240313125711.20651-3-piliu@redhat.com> In-Reply-To: <20240313125711.20651-1-piliu@redhat.com> References: <20240313125711.20651-1-piliu@redhat.com> MIME-Version: 1.0 X-Scanned-By: MIMEDefang 3.4.1 on 10.11.54.5 X-Mimecast-Spam-Score: 0 X-Mimecast-Originator: redhat.com X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20240313_060236_879546_A93BF3EE X-CRM114-Status: GOOD ( 23.06 ) X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org During the early boot stage, the instrumentation can not be handled. Use a macro INSTRUMENT_OPTION to switch on or off 'noinstr' on these routines. Signed-off-by: Pingfan Liu Cc: Ard Biesheuvel Cc: Catalin Marinas Cc: Will Deacon Cc: Mark Rutland To: linux-arm-kernel@lists.infradead.org --- arch/arm64/mm/Makefile | 2 +- arch/arm64/mm/mmu.c | 50 +++++++++-------------------- arch/arm64/mm/mmu_head.c | 19 +++++++++++ arch/arm64/mm/mmu_inc.c | 68 +++++++++++++++++++++++++++++++--------- 4 files changed, 87 insertions(+), 52 deletions(-) create mode 100644 arch/arm64/mm/mmu_head.c diff --git a/arch/arm64/mm/Makefile b/arch/arm64/mm/Makefile index dbd1bc95967d..0d92fb24a398 100644 --- a/arch/arm64/mm/Makefile +++ b/arch/arm64/mm/Makefile @@ -2,7 +2,7 @@ obj-y := dma-mapping.o extable.o fault.o init.o \ cache.o copypage.o flush.o \ ioremap.o mmap.o pgd.o mmu.o \ - context.o proc.o pageattr.o fixmap.o + context.o proc.o pageattr.o fixmap.o mmu_head.o obj-$(CONFIG_HUGETLB_PAGE) += hugetlbpage.o obj-$(CONFIG_PTDUMP_CORE) += ptdump.o obj-$(CONFIG_PTDUMP_DEBUGFS) += ptdump_debugfs.o diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c index 870be374f458..80e49faaf066 100644 --- a/arch/arm64/mm/mmu.c +++ b/arch/arm64/mm/mmu.c @@ -131,46 +131,14 @@ static phys_addr_t __init early_pgtable_alloc(int shift) return phys; } +#define INSTRUMENT_OPTION +#include "mmu_inc.c" + bool pgattr_change_is_safe(u64 old, u64 new) { - /* - * The following mapping attributes may be updated in live - * kernel mappings without the need for break-before-make. - */ - pteval_t mask = PTE_PXN | PTE_RDONLY | PTE_WRITE | PTE_NG; - - /* creating or taking down mappings is always safe */ - if (!pte_valid(__pte(old)) || !pte_valid(__pte(new))) - return true; - - /* A live entry's pfn should not change */ - if (pte_pfn(__pte(old)) != pte_pfn(__pte(new))) - return false; - - /* live contiguous mappings may not be manipulated at all */ - if ((old | new) & PTE_CONT) - return false; - - /* Transitioning from Non-Global to Global is unsafe */ - if (old & ~new & PTE_NG) - return false; - - /* - * Changing the memory type between Normal and Normal-Tagged is safe - * since Tagged is considered a permission attribute from the - * mismatched attribute aliases perspective. 
- */ - if (((old & PTE_ATTRINDX_MASK) == PTE_ATTRINDX(MT_NORMAL) || - (old & PTE_ATTRINDX_MASK) == PTE_ATTRINDX(MT_NORMAL_TAGGED)) && - ((new & PTE_ATTRINDX_MASK) == PTE_ATTRINDX(MT_NORMAL) || - (new & PTE_ATTRINDX_MASK) == PTE_ATTRINDX(MT_NORMAL_TAGGED))) - mask |= PTE_ATTRINDX_MASK; - - return ((old ^ new) & ~mask) == 0; + return __pgattr_change_is_safe(old, new); } -#include "mmu_inc.c" - static void __create_pgd_mapping(pgd_t *pgdir, phys_addr_t phys, unsigned long virt, phys_addr_t size, pgprot_t prot, @@ -945,6 +913,16 @@ void vmemmap_free(unsigned long start, unsigned long end, } #endif /* CONFIG_MEMORY_HOTPLUG */ +int pud_set_huge(pud_t *pudp, phys_addr_t phys, pgprot_t prot) +{ + return __pud_set_huge(pudp, phys, prot); +} + +int pmd_set_huge(pmd_t *pmdp, phys_addr_t phys, pgprot_t prot) +{ + return __pmd_set_huge(pmdp, phys, prot); +} + int pud_clear_huge(pud_t *pudp) { if (!pud_sect(READ_ONCE(*pudp))) diff --git a/arch/arm64/mm/mmu_head.c b/arch/arm64/mm/mmu_head.c new file mode 100644 index 000000000000..4d65b7368db3 --- /dev/null +++ b/arch/arm64/mm/mmu_head.c @@ -0,0 +1,19 @@ +// SPDX-License-Identifier: GPL-2.0-only + +#include +#include +#include +#include +#include + +#define INSTRUMENT_OPTION __noinstr_section(".init.text.noinstr") +#include "mmu_inc.c" + +void INSTRUMENT_OPTION mmu_head_create_pgd_mapping(pgd_t *pgdir, phys_addr_t phys, + unsigned long virt, phys_addr_t size, + pgprot_t prot, + phys_addr_t (*pgtable_alloc)(int), + int flags) +{ + __create_pgd_mapping_locked(pgdir, phys, virt, size, prot, pgtable_alloc, flags); +} diff --git a/arch/arm64/mm/mmu_inc.c b/arch/arm64/mm/mmu_inc.c index dcd97eea0726..2535927d30ec 100644 --- a/arch/arm64/mm/mmu_inc.c +++ b/arch/arm64/mm/mmu_inc.c @@ -1,11 +1,49 @@ // SPDX-License-Identifier: GPL-2.0-only -int pud_set_huge(pud_t *pudp, phys_addr_t phys, pgprot_t prot) +static bool INSTRUMENT_OPTION __pgattr_change_is_safe(u64 old, u64 new) +{ + /* + * The following mapping attributes may be updated in live + * kernel mappings without the need for break-before-make. + */ + pteval_t mask = PTE_PXN | PTE_RDONLY | PTE_WRITE | PTE_NG; + + /* creating or taking down mappings is always safe */ + if (!pte_valid(__pte(old)) || !pte_valid(__pte(new))) + return true; + + /* A live entry's pfn should not change */ + if (pte_pfn(__pte(old)) != pte_pfn(__pte(new))) + return false; + + /* live contiguous mappings may not be manipulated at all */ + if ((old | new) & PTE_CONT) + return false; + + /* Transitioning from Non-Global to Global is unsafe */ + if (old & ~new & PTE_NG) + return false; + + /* + * Changing the memory type between Normal and Normal-Tagged is safe + * since Tagged is considered a permission attribute from the + * mismatched attribute aliases perspective. 
+ */ + if (((old & PTE_ATTRINDX_MASK) == PTE_ATTRINDX(MT_NORMAL) || + (old & PTE_ATTRINDX_MASK) == PTE_ATTRINDX(MT_NORMAL_TAGGED)) && + ((new & PTE_ATTRINDX_MASK) == PTE_ATTRINDX(MT_NORMAL) || + (new & PTE_ATTRINDX_MASK) == PTE_ATTRINDX(MT_NORMAL_TAGGED))) + mask |= PTE_ATTRINDX_MASK; + + return ((old ^ new) & ~mask) == 0; +} + +static int INSTRUMENT_OPTION __pud_set_huge(pud_t *pudp, phys_addr_t phys, pgprot_t prot) { pud_t new_pud = pfn_pud(__phys_to_pfn(phys), mk_pud_sect_prot(prot)); /* Only allow permission changes for now */ - if (!pgattr_change_is_safe(READ_ONCE(pud_val(*pudp)), + if (!__pgattr_change_is_safe(READ_ONCE(pud_val(*pudp)), pud_val(new_pud))) return 0; @@ -14,12 +52,12 @@ int pud_set_huge(pud_t *pudp, phys_addr_t phys, pgprot_t prot) return 1; } -int pmd_set_huge(pmd_t *pmdp, phys_addr_t phys, pgprot_t prot) +static int INSTRUMENT_OPTION __pmd_set_huge(pmd_t *pmdp, phys_addr_t phys, pgprot_t prot) { pmd_t new_pmd = pfn_pmd(__phys_to_pfn(phys), mk_pmd_sect_prot(prot)); /* Only allow permission changes for now */ - if (!pgattr_change_is_safe(READ_ONCE(pmd_val(*pmdp)), + if (!__pgattr_change_is_safe(READ_ONCE(pmd_val(*pmdp)), pmd_val(new_pmd))) return 0; @@ -28,7 +66,7 @@ int pmd_set_huge(pmd_t *pmdp, phys_addr_t phys, pgprot_t prot) return 1; } -static void init_pte(pmd_t *pmdp, unsigned long addr, unsigned long end, +static void INSTRUMENT_OPTION init_pte(pmd_t *pmdp, unsigned long addr, unsigned long end, phys_addr_t phys, pgprot_t prot) { pte_t *ptep; @@ -43,7 +81,7 @@ static void init_pte(pmd_t *pmdp, unsigned long addr, unsigned long end, * After the PTE entry has been populated once, we * only allow updates to the permission attributes. */ - BUG_ON(!pgattr_change_is_safe(pte_val(old_pte), + BUG_ON(!__pgattr_change_is_safe(pte_val(old_pte), READ_ONCE(pte_val(*ptep)))); phys += PAGE_SIZE; @@ -52,7 +90,7 @@ static void init_pte(pmd_t *pmdp, unsigned long addr, unsigned long end, pte_clear_fixmap(); } -static void alloc_init_cont_pte(pmd_t *pmdp, unsigned long addr, +static void INSTRUMENT_OPTION alloc_init_cont_pte(pmd_t *pmdp, unsigned long addr, unsigned long end, phys_addr_t phys, pgprot_t prot, phys_addr_t (*pgtable_alloc)(int), @@ -91,7 +129,7 @@ static void alloc_init_cont_pte(pmd_t *pmdp, unsigned long addr, } while (addr = next, addr != end); } -static void init_pmd(pud_t *pudp, unsigned long addr, unsigned long end, +static void INSTRUMENT_OPTION init_pmd(pud_t *pudp, unsigned long addr, unsigned long end, phys_addr_t phys, pgprot_t prot, phys_addr_t (*pgtable_alloc)(int), int flags) { @@ -107,13 +145,13 @@ static void init_pmd(pud_t *pudp, unsigned long addr, unsigned long end, /* try section mapping first */ if (((addr | next | phys) & ~PMD_MASK) == 0 && (flags & NO_BLOCK_MAPPINGS) == 0) { - pmd_set_huge(pmdp, phys, prot); + __pmd_set_huge(pmdp, phys, prot); /* * After the PMD entry has been populated once, we * only allow updates to the permission attributes. 
*/ - BUG_ON(!pgattr_change_is_safe(pmd_val(old_pmd), + BUG_ON(!__pgattr_change_is_safe(pmd_val(old_pmd), READ_ONCE(pmd_val(*pmdp)))); } else { alloc_init_cont_pte(pmdp, addr, next, phys, prot, @@ -128,7 +166,7 @@ static void init_pmd(pud_t *pudp, unsigned long addr, unsigned long end, pmd_clear_fixmap(); } -static void alloc_init_cont_pmd(pud_t *pudp, unsigned long addr, +static void INSTRUMENT_OPTION alloc_init_cont_pmd(pud_t *pudp, unsigned long addr, unsigned long end, phys_addr_t phys, pgprot_t prot, phys_addr_t (*pgtable_alloc)(int), int flags) @@ -169,7 +207,7 @@ static void alloc_init_cont_pmd(pud_t *pudp, unsigned long addr, } while (addr = next, addr != end); } -static void alloc_init_pud(pgd_t *pgdp, unsigned long addr, unsigned long end, +static void INSTRUMENT_OPTION alloc_init_pud(pgd_t *pgdp, unsigned long addr, unsigned long end, phys_addr_t phys, pgprot_t prot, phys_addr_t (*pgtable_alloc)(int), int flags) @@ -204,13 +242,13 @@ static void alloc_init_pud(pgd_t *pgdp, unsigned long addr, unsigned long end, if (pud_sect_supported() && ((addr | next | phys) & ~PUD_MASK) == 0 && (flags & NO_BLOCK_MAPPINGS) == 0) { - pud_set_huge(pudp, phys, prot); + __pud_set_huge(pudp, phys, prot); /* * After the PUD entry has been populated once, we * only allow updates to the permission attributes. */ - BUG_ON(!pgattr_change_is_safe(pud_val(old_pud), + BUG_ON(!__pgattr_change_is_safe(pud_val(old_pud), READ_ONCE(pud_val(*pudp)))); } else { alloc_init_cont_pmd(pudp, addr, next, phys, prot, @@ -225,7 +263,7 @@ static void alloc_init_pud(pgd_t *pgdp, unsigned long addr, unsigned long end, pud_clear_fixmap(); } -static void __create_pgd_mapping_locked(pgd_t *pgdir, phys_addr_t phys, +static void INSTRUMENT_OPTION __create_pgd_mapping_locked(pgd_t *pgdir, phys_addr_t phys, unsigned long virt, phys_addr_t size, pgprot_t prot, phys_addr_t (*pgtable_alloc)(int), From patchwork Wed Mar 13 12:57:01 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Pingfan Liu X-Patchwork-Id: 13591390 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 1AAE9C54E67 for ; Wed, 13 Mar 2024 13:00:50 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20210309; h=Sender: Content-Transfer-Encoding:Content-Type:List-Subscribe:List-Help:List-Post: List-Archive:List-Unsubscribe:List-Id:MIME-Version:References:In-Reply-To: Message-ID:Date:Subject:Cc:To:From:Reply-To:Content-ID:Content-Description: Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID: List-Owner; bh=izefFViaa+PI/WTPgGMwKFh0VpeG5PcQmf9VnJBy0sg=; b=cw9ZA87ym4Ih5v UpssKlUy5U5xQwp2VQz1dlOxPVRXc5R1yesW6jMDiinTS5aFpwWOVDfU3M6AxzsVAiCe3DZDdqfA8 OYY1TZD33+IYrGKO7w2dl2jv6WkCzNv32Z+d982HzrrNJo2C2dnE+2ovbbSmAOG1nAnh6GKEoGVpE UuB1PQN5hJxOgBT5PsBNK1shaW+4UbuaemNwmGAaUY+7iXyepoTGqkFk3RKKSduUPyF2bOG5CC5O0 2SoIgDqr4lSlvnyEo66HgGx8e+50QCtO6+RtHKYV9g+g7daiOTYdsB6WVyD3XEr+yTZLrSN9yJoYM ZGlI06C+JAMG1auPdyRA==; Received: from localhost ([::1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.97.1 #2 (Red Hat Linux)) id 1rkODh-0000000A9VO-22zJ; Wed, 13 Mar 2024 13:00:33 +0000 Received: from 
us-smtp-delivery-124.mimecast.com ([170.10.133.124]) by bombadil.infradead.org with esmtps (Exim 4.97.1 #2 (Red Hat Linux)) id 1rkODb-0000000A9Rc-0ru3 for linux-arm-kernel@lists.infradead.org; Wed, 13 Mar 2024 13:00:29 +0000
From: Pingfan Liu
To: linux-arm-kernel@lists.infradead.org
Cc: Pingfan Liu, Ard Biesheuvel, Catalin Marinas, Will Deacon, Mark Rutland
Subject: [PATCH 03/10] arm64: mm: Use if-condition to truncate external dependency
Date: Wed, 13 Mar 2024 20:57:01 +0800
Message-ID: <20240313125711.20651-4-piliu@redhat.com>
In-Reply-To: <20240313125711.20651-1-piliu@redhat.com>
References: <20240313125711.20651-1-piliu@redhat.com>

An outside callee can present several challenging issues for the early boot
stage: position-dependent code, instrumentation, alignment, and sub-components
that are not ready yet. To mitigate these dependencies, compile-time
optimization is leveraged to truncate the reliance, ensuring that mmu_head is
self-contained. Additionally, the Makefile checks the resulting object for
absolute relocations and external dependencies to further enhance robustness.
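The compile-time truncation relies on ordinary dead-code elimination. Below is
a minimal standalone sketch of the pattern (editor's illustration, not the
kernel code; the file and helper names are invented):

/* sketch.c: illustrative only, hypothetical names */
#ifndef KERNEL_READY
#define KERNEL_READY 1		/* the early-boot object is built with 0 */
#endif

extern void late_boot_helper(unsigned long addr);	/* external dependency */

void touch_entry(unsigned long addr)
{
	/*
	 * Built with -O2 (as the kernel is) and KERNEL_READY == 0, this branch
	 * is statically dead: no relocation against late_boot_helper() is
	 * emitted and "nm -u sketch.o" reports nothing for a build-time check
	 * to flag.
	 */
	if (KERNEL_READY)
		late_boot_helper(addr);
}

This is the same mechanism by which the in_swapper_pgdir() change below lets
the compiler screen out the call to set_swapper_pgd() in the early-boot object.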
Signed-off-by: Pingfan Liu Cc: Ard Biesheuvel Cc: Catalin Marinas Cc: Will Deacon Cc: Mark Rutland To: linux-arm-kernel@lists.infradead.org --- arch/arm64/include/asm/pgtable.h | 11 +++++++++-- arch/arm64/mm/Makefile | 19 ++++++++++++++++++ arch/arm64/mm/mmu_head.c | 3 +++ arch/arm64/mm/mmu_inc.c | 33 ++++++++++++++++---------------- 4 files changed, 47 insertions(+), 19 deletions(-) diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h index 79ce70fbb751..f43a93d78454 100644 --- a/arch/arm64/include/asm/pgtable.h +++ b/arch/arm64/include/asm/pgtable.h @@ -625,10 +625,17 @@ extern pgd_t reserved_pg_dir[PTRS_PER_PGD]; extern void set_swapper_pgd(pgd_t *pgdp, pgd_t pgd); +#ifndef KERNEL_READY +#define KERNEL_READY true +#endif static inline bool in_swapper_pgdir(void *addr) { - return ((unsigned long)addr & PAGE_MASK) == - ((unsigned long)swapper_pg_dir & PAGE_MASK); + /* The compiling time optimization screens the calls to set_swapper_pgd() */ + if (KERNEL_READY) + return ((unsigned long)addr & PAGE_MASK) == + ((unsigned long)swapper_pg_dir & PAGE_MASK); + else + return false; } static inline void set_pmd(pmd_t *pmdp, pmd_t pmd) diff --git a/arch/arm64/mm/Makefile b/arch/arm64/mm/Makefile index 0d92fb24a398..89d496ca970b 100644 --- a/arch/arm64/mm/Makefile +++ b/arch/arm64/mm/Makefile @@ -14,3 +14,22 @@ KASAN_SANITIZE_physaddr.o += n obj-$(CONFIG_KASAN) += kasan_init.o KASAN_SANITIZE_kasan_init.o := n + +$(obj)/mmu_head_tmp.o: $(src)/mmu_head.c FORCE + $(call if_changed_rule,cc_o_c) +OBJCOPYFLAGS_mmu_head.o := $(OBJCOPYFLAGS) +$(obj)/mmu_head.o: $(obj)/mmu_head_tmp.o FORCE + $(call if_changed,stubcopy) + +quiet_cmd_stubcopy = STUBCPY $@ + cmd_stubcopy = \ + $(STRIP) --strip-debug -o $@ $<; \ + if $(OBJDUMP) -r $@ | grep R_AARCH64_ABS; then \ + echo "$@: absolute symbol references not allowed in mmu_head.o" >&2; \ + /bin/false; \ + fi; \ + if nm -u $@ | grep "U"; then \ + echo "$@: external dependency incur uncertainty of alignment and not-PIC" >&2; \ + /bin/false; \ + fi; \ + $(OBJCOPY) $(OBJCOPYFLAGS) $< $@ diff --git a/arch/arm64/mm/mmu_head.c b/arch/arm64/mm/mmu_head.c index 4d65b7368db3..ccdd0f079c49 100644 --- a/arch/arm64/mm/mmu_head.c +++ b/arch/arm64/mm/mmu_head.c @@ -1,5 +1,8 @@ // SPDX-License-Identifier: GPL-2.0-only + +#define KERNEL_READY false + #include #include #include diff --git a/arch/arm64/mm/mmu_inc.c b/arch/arm64/mm/mmu_inc.c index 2535927d30ec..196987c120bf 100644 --- a/arch/arm64/mm/mmu_inc.c +++ b/arch/arm64/mm/mmu_inc.c @@ -81,7 +81,7 @@ static void INSTRUMENT_OPTION init_pte(pmd_t *pmdp, unsigned long addr, unsigned * After the PTE entry has been populated once, we * only allow updates to the permission attributes. 
*/ - BUG_ON(!__pgattr_change_is_safe(pte_val(old_pte), + BUG_ON(KERNEL_READY && !__pgattr_change_is_safe(pte_val(old_pte), READ_ONCE(pte_val(*ptep)))); phys += PAGE_SIZE; @@ -99,19 +99,19 @@ static void INSTRUMENT_OPTION alloc_init_cont_pte(pmd_t *pmdp, unsigned long add unsigned long next; pmd_t pmd = READ_ONCE(*pmdp); - BUG_ON(pmd_sect(pmd)); + BUG_ON(KERNEL_READY && pmd_sect(pmd)); if (pmd_none(pmd)) { pmdval_t pmdval = PMD_TYPE_TABLE | PMD_TABLE_UXN; phys_addr_t pte_phys; if (flags & NO_EXEC_MAPPINGS) pmdval |= PMD_TABLE_PXN; - BUG_ON(!pgtable_alloc); + BUG_ON(KERNEL_READY && !pgtable_alloc); pte_phys = pgtable_alloc(PAGE_SHIFT); __pmd_populate(pmdp, pte_phys, pmdval); pmd = READ_ONCE(*pmdp); } - BUG_ON(pmd_bad(pmd)); + BUG_ON(KERNEL_READY && pmd_bad(pmd)); do { pgprot_t __prot = prot; @@ -151,14 +151,13 @@ static void INSTRUMENT_OPTION init_pmd(pud_t *pudp, unsigned long addr, unsigned * After the PMD entry has been populated once, we * only allow updates to the permission attributes. */ - BUG_ON(!__pgattr_change_is_safe(pmd_val(old_pmd), + BUG_ON(KERNEL_READY && !__pgattr_change_is_safe(pmd_val(old_pmd), READ_ONCE(pmd_val(*pmdp)))); } else { alloc_init_cont_pte(pmdp, addr, next, phys, prot, pgtable_alloc, flags); - - BUG_ON(pmd_val(old_pmd) != 0 && - pmd_val(old_pmd) != READ_ONCE(pmd_val(*pmdp))); + BUG_ON(KERNEL_READY && pmd_val(old_pmd) != 0 && + pmd_val(old_pmd) != READ_ONCE(pmd_val(*pmdp))); } phys += next - addr; } while (pmdp++, addr = next, addr != end); @@ -177,19 +176,19 @@ static void INSTRUMENT_OPTION alloc_init_cont_pmd(pud_t *pudp, unsigned long add /* * Check for initial section mappings in the pgd/pud. */ - BUG_ON(pud_sect(pud)); + BUG_ON(KERNEL_READY && pud_sect(pud)); if (pud_none(pud)) { pudval_t pudval = PUD_TYPE_TABLE | PUD_TABLE_UXN; phys_addr_t pmd_phys; if (flags & NO_EXEC_MAPPINGS) pudval |= PUD_TABLE_PXN; - BUG_ON(!pgtable_alloc); + BUG_ON(KERNEL_READY && !pgtable_alloc); pmd_phys = pgtable_alloc(PMD_SHIFT); __pud_populate(pudp, pmd_phys, pudval); pud = READ_ONCE(*pudp); } - BUG_ON(pud_bad(pud)); + BUG_ON(KERNEL_READY && pud_bad(pud)); do { pgprot_t __prot = prot; @@ -223,12 +222,12 @@ static void INSTRUMENT_OPTION alloc_init_pud(pgd_t *pgdp, unsigned long addr, un if (flags & NO_EXEC_MAPPINGS) p4dval |= P4D_TABLE_PXN; - BUG_ON(!pgtable_alloc); + BUG_ON(KERNEL_READY && !pgtable_alloc); pud_phys = pgtable_alloc(PUD_SHIFT); __p4d_populate(p4dp, pud_phys, p4dval); p4d = READ_ONCE(*p4dp); } - BUG_ON(p4d_bad(p4d)); + BUG_ON(KERNEL_READY && p4d_bad(p4d)); pudp = pud_set_fixmap_offset(p4dp, addr); do { @@ -248,14 +247,14 @@ static void INSTRUMENT_OPTION alloc_init_pud(pgd_t *pgdp, unsigned long addr, un * After the PUD entry has been populated once, we * only allow updates to the permission attributes. */ - BUG_ON(!__pgattr_change_is_safe(pud_val(old_pud), + BUG_ON(KERNEL_READY && !__pgattr_change_is_safe(pud_val(old_pud), READ_ONCE(pud_val(*pudp)))); } else { alloc_init_cont_pmd(pudp, addr, next, phys, prot, pgtable_alloc, flags); - BUG_ON(pud_val(old_pud) != 0 && - pud_val(old_pud) != READ_ONCE(pud_val(*pudp))); + BUG_ON(KERNEL_READY && pud_val(old_pud) != 0 && + pud_val(old_pud) != READ_ONCE(pud_val(*pudp))); } phys += next - addr; } while (pudp++, addr = next, addr != end); @@ -276,7 +275,7 @@ static void INSTRUMENT_OPTION __create_pgd_mapping_locked(pgd_t *pgdir, phys_add * If the virtual and physical address don't have the same offset * within a page, we cannot map the region as the caller expects. 
*/ - if (WARN_ON((phys ^ virt) & ~PAGE_MASK)) + if (KERNEL_READY && WARN_ON((phys ^ virt) & ~PAGE_MASK)) return; phys &= PAGE_MASK; From patchwork Wed Mar 13 12:57:02 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Pingfan Liu X-Patchwork-Id: 13591387 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id D0B92C54791 for ; Wed, 13 Mar 2024 13:00:25 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20210309; h=Sender: Content-Transfer-Encoding:Content-Type:List-Subscribe:List-Help:List-Post: List-Archive:List-Unsubscribe:List-Id:MIME-Version:References:In-Reply-To: Message-ID:Date:Subject:Cc:To:From:Reply-To:Content-ID:Content-Description: Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID: List-Owner; bh=xftJp2IgZn1ZC5TZ1ChyGLc7gW7G33D8D8/yWV/0sVQ=; b=eSvxAA3Eiy4FrG +SpkGM9MF/JtrysHRxO4fWPp8eo21F9qjtp6sTcC1830m/3Z0r+RLPN5eawXhWJaAEWGtxhejqquJ o1Vrt+PUQmkW1LOdgI+lOQeF8tzq4lGLEDR1Wm5m/jaFKJHNX2Q3WgybrlJk53lzHB9/CfHXCLTsm DzF2j/7hQAtLT/cyRS9tUE1byB9svuLNTSmemGoHv8pXGp8mvZmB68O0nOr7Rem/HyGg59OIe+4Yz p2L4NP5IMF8ACMmYHs1BILDQBdaT4UYxQybJPCQY+R3guZZ/va7vSlbpzv8OQN1PFNxxEGRnOqzBt RViPzk+Pvy45YHEYwLWA==; Received: from localhost ([::1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.97.1 #2 (Red Hat Linux)) id 1rkODI-0000000A9Ko-1SpL; Wed, 13 Mar 2024 13:00:08 +0000 Received: from us-smtp-delivery-124.mimecast.com ([170.10.129.124]) by bombadil.infradead.org with esmtps (Exim 4.97.1 #2 (Red Hat Linux)) id 1rkODE-0000000A9Jb-2gYG for linux-arm-kernel@lists.infradead.org; Wed, 13 Mar 2024 13:00:06 +0000 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1710334803; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version:content-type:content-type: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=Q7UaXx7woWiSjxapRzZBKFnnDL8hoi9yfupYamQFkZI=; b=O1bxNSEDUm6aQC4NBCqatCPAe0jSZ1/I7e37z8b0Et+xkX65oFkqWGM+gu3yPsMOxPmUXs N5CEfgdE9R34hEn6PZiFJ/40rWFvUJe40Ngji6u/C9HRvMMB/pmHa9JTWMas+dslxiHfPv +Dsdv1zXBARG+cuBBptFxkaTrf8rJTY= Received: from mimecast-mx02.redhat.com (mx-ext.redhat.com [66.187.233.73]) by relay.mimecast.com with ESMTP with STARTTLS (version=TLSv1.3, cipher=TLS_AES_256_GCM_SHA384) id us-mta-185-x6uSjwJYOiyEabkt_SkOig-1; Wed, 13 Mar 2024 08:57:39 -0400 X-MC-Unique: x6uSjwJYOiyEabkt_SkOig-1 Received: from smtp.corp.redhat.com (int-mx05.intmail.prod.int.rdu2.redhat.com [10.11.54.5]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (2048 bits) server-digest SHA256) (No client certificate requested) by mimecast-mx02.redhat.com (Postfix) with ESMTPS id 704FE1C54064; Wed, 13 Mar 2024 12:57:39 +0000 (UTC) Received: from fedora.redhat.com (unknown [10.72.112.95]) by smtp.corp.redhat.com (Postfix) with ESMTP id EDC2D17A90; Wed, 13 Mar 2024 12:57:36 +0000 (UTC) From: Pingfan Liu To: linux-arm-kernel@lists.infradead.org Cc: Pingfan Liu , Ard Biesheuvel , Catalin Marinas , Will Deacon , Mark Rutland Subject: [PATCH 04/10] arm64: head: 
Enable __create_pgd_mapping() to handle pgtable's paddr Date: Wed, 13 Mar 2024 20:57:02 +0800 Message-ID: <20240313125711.20651-5-piliu@redhat.com> In-Reply-To: <20240313125711.20651-1-piliu@redhat.com> References: <20240313125711.20651-1-piliu@redhat.com> MIME-Version: 1.0 X-Scanned-By: MIMEDefang 3.4.1 on 10.11.54.5 X-Mimecast-Spam-Score: 0 X-Mimecast-Originator: redhat.com X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20240313_060004_896616_3F499D90 X-CRM114-Status: UNSURE ( 9.30 ) X-CRM114-Notice: Please train this message. X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org When mmu-off or identical mapping, both of the page table: init_idmap_pg_dir and init_pg_dir can be accessed by physical address (virtual address equals physical) This patch introduces routines to avoid using fixmap to access page table. Signed-off-by: Pingfan Liu Cc: Ard Biesheuvel Cc: Catalin Marinas Cc: Will Deacon Cc: Mark Rutland To: linux-arm-kernel@lists.infradead.org --- arch/arm64/mm/mmu_head.c | 42 ++++++++++++++++++++++++++++++++++++++++ 1 file changed, 42 insertions(+) diff --git a/arch/arm64/mm/mmu_head.c b/arch/arm64/mm/mmu_head.c index ccdd0f079c49..562d036dc30a 100644 --- a/arch/arm64/mm/mmu_head.c +++ b/arch/arm64/mm/mmu_head.c @@ -10,6 +10,48 @@ #include #define INSTRUMENT_OPTION __noinstr_section(".init.text.noinstr") + +#undef pud_set_fixmap_offset +#undef pud_clear_fixmap +#undef pmd_set_fixmap_offset +#undef pmd_clear_fixmap +#undef pte_set_fixmap_offset +#undef pte_clear_fixmap + +/* This group is used to access intermedia level in no mmu or identity map */ +#define pud_set_fixmap_offset(p4dp, addr) \ +({ \ + pud_t *pudp; \ + if (CONFIG_PGTABLE_LEVELS > 3) \ + pudp = (pud_t *)__p4d_to_phys(*p4dp) + pud_index(addr); \ + else \ + pudp = (pud_t *)p4dp; \ + pudp; \ +}) + +#define pud_clear_fixmap() + +#define pmd_set_fixmap_offset(pudp, addr) \ +({ \ + pmd_t *pmdp; \ + if (CONFIG_PGTABLE_LEVELS > 2) \ + pmdp = (pmd_t *)__pud_to_phys(*pudp) + pmd_index(addr); \ + else \ + pmdp = (pmd_t *)pudp; \ + pmdp; \ +}) + +#define pmd_clear_fixmap() + +#define pte_set_fixmap_offset(pmdp, addr) \ +({ \ + pte_t *ptep; \ + ptep = (pte_t *)__pmd_to_phys(*pmdp) + pte_index(addr); \ + ptep; \ +}) + +#define pte_clear_fixmap() + #include "mmu_inc.c" void INSTRUMENT_OPTION mmu_head_create_pgd_mapping(pgd_t *pgdir, phys_addr_t phys, From patchwork Wed Mar 13 12:57:03 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Pingfan Liu X-Patchwork-Id: 13591384 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 6681AC54791 for ; Wed, 13 Mar 2024 12:59:42 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20210309; h=Sender: Content-Transfer-Encoding:Content-Type:List-Subscribe:List-Help:List-Post: List-Archive:List-Unsubscribe:List-Id:MIME-Version:References:In-Reply-To: 
From: Pingfan Liu
To: linux-arm-kernel@lists.infradead.org
Cc: Pingfan Liu, Ard Biesheuvel, Catalin Marinas, Will Deacon, Mark Rutland
Subject: [PATCH 05/10] arm64: mm: Force early mapping aligned on SWAPPER_BLOCK_SIZE
Date: Wed, 13 Mar 2024 20:57:03 +0800
Message-ID: <20240313125711.20651-6-piliu@redhat.com>
In-Reply-To: <20240313125711.20651-1-piliu@redhat.com>
References: <20240313125711.20651-1-piliu@redhat.com>

At this very early stage, the page table size is limited, so block mapping is
appealing.
Force the input param aligned on SWAPPER_BLOCK_SIZE, so that __create_pgd_mapping_locked() can use the block-mapping scheme. Signed-off-by: Pingfan Liu Cc: Ard Biesheuvel Cc: Catalin Marinas Cc: Will Deacon Cc: Mark Rutland To: linux-arm-kernel@lists.infradead.org --- arch/arm64/mm/mmu_head.c | 6 ++++++ 1 file changed, 6 insertions(+) diff --git a/arch/arm64/mm/mmu_head.c b/arch/arm64/mm/mmu_head.c index 562d036dc30a..e00f6f2c7bec 100644 --- a/arch/arm64/mm/mmu_head.c +++ b/arch/arm64/mm/mmu_head.c @@ -60,5 +60,11 @@ void INSTRUMENT_OPTION mmu_head_create_pgd_mapping(pgd_t *pgdir, phys_addr_t phy phys_addr_t (*pgtable_alloc)(int), int flags) { + phys_addr_t end = phys + size; + + phys = ALIGN_DOWN(phys, SWAPPER_BLOCK_SIZE); + virt = ALIGN_DOWN(virt, SWAPPER_BLOCK_SIZE); + end = ALIGN(end, SWAPPER_BLOCK_SIZE); + size = end - phys; __create_pgd_mapping_locked(pgdir, phys, virt, size, prot, pgtable_alloc, flags); } From patchwork Wed Mar 13 12:57:04 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Pingfan Liu X-Patchwork-Id: 13591388 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id DEA6BC54E58 for ; Wed, 13 Mar 2024 13:00:48 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20210309; h=Sender: Content-Transfer-Encoding:Content-Type:List-Subscribe:List-Help:List-Post: List-Archive:List-Unsubscribe:List-Id:MIME-Version:References:In-Reply-To: Message-ID:Date:Subject:Cc:To:From:Reply-To:Content-ID:Content-Description: Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID: List-Owner; bh=WcVdF/7akC0mr3zZvSqq3rBB1/xKNjtpeYCw766XAcE=; b=iMhSwUHpxZIa3A S8nJut69VQVXBB4EXKdetTyKp6JrNTyS8eDrWsSoiQcDQVxNryxZJgS/xmET8V035UN7GSl5bDUi5 WftzNDV4mISPiX9+Ap4JVBb/Hi5DhQco5KhvQ/nHFmE1XGB3XW3oYWx2AeOMqrtFJseglNkEl0ZPq c7h7K1UyVHZN/MCXOTiKfnapqL7WmLStYpMz7GJ5F16mX5aBqQaaNylftppp6hmNtQrs/YYMqa0bv 1b1P/0QXOm9TzXCy76x+WHF5VGYMXuUTs6qtN3BKgFHUnFphDMOtXNYJBZBcfKLzVkRE1GuXMgYio 9yA2lyVy4pWj5KHvuwvA==; Received: from localhost ([::1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.97.1 #2 (Red Hat Linux)) id 1rkODf-0000000A9UN-39w5; Wed, 13 Mar 2024 13:00:31 +0000 Received: from us-smtp-delivery-124.mimecast.com ([170.10.133.124]) by bombadil.infradead.org with esmtps (Exim 4.97.1 #2 (Red Hat Linux)) id 1rkODb-0000000A9Rd-0ryl for linux-arm-kernel@lists.infradead.org; Wed, 13 Mar 2024 13:00:29 +0000 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1710334826; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version:content-type:content-type: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=oYJDM8sSyWlFn9/09ELVf0hZb4Kfsu5P9I2kefxFvzs=; b=Cq3OekzScFZs9dJmzOfWmmrhKBiP3ir/xg/2dCAg5fj1KljJBc2yYyQJ2yXgsdi6Sevf3+ 8dmB/oiK4xp5P1GZgiIhNXMJN22Oiif4cqs8VaZI7vF65U8aTwCAaQtn4PE5b9+5Z5Z9S+ S6D9n+mXmcP99vLvos5bsevQzUPxuII= Received: from mimecast-mx02.redhat.com (mimecast-mx02.redhat.com [66.187.233.88]) by relay.mimecast.com with ESMTP with STARTTLS (version=TLSv1.3, cipher=TLS_AES_256_GCM_SHA384) id 
From patchwork Wed Mar 13 12:57:04 2024
From: Pingfan Liu
To: linux-arm-kernel@lists.infradead.org
Cc: Pingfan Liu, Ard Biesheuvel, Catalin Marinas, Will Deacon, Mark Rutland
Subject: [PATCH 06/10] arm64: mm: Handle scope beyond the capacity of kernel pgtable in mmu_head_create_pgd_mapping()
Date: Wed, 13 Mar 2024 20:57:04 +0800
Message-ID: <20240313125711.20651-7-piliu@redhat.com>
In-Reply-To: <20240313125711.20651-1-piliu@redhat.com>
References: <20240313125711.20651-1-piliu@redhat.com>

This patch serves the same purpose as commit fa2a8445b1d3 ("arm64: allow ID map to be extended to 52 bits"). Since doing the same for init_pg_dir is harmless, there is no need to distinguish between init_idmap_pg_dir and init_pg_dir.
Signed-off-by: Pingfan Liu
Cc: Ard Biesheuvel
Cc: Catalin Marinas
Cc: Will Deacon
Cc: Mark Rutland
To: linux-arm-kernel@lists.infradead.org
---
 arch/arm64/mm/mmu_head.c | 18 ++++++++++++++++++
 1 file changed, 18 insertions(+)

diff --git a/arch/arm64/mm/mmu_head.c b/arch/arm64/mm/mmu_head.c
index e00f6f2c7bec..2df91e62ddb0 100644
--- a/arch/arm64/mm/mmu_head.c
+++ b/arch/arm64/mm/mmu_head.c
@@ -66,5 +66,23 @@ void INSTRUMENT_OPTION mmu_head_create_pgd_mapping(pgd_t *pgdir, phys_addr_t phy
 	virt = ALIGN_DOWN(virt, SWAPPER_BLOCK_SIZE);
 	end = ALIGN(end, SWAPPER_BLOCK_SIZE);
 	size = end - phys;
+	/*
+	 * In case the kernel routines only support a small VA range while the boot
+	 * image is placed beyond that scope, blindly extend the pgtable by one level.
+	 */
+	if ((IS_ENABLED(CONFIG_ARM64_16K_PAGES) && IS_ENABLED(CONFIG_ARM64_VA_BITS_36)) ||
+	    (IS_ENABLED(CONFIG_ARM64_64K_PAGES) && IS_ENABLED(CONFIG_ARM64_VA_BITS_42)) ||
+	    (IS_ENABLED(CONFIG_ARM64_4K_PAGES) && IS_ENABLED(CONFIG_ARM64_VA_BITS_39))) {
+		unsigned long pgd_paddr;
+		pgd_t *pgd;
+		pgd_t pgd_val;
+
+		pgd_paddr = headpool_pgtable_alloc(0);
+		pgd_val = __pgd(pgd_paddr | P4D_TYPE_TABLE);
+		/* The shift should be one more level than PGDIR_SHIFT */
+		pgd = pgdir + (virt >> ARM64_HW_PGTABLE_LEVEL_SHIFT(3 - CONFIG_PGTABLE_LEVELS));
+		set_pgd(pgd, pgd_val);
+		pgdir = pgd;
+	}
 	__create_pgd_mapping_locked(pgdir, phys, virt, size, prot, pgtable_alloc, flags);
 }
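For reference, the "one more level than PGDIR_SHIFT" arithmetic can be checked with a small stand-alone sketch. It assumes 4K pages with VA_BITS=39 (three translation levels), reproduces the level-shift formula locally instead of pulling in the kernel headers, and only illustrates the index arithmetic, not the table installation above:

#include <stdio.h>

#define PAGE_SHIFT	12	/* assume 4K pages */
#define PGTABLE_LEVELS	3	/* assume VA_BITS=39, i.e. three translation levels */
/* same formula as the kernel's ARM64_HW_PGTABLE_LEVEL_SHIFT(n) */
#define LEVEL_SHIFT(n)	(((PAGE_SHIFT - 3) * (4 - (n))) + 3)

int main(void)
{
	unsigned int pgdir_shift = LEVEL_SHIFT(4 - PGTABLE_LEVELS);	/* 30 */
	unsigned int extra_shift = LEVEL_SHIFT(3 - PGTABLE_LEVELS);	/* 39: one level up */
	/* an identity-mapped physical address beyond the 512 GiB reach of a 39-bit VA */
	unsigned long long virt = 0x8080000000ULL;

	printf("PGDIR_SHIFT=%u extra shift=%u extra-level index=%llu\n",
	       pgdir_shift, extra_shift, virt >> extra_shift);
	/* prints: PGDIR_SHIFT=30 extra shift=39 extra-level index=1 */
	return 0;
}

In other words, the extra level consumes the bits just above the configured VA range, which is exactly what is needed when the image's physical address (used as an identity-mapped VA) does not fit in VA_BITS.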
From patchwork Wed Mar 13 12:57:05 2024
From: Pingfan Liu
To: linux-arm-kernel@lists.infradead.org
Cc: Pingfan Liu, Ard Biesheuvel, Catalin Marinas, Will Deacon, Mark Rutland
Subject: [PATCH 07/10] arm64: mm: Introduce head_pool routines to enable pgtable allocation
Date: Wed, 13 Mar 2024 20:57:05 +0800
Message-ID: <20240313125711.20651-8-piliu@redhat.com>
In-Reply-To: <20240313125711.20651-1-piliu@redhat.com>
References: <20240313125711.20651-1-piliu@redhat.com>
__create_pgd_mapping_locked() needs a pgtable_alloc parameter to allocate memory for page tables. During early boot, that memory should be taken from init_idmap_pg_dir or init_pg_dir. This patch introduces routines to allocate pages from those pools.

Signed-off-by: Pingfan Liu
Cc: Ard Biesheuvel
Cc: Catalin Marinas
Cc: Will Deacon
Cc: Mark Rutland
To: linux-arm-kernel@lists.infradead.org
---
 arch/arm64/mm/mmu_head.c | 42 ++++++++++++++++++++++++++++++++++++++++
 1 file changed, 42 insertions(+)

diff --git a/arch/arm64/mm/mmu_head.c b/arch/arm64/mm/mmu_head.c
index 2df91e62ddb0..801ebffe4209 100644
--- a/arch/arm64/mm/mmu_head.c
+++ b/arch/arm64/mm/mmu_head.c
@@ -54,6 +54,8 @@
 
 #include "mmu_inc.c"
 
+phys_addr_t headpool_pgtable_alloc(int unused_shift);
+
 void INSTRUMENT_OPTION mmu_head_create_pgd_mapping(pgd_t *pgdir, phys_addr_t phys,
 		unsigned long virt, phys_addr_t size, pgprot_t prot,
@@ -86,3 +88,43 @@ void INSTRUMENT_OPTION mmu_head_create_pgd_mapping(pgd_t *pgdir, phys_addr_t phy
 	}
 	__create_pgd_mapping_locked(pgdir, phys, virt, size, prot, pgtable_alloc, flags);
 }
+
+struct headpool {
+	phys_addr_t start;
+	unsigned long size;
+	unsigned long next_idx;
+} __aligned(8);
+
+struct headpool head_pool __initdata;
+
+void INSTRUMENT_OPTION headpool_init(phys_addr_t start, unsigned long size)
+{
+	struct headpool *pool;
+
+	asm volatile(
+	"adrp %0, head_pool;"
+	"add %0, %0, #:lo12:head_pool;"
+	: "=r" (pool)
+	:
+	:
+	);
+	pool->start = start;
+	pool->size = size;
+	pool->next_idx = 0;
+}
+
+phys_addr_t INSTRUMENT_OPTION headpool_pgtable_alloc(int unused_shift)
+{
+	struct headpool *pool;
+	unsigned long idx;
+
+	asm volatile(
+	"adrp %0, head_pool;"
+	"add %0, %0, #:lo12:head_pool;"
+	: "=r" (pool)
+	:
+	:
+	);
+	idx = pool->next_idx++;
+	return pool->start + (idx << PAGE_SHIFT);
+}
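The pool is essentially a bump allocator over a fixed physical window; the inline asm is there so that head_pool is reached through its run-time (physical) address even before the MMU is enabled. Below is a stand-alone sketch of the same idea with hypothetical names (pool_init/pool_page_alloc), not the kernel API:

#include <assert.h>
#include <stdint.h>

#define PAGE_SHIFT	12
#define PAGE_SIZE	(1UL << PAGE_SHIFT)

typedef uint64_t phys_addr_t;	/* local stand-in */

/* bump allocator over [start, start + size), handing out one page at a time */
struct pool {
	phys_addr_t start;
	unsigned long size;
	unsigned long next_idx;
};

static struct pool pool;

static void pool_init(phys_addr_t start, unsigned long size)
{
	pool.start = start;
	pool.size = size;
	pool.next_idx = 0;
}

static phys_addr_t pool_page_alloc(void)
{
	/* the real code has no overflow check; the window is sized at build time */
	assert(pool.next_idx < (pool.size >> PAGE_SHIFT));
	return pool.start + (pool.next_idx++ << PAGE_SHIFT);
}

int main(void)
{
	/* e.g. a 16-page window reserved for the early idmap tables */
	pool_init(0x40210000, 16 * PAGE_SIZE);

	assert(pool_page_alloc() == 0x40210000);	/* first page doubles as the pgd root */
	assert(pool_page_alloc() == 0x40211000);	/* later tables follow page by page */
	return 0;
}

Each call simply hands out the next page of the statically reserved window, which is why nothing ever needs to be freed during early boot.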
From patchwork Wed Mar 13 12:57:07 2024
From: Pingfan Liu
To: linux-arm-kernel@lists.infradead.org
Cc: Pingfan Liu, Ard Biesheuvel, Catalin Marinas, Will Deacon, Mark Rutland
Subject: [PATCH 09/10] arm64: head: Use __create_pgd_mapping_locked() to serve the creation of pgtable
Date: Wed, 13 Mar 2024 20:57:07 +0800
Message-ID: <20240313125711.20651-10-piliu@redhat.com>
In-Reply-To: <20240313125711.20651-1-piliu@redhat.com>
References: <20240313125711.20651-1-piliu@redhat.com>

The init_stack serves as the stack for the C routines.
For idmap, the mapping consist of five sections: kernel text section init_pg_dir, which needs to be accessed when create_kernel_mapping() __initdata, which contains data accessed by create_kernel_mapping() init_stack, which serves as the stack fdt Signed-off-by: Pingfan Liu Cc: Ard Biesheuvel Cc: Catalin Marinas Cc: Will Deacon Cc: Mark Rutland To: linux-arm-kernel@lists.infradead.org --- arch/arm64/include/asm/kernel-pgtable.h | 1 + arch/arm64/include/asm/mmu.h | 4 + arch/arm64/kernel/head.S | 171 +++++++++++++----------- arch/arm64/mm/mmu.c | 4 - 4 files changed, 96 insertions(+), 84 deletions(-) diff --git a/arch/arm64/include/asm/kernel-pgtable.h b/arch/arm64/include/asm/kernel-pgtable.h index 85d26143faa5..796bf3d8c181 100644 --- a/arch/arm64/include/asm/kernel-pgtable.h +++ b/arch/arm64/include/asm/kernel-pgtable.h @@ -91,6 +91,7 @@ #else #define INIT_IDMAP_DIR_SIZE (INIT_IDMAP_DIR_PAGES * PAGE_SIZE) #endif +// #define INIT_IDMAP_DIR_PAGES EARLY_PAGES(KIMAGE_VADDR, _end + MAX_FDT_SIZE + SWAPPER_BLOCK_SIZE, 1) /* Initial memory map size */ diff --git a/arch/arm64/include/asm/mmu.h b/arch/arm64/include/asm/mmu.h index 2fcf51231d6e..b817b694d1ba 100644 --- a/arch/arm64/include/asm/mmu.h +++ b/arch/arm64/include/asm/mmu.h @@ -12,6 +12,10 @@ #define USER_ASID_FLAG (UL(1) << USER_ASID_BIT) #define TTBR_ASID_MASK (UL(0xffff) << 48) +#define NO_BLOCK_MAPPINGS BIT(0) +#define NO_CONT_MAPPINGS BIT(1) +#define NO_EXEC_MAPPINGS BIT(2) /* assumes FEAT_HPDS is not used */ + #ifndef __ASSEMBLY__ #include diff --git a/arch/arm64/kernel/head.S b/arch/arm64/kernel/head.S index 7b236994f0e1..e2fa6b95f809 100644 --- a/arch/arm64/kernel/head.S +++ b/arch/arm64/kernel/head.S @@ -27,6 +27,7 @@ #include #include #include +#include #include #include #include @@ -332,79 +333,69 @@ SYM_FUNC_START_LOCAL(remap_region) SYM_FUNC_END(remap_region) SYM_FUNC_START_LOCAL(create_idmap) - mov x28, lr - /* - * The ID map carries a 1:1 mapping of the physical address range - * covered by the loaded image, which could be anywhere in DRAM. This - * means that the required size of the VA (== PA) space is decided at - * boot time, and could be more than the configured size of the VA - * space for ordinary kernel and user space mappings. - * - * There are three cases to consider here: - * - 39 <= VA_BITS < 48, and the ID map needs up to 48 VA bits to cover - * the placement of the image. In this case, we configure one extra - * level of translation on the fly for the ID map only. (This case - * also covers 42-bit VA/52-bit PA on 64k pages). - * - * - VA_BITS == 48, and the ID map needs more than 48 VA bits. This can - * only happen when using 64k pages, in which case we need to extend - * the root level table rather than add a level. Note that we can - * treat this case as 'always extended' as long as we take care not - * to program an unsupported T0SZ value into the TCR register. - * - * - Combinations that would require two additional levels of - * translation are not supported, e.g., VA_BITS==36 on 16k pages, or - * VA_BITS==39/4k pages with 5-level paging, where the input address - * requires more than 47 or 48 bits, respectively. - */ -#if (VA_BITS < 48) -#define IDMAP_PGD_ORDER (VA_BITS - PGDIR_SHIFT) -#define EXTRA_SHIFT (PGDIR_SHIFT + PAGE_SHIFT - 3) + adr_l x0, init_stack + add sp, x0, #THREAD_SIZE + sub sp, sp, #16 + stp lr, x0, [sp, #0] // x0 is useless, just to keep stack 16-bytes align - /* - * If VA_BITS < 48, we have to configure an additional table level. 
- * First, we have to verify our assumption that the current value of - * VA_BITS was chosen such that all translation levels are fully - * utilised, and that lowering T0SZ will always result in an additional - * translation level to be configured. - */ -#if VA_BITS != EXTRA_SHIFT -#error "Mismatch between VA_BITS and page size/number of translation levels" -#endif -#else -#define IDMAP_PGD_ORDER (PHYS_MASK_SHIFT - PGDIR_SHIFT) -#define EXTRA_SHIFT - /* - * If VA_BITS == 48, we don't have to configure an additional - * translation level, but the top-level table has more entries. - */ -#endif adrp x0, init_idmap_pg_dir - adrp x3, _text - adrp x6, _end + MAX_FDT_SIZE + SWAPPER_BLOCK_SIZE - mov_q x7, SWAPPER_RX_MMUFLAGS - - map_memory x0, x1, x3, x6, x7, x3, IDMAP_PGD_ORDER, x10, x11, x12, x13, x14, EXTRA_SHIFT - - /* Remap the kernel page tables r/w in the ID map */ - adrp x1, _text - adrp x2, init_pg_dir - adrp x3, init_pg_end - bic x4, x2, #SWAPPER_BLOCK_SIZE - 1 - mov_q x5, SWAPPER_RW_MMUFLAGS - mov x6, #SWAPPER_BLOCK_SHIFT - bl remap_region - - /* Remap the FDT after the kernel image */ - adrp x1, _text - adrp x22, _end + SWAPPER_BLOCK_SIZE - bic x2, x22, #SWAPPER_BLOCK_SIZE - 1 + adrp x1, init_idmap_pg_end + sub x1, x1, x0 + bl headpool_init + mov x0, #0 + bl headpool_pgtable_alloc // return x0, containing init_idmap_pg_dir + mov x27, x0 // bake in case of flush + + adr_l x1, _text // phys + mov x2, x1 // virt for idmap + adr_l x3, _etext - 1 + sub x3, x3, x1 // size + ldr x4, =SWAPPER_RX_MMUFLAGS + adr_l x5, headpool_pgtable_alloc + mov x6, #0 + bl mmu_head_create_pgd_mapping + + mov x0, x27 // pgd + adr_l x1, init_pg_dir // phys + mov x2, x1 // virt for idmap + adr_l x3, init_pg_end + sub x3, x3, x1 + ldr x4, =SWAPPER_RW_MMUFLAGS + adr_l x5, headpool_pgtable_alloc + mov x6, #0 + bl mmu_head_create_pgd_mapping + + mov x0, x27 // pgd + adr_l x1, init_stack // kernel mapping need write-permission to use this stack + mov x2, x1 // virt for idmap + ldr x3, =THREAD_SIZE + ldr x4, =SWAPPER_RW_MMUFLAGS + adr_l x5, headpool_pgtable_alloc + mov x6, #0 + bl mmu_head_create_pgd_mapping + + mov x0, x27 // pgd + adr_l x1, __initdata_begin // kernel mapping need write-permission to it + mov x2, x1 // virt for idmap + adr_l x3, __initdata_end + sub x3, x3, x1 + ldr x4, =SWAPPER_RW_MMUFLAGS + adr_l x5, headpool_pgtable_alloc + mov x6, #0 + bl mmu_head_create_pgd_mapping + + + mov x0, x27 // pgd + mov x1, x21 // FDT phys + adr_l x2, _end + SWAPPER_BLOCK_SIZE // virt + mov x3, #(MAX_FDT_SIZE + SWAPPER_BLOCK_SIZE) // size + ldr x4, =SWAPPER_RW_MMUFLAGS + adr_l x5, headpool_pgtable_alloc + mov x6, #0 + bl mmu_head_create_pgd_mapping + + adr_l x22, _end + SWAPPER_BLOCK_SIZE bfi x22, x21, #0, #SWAPPER_BLOCK_SHIFT // remapped FDT address - add x3, x2, #MAX_FDT_SIZE + SWAPPER_BLOCK_SIZE - bic x4, x21, #SWAPPER_BLOCK_SIZE - 1 - mov_q x5, SWAPPER_RW_MMUFLAGS - mov x6, #SWAPPER_BLOCK_SHIFT - bl remap_region /* * Since the page tables have been populated with non-cacheable @@ -417,22 +408,42 @@ SYM_FUNC_START_LOCAL(create_idmap) adrp x0, init_idmap_pg_dir adrp x1, init_idmap_pg_end bl dcache_inval_poc -0: ret x28 + ldp lr, x0, [sp], #16 +0: ret SYM_FUNC_END(create_idmap) SYM_FUNC_START_LOCAL(create_kernel_mapping) + sub sp, sp, #80 + stp x0, x1, [sp, #0] + stp x2, x3, [sp, #16] + stp x4, x5, [sp, #32] + stp x6, x7, [sp, #48] + stp lr, xzr, [sp, #64] + adrp x0, init_pg_dir - mov_q x5, KIMAGE_VADDR // compile time __va(_text) + adrp x1, init_pg_end + sub x1, x1, x0 + bl headpool_init + mov x0, #0 + bl 
headpool_pgtable_alloc // return x0, containing init_pg_dir + + adrp x1, _text // runtime __pa(_text) + mov_q x2, KIMAGE_VADDR // compile time __va(_text) #ifdef CONFIG_RELOCATABLE - add x5, x5, x23 // add KASLR displacement + add x2, x2, x23 // add KASLR displacement #endif - adrp x6, _end // runtime __pa(_end) - adrp x3, _text // runtime __pa(_text) - sub x6, x6, x3 // _end - _text - add x6, x6, x5 // runtime __va(_end) - mov_q x7, SWAPPER_RW_MMUFLAGS - - map_memory x0, x1, x5, x6, x7, x3, (VA_BITS - PGDIR_SHIFT), x10, x11, x12, x13, x14 + adrp x3, _end // runtime __pa(_end) + sub x3, x3, x1 // _end - _text + ldr x4, =SWAPPER_RW_MMUFLAGS + adr_l x5, headpool_pgtable_alloc + mov x6, #0 + bl mmu_head_create_pgd_mapping + + ldp lr, xzr, [sp, #64] + ldp x6, x7, [sp, #48] + ldp x4, x5, [sp, #32] + ldp x2, x3, [sp, #16] + ldp x0, x1, [sp], #80 dsb ishst // sync with page table walker ret diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c index 80e49faaf066..e9748c7017dd 100644 --- a/arch/arm64/mm/mmu.c +++ b/arch/arm64/mm/mmu.c @@ -41,10 +41,6 @@ #include #include -#define NO_BLOCK_MAPPINGS BIT(0) -#define NO_CONT_MAPPINGS BIT(1) -#define NO_EXEC_MAPPINGS BIT(2) /* assumes FEAT_HPDS is not used */ - int idmap_t0sz __ro_after_init; #if VA_BITS > 48 From patchwork Wed Mar 13 12:57:08 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Pingfan Liu X-Patchwork-Id: 13591389 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 8E76DC54791 for ; Wed, 13 Mar 2024 13:00:49 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20210309; h=Sender: Content-Transfer-Encoding:Content-Type:List-Subscribe:List-Help:List-Post: List-Archive:List-Unsubscribe:List-Id:MIME-Version:References:In-Reply-To: Message-ID:Date:Subject:Cc:To:From:Reply-To:Content-ID:Content-Description: Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID: List-Owner; bh=PERu84xOv0ohtB7SiFBXaoG6azZjQkV8IutyOgNm4+A=; b=SsDDjeVE3/RPQh QtVFR9kN1LamSn6qi+LV9+8Q3SMlZ0uHebmATJ1LQXZkgx5vHOHqx/m/61Rp2sBz4m/CZMfNpRRWb U4kn+j4x3gYa7dd/slSg9HHMfahKBKErQ3zwa+5E5j7+gvf3bgdsj2CAhEhMw72Ow1oNWfgXaWHS5 2gzNpP/btGKB8rxIWglEi5YmIMgiR+USzLLhb/AgQCgG1Gl3zfN5TtiA8z6uFrgPWiB0i9TlPDeE6 7Gre8IrXrglC86McSI6t0qxVmpGHtzZOFxxasEwRJ/7lD0qVFp/uu5zlnWKvQ/SW+KClVrKCv9yjS UfAtdh/N60/8cmD41K2g==; Received: from localhost ([::1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.97.1 #2 (Red Hat Linux)) id 1rkODg-0000000A9Uf-1gCF; Wed, 13 Mar 2024 13:00:32 +0000 Received: from us-smtp-delivery-124.mimecast.com ([170.10.133.124]) by bombadil.infradead.org with esmtps (Exim 4.97.1 #2 (Red Hat Linux)) id 1rkODb-0000000A9RZ-0sfR for linux-arm-kernel@lists.infradead.org; Wed, 13 Mar 2024 13:00:29 +0000 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1710334826; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version:content-type:content-type: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=OTpMzIPTaN/PeAoV9naza8XudirCZsVkx9XBw064bkU=; 
b=B/+KI+kZYODLHdBWEsexBkaY+ya+jXq3iwexhu5R8Evgm3vjgx3oq1gFXldYDMRvaJioST 6shzVU0UWnmDbnXzD4iKUcdRBR+kpb2iKyiCeW1ypBWhXQSR1TGadWBSffBCL00oGnJoQy eb6bhJHtGRl+yzVBMEmxHneK8ON7Mr8= Received: from mimecast-mx02.redhat.com (mimecast-mx02.redhat.com [66.187.233.88]) by relay.mimecast.com with ESMTP with STARTTLS (version=TLSv1.3, cipher=TLS_AES_256_GCM_SHA384) id us-mta-626-vAQK93SON1eA3E5WK4J--Q-1; Wed, 13 Mar 2024 08:57:56 -0400 X-MC-Unique: vAQK93SON1eA3E5WK4J--Q-1 Received: from smtp.corp.redhat.com (int-mx05.intmail.prod.int.rdu2.redhat.com [10.11.54.5]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (2048 bits) server-digest SHA256) (No client certificate requested) by mimecast-mx02.redhat.com (Postfix) with ESMTPS id 72529101A56C; Wed, 13 Mar 2024 12:57:56 +0000 (UTC) Received: from fedora.redhat.com (unknown [10.72.112.95]) by smtp.corp.redhat.com (Postfix) with ESMTP id F0E6810E47; Wed, 13 Mar 2024 12:57:53 +0000 (UTC) From: Pingfan Liu To: linux-arm-kernel@lists.infradead.org Cc: Pingfan Liu , Ard Biesheuvel , Catalin Marinas , Will Deacon , Mark Rutland Subject: [PATCH 10/10] arm64: head: Clean up unneeded routines Date: Wed, 13 Mar 2024 20:57:08 +0800 Message-ID: <20240313125711.20651-11-piliu@redhat.com> In-Reply-To: <20240313125711.20651-1-piliu@redhat.com> References: <20240313125711.20651-1-piliu@redhat.com> MIME-Version: 1.0 X-Scanned-By: MIMEDefang 3.4.1 on 10.11.54.5 X-Mimecast-Spam-Score: 0 X-Mimecast-Originator: redhat.com X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20240313_060027_403861_B9E4035F X-CRM114-Status: GOOD ( 16.89 ) X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org Signed-off-by: Pingfan Liu Cc: Ard Biesheuvel Cc: Catalin Marinas Cc: Will Deacon Cc: Mark Rutland To: linux-arm-kernel@lists.infradead.org --- arch/arm64/kernel/head.S | 143 --------------------------------------- 1 file changed, 143 deletions(-) diff --git a/arch/arm64/kernel/head.S b/arch/arm64/kernel/head.S index e2fa6b95f809..c38d169129ac 100644 --- a/arch/arm64/kernel/head.S +++ b/arch/arm64/kernel/head.S @@ -189,149 +189,6 @@ SYM_FUNC_START_LOCAL(clear_page_tables) b __pi_memset // tail call SYM_FUNC_END(clear_page_tables) -/* - * Macro to populate page table entries, these entries can be pointers to the next level - * or last level entries pointing to physical memory. - * - * tbl: page table address - * rtbl: pointer to page table or physical memory - * index: start index to write - * eindex: end index to write - [index, eindex] written to - * flags: flags for pagetable entry to or in - * inc: increment to rtbl between each entry - * tmp1: temporary variable - * - * Preserves: tbl, eindex, flags, inc - * Corrupts: index, tmp1 - * Returns: rtbl - */ - .macro populate_entries, tbl, rtbl, index, eindex, flags, inc, tmp1 -.Lpe\@: phys_to_pte \tmp1, \rtbl - orr \tmp1, \tmp1, \flags // tmp1 = table entry - str \tmp1, [\tbl, \index, lsl #3] - add \rtbl, \rtbl, \inc // rtbl = pa next level - add \index, \index, #1 - cmp \index, \eindex - b.ls .Lpe\@ - .endm - -/* - * Compute indices of table entries from virtual address range. 
If multiple entries - * were needed in the previous page table level then the next page table level is assumed - * to be composed of multiple pages. (This effectively scales the end index). - * - * vstart: virtual address of start of range - * vend: virtual address of end of range - we map [vstart, vend] - * shift: shift used to transform virtual address into index - * order: #imm 2log(number of entries in page table) - * istart: index in table corresponding to vstart - * iend: index in table corresponding to vend - * count: On entry: how many extra entries were required in previous level, scales - * our end index. - * On exit: returns how many extra entries required for next page table level - * - * Preserves: vstart, vend - * Returns: istart, iend, count - */ - .macro compute_indices, vstart, vend, shift, order, istart, iend, count - ubfx \istart, \vstart, \shift, \order - ubfx \iend, \vend, \shift, \order - add \iend, \iend, \count, lsl \order - sub \count, \iend, \istart - .endm - -/* - * Map memory for specified virtual address range. Each level of page table needed supports - * multiple entries. If a level requires n entries the next page table level is assumed to be - * formed from n pages. - * - * tbl: location of page table - * rtbl: address to be used for first level page table entry (typically tbl + PAGE_SIZE) - * vstart: virtual address of start of range - * vend: virtual address of end of range - we map [vstart, vend - 1] - * flags: flags to use to map last level entries - * phys: physical address corresponding to vstart - physical memory is contiguous - * order: #imm 2log(number of entries in PGD table) - * - * If extra_shift is set, an extra level will be populated if the end address does - * not fit in 'extra_shift' bits. This assumes vend is in the TTBR0 range. - * - * Temporaries: istart, iend, tmp, count, sv - these need to be different registers - * Preserves: vstart, flags - * Corrupts: tbl, rtbl, vend, istart, iend, tmp, count, sv - */ - .macro map_memory, tbl, rtbl, vstart, vend, flags, phys, order, istart, iend, tmp, count, sv, extra_shift - sub \vend, \vend, #1 - add \rtbl, \tbl, #PAGE_SIZE - mov \count, #0 - - .ifnb \extra_shift - tst \vend, #~((1 << (\extra_shift)) - 1) - b.eq .L_\@ - compute_indices \vstart, \vend, #\extra_shift, #(PAGE_SHIFT - 3), \istart, \iend, \count - mov \sv, \rtbl - populate_entries \tbl, \rtbl, \istart, \iend, #PMD_TYPE_TABLE, #PAGE_SIZE, \tmp - mov \tbl, \sv - .endif -.L_\@: - compute_indices \vstart, \vend, #PGDIR_SHIFT, #\order, \istart, \iend, \count - mov \sv, \rtbl - populate_entries \tbl, \rtbl, \istart, \iend, #PMD_TYPE_TABLE, #PAGE_SIZE, \tmp - mov \tbl, \sv - -#if SWAPPER_PGTABLE_LEVELS > 3 - compute_indices \vstart, \vend, #PUD_SHIFT, #(PAGE_SHIFT - 3), \istart, \iend, \count - mov \sv, \rtbl - populate_entries \tbl, \rtbl, \istart, \iend, #PMD_TYPE_TABLE, #PAGE_SIZE, \tmp - mov \tbl, \sv -#endif - -#if SWAPPER_PGTABLE_LEVELS > 2 - compute_indices \vstart, \vend, #SWAPPER_TABLE_SHIFT, #(PAGE_SHIFT - 3), \istart, \iend, \count - mov \sv, \rtbl - populate_entries \tbl, \rtbl, \istart, \iend, #PMD_TYPE_TABLE, #PAGE_SIZE, \tmp - mov \tbl, \sv -#endif - - compute_indices \vstart, \vend, #SWAPPER_BLOCK_SHIFT, #(PAGE_SHIFT - 3), \istart, \iend, \count - bic \rtbl, \phys, #SWAPPER_BLOCK_SIZE - 1 - populate_entries \tbl, \rtbl, \istart, \iend, \flags, #SWAPPER_BLOCK_SIZE, \tmp - .endm - -/* - * Remap a subregion created with the map_memory macro with modified attributes - * or output address. 
The entire remapped region must have been covered in the - * invocation of map_memory. - * - * x0: last level table address (returned in first argument to map_memory) - * x1: start VA of the existing mapping - * x2: start VA of the region to update - * x3: end VA of the region to update (exclusive) - * x4: start PA associated with the region to update - * x5: attributes to set on the updated region - * x6: order of the last level mappings - */ -SYM_FUNC_START_LOCAL(remap_region) - sub x3, x3, #1 // make end inclusive - - // Get the index offset for the start of the last level table - lsr x1, x1, x6 - bfi x1, xzr, #0, #PAGE_SHIFT - 3 - - // Derive the start and end indexes into the last level table - // associated with the provided region - lsr x2, x2, x6 - lsr x3, x3, x6 - sub x2, x2, x1 - sub x3, x3, x1 - - mov x1, #1 - lsl x6, x1, x6 // block size at this level - - populate_entries x0, x4, x2, x3, x5, x6, x7 - ret -SYM_FUNC_END(remap_region) - SYM_FUNC_START_LOCAL(create_idmap) adr_l x0, init_stack add sp, x0, #THREAD_SIZE