From patchwork Thu Oct 6 11:12:39 2022
X-Patchwork-Submitter: Alexandru Elisei
X-Patchwork-Id: 13000163
From: Alexandru Elisei
To: pbonzini@redhat.com, thuth@redhat.com, andrew.jones@linux.dev, kvm@vger.kernel.org, kvmarm@lists.cs.columbia.edu
Cc: Laurent Vivier, Janosch Frank, Claudio Imbrenda
Subject: [kvm-unit-tests PATCH 1/3] lib/vmalloc: Treat virt_to_pte_phys() as returning a physical address
Date: Thu, 6 Oct 2022 12:12:39 +0100
Message-Id: <20221006111241.15083-2-alexandru.elisei@arm.com>
In-Reply-To: <20221006111241.15083-1-alexandru.elisei@arm.com>
References: <20221006111241.15083-1-alexandru.elisei@arm.com>

All architectures that implement virt_to_pte_phys() (s390x, x86, arm and arm64) return a physical address from the function. Teach vmalloc to treat it as such, instead of confusing the return value with a page table entry.

Changing things the other way around (having the function return a page table entry instead) is not feasible, because it is possible for an architecture to use the upper bits of the table entry to store metadata about the page.

Cc: Paolo Bonzini
Cc: Thomas Huth
Cc: Andrew Jones
Cc: Laurent Vivier
Cc: Janosch Frank
Cc: Claudio Imbrenda
Signed-off-by: Alexandru Elisei
---
 lib/vmalloc.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/lib/vmalloc.c b/lib/vmalloc.c
index 572682576cc3..0696b5da8190 100644
--- a/lib/vmalloc.c
+++ b/lib/vmalloc.c
@@ -169,7 +169,7 @@ static void vm_free(void *mem)
 	/* the pointer is not page-aligned, it was a single-page allocation */
 	if (!IS_ALIGNED((uintptr_t)mem, PAGE_SIZE)) {
 		assert(GET_MAGIC(mem) == VM_MAGIC);
-		page = virt_to_pte_phys(page_root, mem) & PAGE_MASK;
+		page = virt_to_pte_phys(page_root, mem);
 		assert(page);
 		free_page(phys_to_virt(page));
 		return;
@@ -183,7 +183,7 @@ static void vm_free(void *mem)
 	/* free all the pages including the metadata page */
 	ptr = (uintptr_t)m & PAGE_MASK;
 	for (i = 0 ; i < m->npages + 1; i++, ptr += PAGE_SIZE) {
-		page = virt_to_pte_phys(page_root, (void *)ptr) & PAGE_MASK;
+		page = virt_to_pte_phys(page_root, (void *)ptr);
 		assert(page);
 		free_page(phys_to_virt(page));
 	}

From patchwork Thu Oct 6 11:12:40 2022
X-Patchwork-Submitter: Alexandru Elisei
X-Patchwork-Id: 13000164
From: Alexandru Elisei
To: pbonzini@redhat.com, thuth@redhat.com, andrew.jones@linux.dev, kvm@vger.kernel.org, kvmarm@lists.cs.columbia.edu
Subject: [kvm-unit-tests PATCH 2/3] arm/arm64: mmu: Teach virt_to_pte_phys() about block descriptors
Date: Thu, 6 Oct 2022 12:12:40 +0100
Message-Id: <20221006111241.15083-3-alexandru.elisei@arm.com>
In-Reply-To: <20221006111241.15083-1-alexandru.elisei@arm.com>
References: <20221006111241.15083-1-alexandru.elisei@arm.com>

The arm and arm64 architectures allow a virtual address to be mapped using a block descriptor (or huge page, as Linux calls it), and the function mmu_set_range_sect() is made available for a test to do just that. But virt_to_pte_phys() assumes that all virtual addresses are mapped with page granularity, which can lead to erroneous addresses being returned in the case of block mappings.
Signed-off-by: Alexandru Elisei
---
 lib/arm/mmu.c | 89 +++++++++++++++++++++++++++++++--------------------
 1 file changed, 54 insertions(+), 35 deletions(-)

diff --git a/lib/arm/mmu.c b/lib/arm/mmu.c
index e1a72fe4941f..2aaa63d538c0 100644
--- a/lib/arm/mmu.c
+++ b/lib/arm/mmu.c
@@ -111,10 +111,61 @@ pteval_t *install_page(pgd_t *pgtable, phys_addr_t phys, void *virt)
 			    __pgprot(PTE_WBWA | PTE_USER));
 }
 
-phys_addr_t virt_to_pte_phys(pgd_t *pgtable, void *mem)
+/*
+ * NOTE: The Arm architecture might require the use of a
+ * break-before-make sequence before making changes to a PTE and
+ * certain conditions are met (see Arm ARM D5-2669 for AArch64 and
+ * B3-1378 for AArch32 for more details).
+ */
+pteval_t *mmu_get_pte(pgd_t *pgtable, uintptr_t vaddr)
 {
-	return (*get_pte(pgtable, (uintptr_t)mem) & PHYS_MASK & -PAGE_SIZE)
-		+ ((ulong)mem & (PAGE_SIZE - 1));
+	pgd_t *pgd;
+	pud_t *pud;
+	pmd_t *pmd;
+	pte_t *pte;
+
+	if (!mmu_enabled())
+		return NULL;
+
+	pgd = pgd_offset(pgtable, vaddr);
+	if (!pgd_valid(*pgd))
+		return NULL;
+
+	pud = pud_offset(pgd, vaddr);
+	if (!pud_valid(*pud))
+		return NULL;
+
+	pmd = pmd_offset(pud, vaddr);
+	if (!pmd_valid(*pmd))
+		return NULL;
+	if (pmd_huge(*pmd))
+		return &pmd_val(*pmd);
+
+	pte = pte_offset(pmd, vaddr);
+	if (!pte_valid(*pte))
+		return NULL;
+
+	return &pte_val(*pte);
+}
+
+phys_addr_t virt_to_pte_phys(pgd_t *pgtable, void *virt)
+{
+	phys_addr_t mask;
+	pteval_t *pteval;
+
+	pteval = mmu_get_pte(pgtable, (uintptr_t)virt);
+	if (!pteval || !pte_valid(__pte(*pteval))) {
+		install_page(pgtable, (phys_addr_t)(unsigned long)virt, virt);
+		return (phys_addr_t)(unsigned long)virt;
+	}
+
+	if (pmd_huge(__pmd(*pteval)))
+		mask = PMD_MASK;
+	else
+		mask = PAGE_MASK;
+
+	return (*pteval & PHYS_MASK & mask) |
+	       ((phys_addr_t)(unsigned long)virt & ~mask);
 }
 
 void mmu_set_range_ptes(pgd_t *pgtable, uintptr_t virt_offset,
@@ -231,38 +282,6 @@ unsigned long __phys_to_virt(phys_addr_t addr)
 	return addr;
 }
 
-/*
- * NOTE: The Arm architecture might require the use of a
- * break-before-make sequence before making changes to a PTE and
- * certain conditions are met (see Arm ARM D5-2669 for AArch64 and
- * B3-1378 for AArch32 for more details).
- */
-pteval_t *mmu_get_pte(pgd_t *pgtable, uintptr_t vaddr)
-{
-	pgd_t *pgd;
-	pud_t *pud;
-	pmd_t *pmd;
-	pte_t *pte;
-
-	if (!mmu_enabled())
-		return NULL;
-
-	pgd = pgd_offset(pgtable, vaddr);
-	assert(pgd_valid(*pgd));
-	pud = pud_offset(pgd, vaddr);
-	assert(pud_valid(*pud));
-	pmd = pmd_offset(pud, vaddr);
-	assert(pmd_valid(*pmd));
-
-	if (pmd_huge(*pmd))
-		return &pmd_val(*pmd);
-
-	pte = pte_offset(pmd, vaddr);
-	assert(pte_valid(*pte));
-
-	return &pte_val(*pte);
-}
-
 void mmu_clear_user(pgd_t *pgtable, unsigned long vaddr)
 {
 	pteval_t *p_pte = mmu_get_pte(pgtable, vaddr);

From patchwork Thu Oct 6 11:12:41 2022
X-Patchwork-Submitter: Alexandru Elisei
X-Patchwork-Id: 13000165
From: Alexandru Elisei
To: pbonzini@redhat.com, thuth@redhat.com, andrew.jones@linux.dev, kvm@vger.kernel.org, kvmarm@lists.cs.columbia.edu
Subject: [kvm-unit-tests PATCH 3/3] arm/arm64: mmu: Rename mmu_get_pte() -> follow_pte()
Date: Thu, 6 Oct 2022 12:12:41 +0100
Message-Id: <20221006111241.15083-4-alexandru.elisei@arm.com>
In-Reply-To: <20221006111241.15083-1-alexandru.elisei@arm.com>
References: <20221006111241.15083-1-alexandru.elisei@arm.com>

The function get_pte() from mmu.c returns a pointer to the PTE associated with the requested virtual address, mapping the virtual address in the process if it's not already mapped. mmu_get_pte() returns a pointer to the PTE if and only if the virtual address is mapped in pgtable, and NULL otherwise. Rename it to follow_pte() to avoid any confusion with get_pte(); follow_pte() also matches the name of the Linux kernel function with a similar purpose.

Also remove the mmu_enabled() check from the function, as its purpose is to look up the mapping for the virtual address in the pgtable supplied as the argument, not to translate the virtual address to a physical address using the current translation; that's what virt_to_phys() does.
Signed-off-by: Alexandru Elisei
---
 lib/arm/asm/mmu-api.h | 2 +-
 lib/arm/mmu.c         | 9 +++------
 2 files changed, 4 insertions(+), 7 deletions(-)

diff --git a/lib/arm/asm/mmu-api.h b/lib/arm/asm/mmu-api.h
index 3d77cbfd8b24..6c1136d957f9 100644
--- a/lib/arm/asm/mmu-api.h
+++ b/lib/arm/asm/mmu-api.h
@@ -17,6 +17,6 @@ extern void mmu_set_range_sect(pgd_t *pgtable, uintptr_t virt_offset,
 extern void mmu_set_range_ptes(pgd_t *pgtable, uintptr_t virt_offset,
 			       phys_addr_t phys_start, phys_addr_t phys_end,
 			       pgprot_t prot);
-extern pteval_t *mmu_get_pte(pgd_t *pgtable, uintptr_t vaddr);
+extern pteval_t *follow_pte(pgd_t *pgtable, uintptr_t vaddr);
 extern void mmu_clear_user(pgd_t *pgtable, unsigned long vaddr);
 #endif
diff --git a/lib/arm/mmu.c b/lib/arm/mmu.c
index 2aaa63d538c0..ec3ce63f2316 100644
--- a/lib/arm/mmu.c
+++ b/lib/arm/mmu.c
@@ -117,16 +117,13 @@ pteval_t *install_page(pgd_t *pgtable, phys_addr_t phys, void *virt)
  * certain conditions are met (see Arm ARM D5-2669 for AArch64 and
  * B3-1378 for AArch32 for more details).
  */
-pteval_t *mmu_get_pte(pgd_t *pgtable, uintptr_t vaddr)
+pteval_t *follow_pte(pgd_t *pgtable, uintptr_t vaddr)
 {
 	pgd_t *pgd;
 	pud_t *pud;
 	pmd_t *pmd;
 	pte_t *pte;
 
-	if (!mmu_enabled())
-		return NULL;
-
 	pgd = pgd_offset(pgtable, vaddr);
 	if (!pgd_valid(*pgd))
 		return NULL;
@@ -153,7 +150,7 @@ phys_addr_t virt_to_pte_phys(pgd_t *pgtable, void *virt)
 	phys_addr_t mask;
 	pteval_t *pteval;
 
-	pteval = mmu_get_pte(pgtable, (uintptr_t)virt);
+	pteval = follow_pte(pgtable, (uintptr_t)virt);
 	if (!pteval || !pte_valid(__pte(*pteval))) {
 		install_page(pgtable, (phys_addr_t)(unsigned long)virt, virt);
 		return (phys_addr_t)(unsigned long)virt;
@@ -284,7 +281,7 @@ unsigned long __phys_to_virt(phys_addr_t addr)
 
 void mmu_clear_user(pgd_t *pgtable, unsigned long vaddr)
 {
-	pteval_t *p_pte = mmu_get_pte(pgtable, vaddr);
+	pteval_t *p_pte = follow_pte(pgtable, vaddr);
 	if (p_pte) {
 		pteval_t entry = *p_pte & ~PTE_USER;
 		WRITE_ONCE(*p_pte, entry);