From patchwork Fri Dec 9 01:53:03 2022
X-Patchwork-Submitter: Oliver Upton
X-Patchwork-Id: 13069143
From: Oliver Upton
To: Marc Zyngier, James Morse, Alexandru Elisei, Paolo Bonzini, Shuah Khan
Cc: linux-arm-kernel@lists.infradead.org, kvmarm@lists.cs.columbia.edu,
 kvm@vger.kernel.org, kvmarm@lists.linux.dev, Ricardo Koller,
 Sean Christopherson, Oliver Upton, linux-kselftest@vger.kernel.org,
 linux-kernel@vger.kernel.org
Subject: [PATCH v2 4/7] KVM: selftests: Correctly initialize the VA space for TTBR0_EL1
Date: Fri, 9 Dec 2022 01:53:03 +0000
Message-Id: <20221209015307.1781352-5-oliver.upton@linux.dev>
In-Reply-To: <20221209015307.1781352-1-oliver.upton@linux.dev>
References: <20221209015307.1781352-1-oliver.upton@linux.dev>

An interesting feature of the Arm architecture is that the stage-1 MMU
supports two distinct VA regions, controlled by TTBR{0,1}_EL1. As KVM
selftests on arm64 only use TTBR0_EL1, the VA space is constrained to
[0, 2^(va_bits)). This is different from other architectures, which
allow addressing the low and high regions of the VA space from a single
page table.
KVM selftests' VA space allocator presumes the valid address range is
split between low and high memory based on the MSB, which of course is
a poor match for arm64's TTBR0 region. Add a helper that correctly
handles both addressing schemes with a comment describing each.

Signed-off-by: Oliver Upton
Reviewed-by: Sean Christopherson
---
 .../selftests/kvm/include/kvm_util_base.h  |  1 +
 tools/testing/selftests/kvm/lib/kvm_util.c | 49 ++++++++++++++++---
 2 files changed, 44 insertions(+), 6 deletions(-)

diff --git a/tools/testing/selftests/kvm/include/kvm_util_base.h b/tools/testing/selftests/kvm/include/kvm_util_base.h
index 6cd86da698b3..b193863d754f 100644
--- a/tools/testing/selftests/kvm/include/kvm_util_base.h
+++ b/tools/testing/selftests/kvm/include/kvm_util_base.h
@@ -103,6 +103,7 @@ struct kvm_vm {
 	struct sparsebit *vpages_mapped;
 	bool has_irqchip;
 	bool pgd_created;
+	bool has_split_va_space;
 	vm_paddr_t ucall_mmio_addr;
 	vm_paddr_t pgd;
 	vm_vaddr_t gdt;
diff --git a/tools/testing/selftests/kvm/lib/kvm_util.c b/tools/testing/selftests/kvm/lib/kvm_util.c
index a256ec67aff6..53d15f32f220 100644
--- a/tools/testing/selftests/kvm/lib/kvm_util.c
+++ b/tools/testing/selftests/kvm/lib/kvm_util.c
@@ -186,6 +186,43 @@ const struct vm_guest_mode_params vm_guest_mode_params[] = {
 _Static_assert(sizeof(vm_guest_mode_params)/sizeof(struct vm_guest_mode_params) == NUM_VM_MODES,
 	       "Missing new mode params?");
 
+/*
+ * Initializes vm->vpages_valid to match the canonical VA space of the
+ * architecture.
+ *
+ * Most architectures split the range addressed by a single page table into a
+ * low and high region based on the MSB of the VA. On architectures with this
+ * behavior the VA region spans [0, 2^(va_bits - 1)), [-(2^(va_bits - 1)), -1].
+ *
+ * arm64 is a bit different from the rest of the crowd, as the low and high
+ * regions of the VA space are addressed by distinct paging structures
+ * (TTBR{0,1}_EL1). KVM selftests on arm64 only uses TTBR0_EL1, meaning that we
+ * only have a low VA region. As there is no VA split based on the MSB, the VA
+ * region spans [0, 2^va_bits).
+ */
+static void vm_vaddr_populate_bitmap(struct kvm_vm *vm)
+{
+	sparsebit_num_t contig_va_bits = vm->va_bits;
+	sparsebit_num_t nr_contig_pages;
+
+	/*
+	 * Depending on the architecture, the MSB of the VA could split between
+	 * low and high regions. When that is the case each region has
+	 * va_bits - 1 of address.
+	 */
+	if (vm->has_split_va_space)
+		contig_va_bits--;
+
+	nr_contig_pages = (1ULL << contig_va_bits) >> vm->page_shift;
+
+	sparsebit_set_num(vm->vpages_valid, 0, nr_contig_pages);
+
+	if (vm->has_split_va_space)
+		sparsebit_set_num(vm->vpages_valid,
+				  -(1ULL << contig_va_bits),
+				  nr_contig_pages);
+}
+
 struct kvm_vm *____vm_create(enum vm_guest_mode mode)
 {
 	struct kvm_vm *vm;
@@ -268,17 +305,17 @@ struct kvm_vm *____vm_create(enum vm_guest_mode mode)
 #ifdef __aarch64__
 	if (vm->pa_bits != 40)
 		vm->type = KVM_VM_TYPE_ARM_IPA_SIZE(vm->pa_bits);
+
+	/* selftests use TTBR0 only, meaning there is a single VA region. */
+	vm->has_split_va_space = false;
+#else
+	vm->has_split_va_space = true;
 #endif
 
 	vm_open(vm);
 
-	/* Limit to VA-bit canonical virtual addresses. */
 	vm->vpages_valid = sparsebit_alloc();
-	sparsebit_set_num(vm->vpages_valid,
-		0, (1ULL << (vm->va_bits - 1)) >> vm->page_shift);
-	sparsebit_set_num(vm->vpages_valid,
-		(~((1ULL << (vm->va_bits - 1)) - 1)) >> vm->page_shift,
-		(1ULL << (vm->va_bits - 1)) >> vm->page_shift);
+	vm_vaddr_populate_bitmap(vm);
 
 	/* Limit physical addresses to PA-bits. */
 	vm->max_gfn = vm_compute_max_gfn(vm);
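
For readers less familiar with the two layouts, below is a minimal standalone
sketch (not code from the kernel tree; the va_bits/page_shift values and the
show_va_layout() helper are arbitrary examples) that prints the contiguous VA
region(s) each addressing scheme produces, mirroring only the arithmetic the
new helper is based on:

/*
 * Standalone illustration (example values, not kernel code): print the
 * contiguous VA region(s) implied by each addressing scheme.
 */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

static void show_va_layout(unsigned int va_bits, unsigned int page_shift,
			   bool has_split_va_space)
{
	/* With an MSB-based split, each region only covers va_bits - 1 bits. */
	unsigned int contig_va_bits = has_split_va_space ? va_bits - 1 : va_bits;
	uint64_t nr_contig_pages = (1ULL << contig_va_bits) >> page_shift;

	printf("va_bits=%u split=%d\n", va_bits, has_split_va_space);
	printf("  low region:  [0x0, 0x%llx), %llu pages\n",
	       (unsigned long long)(1ULL << contig_va_bits),
	       (unsigned long long)nr_contig_pages);

	/* The high region runs from -(2^contig_va_bits) up to the top VA. */
	if (has_split_va_space)
		printf("  high region: [0x%llx, 0x%llx], %llu pages\n",
		       (unsigned long long)-(1ULL << contig_va_bits),
		       (unsigned long long)UINT64_MAX,
		       (unsigned long long)nr_contig_pages);
}

int main(void)
{
	show_va_layout(48, 12, true);	/* MSB-split VA space (e.g. x86-64) */
	show_va_layout(48, 12, false);	/* single low region (arm64, TTBR0_EL1 only) */
	return 0;
}

With va_bits = 48 and 4KiB pages this yields two regions of 2^35 pages each
for the MSB-split layout, and a single region of 2^36 pages for the
TTBR0-only layout.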