From patchwork Fri Apr 29 18:39:27 2022
X-Patchwork-Submitter: David Matlack
X-Patchwork-Id: 12832596
Date: Fri, 29 Apr 2022 18:39:27 +0000
In-Reply-To: <20220429183935.1094599-1-dmatlack@google.com>
Message-Id: <20220429183935.1094599-2-dmatlack@google.com>
References: <20220429183935.1094599-1-dmatlack@google.com>
Subject: [PATCH 1/9] KVM: selftests: Replace x86_page_size with raw levels
From: David Matlack
To: Paolo Bonzini
Cc: Ben Gardon, Sean Christopherson, Oliver Upton, Peter Xu, Vitaly Kuznetsov, Andrew Jones, "open list:KERNEL VIRTUAL MACHINE (KVM)", David Matlack
X-Mailing-List: kvm@vger.kernel.org

x86_page_size is an enum used to communicate the desired page size with which to map a range of memory.
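[Editor's illustration, not part of the patch: the enum values double as page-table levels, so level 0 is 4K, level 1 is 2M and level 2 is 1G, i.e. size = 1ull << (level * 9 + 12), which is exactly what the PG_LEVEL_SIZE() macro introduced in the diff below computes. The standalone sketch below mirrors those macros and can be compiled on its own to check the arithmetic.]

#include <stdio.h>

#define PG_LEVEL_4K 0
#define PG_LEVEL_2M 1
#define PG_LEVEL_1G 2

/* Each level adds 9 bits of index on top of the 12-bit page offset. */
#define PG_LEVEL_SIZE(_level) (1ull << (((_level) * 9) + 12))

int main(void)
{
	printf("PG_LEVEL_4K -> 0x%llx bytes\n", PG_LEVEL_SIZE(PG_LEVEL_4K)); /* 0x1000     (4 KiB) */
	printf("PG_LEVEL_2M -> 0x%llx bytes\n", PG_LEVEL_SIZE(PG_LEVEL_2M)); /* 0x200000   (2 MiB) */
	printf("PG_LEVEL_1G -> 0x%llx bytes\n", PG_LEVEL_SIZE(PG_LEVEL_1G)); /* 0x40000000 (1 GiB) */
	return 0;
}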
Under the hood they just encode the desired level at which to map the page. This ends up being clunky in a few ways: - The name suggests it encodes the size of the page rather than the level. - In other places in x86_64/processor.c we just use a raw int to encode the level. Simplify this by just admitting that x86_page_size is just the level and using an int and some more obviously named macros (e.g. PG_LEVEL_1G). Signed-off-by: David Matlack --- .../selftests/kvm/include/x86_64/processor.h | 14 +++++----- .../selftests/kvm/lib/x86_64/processor.c | 27 +++++++++---------- .../selftests/kvm/max_guest_memory_test.c | 2 +- .../selftests/kvm/x86_64/mmu_role_test.c | 2 +- 4 files changed, 22 insertions(+), 23 deletions(-) base-commit: 84e5ffd045f33e4fa32370135436d987478d0bf7 diff --git a/tools/testing/selftests/kvm/include/x86_64/processor.h b/tools/testing/selftests/kvm/include/x86_64/processor.h index 37db341d4cc5..b512f9f508ae 100644 --- a/tools/testing/selftests/kvm/include/x86_64/processor.h +++ b/tools/testing/selftests/kvm/include/x86_64/processor.h @@ -465,13 +465,13 @@ void vcpu_set_hv_cpuid(struct kvm_vm *vm, uint32_t vcpuid); struct kvm_cpuid2 *vcpu_get_supported_hv_cpuid(struct kvm_vm *vm, uint32_t vcpuid); void vm_xsave_req_perm(int bit); -enum x86_page_size { - X86_PAGE_SIZE_4K = 0, - X86_PAGE_SIZE_2M, - X86_PAGE_SIZE_1G, -}; -void __virt_pg_map(struct kvm_vm *vm, uint64_t vaddr, uint64_t paddr, - enum x86_page_size page_size); +#define PG_LEVEL_4K 0 +#define PG_LEVEL_2M 1 +#define PG_LEVEL_1G 2 + +#define PG_LEVEL_SIZE(_level) (1ull << (((_level) * 9) + 12)) + +void __virt_pg_map(struct kvm_vm *vm, uint64_t vaddr, uint64_t paddr, int level); /* * Basic CPU control in CR0 diff --git a/tools/testing/selftests/kvm/lib/x86_64/processor.c b/tools/testing/selftests/kvm/lib/x86_64/processor.c index 9f000dfb5594..1a7de69e2495 100644 --- a/tools/testing/selftests/kvm/lib/x86_64/processor.c +++ b/tools/testing/selftests/kvm/lib/x86_64/processor.c @@ -199,15 +199,15 @@ static struct pageUpperEntry *virt_create_upper_pte(struct kvm_vm *vm, uint64_t pt_pfn, uint64_t vaddr, uint64_t paddr, - int level, - enum x86_page_size page_size) + int current_level, + int target_level) { - struct pageUpperEntry *pte = virt_get_pte(vm, pt_pfn, vaddr, level); + struct pageUpperEntry *pte = virt_get_pte(vm, pt_pfn, vaddr, current_level); if (!pte->present) { pte->writable = true; pte->present = true; - pte->page_size = (level == page_size); + pte->page_size = (current_level == target_level); if (pte->page_size) pte->pfn = paddr >> vm->page_shift; else @@ -218,20 +218,19 @@ static struct pageUpperEntry *virt_create_upper_pte(struct kvm_vm *vm, * a hugepage at this level, and that there isn't a hugepage at * this level. 
*/ - TEST_ASSERT(level != page_size, + TEST_ASSERT(current_level != target_level, "Cannot create hugepage at level: %u, vaddr: 0x%lx\n", - page_size, vaddr); + current_level, vaddr); TEST_ASSERT(!pte->page_size, "Cannot create page table at level: %u, vaddr: 0x%lx\n", - level, vaddr); + current_level, vaddr); } return pte; } -void __virt_pg_map(struct kvm_vm *vm, uint64_t vaddr, uint64_t paddr, - enum x86_page_size page_size) +void __virt_pg_map(struct kvm_vm *vm, uint64_t vaddr, uint64_t paddr, int level) { - const uint64_t pg_size = 1ull << ((page_size * 9) + 12); + const uint64_t pg_size = PG_LEVEL_SIZE(level); struct pageUpperEntry *pml4e, *pdpe, *pde; struct pageTableEntry *pte; @@ -256,15 +255,15 @@ void __virt_pg_map(struct kvm_vm *vm, uint64_t vaddr, uint64_t paddr, * early if a hugepage was created. */ pml4e = virt_create_upper_pte(vm, vm->pgd >> vm->page_shift, - vaddr, paddr, 3, page_size); + vaddr, paddr, 3, level); if (pml4e->page_size) return; - pdpe = virt_create_upper_pte(vm, pml4e->pfn, vaddr, paddr, 2, page_size); + pdpe = virt_create_upper_pte(vm, pml4e->pfn, vaddr, paddr, 2, level); if (pdpe->page_size) return; - pde = virt_create_upper_pte(vm, pdpe->pfn, vaddr, paddr, 1, page_size); + pde = virt_create_upper_pte(vm, pdpe->pfn, vaddr, paddr, 1, level); if (pde->page_size) return; @@ -279,7 +278,7 @@ void __virt_pg_map(struct kvm_vm *vm, uint64_t vaddr, uint64_t paddr, void virt_pg_map(struct kvm_vm *vm, uint64_t vaddr, uint64_t paddr) { - __virt_pg_map(vm, vaddr, paddr, X86_PAGE_SIZE_4K); + __virt_pg_map(vm, vaddr, paddr, PG_LEVEL_4K); } static struct pageTableEntry *_vm_get_page_table_entry(struct kvm_vm *vm, int vcpuid, diff --git a/tools/testing/selftests/kvm/max_guest_memory_test.c b/tools/testing/selftests/kvm/max_guest_memory_test.c index 3875c4b23a04..15f046e19cb2 100644 --- a/tools/testing/selftests/kvm/max_guest_memory_test.c +++ b/tools/testing/selftests/kvm/max_guest_memory_test.c @@ -244,7 +244,7 @@ int main(int argc, char *argv[]) #ifdef __x86_64__ /* Identity map memory in the guest using 1gb pages. */ for (i = 0; i < slot_size; i += size_1gb) - __virt_pg_map(vm, gpa + i, gpa + i, X86_PAGE_SIZE_1G); + __virt_pg_map(vm, gpa + i, gpa + i, PG_LEVEL_1G); #else for (i = 0; i < slot_size; i += vm_get_page_size(vm)) virt_pg_map(vm, gpa + i, gpa + i); diff --git a/tools/testing/selftests/kvm/x86_64/mmu_role_test.c b/tools/testing/selftests/kvm/x86_64/mmu_role_test.c index da2325fcad87..bdecd532f935 100644 --- a/tools/testing/selftests/kvm/x86_64/mmu_role_test.c +++ b/tools/testing/selftests/kvm/x86_64/mmu_role_test.c @@ -35,7 +35,7 @@ static void mmu_role_test(u32 *cpuid_reg, u32 evil_cpuid_val) run = vcpu_state(vm, VCPU_ID); /* Map 1gb page without a backing memlot. 
 */
-	__virt_pg_map(vm, MMIO_GPA, MMIO_GPA, X86_PAGE_SIZE_1G);
+	__virt_pg_map(vm, MMIO_GPA, MMIO_GPA, PG_LEVEL_1G);
 	r = _vcpu_run(vm, VCPU_ID);

From patchwork Fri Apr 29 18:39:28 2022
X-Patchwork-Submitter: David Matlack
X-Patchwork-Id: 12832597
Date: Fri, 29 Apr 2022 18:39:28 +0000
In-Reply-To: <20220429183935.1094599-1-dmatlack@google.com>
Message-Id: <20220429183935.1094599-3-dmatlack@google.com>
References: <20220429183935.1094599-1-dmatlack@google.com>
Subject: [PATCH 2/9] KVM: selftests: Add option to create 2M and 1G EPT mappings
From: David Matlack
To: Paolo Bonzini
Cc: Ben Gardon, Sean Christopherson, Oliver Upton, Peter Xu, Vitaly Kuznetsov, Andrew Jones, "open list:KERNEL VIRTUAL MACHINE (KVM)", David Matlack
X-Mailing-List: kvm@vger.kernel.org The current EPT mapping code in the selftests only supports mapping 4K pages. This commit extends that support with an option to map at 2M or 1G. This will be used in a future commit to create large page mappings to test eager page splitting. No functional change intended. Signed-off-by: David Matlack --- tools/testing/selftests/kvm/lib/x86_64/vmx.c | 105 ++++++++++--------- 1 file changed, 57 insertions(+), 48 deletions(-) diff --git a/tools/testing/selftests/kvm/lib/x86_64/vmx.c b/tools/testing/selftests/kvm/lib/x86_64/vmx.c index d089d8b850b5..1fa2d1059ade 100644 --- a/tools/testing/selftests/kvm/lib/x86_64/vmx.c +++ b/tools/testing/selftests/kvm/lib/x86_64/vmx.c @@ -392,27 +392,60 @@ void nested_vmx_check_supported(void) } } -void nested_pg_map(struct vmx_pages *vmx, struct kvm_vm *vm, - uint64_t nested_paddr, uint64_t paddr) +static void nested_create_upper_pte(struct kvm_vm *vm, + struct eptPageTableEntry *pte, + uint64_t nested_paddr, + uint64_t paddr, + int current_level, + int target_level) +{ + if (!pte->readable) { + pte->writable = true; + pte->readable = true; + pte->executable = true; + pte->page_size = (current_level == target_level); + if (pte->page_size) + pte->address = paddr >> vm->page_shift; + else + pte->address = vm_alloc_page_table(vm) >> vm->page_shift; + } else { + /* + * Entry already present. Assert that the caller doesn't want + * a hugepage at this level, and that there isn't a hugepage at + * this level. + */ + TEST_ASSERT(current_level != target_level, + "Cannot create hugepage at level: %u, nested_paddr: 0x%lx\n", + current_level, nested_paddr); + TEST_ASSERT(!pte->page_size, + "Cannot create page table at level: %u, nested_paddr: 0x%lx\n", + current_level, nested_paddr); + } +} + + +void __nested_pg_map(struct vmx_pages *vmx, struct kvm_vm *vm, + uint64_t nested_paddr, uint64_t paddr, int target_level) { + const uint64_t page_size = PG_LEVEL_SIZE(target_level); + struct eptPageTableEntry *pt; uint16_t index[4]; - struct eptPageTableEntry *pml4e; TEST_ASSERT(vm->mode == VM_MODE_PXXV48_4K, "Attempt to use " "unknown or unsupported guest mode, mode: 0x%x", vm->mode); - TEST_ASSERT((nested_paddr % vm->page_size) == 0, + TEST_ASSERT((nested_paddr % page_size) == 0, "Nested physical address not on page boundary,\n" - " nested_paddr: 0x%lx vm->page_size: 0x%x", - nested_paddr, vm->page_size); + " nested_paddr: 0x%lx page_size: 0x%lx", + nested_paddr, page_size); TEST_ASSERT((nested_paddr >> vm->page_shift) <= vm->max_gfn, "Physical address beyond beyond maximum supported,\n" " nested_paddr: 0x%lx vm->max_gfn: 0x%lx vm->page_size: 0x%x", paddr, vm->max_gfn, vm->page_size); - TEST_ASSERT((paddr % vm->page_size) == 0, + TEST_ASSERT((paddr % page_size) == 0, "Physical address not on page boundary,\n" - " paddr: 0x%lx vm->page_size: 0x%x", - paddr, vm->page_size); + " paddr: 0x%lx page_size: 0x%lx", + paddr, page_size); TEST_ASSERT((paddr >> vm->page_shift) <= vm->max_gfn, "Physical address beyond beyond maximum supported,\n" " paddr: 0x%lx vm->max_gfn: 0x%lx vm->page_size: 0x%x", @@ -423,49 +456,25 @@ void nested_pg_map(struct vmx_pages *vmx, struct kvm_vm *vm, index[2] = (nested_paddr >> 30) & 0x1ffu; index[3] = (nested_paddr >> 39) & 0x1ffu; - /* Allocate page directory pointer table if not present. 
*/ - pml4e = vmx->eptp_hva; - if (!pml4e[index[3]].readable) { - pml4e[index[3]].address = vm_alloc_page_table(vm) >> vm->page_shift; - pml4e[index[3]].writable = true; - pml4e[index[3]].readable = true; - pml4e[index[3]].executable = true; - } + pt = vmx->eptp_hva; - /* Allocate page directory table if not present. */ - struct eptPageTableEntry *pdpe; - pdpe = addr_gpa2hva(vm, pml4e[index[3]].address * vm->page_size); - if (!pdpe[index[2]].readable) { - pdpe[index[2]].address = vm_alloc_page_table(vm) >> vm->page_shift; - pdpe[index[2]].writable = true; - pdpe[index[2]].readable = true; - pdpe[index[2]].executable = true; - } + for (int current_level = 3; current_level >= 0; current_level--) { + struct eptPageTableEntry *pte = &pt[index[current_level]]; - /* Allocate page table if not present. */ - struct eptPageTableEntry *pde; - pde = addr_gpa2hva(vm, pdpe[index[2]].address * vm->page_size); - if (!pde[index[1]].readable) { - pde[index[1]].address = vm_alloc_page_table(vm) >> vm->page_shift; - pde[index[1]].writable = true; - pde[index[1]].readable = true; - pde[index[1]].executable = true; - } + nested_create_upper_pte(vm, pte, nested_paddr, paddr, + current_level, target_level); - /* Fill in page table entry. */ - struct eptPageTableEntry *pte; - pte = addr_gpa2hva(vm, pde[index[1]].address * vm->page_size); - pte[index[0]].address = paddr >> vm->page_shift; - pte[index[0]].writable = true; - pte[index[0]].readable = true; - pte[index[0]].executable = true; + if (pte->page_size) + break; - /* - * For now mark these as accessed and dirty because the only - * testcase we have needs that. Can be reconsidered later. - */ - pte[index[0]].accessed = true; - pte[index[0]].dirty = true; + pt = addr_gpa2hva(vm, pte->address * vm->page_size); + } +} + +void nested_pg_map(struct vmx_pages *vmx, struct kvm_vm *vm, + uint64_t nested_paddr, uint64_t paddr) +{ + __nested_pg_map(vmx, vm, nested_paddr, paddr, PG_LEVEL_4K); } /* From patchwork Fri Apr 29 18:39:29 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: David Matlack X-Patchwork-Id: 12832598 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 2830DC433EF for ; Fri, 29 Apr 2022 18:39:51 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1380078AbiD2SnI (ORCPT ); Fri, 29 Apr 2022 14:43:08 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:35906 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1359759AbiD2SnH (ORCPT ); Fri, 29 Apr 2022 14:43:07 -0400 Received: from mail-pj1-x1049.google.com (mail-pj1-x1049.google.com [IPv6:2607:f8b0:4864:20::1049]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 35826D64CD for ; Fri, 29 Apr 2022 11:39:49 -0700 (PDT) Received: by mail-pj1-x1049.google.com with SMTP id r16-20020a17090b051000b001db302efed7so3448356pjz.2 for ; Fri, 29 Apr 2022 11:39:49 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20210112; h=date:in-reply-to:message-id:mime-version:references:subject:from:to :cc; bh=eLj9VWCH6YzHOSIQaoJWqtR8rFElApiGGUxoXWV1O94=; b=OJFI0bT7LMQpxeCViUR4+UD694zimhrAh2VPA0v9AMUuhyRMsWCNSeFSBylOT9gyUk 774fXxzii3Q5lJivzjja6/6V6/6TpcfqVQMBSmIRVpfNpkylP+nHG6FBqGBTLG9lI0xA 349tvY55CZ7O5qWzCb4db0JaENxafsHQ64/r+aR/CODZBsOAlPhSP9JfqykhY3Ih7V5y 
Date: Fri, 29 Apr 2022 18:39:29 +0000
In-Reply-To: <20220429183935.1094599-1-dmatlack@google.com>
Message-Id: <20220429183935.1094599-4-dmatlack@google.com>
References: <20220429183935.1094599-1-dmatlack@google.com>
Subject: [PATCH 3/9] KVM: selftests: Drop stale function parameter comment for nested_map()
From: David Matlack
To: Paolo Bonzini
Cc: Ben Gardon, Sean Christopherson, Oliver Upton, Peter Xu, Vitaly Kuznetsov, Andrew Jones, "open list:KERNEL VIRTUAL MACHINE (KVM)", David Matlack
X-Mailing-List: kvm@vger.kernel.org

nested_map() does not take a parameter named eptp_memslot. Drop the comment referring to it.
Signed-off-by: David Matlack
Reviewed-by: Peter Xu
---
 tools/testing/selftests/kvm/lib/x86_64/vmx.c | 1 -
 1 file changed, 1 deletion(-)

diff --git a/tools/testing/selftests/kvm/lib/x86_64/vmx.c b/tools/testing/selftests/kvm/lib/x86_64/vmx.c
index 1fa2d1059ade..ac432e064fcd 100644
--- a/tools/testing/selftests/kvm/lib/x86_64/vmx.c
+++ b/tools/testing/selftests/kvm/lib/x86_64/vmx.c
@@ -485,7 +485,6 @@ void nested_pg_map(struct vmx_pages *vmx, struct kvm_vm *vm,
  *   nested_paddr - Nested guest physical address to map
  *   paddr - VM Physical Address
  *   size - The size of the range to map
- *   eptp_memslot - Memory region slot for new virtual translation tables
  *
  * Output Args: None
  *

From patchwork Fri Apr 29 18:39:30 2022
X-Patchwork-Submitter: David Matlack
X-Patchwork-Id: 12832599
Date: Fri, 29 Apr 2022 18:39:30 +0000
In-Reply-To: <20220429183935.1094599-1-dmatlack@google.com>
Message-Id: <20220429183935.1094599-5-dmatlack@google.com>
References: <20220429183935.1094599-1-dmatlack@google.com>
Subject: [PATCH 4/9] KVM: selftests: Refactor nested_map() to specify target level
From: David Matlack
To: Paolo Bonzini
Cc: Ben Gardon, Sean Christopherson, Oliver Upton, Peter Xu, Vitaly Kuznetsov, Andrew Jones, "open list:KERNEL VIRTUAL MACHINE (KVM)", David Matlack
X-Mailing-List: kvm@vger.kernel.org

Refactor nested_map() to specify that it explicitly wants 4K mappings (the existing behavior) and push the implementation down into __nested_map(), which can be used in subsequent commits to create huge page mappings.

No functional change intended.

Signed-off-by: David Matlack
Reviewed-by: Peter Xu
---
 tools/testing/selftests/kvm/lib/x86_64/vmx.c | 16 ++++++++++++----
 1 file changed, 12 insertions(+), 4 deletions(-)

diff --git a/tools/testing/selftests/kvm/lib/x86_64/vmx.c b/tools/testing/selftests/kvm/lib/x86_64/vmx.c
index ac432e064fcd..715b58f1f7bc 100644
--- a/tools/testing/selftests/kvm/lib/x86_64/vmx.c
+++ b/tools/testing/selftests/kvm/lib/x86_64/vmx.c
@@ -485,6 +485,7 @@ void nested_pg_map(struct vmx_pages *vmx, struct kvm_vm *vm,
  *   nested_paddr - Nested guest physical address to map
  *   paddr - VM Physical Address
  *   size - The size of the range to map
+ *   level - The level at which to map the range
  *
  * Output Args: None
  *
@@ -493,22 +494,29 @@ void nested_pg_map(struct vmx_pages *vmx, struct kvm_vm *vm,
  * Within the VM given by vm, creates a nested guest translation for the
  * page range starting at nested_paddr to the page range starting at paddr.
  */
-void nested_map(struct vmx_pages *vmx, struct kvm_vm *vm,
-		uint64_t nested_paddr, uint64_t paddr, uint64_t size)
+void __nested_map(struct vmx_pages *vmx, struct kvm_vm *vm,
+		  uint64_t nested_paddr, uint64_t paddr, uint64_t size,
+		  int level)
 {
-	size_t page_size = vm->page_size;
+	size_t page_size = PG_LEVEL_SIZE(level);
 	size_t npages = size / page_size;
 
 	TEST_ASSERT(nested_paddr + size > nested_paddr, "Vaddr overflow");
 	TEST_ASSERT(paddr + size > paddr, "Paddr overflow");
 
 	while (npages--) {
-		nested_pg_map(vmx, vm, nested_paddr, paddr);
+		__nested_pg_map(vmx, vm, nested_paddr, paddr, level);
 		nested_paddr += page_size;
 		paddr += page_size;
 	}
 }
 
+void nested_map(struct vmx_pages *vmx, struct kvm_vm *vm,
+		uint64_t nested_paddr, uint64_t paddr, uint64_t size)
+{
+	__nested_map(vmx, vm, nested_paddr, paddr, size, PG_LEVEL_4K);
+}
+
 /* Prepare an identity extended page table that maps all the
  * physical pages in VM.
 */

From patchwork Fri Apr 29 18:39:31 2022
X-Patchwork-Submitter: David Matlack
X-Patchwork-Id: 12832600
Date: Fri, 29 Apr 2022 18:39:31 +0000
In-Reply-To: <20220429183935.1094599-1-dmatlack@google.com>
Message-Id: <20220429183935.1094599-6-dmatlack@google.com>
References: <20220429183935.1094599-1-dmatlack@google.com>
Subject: [PATCH 5/9] KVM: selftests: Move VMX_EPT_VPID_CAP_AD_BITS to vmx.h
From: David Matlack
To: Paolo Bonzini
Cc: Ben Gardon, Sean Christopherson, Oliver Upton, Peter Xu, Vitaly Kuznetsov, Andrew Jones, "open list:KERNEL VIRTUAL MACHINE (KVM)", David Matlack
X-Mailing-List: kvm@vger.kernel.org

This is a VMX-related macro so move it to vmx.h.
While here, open code the mask like the rest of the VMX bitmask macros. No functional change intended. Signed-off-by: David Matlack Reviewed-by: Peter Xu --- tools/testing/selftests/kvm/include/x86_64/processor.h | 3 --- tools/testing/selftests/kvm/include/x86_64/vmx.h | 2 ++ 2 files changed, 2 insertions(+), 3 deletions(-) diff --git a/tools/testing/selftests/kvm/include/x86_64/processor.h b/tools/testing/selftests/kvm/include/x86_64/processor.h index b512f9f508ae..5a8854e85b8f 100644 --- a/tools/testing/selftests/kvm/include/x86_64/processor.h +++ b/tools/testing/selftests/kvm/include/x86_64/processor.h @@ -488,9 +488,6 @@ void __virt_pg_map(struct kvm_vm *vm, uint64_t vaddr, uint64_t paddr, int level) #define X86_CR0_CD (1UL<<30) /* Cache Disable */ #define X86_CR0_PG (1UL<<31) /* Paging */ -/* VMX_EPT_VPID_CAP bits */ -#define VMX_EPT_VPID_CAP_AD_BITS (1ULL << 21) - #define XSTATE_XTILE_CFG_BIT 17 #define XSTATE_XTILE_DATA_BIT 18 diff --git a/tools/testing/selftests/kvm/include/x86_64/vmx.h b/tools/testing/selftests/kvm/include/x86_64/vmx.h index 583ceb0d1457..3b1794baa97c 100644 --- a/tools/testing/selftests/kvm/include/x86_64/vmx.h +++ b/tools/testing/selftests/kvm/include/x86_64/vmx.h @@ -96,6 +96,8 @@ #define VMX_MISC_PREEMPTION_TIMER_RATE_MASK 0x0000001f #define VMX_MISC_SAVE_EFER_LMA 0x00000020 +#define VMX_EPT_VPID_CAP_AD_BITS 0x00200000 + #define EXIT_REASON_FAILED_VMENTRY 0x80000000 #define EXIT_REASON_EXCEPTION_NMI 0 #define EXIT_REASON_EXTERNAL_INTERRUPT 1 From patchwork Fri Apr 29 18:39:32 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: David Matlack X-Patchwork-Id: 12832601 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id E2E8BC433EF for ; Fri, 29 Apr 2022 18:39:55 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1380089AbiD2SnN (ORCPT ); Fri, 29 Apr 2022 14:43:13 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:36000 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1380079AbiD2SnM (ORCPT ); Fri, 29 Apr 2022 14:43:12 -0400 Received: from mail-pf1-x44a.google.com (mail-pf1-x44a.google.com [IPv6:2607:f8b0:4864:20::44a]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id AD30FD64DC for ; Fri, 29 Apr 2022 11:39:53 -0700 (PDT) Received: by mail-pf1-x44a.google.com with SMTP id x9-20020a056a000bc900b0050d919e9c9bso3283863pfu.1 for ; Fri, 29 Apr 2022 11:39:53 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20210112; h=date:in-reply-to:message-id:mime-version:references:subject:from:to :cc; bh=3CMhq67xTlIFikMfvwBaSUyLTSFbPR0YXiRyht/Utnk=; b=r+W/+qQPY1LXyrfIyY9EziNWAyrEZrAhUo6RNdVNSy6Q/z1ndphTX91UY/816fSaOO pwBxhuJ6oY0DQJgNJZDNMaTCxpnNnXVsdCqrNKT44c0QfAHIrJWZ1kehD3HOypomEKWi mrmZO1GQNDZxPSRQtiNZ6OxqQTTUPpBrKZ6n6pHpHD6f4C6HbqC8gwHv1t/f18y9b+yj /h7l8skPc/HvcvihSNNp3hIrJ9uAL6GTsr7OvUXqrhWLxJAs52Ian4IGJahUaTbFqS3O qM6wOo6PacwQD9YzwJr9NfQ9GzworvyNVr01IriJ0R5nPKhXoh6jbSeydrT9JIkLZVEx QRhQ== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=x-gm-message-state:date:in-reply-to:message-id:mime-version :references:subject:from:to:cc; bh=3CMhq67xTlIFikMfvwBaSUyLTSFbPR0YXiRyht/Utnk=; b=rJvdcte13vmlImPNOR81f/7Ru1S3sUoW5kDmyiwjqTT7M6Xxnsk0aSuYbGtZhlYkU4 
Date: Fri, 29 Apr 2022 18:39:32 +0000
In-Reply-To: <20220429183935.1094599-1-dmatlack@google.com>
Message-Id: <20220429183935.1094599-7-dmatlack@google.com>
References: <20220429183935.1094599-1-dmatlack@google.com>
Subject: [PATCH 6/9] KVM: selftests: Add a helper to check EPT/VPID capabilities
From: David Matlack
To: Paolo Bonzini
Cc: Ben Gardon, Sean Christopherson, Oliver Upton, Peter Xu, Vitaly Kuznetsov, Andrew Jones, "open list:KERNEL VIRTUAL MACHINE (KVM)", David Matlack
X-Mailing-List: kvm@vger.kernel.org

Create a small helper function to check if a given EPT/VPID capability is supported. This will be re-used in a follow-up commit to check for 1G page support.

No functional change intended.

Signed-off-by: David Matlack
Reviewed-by: Peter Xu
---
 tools/testing/selftests/kvm/lib/x86_64/vmx.c | 7 ++++++-
 1 file changed, 6 insertions(+), 1 deletion(-)

diff --git a/tools/testing/selftests/kvm/lib/x86_64/vmx.c b/tools/testing/selftests/kvm/lib/x86_64/vmx.c
index 715b58f1f7bc..3862d93a18ac 100644
--- a/tools/testing/selftests/kvm/lib/x86_64/vmx.c
+++ b/tools/testing/selftests/kvm/lib/x86_64/vmx.c
@@ -198,6 +198,11 @@ bool load_vmcs(struct vmx_pages *vmx)
 	return true;
 }
 
+static bool ept_vpid_cap_supported(uint64_t mask)
+{
+	return rdmsr(MSR_IA32_VMX_EPT_VPID_CAP) & mask;
+}
+
 /*
  * Initialize the control fields to the most basic settings possible.
 */
@@ -215,7 +220,7 @@ static inline void init_vmcs_control_fields(struct vmx_pages *vmx)
 	struct eptPageTablePointer eptp = {
 		.memory_type = VMX_BASIC_MEM_TYPE_WB,
 		.page_walk_length = 3, /* + 1 */
-		.ad_enabled = !!(rdmsr(MSR_IA32_VMX_EPT_VPID_CAP) & VMX_EPT_VPID_CAP_AD_BITS),
+		.ad_enabled = ept_vpid_cap_supported(VMX_EPT_VPID_CAP_AD_BITS),
 		.address = vmx->eptp_gpa >> PAGE_SHIFT_4K,
 	};

From patchwork Fri Apr 29 18:39:33 2022
X-Patchwork-Submitter: David Matlack
X-Patchwork-Id: 12832602
Date: Fri, 29 Apr 2022 18:39:33 +0000
In-Reply-To: <20220429183935.1094599-1-dmatlack@google.com>
Message-Id: <20220429183935.1094599-8-dmatlack@google.com>
References: <20220429183935.1094599-1-dmatlack@google.com>
Subject: [PATCH 7/9] KVM: selftests: Link selftests directly with lib object files
From: David Matlack
To: Paolo Bonzini
Cc: Ben Gardon, Sean Christopherson, Oliver Upton, Peter Xu, Vitaly Kuznetsov, Andrew Jones, "open list:KERNEL VIRTUAL MACHINE (KVM)", David Matlack
X-Mailing-List: kvm@vger.kernel.org

The linker does not always obey strong/weak symbols when linking static libraries; it simply resolves an undefined symbol to the first-encountered symbol. This means that defining __weak arch-generic functions and then defining arch-specific strong functions to override them in libkvm will not always work.

More specifically, if we have:

lib/generic.c:

  void __weak foo(void)
  {
          pr_info("weak\n");
  }

  void bar(void)
  {
          foo();
  }

lib/x86_64/arch.c:

  void foo(void)
  {
          pr_info("strong\n");
  }

And a selftest that calls bar(), it will print "weak". Now if you make generic.o explicitly depend on arch.o (e.g. add a function to arch.c that is called directly from generic.c) it will print "strong". In other words, it seems that the linker is free to throw out arch.o when linking because generic.o does not explicitly depend on it, which causes the linker to lose the strong symbol.

One solution is to link libkvm.a with --whole-archive so that the linker doesn't throw away object files it thinks are unnecessary. However, that is a bit difficult to plumb since we are using the common selftests makefile rules. An easier solution is to drop libkvm.a and just link selftests with all the .o files that were originally in libkvm.a.

Signed-off-by: David Matlack
Reviewed-by: Peter Xu
---
 tools/testing/selftests/kvm/Makefile | 13 +++++--------
 1 file changed, 5 insertions(+), 8 deletions(-)

diff --git a/tools/testing/selftests/kvm/Makefile b/tools/testing/selftests/kvm/Makefile
index af582d168621..c1eb6acb30de 100644
--- a/tools/testing/selftests/kvm/Makefile
+++ b/tools/testing/selftests/kvm/Makefile
@@ -172,12 +172,13 @@ LDFLAGS += -pthread $(no-pie-option) $(pgste-option)
 # $(TEST_GEN_PROGS) starts with $(OUTPUT)/
 include ../lib.mk
 
-STATIC_LIBS := $(OUTPUT)/libkvm.a
 LIBKVM_C := $(filter %.c,$(LIBKVM))
 LIBKVM_S := $(filter %.S,$(LIBKVM))
 LIBKVM_C_OBJ := $(patsubst %.c, $(OUTPUT)/%.o, $(LIBKVM_C))
 LIBKVM_S_OBJ := $(patsubst %.S, $(OUTPUT)/%.o, $(LIBKVM_S))
-EXTRA_CLEAN += $(LIBKVM_C_OBJ) $(LIBKVM_S_OBJ) $(STATIC_LIBS) cscope.*
+LIBKVM_OBJS = $(LIBKVM_C_OBJ) $(LIBKVM_S_OBJ)
+
+EXTRA_CLEAN += $(LIBKVM_OBJS) cscope.*
 
 x := $(shell mkdir -p $(sort $(dir $(LIBKVM_C_OBJ) $(LIBKVM_S_OBJ))))
 $(LIBKVM_C_OBJ): $(OUTPUT)/%.o: %.c
@@ -186,13 +187,9 @@ $(LIBKVM_C_OBJ): $(OUTPUT)/%.o: %.c
 $(LIBKVM_S_OBJ): $(OUTPUT)/%.o: %.S
 	$(CC) $(CFLAGS) $(CPPFLAGS) $(TARGET_ARCH) -c $< -o $@
 
-LIBKVM_OBJS = $(LIBKVM_C_OBJ) $(LIBKVM_S_OBJ)
-$(OUTPUT)/libkvm.a: $(LIBKVM_OBJS)
-	$(AR) crs $@ $^
-
 x := $(shell mkdir -p $(sort $(dir $(TEST_GEN_PROGS))))
-all: $(STATIC_LIBS)
-$(TEST_GEN_PROGS): $(STATIC_LIBS)
+all: $(LIBKVM_OBJS)
+$(TEST_GEN_PROGS): $(LIBKVM_OBJS)
 
 cscope: include_paths = $(LINUX_TOOL_INCLUDE) $(LINUX_HDR_PATH) include lib ..
 cscope:

From patchwork Fri Apr 29 18:39:34 2022
X-Patchwork-Submitter: David Matlack
X-Patchwork-Id: 12832603
Date: Fri, 29 Apr 2022 18:39:34 +0000
In-Reply-To: <20220429183935.1094599-1-dmatlack@google.com>
Message-Id: <20220429183935.1094599-9-dmatlack@google.com>
References: <20220429183935.1094599-1-dmatlack@google.com>
Subject: [PATCH 8/9] KVM: selftests: Clean up LIBKVM files in Makefile
From: David Matlack
To: Paolo Bonzini
Cc: Ben Gardon, Sean Christopherson, Oliver Upton, Peter Xu, Vitaly Kuznetsov, Andrew Jones, "open list:KERNEL VIRTUAL MACHINE (KVM)", David Matlack
X-Mailing-List: kvm@vger.kernel.org

Break up the long lines for LIBKVM and alphabetize each architecture.
This makes reading the Makefile easier, and will make reading diffs to LIBKVM easier. No functional change intended. Signed-off-by: David Matlack Reviewed-by: Peter Xu --- tools/testing/selftests/kvm/Makefile | 36 ++++++++++++++++++++++++---- 1 file changed, 31 insertions(+), 5 deletions(-) diff --git a/tools/testing/selftests/kvm/Makefile b/tools/testing/selftests/kvm/Makefile index c1eb6acb30de..1ba0d01362bd 100644 --- a/tools/testing/selftests/kvm/Makefile +++ b/tools/testing/selftests/kvm/Makefile @@ -37,11 +37,37 @@ ifeq ($(ARCH),riscv) UNAME_M := riscv endif -LIBKVM = lib/assert.c lib/elf.c lib/io.c lib/kvm_util.c lib/rbtree.c lib/sparsebit.c lib/test_util.c lib/guest_modes.c lib/perf_test_util.c -LIBKVM_x86_64 = lib/x86_64/apic.c lib/x86_64/processor.c lib/x86_64/vmx.c lib/x86_64/svm.c lib/x86_64/ucall.c lib/x86_64/handlers.S -LIBKVM_aarch64 = lib/aarch64/processor.c lib/aarch64/ucall.c lib/aarch64/handlers.S lib/aarch64/spinlock.c lib/aarch64/gic.c lib/aarch64/gic_v3.c lib/aarch64/vgic.c -LIBKVM_s390x = lib/s390x/processor.c lib/s390x/ucall.c lib/s390x/diag318_test_handler.c -LIBKVM_riscv = lib/riscv/processor.c lib/riscv/ucall.c +LIBKVM += lib/assert.c +LIBKVM += lib/elf.c +LIBKVM += lib/guest_modes.c +LIBKVM += lib/io.c +LIBKVM += lib/kvm_util.c +LIBKVM += lib/perf_test_util.c +LIBKVM += lib/rbtree.c +LIBKVM += lib/sparsebit.c +LIBKVM += lib/test_util.c + +LIBKVM_x86_64 += lib/x86_64/apic.c +LIBKVM_x86_64 += lib/x86_64/handlers.S +LIBKVM_x86_64 += lib/x86_64/processor.c +LIBKVM_x86_64 += lib/x86_64/svm.c +LIBKVM_x86_64 += lib/x86_64/ucall.c +LIBKVM_x86_64 += lib/x86_64/vmx.c + +LIBKVM_aarch64 += lib/aarch64/gic.c +LIBKVM_aarch64 += lib/aarch64/gic_v3.c +LIBKVM_aarch64 += lib/aarch64/handlers.S +LIBKVM_aarch64 += lib/aarch64/processor.c +LIBKVM_aarch64 += lib/aarch64/spinlock.c +LIBKVM_aarch64 += lib/aarch64/ucall.c +LIBKVM_aarch64 += lib/aarch64/vgic.c + +LIBKVM_s390x += lib/s390x/diag318_test_handler.c +LIBKVM_s390x += lib/s390x/processor.c +LIBKVM_s390x += lib/s390x/ucall.c + +LIBKVM_riscv += lib/riscv/processor.c +LIBKVM_riscv += lib/riscv/ucall.c TEST_GEN_PROGS_x86_64 = x86_64/cpuid_test TEST_GEN_PROGS_x86_64 += x86_64/cr4_cpuid_sync_test From patchwork Fri Apr 29 18:39:35 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: David Matlack X-Patchwork-Id: 12832604 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 528B9C433EF for ; Fri, 29 Apr 2022 18:40:02 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1380095AbiD2SnT (ORCPT ); Fri, 29 Apr 2022 14:43:19 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:36068 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1380087AbiD2SnR (ORCPT ); Fri, 29 Apr 2022 14:43:17 -0400 Received: from mail-pj1-x1049.google.com (mail-pj1-x1049.google.com [IPv6:2607:f8b0:4864:20::1049]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 06FCCD64C7 for ; Fri, 29 Apr 2022 11:39:58 -0700 (PDT) Received: by mail-pj1-x1049.google.com with SMTP id l2-20020a17090ad10200b001ca56de815aso5106215pju.0 for ; Fri, 29 Apr 2022 11:39:58 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20210112; h=date:in-reply-to:message-id:mime-version:references:subject:from:to :cc; 
Date: Fri, 29 Apr 2022 18:39:35 +0000
In-Reply-To: <20220429183935.1094599-1-dmatlack@google.com>
Message-Id: <20220429183935.1094599-10-dmatlack@google.com>
References: <20220429183935.1094599-1-dmatlack@google.com>
Subject: [PATCH 9/9] KVM: selftests: Add option to run dirty_log_perf_test vCPUs in L2
From: David Matlack
To: Paolo Bonzini
Cc: Ben Gardon, Sean Christopherson, Oliver Upton, Peter Xu, Vitaly Kuznetsov, Andrew Jones, "open list:KERNEL VIRTUAL MACHINE (KVM)", David Matlack
X-Mailing-List: kvm@vger.kernel.org

Add an option to dirty_log_perf_test that configures the vCPUs to run in L2 instead of L1. This makes it possible to benchmark the dirty logging performance of nested virtualization, which is particularly interesting because KVM must shadow L1's EPT/NPT tables.

For now this support only works on x86_64 CPUs with VMX. Otherwise passing -n results in the test being skipped.
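[Editor's usage note, not part of the patch: based on the option list in the help text further down in this diff, a nested run is just an ordinary invocation plus the new -n flag, e.g. ./dirty_log_perf_test -n -i 5 -v 4 -b 1G to run 5 dirty-logging iterations over 4 vCPUs, each dirtying 1GiB, with the vCPUs bounced into L2 via VMX.]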
Signed-off-by: David Matlack --- tools/testing/selftests/kvm/Makefile | 1 + .../selftests/kvm/dirty_log_perf_test.c | 10 ++- .../selftests/kvm/include/perf_test_util.h | 5 ++ .../selftests/kvm/include/x86_64/vmx.h | 3 + .../selftests/kvm/lib/perf_test_util.c | 13 ++- .../selftests/kvm/lib/x86_64/perf_test_util.c | 89 +++++++++++++++++++ tools/testing/selftests/kvm/lib/x86_64/vmx.c | 11 +++ 7 files changed, 127 insertions(+), 5 deletions(-) create mode 100644 tools/testing/selftests/kvm/lib/x86_64/perf_test_util.c diff --git a/tools/testing/selftests/kvm/Makefile b/tools/testing/selftests/kvm/Makefile index 1ba0d01362bd..9b342239a6dd 100644 --- a/tools/testing/selftests/kvm/Makefile +++ b/tools/testing/selftests/kvm/Makefile @@ -49,6 +49,7 @@ LIBKVM += lib/test_util.c LIBKVM_x86_64 += lib/x86_64/apic.c LIBKVM_x86_64 += lib/x86_64/handlers.S +LIBKVM_x86_64 += lib/x86_64/perf_test_util.c LIBKVM_x86_64 += lib/x86_64/processor.c LIBKVM_x86_64 += lib/x86_64/svm.c LIBKVM_x86_64 += lib/x86_64/ucall.c diff --git a/tools/testing/selftests/kvm/dirty_log_perf_test.c b/tools/testing/selftests/kvm/dirty_log_perf_test.c index 7b47ae4f952e..d60a34cdfaee 100644 --- a/tools/testing/selftests/kvm/dirty_log_perf_test.c +++ b/tools/testing/selftests/kvm/dirty_log_perf_test.c @@ -336,8 +336,8 @@ static void run_test(enum vm_guest_mode mode, void *arg) static void help(char *name) { puts(""); - printf("usage: %s [-h] [-i iterations] [-p offset] [-g]" - "[-m mode] [-b vcpu bytes] [-v vcpus] [-o] [-s mem type]" + printf("usage: %s [-h] [-i iterations] [-p offset] [-g] " + "[-m mode] [-n] [-b vcpu bytes] [-v vcpus] [-o] [-s mem type]" "[-x memslots]\n", name); puts(""); printf(" -i: specify iteration counts (default: %"PRIu64")\n", @@ -351,6 +351,7 @@ static void help(char *name) printf(" -p: specify guest physical test memory offset\n" " Warning: a low offset can conflict with the loaded test code.\n"); guest_modes_help(); + printf(" -n: Run the vCPUs in nested mode (L2)\n"); printf(" -b: specify the size of the memory region which should be\n" " dirtied by each vCPU. e.g. 10M or 3G.\n" " (default: 1G)\n"); @@ -387,7 +388,7 @@ int main(int argc, char *argv[]) guest_modes_append_default(); - while ((opt = getopt(argc, argv, "ghi:p:m:b:f:v:os:x:")) != -1) { + while ((opt = getopt(argc, argv, "ghi:p:m:nb:f:v:os:x:")) != -1) { switch (opt) { case 'g': dirty_log_manual_caps = 0; @@ -401,6 +402,9 @@ int main(int argc, char *argv[]) case 'm': guest_modes_cmdline(optarg); break; + case 'n': + perf_test_args.nested = true; + break; case 'b': guest_percpu_mem_size = parse_size(optarg); break; diff --git a/tools/testing/selftests/kvm/include/perf_test_util.h b/tools/testing/selftests/kvm/include/perf_test_util.h index a86f953d8d36..1dfdaec43321 100644 --- a/tools/testing/selftests/kvm/include/perf_test_util.h +++ b/tools/testing/selftests/kvm/include/perf_test_util.h @@ -34,6 +34,9 @@ struct perf_test_args { uint64_t guest_page_size; int wr_fract; + /* Run vCPUs in L2 instead of L1, if the architecture supports it. 
*/ + bool nested; + struct perf_test_vcpu_args vcpu_args[KVM_MAX_VCPUS]; }; @@ -49,5 +52,7 @@ void perf_test_set_wr_fract(struct kvm_vm *vm, int wr_fract); void perf_test_start_vcpu_threads(int vcpus, void (*vcpu_fn)(struct perf_test_vcpu_args *)); void perf_test_join_vcpu_threads(int vcpus); +void perf_test_guest_code(uint32_t vcpu_id); +void perf_test_setup_nested(struct kvm_vm *vm, int nr_vcpus); #endif /* SELFTEST_KVM_PERF_TEST_UTIL_H */ diff --git a/tools/testing/selftests/kvm/include/x86_64/vmx.h b/tools/testing/selftests/kvm/include/x86_64/vmx.h index 3b1794baa97c..17d712503a36 100644 --- a/tools/testing/selftests/kvm/include/x86_64/vmx.h +++ b/tools/testing/selftests/kvm/include/x86_64/vmx.h @@ -96,6 +96,7 @@ #define VMX_MISC_PREEMPTION_TIMER_RATE_MASK 0x0000001f #define VMX_MISC_SAVE_EFER_LMA 0x00000020 +#define VMX_EPT_VPID_CAP_1G_PAGES 0x00020000 #define VMX_EPT_VPID_CAP_AD_BITS 0x00200000 #define EXIT_REASON_FAILED_VMENTRY 0x80000000 @@ -608,6 +609,7 @@ bool load_vmcs(struct vmx_pages *vmx); bool nested_vmx_supported(void); void nested_vmx_check_supported(void); +bool ept_1g_pages_supported(void); void nested_pg_map(struct vmx_pages *vmx, struct kvm_vm *vm, uint64_t nested_paddr, uint64_t paddr); @@ -615,6 +617,7 @@ void nested_map(struct vmx_pages *vmx, struct kvm_vm *vm, uint64_t nested_paddr, uint64_t paddr, uint64_t size); void nested_map_memslot(struct vmx_pages *vmx, struct kvm_vm *vm, uint32_t memslot); +void nested_map_all_1g(struct vmx_pages *vmx, struct kvm_vm *vm); void prepare_eptp(struct vmx_pages *vmx, struct kvm_vm *vm, uint32_t eptp_memslot); void prepare_virtualize_apic_accesses(struct vmx_pages *vmx, struct kvm_vm *vm); diff --git a/tools/testing/selftests/kvm/lib/perf_test_util.c b/tools/testing/selftests/kvm/lib/perf_test_util.c index 722df3a28791..6e15c93a3577 100644 --- a/tools/testing/selftests/kvm/lib/perf_test_util.c +++ b/tools/testing/selftests/kvm/lib/perf_test_util.c @@ -40,7 +40,7 @@ static bool all_vcpu_threads_running; * Continuously write to the first 8 bytes of each page in the * specified region. */ -static void guest_code(uint32_t vcpu_id) +void perf_test_guest_code(uint32_t vcpu_id) { struct perf_test_args *pta = &perf_test_args; struct perf_test_vcpu_args *vcpu_args = &pta->vcpu_args[vcpu_id]; @@ -140,7 +140,7 @@ struct kvm_vm *perf_test_create_vm(enum vm_guest_mode mode, int vcpus, * effect as KVM allows aliasing HVAs in meslots. */ vm = vm_create_with_vcpus(mode, vcpus, DEFAULT_GUEST_PHY_PAGES, - guest_num_pages, 0, guest_code, NULL); + guest_num_pages, 0, perf_test_guest_code, NULL); pta->vm = vm; @@ -178,6 +178,9 @@ struct kvm_vm *perf_test_create_vm(enum vm_guest_mode mode, int vcpus, perf_test_setup_vcpus(vm, vcpus, vcpu_memory_bytes, partition_vcpu_memory_access); + if (pta->nested) + perf_test_setup_nested(vm, vcpus); + ucall_init(vm, NULL); /* Export the shared variables to the guest. 
*/ @@ -198,6 +201,12 @@ void perf_test_set_wr_fract(struct kvm_vm *vm, int wr_fract) sync_global_to_guest(vm, perf_test_args); } +void __weak perf_test_setup_nested(struct kvm_vm *vm, int nr_vcpus) +{ + pr_info("%s() not support on this architecture, skipping.\n", __func__); + exit(KSFT_SKIP); +} + static void *vcpu_thread_main(void *data) { struct vcpu_thread *vcpu = data; diff --git a/tools/testing/selftests/kvm/lib/x86_64/perf_test_util.c b/tools/testing/selftests/kvm/lib/x86_64/perf_test_util.c new file mode 100644 index 000000000000..ba20a1499263 --- /dev/null +++ b/tools/testing/selftests/kvm/lib/x86_64/perf_test_util.c @@ -0,0 +1,89 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * x86_64-specific extensions to perf_test_util.c. + * + * Copyright (C) 2022, Google, Inc. + */ +#include +#include +#include +#include + +#include "test_util.h" +#include "kvm_util.h" +#include "perf_test_util.h" +#include "../kvm_util_internal.h" +#include "processor.h" +#include "vmx.h" + +void perf_test_l2_guest_code(uint64_t vcpu_id) +{ + perf_test_guest_code(vcpu_id); + vmcall(); +} + +extern char perf_test_l2_guest_entry[]; +__asm__( +"perf_test_l2_guest_entry:" +" mov (%rsp), %rdi;" +" call perf_test_l2_guest_code;" +" ud2;" +); + +static void perf_test_l1_guest_code(struct vmx_pages *vmx, uint64_t vcpu_id) +{ +#define L2_GUEST_STACK_SIZE 64 + unsigned long l2_guest_stack[L2_GUEST_STACK_SIZE]; + unsigned long *rsp; + + GUEST_ASSERT(vmx->vmcs_gpa); + GUEST_ASSERT(prepare_for_vmx_operation(vmx)); + GUEST_ASSERT(load_vmcs(vmx)); + GUEST_ASSERT(ept_1g_pages_supported()); + + rsp = &l2_guest_stack[L2_GUEST_STACK_SIZE - 1]; + *rsp = vcpu_id; + prepare_vmcs(vmx, perf_test_l2_guest_entry, rsp); + + GUEST_ASSERT(!vmlaunch()); + GUEST_ASSERT(vmreadz(VM_EXIT_REASON) == EXIT_REASON_VMCALL); + GUEST_DONE(); +} + +void perf_test_setup_nested(struct kvm_vm *vm, int nr_vcpus) +{ + struct vmx_pages *vmx, *vmx0 = NULL; + struct kvm_regs regs; + vm_vaddr_t vmx_gva; + int vcpu_id; + + nested_vmx_check_supported(); + + for (vcpu_id = 0; vcpu_id < nr_vcpus; vcpu_id++) { + vmx = vcpu_alloc_vmx(vm, &vmx_gva); + + if (vcpu_id == 0) { + prepare_eptp(vmx, vm, 0); + /* + * Identity map L2 with 1G pages so that KVM can shadow + * the EPT12 with huge pages. + */ + nested_map_all_1g(vmx, vm); + vmx0 = vmx; + } else { + /* Share the same EPT table across all vCPUs. */ + vmx->eptp = vmx0->eptp; + vmx->eptp_hva = vmx0->eptp_hva; + vmx->eptp_gpa = vmx0->eptp_gpa; + } + + /* + * Override the vCPU to run perf_test_l1_guest_code() which will + * bounce it into L2 before calling perf_test_guest_code(). + */ + vcpu_regs_get(vm, vcpu_id, ®s); + regs.rip = (unsigned long) perf_test_l1_guest_code; + vcpu_regs_set(vm, vcpu_id, ®s); + vcpu_args_set(vm, vcpu_id, 2, vmx_gva, vcpu_id); + } +} diff --git a/tools/testing/selftests/kvm/lib/x86_64/vmx.c b/tools/testing/selftests/kvm/lib/x86_64/vmx.c index 3862d93a18ac..32374a0f002c 100644 --- a/tools/testing/selftests/kvm/lib/x86_64/vmx.c +++ b/tools/testing/selftests/kvm/lib/x86_64/vmx.c @@ -203,6 +203,11 @@ static bool ept_vpid_cap_supported(uint64_t mask) return rdmsr(MSR_IA32_VMX_EPT_VPID_CAP) & mask; } +bool ept_1g_pages_supported(void) +{ + return ept_vpid_cap_supported(VMX_EPT_VPID_CAP_1G_PAGES); +} + /* * Initialize the control fields to the most basic settings possible. */ @@ -546,6 +551,12 @@ void nested_map_memslot(struct vmx_pages *vmx, struct kvm_vm *vm, } } +/* Identity map the entire guest physical address space with 1GiB Pages. 
*/ +void nested_map_all_1g(struct vmx_pages *vmx, struct kvm_vm *vm) +{ + __nested_map(vmx, vm, 0, 0, vm->max_gfn << vm->page_shift, PG_LEVEL_1G); +} + void prepare_eptp(struct vmx_pages *vmx, struct kvm_vm *vm, uint32_t eptp_memslot) {