From patchwork Tue Jul 18 23:45:06 2023
X-Patchwork-Submitter: Sean Christopherson
X-Patchwork-Id: 13317893
Reply-To: Sean Christopherson
Date: Tue, 18 Jul 2023 16:45:06 -0700
In-Reply-To: <20230718234512.1690985-1-seanjc@google.com>
References: <20230718234512.1690985-1-seanjc@google.com>
X-Mailer: git-send-email 2.41.0.255.g8b1d071c50-goog
Message-ID: <20230718234512.1690985-24-seanjc@google.com>
Subject: [RFC PATCH v11 23/29] KVM: selftests: Introduce VM "shape" to allow tests to specify the VM type
From: Sean Christopherson
To: Paolo Bonzini, Marc Zyngier, Oliver Upton, Huacai Chen, Michael Ellerman,
 Anup Patel, Paul Walmsley, Palmer Dabbelt, Albert Ou, Sean Christopherson,
 "Matthew Wilcox (Oracle)", Andrew Morton, Paul Moore, James Morris,
 "Serge E. Hallyn"
Cc: kvm@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
 kvmarm@lists.linux.dev, linux-mips@vger.kernel.org,
 linuxppc-dev@lists.ozlabs.org, kvm-riscv@lists.infradead.org,
 linux-riscv@lists.infradead.org, linux-fsdevel@vger.kernel.org,
 linux-mm@kvack.org, linux-security-module@vger.kernel.org,
 linux-kernel@vger.kernel.org, Chao Peng, Fuad Tabba, Jarkko Sakkinen,
 Yu Zhang, Vishal Annapurve, Ackerley Tng, Maciej Szmigiero,
 Vlastimil Babka, David Hildenbrand, Quentin Perret, Michael Roth, Wang,
 Liam Merwick, Isaku Yamahata, "Kirill A. Shutemov"
Shutemov" X-Rspam-User: X-Rspamd-Server: rspam12 X-Rspamd-Queue-Id: 0C47340013 X-Stat-Signature: 7pwqkzb8qtjm8bd1deftdi5femho87ri X-HE-Tag: 1689724156-805168 X-HE-Meta: U2FsdGVkX18IrhJZNYxb5WVvBrpyIShVhDtCFS8FuptoC5kjgnbPcgl0dpDYteZNYU5BX4OXPf6CSQiIle5ntnrFSjiQKoRuPQ3gAtYQm16nkaU2wKZmGzDkSROEz3gyQklxJzV6CK0MsoQRPVTDVkoNLUrcnULor66o6lelJOtgsCkleLkgYpxqpFZhNPKxkgbdyU/R08Mz9DgxSuuIOWqeQ8y5hSedy8GI7WMipDe/s3XeccNTtKkBcDpkpazxaBPhIFhCK4xUauIx2qoxpV+amUTMSB2rBLdRA4mR+dtR2H+FISToq/+wkhHOI5EWzoapxw7OnGLTMZbcFSqoCmOPXs2h9bKCpbtYTkd2wqcejnVyjkJ4rRaMezxdnH0hH4vy72fQSyRvzzih0ZJJkMBubuvTcjkU252O0Hvy19sG38XI2Lk7T+eA7qMw0ee8y3xUs+xyyvSlxx1/ltKIoElhNBZE+wrT0DZa71x0FdgCgYw+dq/jckIybSyz1tQP6dT2gx1hZyzyGu3vB40sQGWt21EOjTBXnDo55Znyfw0qyyeXdRU9nOtA8+o3rrZk73ad50XCWm2jHWpsREhkVPBN71SVYaAX2xSWiNWrYjLdXx9i0SFLZGz3tcJEkWKgzxY3BprIr4md7GLm4JPY3sgTpQWsYtMJsYdXNglsHdPmd+WvJxRzeLmXYLEoIlwzthEV0ybzKvVT9nefqOlHbpweZ0gPU5ofQB9nskvzq7bN559cfCMX6fS4gmB4WWe3Ix8ol9ItlUIxJIWDkhb7J8ukr1U+bcJ9NypoJLW7pvduk5/EvH5ha00V4TfQQv92BR/x3lgd1TQINRqWYJwsY9299buPaCxAW39tEdL553QvreW42DhEMYE177cFO79KQyao3vSsyFZ0dBazNKdEcyvWDCnWIXncSvoBlhQZGlRVs7f5aiwAmfLnNG7yAYdBV44vn5Gf75eKjuIGZYc d+BLhcaK OZ+TkQf9F6zvyBRw1UoFWnb/kyINda+ALdIlTxKmy1qizWzSJtOwCSpyRVoO9DMd8oR7L7083bOfn02V5n9JUeQBeabJ70HcRl6vG+IO9abGkyt4XQv1ymms4s6VGHTRAKql66bTTbGyfEL2oeitoqn+BVx47ZoenWr+zUaUecuij3raFN6Tu9LBt3JGYj0l/oT4ppLEANMIui1NF0YBuRBVqzgBB9SxVhG6gHSG3TI0E/UrPctZmC3vm7XAPp3AuLfU4Dtwo5uwtil7/ljGqvlcTpBJDedVzgZRHMng4nGiW/tusua9jI8oBKBrTz7jUuIRtdlOnpA3z6UlMksDuq9insk3Ly+raZNwRqYZpfpM/E1+DrvO6uf4UBN1iDefHspwfkH0WYT6hV9gBtQ7M4E+xuZE1UHvX7HROSj3QrqH3kGZIU4RtlV2v9LUomc9XmjdBHvzuAAWxnEPK6zlAMk5/nOkUOYl+TUsGZZD2irFWMFwpQmiSBdRfyq4ayoMHhgTTQkstldQgXuXfoet20icor4Ry09N1Kq0XG+XQO9AAnZVuX7Xr9i9dUgSzvekhFaV+KzAm3lP10xfblKB5yQcFF1ytn5TTvrsGNQAH6RO1zmsTaWdJq5yDlG/H02A1ITfsavILdRr3SzAVR/7SpEXh8CMDeuJMkI44Q5gjKc7ClXvilKClK5gGuJPZIcWzgJKS1enB0Hcr4VD0K9F7WKGf1iL2autYDY82BWMasWdUjyp48D30kKeVysZZL65fzo7ITOGM+5yqM6M= X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: Signed-off-by: Sean Christopherson --- tools/testing/selftests/kvm/dirty_log_test.c | 2 +- .../selftests/kvm/include/kvm_util_base.h | 54 +++++++++++++++---- .../selftests/kvm/kvm_page_table_test.c | 2 +- tools/testing/selftests/kvm/lib/kvm_util.c | 43 +++++++-------- tools/testing/selftests/kvm/lib/memstress.c | 3 +- .../kvm/x86_64/ucna_injection_test.c | 2 +- 6 files changed, 72 insertions(+), 34 deletions(-) diff --git a/tools/testing/selftests/kvm/dirty_log_test.c b/tools/testing/selftests/kvm/dirty_log_test.c index 936f3a8d1b83..6cbecf499767 100644 --- a/tools/testing/selftests/kvm/dirty_log_test.c +++ b/tools/testing/selftests/kvm/dirty_log_test.c @@ -699,7 +699,7 @@ static struct kvm_vm *create_vm(enum vm_guest_mode mode, struct kvm_vcpu **vcpu, pr_info("Testing guest mode: %s\n", vm_guest_mode_string(mode)); - vm = __vm_create(mode, 1, extra_mem_pages); + vm = __vm_create(VM_SHAPE(mode), 1, extra_mem_pages); log_mode_create_vm_done(vm); *vcpu = vm_vcpu_add(vm, 0, guest_code); diff --git a/tools/testing/selftests/kvm/include/kvm_util_base.h b/tools/testing/selftests/kvm/include/kvm_util_base.h index 1819787b773b..856440294013 100644 --- a/tools/testing/selftests/kvm/include/kvm_util_base.h +++ b/tools/testing/selftests/kvm/include/kvm_util_base.h @@ -167,6 +167,23 @@ enum vm_guest_mode { NUM_VM_MODES, }; +struct vm_shape { + enum vm_guest_mode mode; + unsigned int type; +}; + +#define VM_TYPE_DEFAULT 0 + +#define 
+({						\
+	struct vm_shape shape = {		\
+		.mode = (__mode),		\
+		.type = VM_TYPE_DEFAULT		\
+	};					\
+						\
+	shape;					\
+})
+
 #if defined(__aarch64__)
 
 extern enum vm_guest_mode vm_mode_default;
@@ -199,6 +216,8 @@ extern enum vm_guest_mode vm_mode_default;
 
 #endif
 
+#define VM_SHAPE_DEFAULT	VM_SHAPE(VM_MODE_DEFAULT)
+
 #define MIN_PAGE_SIZE		(1U << MIN_PAGE_SHIFT)
 #define PTES_PER_MIN_PAGE	ptes_per_page(MIN_PAGE_SIZE)
 
@@ -754,21 +773,21 @@ vm_paddr_t vm_alloc_page_table(struct kvm_vm *vm);
  * __vm_create() does NOT create vCPUs, @nr_runnable_vcpus is used purely to
  * calculate the amount of memory needed for per-vCPU data, e.g. stacks.
  */
-struct kvm_vm *____vm_create(enum vm_guest_mode mode);
-struct kvm_vm *__vm_create(enum vm_guest_mode mode, uint32_t nr_runnable_vcpus,
+struct kvm_vm *____vm_create(struct vm_shape shape);
+struct kvm_vm *__vm_create(struct vm_shape shape, uint32_t nr_runnable_vcpus,
 			   uint64_t nr_extra_pages);
 
 static inline struct kvm_vm *vm_create_barebones(void)
 {
-	return ____vm_create(VM_MODE_DEFAULT);
+	return ____vm_create(VM_SHAPE_DEFAULT);
 }
 
 static inline struct kvm_vm *vm_create(uint32_t nr_runnable_vcpus)
 {
-	return __vm_create(VM_MODE_DEFAULT, nr_runnable_vcpus, 0);
+	return __vm_create(VM_SHAPE_DEFAULT, nr_runnable_vcpus, 0);
 }
 
-struct kvm_vm *__vm_create_with_vcpus(enum vm_guest_mode mode, uint32_t nr_vcpus,
+struct kvm_vm *__vm_create_with_vcpus(struct vm_shape shape, uint32_t nr_vcpus,
 				      uint64_t extra_mem_pages, void *guest_code,
 				      struct kvm_vcpu *vcpus[]);
 
@@ -776,17 +795,27 @@ static inline struct kvm_vm *vm_create_with_vcpus(uint32_t nr_vcpus,
 						  void *guest_code,
 						  struct kvm_vcpu *vcpus[])
 {
-	return __vm_create_with_vcpus(VM_MODE_DEFAULT, nr_vcpus, 0,
+	return __vm_create_with_vcpus(VM_SHAPE_DEFAULT, nr_vcpus, 0,
 				      guest_code, vcpus);
 }
 
+
+struct kvm_vm *__vm_create_shape_with_one_vcpu(struct vm_shape shape,
+					       struct kvm_vcpu **vcpu,
+					       uint64_t extra_mem_pages,
+					       void *guest_code);
+
 /*
  * Create a VM with a single vCPU with reasonable defaults and @extra_mem_pages
  * additional pages of guest memory. Returns the VM and vCPU (via out param).
  */
-struct kvm_vm *__vm_create_with_one_vcpu(struct kvm_vcpu **vcpu,
-					 uint64_t extra_mem_pages,
-					 void *guest_code);
+static inline struct kvm_vm *__vm_create_with_one_vcpu(struct kvm_vcpu **vcpu,
+						       uint64_t extra_mem_pages,
+						       void *guest_code)
+{
+	return __vm_create_shape_with_one_vcpu(VM_SHAPE_DEFAULT, vcpu,
+					       extra_mem_pages, guest_code);
+}
 
 static inline struct kvm_vm *vm_create_with_one_vcpu(struct kvm_vcpu **vcpu,
 						     void *guest_code)
@@ -794,6 +823,13 @@ static inline struct kvm_vm *vm_create_with_one_vcpu(struct kvm_vcpu **vcpu,
 	return __vm_create_with_one_vcpu(vcpu, 0, guest_code);
 }
 
+static inline struct kvm_vm *vm_create_shape_with_one_vcpu(struct vm_shape shape,
+							    struct kvm_vcpu **vcpu,
+							    void *guest_code)
+{
+	return __vm_create_shape_with_one_vcpu(shape, vcpu, 0, guest_code);
+}
+
 struct kvm_vcpu *vm_recreate_with_one_vcpu(struct kvm_vm *vm);
 
 void kvm_pin_this_task_to_pcpu(uint32_t pcpu);
diff --git a/tools/testing/selftests/kvm/kvm_page_table_test.c b/tools/testing/selftests/kvm/kvm_page_table_test.c
index b3b00be1ef82..e8c2aabbca2b 100644
--- a/tools/testing/selftests/kvm/kvm_page_table_test.c
+++ b/tools/testing/selftests/kvm/kvm_page_table_test.c
@@ -254,7 +254,7 @@ static struct kvm_vm *pre_init_before_test(enum vm_guest_mode mode, void *arg)
 
 	/* Create a VM with enough guest pages */
 	guest_num_pages = test_mem_size / guest_page_size;
-	vm = __vm_create_with_vcpus(mode, nr_vcpus, guest_num_pages,
+	vm = __vm_create_with_vcpus(VM_SHAPE(mode), nr_vcpus, guest_num_pages,
 				    guest_code, test_args.vcpus);
 
 	/* Align down GPA of the testing memslot */
diff --git a/tools/testing/selftests/kvm/lib/kvm_util.c b/tools/testing/selftests/kvm/lib/kvm_util.c
index 1283e24b76f1..64221c320389 100644
--- a/tools/testing/selftests/kvm/lib/kvm_util.c
+++ b/tools/testing/selftests/kvm/lib/kvm_util.c
@@ -209,7 +209,7 @@ __weak void vm_vaddr_populate_bitmap(struct kvm_vm *vm)
 			  (1ULL << (vm->va_bits - 1)) >> vm->page_shift);
 }
 
-struct kvm_vm *____vm_create(enum vm_guest_mode mode)
+struct kvm_vm *____vm_create(struct vm_shape shape)
 {
 	struct kvm_vm *vm;
 
@@ -221,13 +221,13 @@ struct kvm_vm *____vm_create(enum vm_guest_mode mode)
 	vm->regions.hva_tree = RB_ROOT;
 	hash_init(vm->regions.slot_hash);
 
-	vm->mode = mode;
-	vm->type = 0;
+	vm->mode = shape.mode;
+	vm->type = shape.type;
 
-	vm->pa_bits = vm_guest_mode_params[mode].pa_bits;
-	vm->va_bits = vm_guest_mode_params[mode].va_bits;
-	vm->page_size = vm_guest_mode_params[mode].page_size;
-	vm->page_shift = vm_guest_mode_params[mode].page_shift;
+	vm->pa_bits = vm_guest_mode_params[vm->mode].pa_bits;
+	vm->va_bits = vm_guest_mode_params[vm->mode].va_bits;
+	vm->page_size = vm_guest_mode_params[vm->mode].page_size;
+	vm->page_shift = vm_guest_mode_params[vm->mode].page_shift;
 
 	/* Setup mode specific traits. */
 	switch (vm->mode) {
@@ -265,7 +265,7 @@ struct kvm_vm *____vm_create(enum vm_guest_mode mode)
 		/*
 		 * Ignore KVM support for 5-level paging (vm->va_bits == 57),
 		 * it doesn't take effect unless a CR4.LA57 is set, which it
-		 * isn't for this VM_MODE.
+		 * isn't for this mode (48-bit virtual address space).
 		 */
 		TEST_ASSERT(vm->va_bits == 48 || vm->va_bits == 57,
 			    "Linear address width (%d bits) not supported",
@@ -285,10 +285,11 @@
 		vm->pgtable_levels = 5;
 		break;
 	default:
-		TEST_FAIL("Unknown guest mode, mode: 0x%x", mode);
+		TEST_FAIL("Unknown guest mode: 0x%x", vm->mode);
 	}
 
 #ifdef __aarch64__
+	TEST_ASSERT(!vm->type, "ARM doesn't support test-provided types");
 	if (vm->pa_bits != 40)
 		vm->type = KVM_VM_TYPE_ARM_IPA_SIZE(vm->pa_bits);
 #endif
@@ -343,19 +344,19 @@ static uint64_t vm_nr_pages_required(enum vm_guest_mode mode,
 	return vm_adjust_num_guest_pages(mode, nr_pages);
 }
 
-struct kvm_vm *__vm_create(enum vm_guest_mode mode, uint32_t nr_runnable_vcpus,
+struct kvm_vm *__vm_create(struct vm_shape shape, uint32_t nr_runnable_vcpus,
 			   uint64_t nr_extra_pages)
 {
-	uint64_t nr_pages = vm_nr_pages_required(mode, nr_runnable_vcpus,
+	uint64_t nr_pages = vm_nr_pages_required(shape.mode, nr_runnable_vcpus,
 						 nr_extra_pages);
 	struct userspace_mem_region *slot0;
 	struct kvm_vm *vm;
 	int i;
 
-	pr_debug("%s: mode='%s' pages='%ld'\n", __func__,
-		 vm_guest_mode_string(mode), nr_pages);
+	pr_debug("%s: mode='%s' type='%d', pages='%ld'\n", __func__,
+		 vm_guest_mode_string(shape.mode), shape.type, nr_pages);
 
-	vm = ____vm_create(mode);
+	vm = ____vm_create(shape);
 
 	vm_userspace_mem_region_add(vm, VM_MEM_SRC_ANONYMOUS, 0, 0, nr_pages, 0);
 	for (i = 0; i < NR_MEM_REGIONS; i++)
@@ -396,7 +397,7 @@ struct kvm_vm *__vm_create(enum vm_guest_mode mode, uint32_t nr_runnable_vcpus,
 * extra_mem_pages is only used to calculate the maximum page table size,
 * no real memory allocation for non-slot0 memory in this function.
 */
-struct kvm_vm *__vm_create_with_vcpus(enum vm_guest_mode mode, uint32_t nr_vcpus,
+struct kvm_vm *__vm_create_with_vcpus(struct vm_shape shape, uint32_t nr_vcpus,
 				      uint64_t extra_mem_pages, void *guest_code,
 				      struct kvm_vcpu *vcpus[])
 {
@@ -405,7 +406,7 @@ struct kvm_vm *__vm_create_with_vcpus(enum vm_guest_mode mode, uint32_t nr_vcpus
 
 	TEST_ASSERT(!nr_vcpus || vcpus, "Must provide vCPU array");
 
-	vm = __vm_create(mode, nr_vcpus, extra_mem_pages);
+	vm = __vm_create(shape, nr_vcpus, extra_mem_pages);
 
 	for (i = 0; i < nr_vcpus; ++i)
 		vcpus[i] = vm_vcpu_add(vm, i, guest_code);
@@ -413,15 +414,15 @@ struct kvm_vm *__vm_create_with_vcpus(enum vm_guest_mode mode, uint32_t nr_vcpus
 	return vm;
 }
 
-struct kvm_vm *__vm_create_with_one_vcpu(struct kvm_vcpu **vcpu,
-					 uint64_t extra_mem_pages,
-					 void *guest_code)
+struct kvm_vm *__vm_create_shape_with_one_vcpu(struct vm_shape shape,
+					       struct kvm_vcpu **vcpu,
+					       uint64_t extra_mem_pages,
+					       void *guest_code)
 {
 	struct kvm_vcpu *vcpus[1];
 	struct kvm_vm *vm;
 
-	vm = __vm_create_with_vcpus(VM_MODE_DEFAULT, 1, extra_mem_pages,
-				    guest_code, vcpus);
+	vm = __vm_create_with_vcpus(shape, 1, extra_mem_pages, guest_code, vcpus);
 
 	*vcpu = vcpus[0];
 	return vm;
diff --git a/tools/testing/selftests/kvm/lib/memstress.c b/tools/testing/selftests/kvm/lib/memstress.c
index df457452d146..d05487e5a371 100644
--- a/tools/testing/selftests/kvm/lib/memstress.c
+++ b/tools/testing/selftests/kvm/lib/memstress.c
@@ -168,7 +168,8 @@ struct kvm_vm *memstress_create_vm(enum vm_guest_mode mode, int nr_vcpus,
 	 * The memory is also added to memslot 0, but that's a benign side
 	 * effect as KVM allows aliasing HVAs in meslots.
 	 */
-	vm = __vm_create_with_vcpus(mode, nr_vcpus, slot0_pages + guest_num_pages,
+	vm = __vm_create_with_vcpus(VM_SHAPE(mode), nr_vcpus,
+				    slot0_pages + guest_num_pages,
 				    memstress_guest_code, vcpus);
 
 	args->vm = vm;
diff --git a/tools/testing/selftests/kvm/x86_64/ucna_injection_test.c b/tools/testing/selftests/kvm/x86_64/ucna_injection_test.c
index 85f34ca7e49e..0ed32ec903d0 100644
--- a/tools/testing/selftests/kvm/x86_64/ucna_injection_test.c
+++ b/tools/testing/selftests/kvm/x86_64/ucna_injection_test.c
@@ -271,7 +271,7 @@ int main(int argc, char *argv[])
 
 	kvm_check_cap(KVM_CAP_MCE);
 
-	vm = __vm_create(VM_MODE_DEFAULT, 3, 0);
+	vm = __vm_create(VM_SHAPE_DEFAULT, 3, 0);
 
 	kvm_ioctl(vm->kvm_fd, KVM_X86_GET_MCE_CAP_SUPPORTED,
 		  &supported_mcg_caps);
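
As an illustrative usage sketch only (not part of the diff above): a test that needs a non-default VM type can build a struct vm_shape by hand and pass it to the new helpers, while existing callers keep today's behavior via VM_SHAPE()/VM_SHAPE_DEFAULT. The guest_code() stub below is hypothetical, and .type is left at VM_TYPE_DEFAULT since any other value would have to be a KVM_CREATE_VM type the target architecture actually accepts.

#include "kvm_util.h"

static void guest_code(void)
{
	GUEST_DONE();
}

int main(void)
{
	struct vm_shape shape = {
		.mode = VM_MODE_DEFAULT,
		/* VM_TYPE_DEFAULT (0) == the type KVM_CREATE_VM gets today. */
		.type = VM_TYPE_DEFAULT,
	};
	struct kvm_vcpu *vcpu;
	struct kvm_vm *vm;

	/* Same as vm_create_with_one_vcpu(), but with an explicit shape. */
	vm = vm_create_shape_with_one_vcpu(shape, &vcpu, guest_code);

	kvm_vm_free(vm);
	return 0;
}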