From patchwork Thu Mar 3 03:54:06 2022
X-Patchwork-Submitter: Reiji Watanabe
X-Patchwork-Id: 12767005
Date: Wed, 2 Mar 2022 19:54:06 -0800
Message-Id: <20220303035408.3708241-1-reijiw@google.com>
Subject: [PATCH v3 1/3] KVM: arm64: Generalise VM features into a set of flags
From: Reiji Watanabe
To: Marc Zyngier, kvmarm@lists.cs.columbia.edu
Cc: kvm@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
    James Morse, Alexandru Elisei, Suzuki K Poulose, Paolo Bonzini,
    Will Deacon, Andrew Jones, Peng Liang, Peter Shier, Ricardo Koller,
    Oliver Upton, Jing Zhang, Raghavendra Rao Anata, Reiji Watanabe
X-Mailing-List: kvm@vger.kernel.org

From: Marc Zyngier

We currently deal with a set of booleans for VM features, while they
could be better represented as a set of flags contained in an unsigned
long, similarly to what we are doing on the CPU side.

Signed-off-by: Marc Zyngier
Reviewed-by: Andrew Jones
Signed-off-by: Reiji Watanabe
---
 arch/arm64/include/asm/kvm_host.h | 12 +++++++-----
 arch/arm64/kvm/arm.c              |  5 +++--
 arch/arm64/kvm/mmio.c             |  3 ++-
 3 files changed, 12 insertions(+), 8 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index 5bc01e62c08a..11a7ae747ded 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -122,7 +122,10 @@ struct kvm_arch {
	 * should) opt in to this feature if KVM_CAP_ARM_NISV_TO_USER is
	 * supported.
	 */
-	bool return_nisv_io_abort_to_user;
+#define KVM_ARCH_FLAG_RETURN_NISV_IO_ABORT_TO_USER	0
+	/* Memory Tagging Extension enabled for the guest */
+#define KVM_ARCH_FLAG_MTE_ENABLED			1
+	unsigned long flags;

	/*
	 * VM-wide PMU filter, implemented as a bitmap and big enough for
@@ -133,9 +136,6 @@ struct kvm_arch {

	u8 pfr0_csv2;
	u8 pfr0_csv3;
-
-	/* Memory Tagging Extension enabled for the guest */
-	bool mte_enabled;
 };

 struct kvm_vcpu_fault_info {
@@ -786,7 +786,9 @@ bool kvm_arm_vcpu_is_finalized(struct kvm_vcpu *vcpu);
 #define kvm_arm_vcpu_sve_finalized(vcpu) \
	((vcpu)->arch.flags & KVM_ARM64_VCPU_SVE_FINALIZED)

-#define kvm_has_mte(kvm)	(system_supports_mte() && (kvm)->arch.mte_enabled)
+#define kvm_has_mte(kvm)					\
+	(system_supports_mte() &&				\
+	 test_bit(KVM_ARCH_FLAG_MTE_ENABLED, &(kvm)->arch.flags))

 #define kvm_vcpu_has_pmu(vcpu)					\
	(test_bit(KVM_ARM_VCPU_PMU_V3, (vcpu)->arch.features))
diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
index ecc5958e27fe..9a2d240ef6a3 100644
--- a/arch/arm64/kvm/arm.c
+++ b/arch/arm64/kvm/arm.c
@@ -89,7 +89,8 @@ int kvm_vm_ioctl_enable_cap(struct kvm *kvm,
	switch (cap->cap) {
	case KVM_CAP_ARM_NISV_TO_USER:
		r = 0;
-		kvm->arch.return_nisv_io_abort_to_user = true;
+		set_bit(KVM_ARCH_FLAG_RETURN_NISV_IO_ABORT_TO_USER,
+			&kvm->arch.flags);
		break;
	case KVM_CAP_ARM_MTE:
		mutex_lock(&kvm->lock);
@@ -97,7 +98,7 @@ int kvm_vm_ioctl_enable_cap(struct kvm *kvm,
			r = -EINVAL;
		} else {
			r = 0;
-			kvm->arch.mte_enabled = true;
+			set_bit(KVM_ARCH_FLAG_MTE_ENABLED, &kvm->arch.flags);
		}
		mutex_unlock(&kvm->lock);
		break;
diff --git a/arch/arm64/kvm/mmio.c b/arch/arm64/kvm/mmio.c
index 3e2d8ba11a02..3dd38a151d2a 100644
--- a/arch/arm64/kvm/mmio.c
+++ b/arch/arm64/kvm/mmio.c
@@ -135,7 +135,8 @@ int io_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa)
	 * volunteered to do so, and bail out otherwise.
	 */
	if (!kvm_vcpu_dabt_isvalid(vcpu)) {
-		if (vcpu->kvm->arch.return_nisv_io_abort_to_user) {
+		if (test_bit(KVM_ARCH_FLAG_RETURN_NISV_IO_ABORT_TO_USER,
+			     &vcpu->kvm->arch.flags)) {
			run->exit_reason = KVM_EXIT_ARM_NISV;
			run->arm_nisv.esr_iss = kvm_vcpu_dabt_iss_nisv_sanitized(vcpu);
			run->arm_nisv.fault_ipa = fault_ipa;
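The userspace-visible behaviour is unchanged by this patch: the capabilities
are still enabled with KVM_ENABLE_CAP on the VM file descriptor; only the
internal bookkeeping moves from booleans to flag bits. As a rough
illustration (not part of the series), a minimal sketch of the userspace
enabling side might look as follows; enable_nisv_to_user() is a hypothetical
helper and vm_fd is an assumed, already-created VM file descriptor:

#include <stdio.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>

/* Hypothetical helper, not part of this series: vm_fd is an open VM fd. */
static int enable_nisv_to_user(int vm_fd)
{
	struct kvm_enable_cap cap = {
		.cap = KVM_CAP_ARM_NISV_TO_USER,
	};

	/*
	 * After this patch the kernel records the request with set_bit()
	 * on kvm->arch.flags instead of a bool; the ioctl is unchanged.
	 */
	if (ioctl(vm_fd, KVM_ENABLE_CAP, &cap)) {
		perror("KVM_ENABLE_CAP(KVM_CAP_ARM_NISV_TO_USER)");
		return -1;
	}

	return 0;
}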
From patchwork Thu Mar 3 03:54:07 2022
X-Patchwork-Submitter: Reiji Watanabe
X-Patchwork-Id: 12767006
Date: Wed, 2 Mar 2022 19:54:07 -0800
In-Reply-To: <20220303035408.3708241-1-reijiw@google.com>
Message-Id: <20220303035408.3708241-2-reijiw@google.com>
References: <20220303035408.3708241-1-reijiw@google.com>
Subject: [PATCH v3 2/3] KVM: arm64: mixed-width check should be skipped for uninitialized vCPUs
From: Reiji Watanabe
To: Marc Zyngier, kvmarm@lists.cs.columbia.edu
Cc: kvm@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
    James Morse, Alexandru Elisei, Suzuki K Poulose, Paolo Bonzini,
    Will Deacon, Andrew Jones, Peng Liang, Peter Shier, Ricardo Koller,
    Oliver Upton, Jing Zhang, Raghavendra Rao Anata, Reiji Watanabe
X-Mailing-List: kvm@vger.kernel.org

KVM allows userspace to configure a guest with either all EL1 32bit or
all 64bit vCPUs. At vCPU reset, vcpu_allowed_register_width() checks
whether the vCPU's register width is consistent with that of all other
vCPUs. Since the check is also done against vCPUs that have not been
initialized yet (KVM_ARM_VCPU_INIT has not been done), such
uninitialized vCPUs are erroneously treated as 64bit vCPUs, which
causes the function to incorrectly detect a mixed-width VM.

Introduce KVM_ARCH_FLAG_EL1_32BIT and KVM_ARCH_FLAG_REG_WIDTH_CONFIGURED
bits for kvm->arch.flags. The EL1_32BIT bit indicates whether the guest
must be configured with all 32bit or all 64bit vCPUs, and the
REG_WIDTH_CONFIGURED bit indicates whether the EL1_32BIT bit is valid
(already set up). Those bits are set at the first KVM_ARM_VCPU_INIT for
the guest, based on the KVM_ARM_VCPU_EL1_32BIT configuration of that
vCPU. Check the vCPU's register width against those new bits at the
vCPU's KVM_ARM_VCPU_INIT (instead of against other vCPUs' register
width).

Fixes: 66e94d5cafd4 ("KVM: arm64: Prevent mixed-width VM creation")
Signed-off-by: Reiji Watanabe
---
 arch/arm64/include/asm/kvm_emulate.h | 25 +++++++++++------
 arch/arm64/include/asm/kvm_host.h    |  8 ++++++
 arch/arm64/kvm/arm.c                 | 41 ++++++++++++++++++++++++++++
 arch/arm64/kvm/reset.c               |  8 ------
 4 files changed, 65 insertions(+), 17 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_emulate.h b/arch/arm64/include/asm/kvm_emulate.h
index d62405ce3e6d..f4f960819888 100644
--- a/arch/arm64/include/asm/kvm_emulate.h
+++ b/arch/arm64/include/asm/kvm_emulate.h
@@ -20,6 +20,7 @@
 #include
 #include
 #include
+#include

 #define CURRENT_EL_SP_EL0_VECTOR	0x0
 #define CURRENT_EL_SP_ELx_VECTOR	0x200
@@ -45,7 +46,14 @@ void kvm_vcpu_wfi(struct kvm_vcpu *vcpu);

 static __always_inline bool vcpu_el1_is_32bit(struct kvm_vcpu *vcpu)
 {
-	return !(vcpu->arch.hcr_el2 & HCR_RW);
+	struct kvm *kvm;
+
+	kvm = is_kernel_in_hyp_mode() ? kern_hyp_va(vcpu->kvm) : vcpu->kvm;
+
+	WARN_ON_ONCE(!test_bit(KVM_ARCH_FLAG_REG_WIDTH_CONFIGURED,
+			       &kvm->arch.flags));
+
+	return test_bit(KVM_ARCH_FLAG_EL1_32BIT, &kvm->arch.flags);
 }

 static inline void vcpu_reset_hcr(struct kvm_vcpu *vcpu)
@@ -72,15 +80,14 @@ static inline void vcpu_reset_hcr(struct kvm_vcpu *vcpu)
		vcpu->arch.hcr_el2 |= HCR_TVM;
	}

-	if (test_bit(KVM_ARM_VCPU_EL1_32BIT, vcpu->arch.features))
+	if (vcpu_el1_is_32bit(vcpu))
		vcpu->arch.hcr_el2 &= ~HCR_RW;
-
-	/*
-	 * TID3: trap feature register accesses that we virtualise.
-	 * For now this is conditional, since no AArch32 feature regs
-	 * are currently virtualised.
-	 */
-	if (!vcpu_el1_is_32bit(vcpu))
+	else
+		/*
+		 * TID3: trap feature register accesses that we virtualise.
+		 * For now this is conditional, since no AArch32 feature regs
+		 * are currently virtualised.
+		 */
		vcpu->arch.hcr_el2 |= HCR_TID3;

	if (cpus_have_const_cap(ARM64_MISMATCHED_CACHE_TYPE) ||
diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index 11a7ae747ded..5cde7f7b5042 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -125,6 +125,14 @@ struct kvm_arch {
 #define KVM_ARCH_FLAG_RETURN_NISV_IO_ABORT_TO_USER	0
	/* Memory Tagging Extension enabled for the guest */
 #define KVM_ARCH_FLAG_MTE_ENABLED			1
+	/*
+	 * The guest's EL1 register width. A value of the
+	 * KVM_ARCH_FLAG_EL1_32BIT bit is valid only when
+	 * KVM_ARCH_FLAG_REG_WIDTH_CONFIGURED is set. Otherwise, the
+	 * guest's EL1 register width has not yet been determined.
+	 */
+#define KVM_ARCH_FLAG_REG_WIDTH_CONFIGURED		2
+#define KVM_ARCH_FLAG_EL1_32BIT				3
	unsigned long flags;

	/*
diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
index 9a2d240ef6a3..9ac75aa46e2f 100644
--- a/arch/arm64/kvm/arm.c
+++ b/arch/arm64/kvm/arm.c
@@ -1101,6 +1101,43 @@ int kvm_vm_ioctl_irq_line(struct kvm *kvm, struct kvm_irq_level *irq_level,
	return -EINVAL;
 }

+/*
+ * A guest can have either all EL1 32bit or all 64bit vCPUs. This is
+ * indicated by the KVM_ARCH_FLAG_EL1_32BIT bit in kvm->arch.flags, which
+ * is valid only when KVM_ARCH_FLAG_REG_WIDTH_CONFIGURED in kvm->arch.flags
+ * is set.
+ * When the REG_WIDTH_CONFIGURED bit is set, this function checks that the
+ * vCPU's register width configuration is consistent with the EL1_32BIT bit
+ * in kvm->arch.flags.
+ * Otherwise, the function sets the EL1_32BIT bit based on the vCPU's
+ * KVM_ARM_VCPU_EL1_32BIT configuration (and sets the REG_WIDTH_CONFIGURED
+ * bit of kvm->arch.flags).
+ */
+static int kvm_register_width_check_or_init(struct kvm_vcpu *vcpu)
+{
+	bool is32bit;
+	bool allowed = true;
+	struct kvm *kvm = vcpu->kvm;
+
+	is32bit = vcpu_has_feature(vcpu, KVM_ARM_VCPU_EL1_32BIT);
+
+	mutex_lock(&kvm->lock);
+
+	if (test_bit(KVM_ARCH_FLAG_REG_WIDTH_CONFIGURED, &kvm->arch.flags)) {
+		allowed = (is32bit ==
+			   test_bit(KVM_ARCH_FLAG_EL1_32BIT, &kvm->arch.flags));
+	} else {
+		if (is32bit)
+			set_bit(KVM_ARCH_FLAG_EL1_32BIT, &kvm->arch.flags);
+
+		set_bit(KVM_ARCH_FLAG_REG_WIDTH_CONFIGURED, &kvm->arch.flags);
+	}
+
+	mutex_unlock(&kvm->lock);
+
+	return allowed ? 0 : -EINVAL;
+}
+
 static int kvm_vcpu_set_target(struct kvm_vcpu *vcpu,
			       const struct kvm_vcpu_init *init)
 {
@@ -1140,6 +1177,10 @@ static int kvm_vcpu_set_target(struct kvm_vcpu *vcpu,

	/* Now we know what it is, we can reset it. */
	ret = kvm_reset_vcpu(vcpu);
+
+	if (!ret)
+		ret = kvm_register_width_check_or_init(vcpu);
+
	if (ret) {
		vcpu->arch.target = -1;
		bitmap_zero(vcpu->arch.features, KVM_VCPU_MAX_FEATURES);
diff --git a/arch/arm64/kvm/reset.c b/arch/arm64/kvm/reset.c
index ecc40c8cd6f6..6c5f7677057d 100644
--- a/arch/arm64/kvm/reset.c
+++ b/arch/arm64/kvm/reset.c
@@ -183,9 +183,7 @@ static int kvm_vcpu_enable_ptrauth(struct kvm_vcpu *vcpu)

 static bool vcpu_allowed_register_width(struct kvm_vcpu *vcpu)
 {
-	struct kvm_vcpu *tmp;
	bool is32bit;
-	unsigned long i;

	is32bit = vcpu_has_feature(vcpu, KVM_ARM_VCPU_EL1_32BIT);
	if (!cpus_have_const_cap(ARM64_HAS_32BIT_EL1) && is32bit)
@@ -195,12 +193,6 @@ static bool vcpu_allowed_register_width(struct kvm_vcpu *vcpu)
	if (kvm_has_mte(vcpu->kvm) && is32bit)
		return false;

-	/* Check that the vcpus are either all 32bit or all 64bit */
-	kvm_for_each_vcpu(i, tmp, vcpu->kvm) {
-		if (vcpu_has_feature(tmp, KVM_ARM_VCPU_EL1_32BIT) != is32bit)
-			return false;
-	}
-
	return true;
 }
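To make the failure mode concrete: before this change, simply creating both
vCPUs before initializing them was enough to trip the mixed-width check. The
fragment below is a rough reproducer sketch using raw KVM ioctls, not part of
the patch; init_two_32bit_vcpus() is a hypothetical helper, vm_fd is an
assumed, already-created VM file descriptor on a host with 32bit EL1 support,
and most error handling is trimmed. The next patch adds a proper selftest
covering the same scenarios.

#include <string.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>

/* Hypothetical reproducer: init two vCPUs as 32bit after creating both. */
static int init_two_32bit_vcpus(int vm_fd)
{
	struct kvm_vcpu_init init;
	int vcpu_fd[2], i, ret;

	vcpu_fd[0] = ioctl(vm_fd, KVM_CREATE_VCPU, 0);
	vcpu_fd[1] = ioctl(vm_fd, KVM_CREATE_VCPU, 1);
	if (vcpu_fd[0] < 0 || vcpu_fd[1] < 0)
		return -1;

	memset(&init, 0, sizeof(init));
	if (ioctl(vm_fd, KVM_ARM_PREFERRED_TARGET, &init))
		return -1;
	init.features[0] |= 1 << KVM_ARM_VCPU_EL1_32BIT;

	for (i = 0; i < 2; i++) {
		/*
		 * Before this fix, the very first KVM_ARM_VCPU_INIT here
		 * failed with -EINVAL, because the still-uninitialized
		 * second vCPU was counted as a 64bit one.
		 */
		ret = ioctl(vcpu_fd[i], KVM_ARM_VCPU_INIT, &init);
		if (ret)
			return ret;
	}

	return 0;
}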
From patchwork Thu Mar 3 03:54:08 2022
X-Patchwork-Submitter: Reiji Watanabe
X-Patchwork-Id: 12767007
Date: Wed, 2 Mar 2022 19:54:08 -0800
In-Reply-To: <20220303035408.3708241-1-reijiw@google.com>
Message-Id: <20220303035408.3708241-3-reijiw@google.com>
References: <20220303035408.3708241-1-reijiw@google.com>
Subject: [PATCH v3 3/3] KVM: arm64: selftests: Introduce vcpu_width_config
From: Reiji Watanabe
To: Marc Zyngier, kvmarm@lists.cs.columbia.edu
Cc: kvm@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
    James Morse, Alexandru Elisei, Suzuki K Poulose, Paolo Bonzini,
    Will Deacon, Andrew Jones, Peng Liang, Peter Shier, Ricardo Koller,
    Oliver Upton, Jing Zhang, Raghavendra Rao Anata, Reiji Watanabe
X-Mailing-List: kvm@vger.kernel.org

Introduce a test for aarch64 that ensures non-mixed-width vCPUs (all
64bit vCPUs or all 32bit vCPUs) can be configured, and that mixed-width
vCPUs cannot be configured.

Reviewed-by: Andrew Jones
Signed-off-by: Reiji Watanabe
---
 tools/testing/selftests/kvm/.gitignore        |   1 +
 tools/testing/selftests/kvm/Makefile          |   1 +
 .../selftests/kvm/aarch64/vcpu_width_config.c | 125 ++++++++++++++++++
 3 files changed, 127 insertions(+)
 create mode 100644 tools/testing/selftests/kvm/aarch64/vcpu_width_config.c

diff --git a/tools/testing/selftests/kvm/.gitignore b/tools/testing/selftests/kvm/.gitignore
index dce7de7755e6..4e884e29b2a8 100644
--- a/tools/testing/selftests/kvm/.gitignore
+++ b/tools/testing/selftests/kvm/.gitignore
@@ -3,6 +3,7 @@
 /aarch64/debug-exceptions
 /aarch64/get-reg-list
 /aarch64/psci_cpu_on_test
+/aarch64/vcpu_width_config
 /aarch64/vgic_init
 /aarch64/vgic_irq
 /s390x/memop
diff --git a/tools/testing/selftests/kvm/Makefile b/tools/testing/selftests/kvm/Makefile
index 17c3f0749f05..3482586c6e33 100644
--- a/tools/testing/selftests/kvm/Makefile
+++ b/tools/testing/selftests/kvm/Makefile
@@ -103,6 +103,7 @@ TEST_GEN_PROGS_aarch64 += aarch64/arch_timer
 TEST_GEN_PROGS_aarch64 += aarch64/debug-exceptions
 TEST_GEN_PROGS_aarch64 += aarch64/get-reg-list
 TEST_GEN_PROGS_aarch64 += aarch64/psci_cpu_on_test
+TEST_GEN_PROGS_aarch64 += aarch64/vcpu_width_config
 TEST_GEN_PROGS_aarch64 += aarch64/vgic_init
 TEST_GEN_PROGS_aarch64 += aarch64/vgic_irq
 TEST_GEN_PROGS_aarch64 += demand_paging_test
diff --git a/tools/testing/selftests/kvm/aarch64/vcpu_width_config.c b/tools/testing/selftests/kvm/aarch64/vcpu_width_config.c
new file mode 100644
index 000000000000..6e6e6a9f69e3
--- /dev/null
+++ b/tools/testing/selftests/kvm/aarch64/vcpu_width_config.c
@@ -0,0 +1,125 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * vcpu_width_config - Test KVM_ARM_VCPU_INIT() with KVM_ARM_VCPU_EL1_32BIT.
+ *
+ * Copyright (c) 2022 Google LLC.
+ *
+ * This is a test that ensures that non-mixed-width vCPUs (all 64bit vCPUs
+ * or all 32bit vCPUs) can be configured and mixed-width vCPUs cannot be
+ * configured.
+ */
+
+#define _GNU_SOURCE
+
+#include "kvm_util.h"
+#include "processor.h"
+#include "test_util.h"
+
+
+/*
+ * Add a vCPU, run KVM_ARM_VCPU_INIT with @init1, and then
+ * add another vCPU, and run KVM_ARM_VCPU_INIT with @init2.
+ */
+static int add_init_2vcpus(struct kvm_vcpu_init *init1,
+			   struct kvm_vcpu_init *init2)
+{
+	struct kvm_vm *vm;
+	int ret;
+
+	vm = vm_create(VM_MODE_DEFAULT, DEFAULT_GUEST_PHY_PAGES, O_RDWR);
+
+	vm_vcpu_add(vm, 0);
+	ret = _vcpu_ioctl(vm, 0, KVM_ARM_VCPU_INIT, init1);
+	if (ret)
+		goto free_exit;
+
+	vm_vcpu_add(vm, 1);
+	ret = _vcpu_ioctl(vm, 1, KVM_ARM_VCPU_INIT, init2);
+
+free_exit:
+	kvm_vm_free(vm);
+	return ret;
+}
+
+/*
+ * Add two vCPUs, then run KVM_ARM_VCPU_INIT for one vCPU with @init1,
+ * and run KVM_ARM_VCPU_INIT for another vCPU with @init2.
+ */
+static int add_2vcpus_init_2vcpus(struct kvm_vcpu_init *init1,
+				  struct kvm_vcpu_init *init2)
+{
+	struct kvm_vm *vm;
+	int ret;
+
+	vm = vm_create(VM_MODE_DEFAULT, DEFAULT_GUEST_PHY_PAGES, O_RDWR);
+
+	vm_vcpu_add(vm, 0);
+	vm_vcpu_add(vm, 1);
+
+	ret = _vcpu_ioctl(vm, 0, KVM_ARM_VCPU_INIT, init1);
+	if (ret)
+		goto free_exit;
+
+	ret = _vcpu_ioctl(vm, 1, KVM_ARM_VCPU_INIT, init2);
+
+free_exit:
+	kvm_vm_free(vm);
+	return ret;
+}
+
+/*
+ * Tests that two 64bit vCPUs can be configured, two 32bit vCPUs can be
+ * configured, and two mixed-width vCPUs cannot be configured.
+ * For each of those three cases, the vCPUs are configured in two different
+ * orders. One runs KVM_CREATE_VCPU for the two vCPUs and then runs
+ * KVM_ARM_VCPU_INIT for them. The other runs KVM_CREATE_VCPU and
+ * KVM_ARM_VCPU_INIT for one vCPU, and then runs those commands for the
+ * other vCPU.
+ */
+int main(void)
+{
+	struct kvm_vcpu_init init1, init2;
+	struct kvm_vm *vm;
+	int ret;
+
+	if (kvm_check_cap(KVM_CAP_ARM_EL1_32BIT) <= 0) {
+		print_skip("KVM_CAP_ARM_EL1_32BIT is not supported");
+		exit(KSFT_SKIP);
+	}
+
+	/* Get the preferred target type and copy that to init2 */
+	vm = vm_create(VM_MODE_DEFAULT, DEFAULT_GUEST_PHY_PAGES, O_RDWR);
+	vm_ioctl(vm, KVM_ARM_PREFERRED_TARGET, &init1);
+	kvm_vm_free(vm);
+	memcpy(&init2, &init1, sizeof(init2));
+
+	/* Test with 64bit vCPUs */
+	ret = add_init_2vcpus(&init1, &init2);
+	TEST_ASSERT(ret == 0,
+		    "Configuring 64bit EL1 vCPUs failed unexpectedly");
+	ret = add_2vcpus_init_2vcpus(&init1, &init2);
+	TEST_ASSERT(ret == 0,
+		    "Configuring 64bit EL1 vCPUs failed unexpectedly");
+
+	/* Test with 32bit vCPUs */
+	init1.features[0] = (1 << KVM_ARM_VCPU_EL1_32BIT);
+	init2.features[0] = (1 << KVM_ARM_VCPU_EL1_32BIT);
+	ret = add_init_2vcpus(&init1, &init2);
+	TEST_ASSERT(ret == 0,
+		    "Configuring 32bit EL1 vCPUs failed unexpectedly");
+	ret = add_2vcpus_init_2vcpus(&init1, &init2);
+	TEST_ASSERT(ret == 0,
+		    "Configuring 32bit EL1 vCPUs failed unexpectedly");
+
+	/* Test with mixed-width vCPUs */
+	init1.features[0] = 0;
+	init2.features[0] = (1 << KVM_ARM_VCPU_EL1_32BIT);
+	ret = add_init_2vcpus(&init1, &init2);
+	TEST_ASSERT(ret != 0,
+		    "Configuring mixed-width vCPUs worked unexpectedly");
+	ret = add_2vcpus_init_2vcpus(&init1, &init2);
+	TEST_ASSERT(ret != 0,
+		    "Configuring mixed-width vCPUs worked unexpectedly");
+
+	return 0;
+}
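A possible follow-up, not part of this patch, would be to also exercise the
mixed-width case in the reverse order (the first vCPU configured as 32bit and
the second as 64bit). A sketch of that extra case, reusing init1, init2, ret
and the helpers defined in the test above, could be appended to main() before
the final return:

	/* Hypothetical extra case: 32bit first, then 64bit */
	init1.features[0] = (1 << KVM_ARM_VCPU_EL1_32BIT);
	init2.features[0] = 0;
	ret = add_init_2vcpus(&init1, &init2);
	TEST_ASSERT(ret != 0,
		    "Configuring mixed-width vCPUs worked unexpectedly");
	ret = add_2vcpus_init_2vcpus(&init1, &init2);
	TEST_ASSERT(ret != 0,
		    "Configuring mixed-width vCPUs worked unexpectedly");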