From patchwork Sat Nov 13 01:22:24 2021
X-Patchwork-Submitter: Raghavendra Rao Ananta
X-Patchwork-Id: 12617493
Date: Sat, 13 Nov 2021 01:22:24 +0000
In-Reply-To: <20211113012234.1443009-1-rananta@google.com>
Message-Id: <20211113012234.1443009-2-rananta@google.com>
References: <20211113012234.1443009-1-rananta@google.com>
Subject: [RFC PATCH v2 01/11] KVM: arm64: Factor out firmware register handling from psci.c
From: Raghavendra Rao Ananta
To: Marc Zyngier, Andrew Jones, James Morse, Alexandru Elisei, Suzuki K Poulose
Cc: Paolo Bonzini, Catalin Marinas, Will Deacon, Peter Shier, Ricardo Koller, Oliver Upton, Reiji Watanabe, Jing Zhang,
Raghavendra Rao Anata , linux-arm-kernel@lists.infradead.org, kvmarm@lists.cs.columbia.edu, linux-kernel@vger.kernel.org, kvm@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org Common hypercall firmware register handing is currently employed by psci.c. Since the upcoming patches add more of these registers, it's better to move the generic handling to hypercall.c for a cleaner presentation. While we are at it, collect all the firmware registers under fw_reg_ids[] to help implement kvm_arm_get_fw_num_regs() and kvm_arm_copy_fw_reg_indices() in a generic way. No functional change intended. Signed-off-by: Raghavendra Rao Ananta --- arch/arm64/kvm/guest.c | 2 +- arch/arm64/kvm/hypercalls.c | 170 +++++++++++++++++++++++++++++++++++ arch/arm64/kvm/psci.c | 166 ---------------------------------- include/kvm/arm_hypercalls.h | 7 ++ include/kvm/arm_psci.h | 7 -- 5 files changed, 178 insertions(+), 174 deletions(-) diff --git a/arch/arm64/kvm/guest.c b/arch/arm64/kvm/guest.c index 5ce26bedf23c..625f97f7b304 100644 --- a/arch/arm64/kvm/guest.c +++ b/arch/arm64/kvm/guest.c @@ -18,7 +18,7 @@ #include #include #include -#include +#include #include #include #include diff --git a/arch/arm64/kvm/hypercalls.c b/arch/arm64/kvm/hypercalls.c index 30da78f72b3b..9e136d91b470 100644 --- a/arch/arm64/kvm/hypercalls.c +++ b/arch/arm64/kvm/hypercalls.c @@ -146,3 +146,173 @@ int kvm_hvc_call_handler(struct kvm_vcpu *vcpu) smccc_set_retval(vcpu, val[0], val[1], val[2], val[3]); return 1; } + +static const u64 fw_reg_ids[] = { + KVM_REG_ARM_PSCI_VERSION, + KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_1, + KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_2, +}; + +int kvm_arm_get_fw_num_regs(struct kvm_vcpu *vcpu) +{ + return ARRAY_SIZE(fw_reg_ids); +} + +int kvm_arm_copy_fw_reg_indices(struct kvm_vcpu *vcpu, u64 __user *uindices) +{ + int i; + + for (i = 0; i < ARRAY_SIZE(fw_reg_ids); i++) { + if (put_user(fw_reg_ids[i], uindices)) + return -EFAULT; + } + + return 0; +} + +#define KVM_REG_FEATURE_LEVEL_WIDTH 4 +#define KVM_REG_FEATURE_LEVEL_MASK (BIT(KVM_REG_FEATURE_LEVEL_WIDTH) - 1) + +/* + * Convert the workaround level into an easy-to-compare number, where higher + * values mean better protection. + */ +static int get_kernel_wa_level(u64 regid) +{ + switch (regid) { + case KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_1: + switch (arm64_get_spectre_v2_state()) { + case SPECTRE_VULNERABLE: + return KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_1_NOT_AVAIL; + case SPECTRE_MITIGATED: + return KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_1_AVAIL; + case SPECTRE_UNAFFECTED: + return KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_1_NOT_REQUIRED; + } + return KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_1_NOT_AVAIL; + case KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_2: + switch (arm64_get_spectre_v4_state()) { + case SPECTRE_MITIGATED: + /* + * As for the hypercall discovery, we pretend we + * don't have any FW mitigation if SSBS is there at + * all times. 
+ */ + if (cpus_have_final_cap(ARM64_SSBS)) + return KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_2_NOT_AVAIL; + fallthrough; + case SPECTRE_UNAFFECTED: + return KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_2_NOT_REQUIRED; + case SPECTRE_VULNERABLE: + return KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_2_NOT_AVAIL; + } + } + + return -EINVAL; +} + +int kvm_arm_get_fw_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg) +{ + void __user *uaddr = (void __user *)(long)reg->addr; + u64 val; + + switch (reg->id) { + case KVM_REG_ARM_PSCI_VERSION: + val = kvm_psci_version(vcpu, vcpu->kvm); + break; + case KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_1: + case KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_2: + val = get_kernel_wa_level(reg->id) & KVM_REG_FEATURE_LEVEL_MASK; + break; + default: + return -ENOENT; + } + + if (copy_to_user(uaddr, &val, KVM_REG_SIZE(reg->id))) + return -EFAULT; + + return 0; +} + +int kvm_arm_set_fw_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg) +{ + void __user *uaddr = (void __user *)(long)reg->addr; + u64 val; + int wa_level; + + if (copy_from_user(&val, uaddr, KVM_REG_SIZE(reg->id))) + return -EFAULT; + + switch (reg->id) { + case KVM_REG_ARM_PSCI_VERSION: + { + bool wants_02; + + wants_02 = test_bit(KVM_ARM_VCPU_PSCI_0_2, vcpu->arch.features); + + switch (val) { + case KVM_ARM_PSCI_0_1: + if (wants_02) + return -EINVAL; + vcpu->kvm->arch.psci_version = val; + return 0; + case KVM_ARM_PSCI_0_2: + case KVM_ARM_PSCI_1_0: + if (!wants_02) + return -EINVAL; + vcpu->kvm->arch.psci_version = val; + return 0; + } + break; + } + + case KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_1: + if (val & ~KVM_REG_FEATURE_LEVEL_MASK) + return -EINVAL; + + if (get_kernel_wa_level(reg->id) < val) + return -EINVAL; + + return 0; + + case KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_2: + if (val & ~(KVM_REG_FEATURE_LEVEL_MASK | + KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_2_ENABLED)) + return -EINVAL; + + /* The enabled bit must not be set unless the level is AVAIL. */ + if ((val & KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_2_ENABLED) && + (val & KVM_REG_FEATURE_LEVEL_MASK) != KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_2_AVAIL) + return -EINVAL; + + /* + * Map all the possible incoming states to the only two we + * really want to deal with. + */ + switch (val & KVM_REG_FEATURE_LEVEL_MASK) { + case KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_2_NOT_AVAIL: + case KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_2_UNKNOWN: + wa_level = KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_2_NOT_AVAIL; + break; + case KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_2_AVAIL: + case KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_2_NOT_REQUIRED: + wa_level = KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_2_NOT_REQUIRED; + break; + default: + return -EINVAL; + } + + /* + * We can deal with NOT_AVAIL on NOT_REQUIRED, but not the + * other way around. 
+ */ + if (get_kernel_wa_level(reg->id) < wa_level) + return -EINVAL; + + return 0; + default: + return -ENOENT; + } + + return -EINVAL; +} diff --git a/arch/arm64/kvm/psci.c b/arch/arm64/kvm/psci.c index 74c47d420253..6c8323ae32f2 100644 --- a/arch/arm64/kvm/psci.c +++ b/arch/arm64/kvm/psci.c @@ -403,169 +403,3 @@ int kvm_psci_call(struct kvm_vcpu *vcpu) return -EINVAL; }; } - -int kvm_arm_get_fw_num_regs(struct kvm_vcpu *vcpu) -{ - return 3; /* PSCI version and two workaround registers */ -} - -int kvm_arm_copy_fw_reg_indices(struct kvm_vcpu *vcpu, u64 __user *uindices) -{ - if (put_user(KVM_REG_ARM_PSCI_VERSION, uindices++)) - return -EFAULT; - - if (put_user(KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_1, uindices++)) - return -EFAULT; - - if (put_user(KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_2, uindices++)) - return -EFAULT; - - return 0; -} - -#define KVM_REG_FEATURE_LEVEL_WIDTH 4 -#define KVM_REG_FEATURE_LEVEL_MASK (BIT(KVM_REG_FEATURE_LEVEL_WIDTH) - 1) - -/* - * Convert the workaround level into an easy-to-compare number, where higher - * values mean better protection. - */ -static int get_kernel_wa_level(u64 regid) -{ - switch (regid) { - case KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_1: - switch (arm64_get_spectre_v2_state()) { - case SPECTRE_VULNERABLE: - return KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_1_NOT_AVAIL; - case SPECTRE_MITIGATED: - return KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_1_AVAIL; - case SPECTRE_UNAFFECTED: - return KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_1_NOT_REQUIRED; - } - return KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_1_NOT_AVAIL; - case KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_2: - switch (arm64_get_spectre_v4_state()) { - case SPECTRE_MITIGATED: - /* - * As for the hypercall discovery, we pretend we - * don't have any FW mitigation if SSBS is there at - * all times. - */ - if (cpus_have_final_cap(ARM64_SSBS)) - return KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_2_NOT_AVAIL; - fallthrough; - case SPECTRE_UNAFFECTED: - return KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_2_NOT_REQUIRED; - case SPECTRE_VULNERABLE: - return KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_2_NOT_AVAIL; - } - } - - return -EINVAL; -} - -int kvm_arm_get_fw_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg) -{ - void __user *uaddr = (void __user *)(long)reg->addr; - u64 val; - - switch (reg->id) { - case KVM_REG_ARM_PSCI_VERSION: - val = kvm_psci_version(vcpu, vcpu->kvm); - break; - case KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_1: - case KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_2: - val = get_kernel_wa_level(reg->id) & KVM_REG_FEATURE_LEVEL_MASK; - break; - default: - return -ENOENT; - } - - if (copy_to_user(uaddr, &val, KVM_REG_SIZE(reg->id))) - return -EFAULT; - - return 0; -} - -int kvm_arm_set_fw_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg) -{ - void __user *uaddr = (void __user *)(long)reg->addr; - u64 val; - int wa_level; - - if (copy_from_user(&val, uaddr, KVM_REG_SIZE(reg->id))) - return -EFAULT; - - switch (reg->id) { - case KVM_REG_ARM_PSCI_VERSION: - { - bool wants_02; - - wants_02 = test_bit(KVM_ARM_VCPU_PSCI_0_2, vcpu->arch.features); - - switch (val) { - case KVM_ARM_PSCI_0_1: - if (wants_02) - return -EINVAL; - vcpu->kvm->arch.psci_version = val; - return 0; - case KVM_ARM_PSCI_0_2: - case KVM_ARM_PSCI_1_0: - if (!wants_02) - return -EINVAL; - vcpu->kvm->arch.psci_version = val; - return 0; - } - break; - } - - case KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_1: - if (val & ~KVM_REG_FEATURE_LEVEL_MASK) - return -EINVAL; - - if (get_kernel_wa_level(reg->id) < val) - return -EINVAL; - - return 0; - - case KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_2: - if (val & 
~(KVM_REG_FEATURE_LEVEL_MASK | - KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_2_ENABLED)) - return -EINVAL; - - /* The enabled bit must not be set unless the level is AVAIL. */ - if ((val & KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_2_ENABLED) && - (val & KVM_REG_FEATURE_LEVEL_MASK) != KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_2_AVAIL) - return -EINVAL; - - /* - * Map all the possible incoming states to the only two we - * really want to deal with. - */ - switch (val & KVM_REG_FEATURE_LEVEL_MASK) { - case KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_2_NOT_AVAIL: - case KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_2_UNKNOWN: - wa_level = KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_2_NOT_AVAIL; - break; - case KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_2_AVAIL: - case KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_2_NOT_REQUIRED: - wa_level = KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_2_NOT_REQUIRED; - break; - default: - return -EINVAL; - } - - /* - * We can deal with NOT_AVAIL on NOT_REQUIRED, but not the - * other way around. - */ - if (get_kernel_wa_level(reg->id) < wa_level) - return -EINVAL; - - return 0; - default: - return -ENOENT; - } - - return -EINVAL; -} diff --git a/include/kvm/arm_hypercalls.h b/include/kvm/arm_hypercalls.h index 0e2509d27910..5d38628a8d04 100644 --- a/include/kvm/arm_hypercalls.h +++ b/include/kvm/arm_hypercalls.h @@ -40,4 +40,11 @@ static inline void smccc_set_retval(struct kvm_vcpu *vcpu, vcpu_set_reg(vcpu, 3, a3); } +struct kvm_one_reg; + +int kvm_arm_get_fw_num_regs(struct kvm_vcpu *vcpu); +int kvm_arm_copy_fw_reg_indices(struct kvm_vcpu *vcpu, u64 __user *uindices); +int kvm_arm_get_fw_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg); +int kvm_arm_set_fw_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg); + #endif diff --git a/include/kvm/arm_psci.h b/include/kvm/arm_psci.h index 5b58bd2fe088..080c2d0bd6e7 100644 --- a/include/kvm/arm_psci.h +++ b/include/kvm/arm_psci.h @@ -42,11 +42,4 @@ static inline int kvm_psci_version(struct kvm_vcpu *vcpu, struct kvm *kvm) int kvm_psci_call(struct kvm_vcpu *vcpu); -struct kvm_one_reg; - -int kvm_arm_get_fw_num_regs(struct kvm_vcpu *vcpu); -int kvm_arm_copy_fw_reg_indices(struct kvm_vcpu *vcpu, u64 __user *uindices); -int kvm_arm_get_fw_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg); -int kvm_arm_set_fw_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg); - #endif /* __KVM_ARM_PSCI_H__ */ From patchwork Sat Nov 13 01:22:25 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Raghavendra Rao Ananta X-Patchwork-Id: 12617495 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 2C09FC433F5 for ; Sat, 13 Nov 2021 01:22:53 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 1641F6108B for ; Sat, 13 Nov 2021 01:22:53 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S235335AbhKMBZl (ORCPT ); Fri, 12 Nov 2021 20:25:41 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:47630 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S235288AbhKMBZk (ORCPT ); Fri, 12 Nov 2021 20:25:40 -0500 Received: from mail-pg1-x549.google.com (mail-pg1-x549.google.com [IPv6:2607:f8b0:4864:20::549]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 83982C061767 for ; Fri, 12 Nov 2021 17:22:49 -0800 (PST) 
Date: Sat, 13 Nov 2021 01:22:25 +0000
In-Reply-To: <20211113012234.1443009-1-rananta@google.com>
Message-Id: <20211113012234.1443009-3-rananta@google.com>
References: <20211113012234.1443009-1-rananta@google.com>
Subject: [RFC PATCH v2 02/11] KVM: Introduce kvm_vcpu_has_run_once
From: Raghavendra Rao Ananta
To: Marc Zyngier, Andrew Jones, James Morse, Alexandru Elisei, Suzuki K Poulose
Cc: Paolo Bonzini, Catalin Marinas, Will Deacon, Peter Shier, Ricardo Koller, Oliver Upton, Reiji Watanabe, Jing Zhang, Raghavendra Rao Anata, linux-arm-kernel@lists.infradead.org, kvmarm@lists.cs.columbia.edu, linux-kernel@vger.kernel.org, kvm@vger.kernel.org

Architectures such as arm64 and riscv use vCPU variables such as has_run_once and ran_atleast_once, respectively, to mark whether a vCPU has started running. Since these are architecture-agnostic flags, introduce kvm_vcpu_has_run_once() as core KVM functionality and use it instead of the architecture-defined variables.

No functional change intended.
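As a purely illustrative fragment (not part of this patch): with the flag lifted into the generic struct kvm_vcpu, common code can gate one-shot configuration without touching arch-specific state. kvm_vcpu_has_run_once() is the accessor introduced here; vcpu_config_is_mutable() is a made-up example of a caller.

#include <linux/kvm_host.h>

/* Illustrative only: callers can refuse guest-visible changes after the first KVM_RUN. */
static bool vcpu_config_is_mutable(struct kvm_vcpu *vcpu)
{
	/* Generic accessor added by this patch; no arch-specific field needed. */
	return !kvm_vcpu_has_run_once(vcpu);
}

Later patches in this series rely on exactly this pattern, returning -EBUSY for register writes once the vCPU (or VM) has run.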
Signed-off-by: Raghavendra Rao Ananta --- arch/arm64/include/asm/kvm_host.h | 3 --- arch/arm64/kvm/arm.c | 8 ++++---- arch/arm64/kvm/vgic/vgic-init.c | 2 +- arch/riscv/include/asm/kvm_host.h | 3 --- arch/riscv/kvm/vcpu.c | 7 ++----- include/linux/kvm_host.h | 7 +++++++ virt/kvm/kvm_main.c | 1 + 7 files changed, 15 insertions(+), 16 deletions(-) diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h index 4be8486042a7..02dffe50a20c 100644 --- a/arch/arm64/include/asm/kvm_host.h +++ b/arch/arm64/include/asm/kvm_host.h @@ -367,9 +367,6 @@ struct kvm_vcpu_arch { int target; DECLARE_BITMAP(features, KVM_VCPU_MAX_FEATURES); - /* Detect first run of a vcpu */ - bool has_run_once; - /* Virtual SError ESR to restore when HCR_EL2.VSE is set */ u64 vsesr_el2; diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c index f5490afe1ebf..0cc148211b4e 100644 --- a/arch/arm64/kvm/arm.c +++ b/arch/arm64/kvm/arm.c @@ -344,7 +344,7 @@ void kvm_arch_vcpu_postcreate(struct kvm_vcpu *vcpu) void kvm_arch_vcpu_destroy(struct kvm_vcpu *vcpu) { - if (vcpu->arch.has_run_once && unlikely(!irqchip_in_kernel(vcpu->kvm))) + if (kvm_vcpu_has_run_once(vcpu) && unlikely(!irqchip_in_kernel(vcpu->kvm))) static_branch_dec(&userspace_irqchip_in_use); kvm_mmu_free_memory_cache(&vcpu->arch.mmu_page_cache); @@ -582,13 +582,13 @@ static int kvm_vcpu_first_run_init(struct kvm_vcpu *vcpu) struct kvm *kvm = vcpu->kvm; int ret = 0; - if (likely(vcpu->arch.has_run_once)) + if (likely(kvm_vcpu_has_run_once(vcpu))) return 0; if (!kvm_arm_vcpu_is_finalized(vcpu)) return -EPERM; - vcpu->arch.has_run_once = true; + vcpu->has_run_once = true; kvm_arm_vcpu_init_debug(vcpu); @@ -1116,7 +1116,7 @@ static int kvm_arch_vcpu_ioctl_vcpu_init(struct kvm_vcpu *vcpu, * need to invalidate the I-cache though, as FWB does *not* * imply CTR_EL0.DIC. 
*/ - if (vcpu->arch.has_run_once) { + if (kvm_vcpu_has_run_once(vcpu)) { if (!cpus_have_final_cap(ARM64_HAS_STAGE2_FWB)) stage2_unmap_vm(vcpu->kvm); else diff --git a/arch/arm64/kvm/vgic/vgic-init.c b/arch/arm64/kvm/vgic/vgic-init.c index 0a06d0648970..6fb41097880b 100644 --- a/arch/arm64/kvm/vgic/vgic-init.c +++ b/arch/arm64/kvm/vgic/vgic-init.c @@ -91,7 +91,7 @@ int kvm_vgic_create(struct kvm *kvm, u32 type) return ret; kvm_for_each_vcpu(i, vcpu, kvm) { - if (vcpu->arch.has_run_once) + if (kvm_vcpu_has_run_once(vcpu)) goto out_unlock; } ret = 0; diff --git a/arch/riscv/include/asm/kvm_host.h b/arch/riscv/include/asm/kvm_host.h index 25ba21f98504..645e95f61d47 100644 --- a/arch/riscv/include/asm/kvm_host.h +++ b/arch/riscv/include/asm/kvm_host.h @@ -147,9 +147,6 @@ struct kvm_vcpu_csr { }; struct kvm_vcpu_arch { - /* VCPU ran at least once */ - bool ran_atleast_once; - /* ISA feature bits (similar to MISA) */ unsigned long isa; diff --git a/arch/riscv/kvm/vcpu.c b/arch/riscv/kvm/vcpu.c index e3d3aed46184..18cbc8b0c03d 100644 --- a/arch/riscv/kvm/vcpu.c +++ b/arch/riscv/kvm/vcpu.c @@ -75,9 +75,6 @@ int kvm_arch_vcpu_create(struct kvm_vcpu *vcpu) { struct kvm_cpu_context *cntx; - /* Mark this VCPU never ran */ - vcpu->arch.ran_atleast_once = false; - /* Setup ISA features available to VCPU */ vcpu->arch.isa = riscv_isa_extension_base(NULL) & KVM_RISCV_ISA_ALLOWED; @@ -190,7 +187,7 @@ static int kvm_riscv_vcpu_set_reg_config(struct kvm_vcpu *vcpu, switch (reg_num) { case KVM_REG_RISCV_CONFIG_REG(isa): - if (!vcpu->arch.ran_atleast_once) { + if (!kvm_vcpu_has_run_once(vcpu)) { vcpu->arch.isa = reg_val; vcpu->arch.isa &= riscv_isa_extension_base(NULL); vcpu->arch.isa &= KVM_RISCV_ISA_ALLOWED; @@ -682,7 +679,7 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu) struct kvm_run *run = vcpu->run; /* Mark this VCPU ran at least once */ - vcpu->arch.ran_atleast_once = true; + vcpu->has_run_once = true; vcpu->arch.srcu_idx = srcu_read_lock(&vcpu->kvm->srcu); diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h index 60a35d9fe259..b373929c71eb 100644 --- a/include/linux/kvm_host.h +++ b/include/linux/kvm_host.h @@ -360,6 +360,8 @@ struct kvm_vcpu { * it is a valid slot. 
*/ int last_used_slot; + + bool has_run_once; }; /* must be called with irqs disabled */ @@ -1847,4 +1849,9 @@ static inline void kvm_handle_signal_exit(struct kvm_vcpu *vcpu) /* Max number of entries allowed for each kvm dirty ring */ #define KVM_DIRTY_RING_MAX_ENTRIES 65536 +static inline bool kvm_vcpu_has_run_once(struct kvm_vcpu *vcpu) +{ + return vcpu->has_run_once; +} + #endif diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c index 3f6d450355f0..1ec8a8e959b2 100644 --- a/virt/kvm/kvm_main.c +++ b/virt/kvm/kvm_main.c @@ -433,6 +433,7 @@ static void kvm_vcpu_init(struct kvm_vcpu *vcpu, struct kvm *kvm, unsigned id) vcpu->ready = false; preempt_notifier_init(&vcpu->preempt_notifier, &kvm_preempt_ops); vcpu->last_used_slot = 0; + vcpu->has_run_once = false; } void kvm_vcpu_destroy(struct kvm_vcpu *vcpu) From patchwork Sat Nov 13 01:22:26 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Raghavendra Rao Ananta X-Patchwork-Id: 12617497 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id CEE81C433EF for ; Sat, 13 Nov 2021 01:22:55 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id B12626109D for ; Sat, 13 Nov 2021 01:22:55 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S235555AbhKMBZo (ORCPT ); Fri, 12 Nov 2021 20:25:44 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:47644 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S235548AbhKMBZn (ORCPT ); Fri, 12 Nov 2021 20:25:43 -0500 Received: from mail-pj1-x1049.google.com (mail-pj1-x1049.google.com [IPv6:2607:f8b0:4864:20::1049]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 7E3DAC061767 for ; Fri, 12 Nov 2021 17:22:52 -0800 (PST) Received: by mail-pj1-x1049.google.com with SMTP id l10-20020a17090a4d4a00b001a6f817f57eso5273096pjh.3 for ; Fri, 12 Nov 2021 17:22:52 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20210112; h=date:in-reply-to:message-id:mime-version:references:subject:from:to :cc; bh=Ba+wxobSaiFgmTcgNPo/VsfplrPQ2ANaflUkZ8U4a0Y=; b=B2RXTSE+PsDQxq7oyt7onb151ClNd8WGtfxbuo8u0TZgh9aOnDTWAnvdB/5/Dy0rzQ W0SVhNnv9T96lHG4/HucXMgaCt2V6L/wjk9GMjXIyF1SLKA3QiUJ1+NyrPEjPNeQCF8L L7kGJS/2VpOghFFwbmHpzBKTMy1rH586KW5wzi0d6hgzl5ZT2hHP62qfgOs/PBN1QLR3 I283eK87ObcpukWe/Tw04Po+bpk1cxT80eJ/Vb3iF3jFtt6VLa1UmfuvwFNryAuWFGf+ NbsBedNB+i2THjX+c+6HFo+oIyw/Fidj1xiS3ksdMJJhZhV4oinNo3UnIYCJtHJGTh+W j+qQ== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=x-gm-message-state:date:in-reply-to:message-id:mime-version :references:subject:from:to:cc; bh=Ba+wxobSaiFgmTcgNPo/VsfplrPQ2ANaflUkZ8U4a0Y=; b=hNENoeDUos+FCy5z1+1ZBkgDs64vmr97EvjaE3RW7LD5alWmMQoAt5AoBx6bqQyPdw whh/hHFGMxDOKX1pgJ83G6ZqM+5ICQGH/+ok8GgT5ydzuIFcyReP9UarOF/YZmb52NVO 8rzwdiQiMQrXhJZodae+H7FFcu04uRXMrWE7/6JLcoRnClGqBzbskIUvkpGBc+20rSMD K9JglRGN8cPmsoo3b7uyClcky4rUxHsCFORJ7sja/lRBDAJzhUeOenKDrfEdYHFnEEmJ gVhIgS01AqaYvCzoYkj7pdRBqVxYu9tdBm43yvFm5HJ1TD2HZhPVzDQvykyPwHl4mylh QT+Q== X-Gm-Message-State: AOAM530wzb0XCfi0HqhADk9ymIF4YmZe01pYI3uS8MRCI7GUKifKC0eO P5JPPd4JASG+ftfnDUkwIp0GOd0YAdJn X-Google-Smtp-Source: ABdhPJxd9EBwwuIBPspUU4u87Oi0TkCWLlVKjb1nJ8VxwRhr5W5IAS2T2mLzZod60TuVuSRdfolwZ6rvNCJY X-Received: from 
rananta-virt.c.googlers.com ([fda3:e722:ac3:cc00:7f:e700:c0a8:1bcc]) (user=rananta job=sendgmr) by 2002:a17:90a:3905:: with SMTP id y5mr141689pjb.0.1636766571415; Fri, 12 Nov 2021 17:22:51 -0800 (PST) Date: Sat, 13 Nov 2021 01:22:26 +0000 In-Reply-To: <20211113012234.1443009-1-rananta@google.com> Message-Id: <20211113012234.1443009-4-rananta@google.com> Mime-Version: 1.0 References: <20211113012234.1443009-1-rananta@google.com> X-Mailer: git-send-email 2.34.0.rc1.387.gb447b232ab-goog Subject: [RFC PATCH v2 03/11] KVM: Introduce kvm_vm_has_run_once From: Raghavendra Rao Ananta To: Marc Zyngier , Andrew Jones , James Morse , Alexandru Elisei , Suzuki K Poulose Cc: Paolo Bonzini , Catalin Marinas , Will Deacon , Peter Shier , Ricardo Koller , Oliver Upton , Reiji Watanabe , Jing Zhang , Raghavendra Rao Anata , linux-arm-kernel@lists.infradead.org, kvmarm@lists.cs.columbia.edu, linux-kernel@vger.kernel.org, kvm@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org The upcoming patches need a way to detect if the VM, as a whole, has started. Hence, unionize kvm_vcpu_has_run_once() of all the vcpus of the VM and build kvm_vm_has_run_once() to achieve the functionality. No functional change intended. Signed-off-by: Raghavendra Rao Ananta --- include/linux/kvm_host.h | 2 ++ virt/kvm/kvm_main.c | 17 +++++++++++++++++ 2 files changed, 19 insertions(+) diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h index b373929c71eb..102e00c0e21c 100644 --- a/include/linux/kvm_host.h +++ b/include/linux/kvm_host.h @@ -1854,4 +1854,6 @@ static inline bool kvm_vcpu_has_run_once(struct kvm_vcpu *vcpu) return vcpu->has_run_once; } +bool kvm_vm_has_run_once(struct kvm *kvm); + #endif diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c index 1ec8a8e959b2..3d8d96e8f61d 100644 --- a/virt/kvm/kvm_main.c +++ b/virt/kvm/kvm_main.c @@ -4339,6 +4339,23 @@ static int kvm_vm_ioctl_get_stats_fd(struct kvm *kvm) return fd; } +bool kvm_vm_has_run_once(struct kvm *kvm) +{ + int i, ret = false; + struct kvm_vcpu *vcpu; + + mutex_lock(&kvm->lock); + + kvm_for_each_vcpu(i, vcpu, kvm) { + ret = kvm_vcpu_has_run_once(vcpu); + if (ret) + break; + } + + mutex_unlock(&kvm->lock); + return ret; +} + static long kvm_vm_ioctl(struct file *filp, unsigned int ioctl, unsigned long arg) { From patchwork Sat Nov 13 01:22:27 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Raghavendra Rao Ananta X-Patchwork-Id: 12617499 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 1B922C433EF for ; Sat, 13 Nov 2021 01:22:59 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id F1D296108B for ; Sat, 13 Nov 2021 01:22:58 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S235595AbhKMBZr (ORCPT ); Fri, 12 Nov 2021 20:25:47 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:47654 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S235598AbhKMBZq (ORCPT ); Fri, 12 Nov 2021 20:25:46 -0500 Received: from mail-pf1-x44a.google.com (mail-pf1-x44a.google.com [IPv6:2607:f8b0:4864:20::44a]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 959F3C061767 for ; Fri, 12 Nov 2021 17:22:54 -0800 (PST) Received: by 
mail-pf1-x44a.google.com
Date: Sat, 13 Nov 2021 01:22:27 +0000
In-Reply-To: <20211113012234.1443009-1-rananta@google.com>
Message-Id: <20211113012234.1443009-5-rananta@google.com>
References: <20211113012234.1443009-1-rananta@google.com>
Subject: [RFC PATCH v2 04/11] KVM: arm64: Setup a framework for hypercall bitmap firmware registers
From: Raghavendra Rao Ananta
To: Marc Zyngier, Andrew Jones, James Morse, Alexandru Elisei, Suzuki K Poulose
Cc: Paolo Bonzini, Catalin Marinas, Will Deacon, Peter Shier, Ricardo Koller, Oliver Upton, Reiji Watanabe, Jing Zhang, Raghavendra Rao Anata, linux-arm-kernel@lists.infradead.org, kvmarm@lists.cs.columbia.edu, linux-kernel@vger.kernel.org, kvm@vger.kernel.org

KVM regularly introduces new hypercall services to guests without any consent from the Virtual Machine Manager (VMM). This means guests can observe hypercall services appear and disappear as they migrate across host kernel versions. This could be a major problem if a guest discovers a hypercall, starts using it, and then, after being migrated to an older kernel, finds that it is no longer available. Depending on how the guest handles the change, it could even panic. As a result, the VMM needs a way to elect the services that it wishes the guest to discover, based on the kernels spread across its (migration) fleet.

To remedy this, extend the existing firmware pseudo-registers, such as KVM_REG_ARM_PSCI_VERSION, to cover all the available hypercall services.
These firmware registers are categorized based on the service call owners, and unlike the existing firmware psuedo-registers, they hold the features supported in the form of a bitmap. During VM (vCPU) initialization, the registers shows an upper-limit of the features supported by the corresponding registers. The VMM can simply use GET_ONE_REG to discover the features. If it's unhappy with any of the features, it can simply write-back the desired feature bitmap using SET_ONE_REG. KVM allows these modification only until a VM has started. KVM also assumes that the VMM is unaware of a register if a register remains unaccessed (read/write), and would simply clear all the bits of the registers such that the guest accidently doesn't get exposed to the features. Finally, the set of bitmaps from all the registers are the services that are exposed to the guest. In order to provide backward compatibility with already existing VMMs, a new capability, KVM_CAP_ARM_HVC_FW_REG_BMAP, is introduced. To enable the bitmap firmware registers extension, the capability must be explicitly enabled. If not, the behavior is similar to the previous setup. In this patch, the framework adds the register only for ARM's standard secure services (owner value 4). Currently, this includes support only for ARM True Random Number Generator (TRNG) service, with bit-0 of the register representing mandatory features of v1.0. Other services are momentarily added in the upcoming patches. Signed-off-by: Raghavendra Rao Ananta --- arch/arm64/include/asm/kvm_host.h | 16 +++ arch/arm64/include/uapi/asm/kvm.h | 4 + arch/arm64/kvm/arm.c | 23 +++- arch/arm64/kvm/hypercalls.c | 217 +++++++++++++++++++++++++++++- arch/arm64/kvm/trng.c | 9 +- include/kvm/arm_hypercalls.h | 7 + include/uapi/linux/kvm.h | 1 + 7 files changed, 262 insertions(+), 15 deletions(-) diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h index 02dffe50a20c..1546a2f973ef 100644 --- a/arch/arm64/include/asm/kvm_host.h +++ b/arch/arm64/include/asm/kvm_host.h @@ -102,6 +102,19 @@ struct kvm_s2_mmu { struct kvm_arch_memory_slot { }; +struct hvc_fw_reg_bmap { + bool accessed; + u64 reg_id; + u64 bmap; +}; + +struct hvc_reg_desc { + spinlock_t lock; + bool fw_reg_bmap_enabled; + + struct hvc_fw_reg_bmap hvc_std_bmap; +}; + struct kvm_arch { struct kvm_s2_mmu mmu; @@ -137,6 +150,9 @@ struct kvm_arch { /* Memory Tagging Extension enabled for the guest */ bool mte_enabled; + + /* Hypercall firmware registers' descriptor */ + struct hvc_reg_desc hvc_desc; }; struct kvm_vcpu_fault_info { diff --git a/arch/arm64/include/uapi/asm/kvm.h b/arch/arm64/include/uapi/asm/kvm.h index b3edde68bc3e..d6e099ed14ef 100644 --- a/arch/arm64/include/uapi/asm/kvm.h +++ b/arch/arm64/include/uapi/asm/kvm.h @@ -281,6 +281,10 @@ struct kvm_arm_copy_mte_tags { #define KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_2_NOT_REQUIRED 3 #define KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_2_ENABLED (1U << 4) +#define KVM_REG_ARM_STD_BMAP KVM_REG_ARM_FW_REG(3) +#define KVM_REG_ARM_STD_BIT_TRNG_V1_0 BIT(0) +#define KVM_REG_ARM_STD_BMAP_BIT_MAX 0 /* Last valid bit */ + /* SVE registers */ #define KVM_REG_ARM64_SVE (0x15 << KVM_REG_ARM_COPROC_SHIFT) diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c index 0cc148211b4e..f2099e4d1109 100644 --- a/arch/arm64/kvm/arm.c +++ b/arch/arm64/kvm/arm.c @@ -81,26 +81,32 @@ int kvm_arch_check_processor_compat(void *opaque) int kvm_vm_ioctl_enable_cap(struct kvm *kvm, struct kvm_enable_cap *cap) { - int r; + int r = 0; + struct hvc_reg_desc *hvc_desc = 
&kvm->arch.hvc_desc; if (cap->flags) return -EINVAL; switch (cap->cap) { case KVM_CAP_ARM_NISV_TO_USER: - r = 0; kvm->arch.return_nisv_io_abort_to_user = true; break; case KVM_CAP_ARM_MTE: mutex_lock(&kvm->lock); - if (!system_supports_mte() || kvm->created_vcpus) { + if (!system_supports_mte() || kvm->created_vcpus) r = -EINVAL; - } else { - r = 0; + else kvm->arch.mte_enabled = true; - } mutex_unlock(&kvm->lock); break; + case KVM_CAP_ARM_HVC_FW_REG_BMAP: + if (kvm_vm_has_run_once(kvm)) + return -EBUSY; + + spin_lock(&hvc_desc->lock); + hvc_desc->fw_reg_bmap_enabled = true; + spin_unlock(&hvc_desc->lock); + break; default: r = -EINVAL; break; @@ -157,6 +163,8 @@ int kvm_arch_init_vm(struct kvm *kvm, unsigned long type) set_default_spectre(kvm); + kvm_arm_init_hypercalls(kvm); + return ret; out_free_stage2_pgd: kvm_free_stage2_pgd(&kvm->arch.mmu); @@ -215,6 +223,7 @@ int kvm_vm_ioctl_check_extension(struct kvm *kvm, long ext) case KVM_CAP_SET_GUEST_DEBUG: case KVM_CAP_VCPU_ATTRIBUTES: case KVM_CAP_PTP_KVM: + case KVM_CAP_ARM_HVC_FW_REG_BMAP: r = 1; break; case KVM_CAP_SET_GUEST_DEBUG2: @@ -622,6 +631,8 @@ static int kvm_vcpu_first_run_init(struct kvm_vcpu *vcpu) if (kvm_vm_is_protected(kvm)) kvm_call_hyp_nvhe(__pkvm_vcpu_init_traps, vcpu); + kvm_arm_sanitize_fw_regs(kvm); + return ret; } diff --git a/arch/arm64/kvm/hypercalls.c b/arch/arm64/kvm/hypercalls.c index 9e136d91b470..f5df7bc61146 100644 --- a/arch/arm64/kvm/hypercalls.c +++ b/arch/arm64/kvm/hypercalls.c @@ -58,6 +58,41 @@ static void kvm_ptp_get_time(struct kvm_vcpu *vcpu, u64 *val) val[3] = lower_32_bits(cycles); } +static bool +kvm_arm_fw_reg_feat_enabled(struct hvc_fw_reg_bmap *reg_bmap, u64 feat_bit) +{ + return reg_bmap->bmap & feat_bit; +} + +bool kvm_hvc_call_supported(struct kvm_vcpu *vcpu, u32 func_id) +{ + struct hvc_reg_desc *hvc_desc = &vcpu->kvm->arch.hvc_desc; + + /* + * To ensure backward compatibility, support all the service calls, + * including new additions, if the firmware registers holding the + * feature bitmaps isn't explicitly enabled. 
+ */ + if (!hvc_desc->fw_reg_bmap_enabled) + return true; + + switch (func_id) { + case ARM_SMCCC_TRNG_VERSION: + case ARM_SMCCC_TRNG_FEATURES: + case ARM_SMCCC_TRNG_GET_UUID: + case ARM_SMCCC_TRNG_RND32: + case ARM_SMCCC_TRNG_RND64: + return kvm_arm_fw_reg_feat_enabled(&hvc_desc->hvc_std_bmap, + KVM_REG_ARM_STD_BIT_TRNG_V1_0); + default: + /* By default, allow the services that aren't listed here */ + return true; + } + + /* We shouldn't be reaching here */ + return true; +} + int kvm_hvc_call_handler(struct kvm_vcpu *vcpu) { u32 func_id = smccc_get_function(vcpu); @@ -65,6 +100,9 @@ int kvm_hvc_call_handler(struct kvm_vcpu *vcpu) u32 feature; gpa_t gpa; + if (!kvm_hvc_call_supported(vcpu, func_id)) + goto out; + switch (func_id) { case ARM_SMCCC_VERSION_FUNC_ID: val[0] = ARM_SMCCC_VERSION_1_1; @@ -143,6 +181,7 @@ int kvm_hvc_call_handler(struct kvm_vcpu *vcpu) return kvm_psci_call(vcpu); } +out: smccc_set_retval(vcpu, val[0], val[1], val[2], val[3]); return 1; } @@ -153,17 +192,178 @@ static const u64 fw_reg_ids[] = { KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_2, }; +static const u64 fw_reg_bmap_ids[] = { + KVM_REG_ARM_STD_BMAP, +}; + +static void kvm_arm_fw_reg_init_hvc(struct hvc_reg_desc *hvc_desc, + struct hvc_fw_reg_bmap *fw_reg_bmap, + u64 reg_id, u64 default_map) +{ + fw_reg_bmap->reg_id = reg_id; + fw_reg_bmap->bmap = default_map; +} + +void kvm_arm_init_hypercalls(struct kvm *kvm) +{ + struct hvc_reg_desc *hvc_desc = &kvm->arch.hvc_desc; + + spin_lock_init(&hvc_desc->lock); + + kvm_arm_fw_reg_init_hvc(hvc_desc, &hvc_desc->hvc_std_bmap, + KVM_REG_ARM_STD_BMAP, ARM_SMCCC_STD_FEATURES); +} + +static void kvm_arm_fw_reg_sanitize(struct hvc_fw_reg_bmap *fw_reg_bmap) +{ + if (!fw_reg_bmap->accessed) + fw_reg_bmap->bmap = 0; +} + +/* + * kvm_arm_sanitize_fw_regs: Sanitize the hypercall firmware registers + * + * Sanitization, in the case of hypercall firmware registers, is basically + * clearing out the feature bitmaps so that the guests are not exposed to + * the services corresponding to a particular register. The registers that + * needs sanitization is decided on two factors on the user-space part: + * 1. Enablement of KVM_CAP_ARM_HVC_FW_REG_BMAP: + * If the user-space hasn't enabled the capability, it either means + * that it's unaware of its existence, or it simply doesn't want to + * participate in the arrangement and is okay with the default settings. + * The former case is to ensure backward compatibility. + * + * 2. Has the user-space accessed (read/write) the register? : + * If yes, it means that the user-space is aware of the register's + * existence and can set the bits as it sees fit for the guest. A + * read-only access from user-space indicates that the user-space is + * happy with the default settings, and doesn't wish to change it. 
+ * + * The logic for sanitizing a register will then be: + * --------------------------------------------------------------------------- + * | CAP enabled | Accessed reg | Clear reg | Comments | + * --------------------------------------------------------------------------- + * | N | N | N | | + * | N | Y | N | -ENOENT returned during access | + * | Y | N | Y | | + * | Y | Y | N | | + * --------------------------------------------------------------------------- + */ +void kvm_arm_sanitize_fw_regs(struct kvm *kvm) +{ + struct hvc_reg_desc *hvc_desc = &kvm->arch.hvc_desc; + + spin_lock(&hvc_desc->lock); + + if (!hvc_desc->fw_reg_bmap_enabled) + goto out; + + kvm_arm_fw_reg_sanitize(&hvc_desc->hvc_std_bmap); + +out: + spin_unlock(&hvc_desc->lock); +} + +static int kvm_arm_fw_reg_get_bmap(struct kvm *kvm, + struct hvc_fw_reg_bmap *fw_reg_bmap, u64 *val) +{ + int ret = 0; + struct hvc_reg_desc *hvc_desc = &kvm->arch.hvc_desc; + + spin_lock(&hvc_desc->lock); + + if (!hvc_desc->fw_reg_bmap_enabled) { + ret = -ENOENT; + goto out; + } + + fw_reg_bmap->accessed = true; + *val = fw_reg_bmap->bmap; +out: + spin_unlock(&hvc_desc->lock); + return ret; +} + +static int kvm_arm_fw_reg_set_bmap(struct kvm *kvm, + struct hvc_fw_reg_bmap *fw_reg_bmap, u64 val) +{ + int ret = 0; + u64 fw_reg_features; + struct hvc_reg_desc *hvc_desc = &kvm->arch.hvc_desc; + + spin_lock(&hvc_desc->lock); + + if (!hvc_desc->fw_reg_bmap_enabled) { + ret = -ENOENT; + goto out; + } + + if (fw_reg_bmap->bmap == val) + goto out; + + if (kvm_vm_has_run_once(kvm)) { + ret = -EBUSY; + goto out; + } + + switch (fw_reg_bmap->reg_id) { + case KVM_REG_ARM_STD_BMAP: + fw_reg_features = ARM_SMCCC_STD_FEATURES; + break; + default: + ret = -EINVAL; + goto out; + } + + /* Check for unsupported feature bit */ + if (val & ~fw_reg_features) { + ret = -EINVAL; + goto out; + } + + fw_reg_bmap->accessed = true; + fw_reg_bmap->bmap = val; +out: + spin_unlock(&hvc_desc->lock); + return ret; +} + int kvm_arm_get_fw_num_regs(struct kvm_vcpu *vcpu) { - return ARRAY_SIZE(fw_reg_ids); + struct hvc_reg_desc *hvc_desc = &vcpu->kvm->arch.hvc_desc; + int n_regs = ARRAY_SIZE(fw_reg_ids); + + spin_lock(&hvc_desc->lock); + + if (hvc_desc->fw_reg_bmap_enabled) + n_regs += ARRAY_SIZE(fw_reg_bmap_ids); + + spin_unlock(&hvc_desc->lock); + + return n_regs; } int kvm_arm_copy_fw_reg_indices(struct kvm_vcpu *vcpu, u64 __user *uindices) { + struct hvc_reg_desc *hvc_desc = &vcpu->kvm->arch.hvc_desc; int i; for (i = 0; i < ARRAY_SIZE(fw_reg_ids); i++) { - if (put_user(fw_reg_ids[i], uindices)) + if (put_user(fw_reg_ids[i], uindices++)) + return -EFAULT; + } + + spin_lock(&hvc_desc->lock); + + if (!hvc_desc->fw_reg_bmap_enabled) { + spin_unlock(&hvc_desc->lock); + return 0; + } + + spin_unlock(&hvc_desc->lock); + + for (i = 0; i < ARRAY_SIZE(fw_reg_bmap_ids); i++) { + if (put_user(fw_reg_bmap_ids[i], uindices++)) return -EFAULT; } @@ -213,8 +413,11 @@ static int get_kernel_wa_level(u64 regid) int kvm_arm_get_fw_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg) { + struct hvc_reg_desc *hvc_desc = &vcpu->kvm->arch.hvc_desc; void __user *uaddr = (void __user *)(long)reg->addr; + struct kvm *kvm = vcpu->kvm; u64 val; + int ret; switch (reg->id) { case KVM_REG_ARM_PSCI_VERSION: @@ -223,6 +426,12 @@ int kvm_arm_get_fw_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg) case KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_1: case KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_2: val = get_kernel_wa_level(reg->id) & KVM_REG_FEATURE_LEVEL_MASK; + break; + case KVM_REG_ARM_STD_BMAP: + ret = 
kvm_arm_fw_reg_get_bmap(kvm, &hvc_desc->hvc_std_bmap, &val); + if (ret) + return ret; + break; default: return -ENOENT; @@ -236,6 +445,8 @@ int kvm_arm_get_fw_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg) int kvm_arm_set_fw_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg) { + struct kvm *kvm = vcpu->kvm; + struct hvc_reg_desc *hvc_desc = &kvm->arch.hvc_desc; void __user *uaddr = (void __user *)(long)reg->addr; u64 val; int wa_level; @@ -310,6 +521,8 @@ int kvm_arm_set_fw_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg) return -EINVAL; return 0; + case KVM_REG_ARM_STD_BMAP: + return kvm_arm_fw_reg_set_bmap(kvm, &hvc_desc->hvc_std_bmap, val); default: return -ENOENT; } diff --git a/arch/arm64/kvm/trng.c b/arch/arm64/kvm/trng.c index 99bdd7103c9c..6dff765f5b9b 100644 --- a/arch/arm64/kvm/trng.c +++ b/arch/arm64/kvm/trng.c @@ -60,14 +60,9 @@ int kvm_trng_call(struct kvm_vcpu *vcpu) val = ARM_SMCCC_TRNG_VERSION_1_0; break; case ARM_SMCCC_TRNG_FEATURES: - switch (smccc_get_arg1(vcpu)) { - case ARM_SMCCC_TRNG_VERSION: - case ARM_SMCCC_TRNG_FEATURES: - case ARM_SMCCC_TRNG_GET_UUID: - case ARM_SMCCC_TRNG_RND32: - case ARM_SMCCC_TRNG_RND64: + if (kvm_hvc_call_supported(vcpu, smccc_get_arg1(vcpu))) val = TRNG_SUCCESS; - } + break; case ARM_SMCCC_TRNG_GET_UUID: smccc_set_retval(vcpu, le32_to_cpu(u[0]), le32_to_cpu(u[1]), diff --git a/include/kvm/arm_hypercalls.h b/include/kvm/arm_hypercalls.h index 5d38628a8d04..8c6300d1cbaf 100644 --- a/include/kvm/arm_hypercalls.h +++ b/include/kvm/arm_hypercalls.h @@ -6,6 +6,9 @@ #include +#define ARM_SMCCC_STD_FEATURES \ + GENMASK_ULL(KVM_REG_ARM_STD_BMAP_BIT_MAX, 0) + int kvm_hvc_call_handler(struct kvm_vcpu *vcpu); static inline u32 smccc_get_function(struct kvm_vcpu *vcpu) @@ -42,9 +45,13 @@ static inline void smccc_set_retval(struct kvm_vcpu *vcpu, struct kvm_one_reg; +void kvm_arm_init_hypercalls(struct kvm *kvm); int kvm_arm_get_fw_num_regs(struct kvm_vcpu *vcpu); int kvm_arm_copy_fw_reg_indices(struct kvm_vcpu *vcpu, u64 __user *uindices); int kvm_arm_get_fw_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg); int kvm_arm_set_fw_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg); +void kvm_arm_sanitize_fw_regs(struct kvm *kvm); + +bool kvm_hvc_call_supported(struct kvm_vcpu *vcpu, u32 func_id); #endif diff --git a/include/uapi/linux/kvm.h b/include/uapi/linux/kvm.h index 78f0719cc2a3..3855b7b33bb3 100644 --- a/include/uapi/linux/kvm.h +++ b/include/uapi/linux/kvm.h @@ -1130,6 +1130,7 @@ struct kvm_ppc_resize_hpt { #define KVM_CAP_BINARY_STATS_FD 203 #define KVM_CAP_EXIT_ON_EMULATION_FAILURE 204 #define KVM_CAP_ARM_MTE 205 +#define KVM_CAP_ARM_HVC_FW_REG_BMAP 206 #ifdef KVM_CAP_IRQ_ROUTING From patchwork Sat Nov 13 01:22:28 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Raghavendra Rao Ananta X-Patchwork-Id: 12617501 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id DF207C433EF for ; Sat, 13 Nov 2021 01:23:02 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id C84B860F55 for ; Sat, 13 Nov 2021 01:23:02 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S235646AbhKMBZv (ORCPT ); Fri, 12 Nov 2021 20:25:51 -0500 Received: from lindbergh.monkeyblade.net 
([23.128.96.19]:47666)
Date: Sat, 13 Nov 2021 01:22:28 +0000
In-Reply-To: <20211113012234.1443009-1-rananta@google.com>
Message-Id: <20211113012234.1443009-6-rananta@google.com>
References: <20211113012234.1443009-1-rananta@google.com>
Subject: [RFC PATCH v2 05/11] KVM: arm64: Add standard hypervisor firmware register
From: Raghavendra Rao Ananta
To: Marc Zyngier, Andrew Jones, James Morse, Alexandru Elisei, Suzuki K Poulose
Cc: Paolo Bonzini, Catalin Marinas, Will Deacon, Peter Shier, Ricardo Koller, Oliver Upton, Reiji Watanabe, Jing Zhang, Raghavendra Rao Anata, linux-arm-kernel@lists.infradead.org, kvmarm@lists.cs.columbia.edu, linux-kernel@vger.kernel.org, kvm@vger.kernel.org

Introduce a firmware register to hold the standard hypervisor service calls (owner value 5) as a bitmap. The bitmap represents the features that will be enabled for the guest, as configured by userspace. Currently, this includes support only for Paravirtualized Time, represented by bit-0.
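To make the intended userspace flow concrete, here is a minimal, hypothetical VMM-side sketch (not part of this patch). It assumes vm_fd/vcpu_fd came from the usual KVM_CREATE_VM/KVM_CREATE_VCPU calls and that <linux/kvm.h> carries the new definitions from this series (KVM_CAP_ARM_HVC_FW_REG_BMAP, KVM_REG_ARM_STD_HYP_BMAP, KVM_REG_ARM_STD_HYP_BIT_PV_TIME); hide_pv_time() itself is made up for illustration.

#include <linux/kvm.h>
#include <sys/ioctl.h>
#include <stdint.h>

/* Hypothetical helper: hide PV time from the guest before it first runs. */
static int hide_pv_time(int vm_fd, int vcpu_fd)
{
	struct kvm_enable_cap cap = { .cap = KVM_CAP_ARM_HVC_FW_REG_BMAP };
	uint64_t bmap;
	struct kvm_one_reg reg = {
		.id   = KVM_REG_ARM_STD_HYP_BMAP,
		.addr = (uint64_t)&bmap,
	};

	/* Opt in to the bitmap registers; must happen before any vCPU runs. */
	if (ioctl(vm_fd, KVM_ENABLE_CAP, &cap))
		return -1;

	/* Read the upper limit of features KVM is willing to offer. */
	if (ioctl(vcpu_fd, KVM_GET_ONE_REG, &reg))
		return -1;

	/* Clear bit-0 (PV time) and write the trimmed bitmap back. */
	bmap &= ~KVM_REG_ARM_STD_HYP_BIT_PV_TIME;
	return ioctl(vcpu_fd, KVM_SET_ONE_REG, &reg);
}

Because the bitmap can only be changed before the first KVM_RUN, a later KVM_SET_ONE_REG that tries to modify it fails with EBUSY, which is what keeps the guest-visible hypercall surface stable.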
Signed-off-by: Raghavendra Rao Ananta --- arch/arm64/include/asm/kvm_host.h | 1 + arch/arm64/include/uapi/asm/kvm.h | 4 ++++ arch/arm64/kvm/hypercalls.c | 24 ++++++++++++++++++++++++ arch/arm64/kvm/pvtime.c | 3 +++ include/kvm/arm_hypercalls.h | 3 +++ 5 files changed, 35 insertions(+) diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h index 1546a2f973ef..e8e540bd1fe5 100644 --- a/arch/arm64/include/asm/kvm_host.h +++ b/arch/arm64/include/asm/kvm_host.h @@ -113,6 +113,7 @@ struct hvc_reg_desc { bool fw_reg_bmap_enabled; struct hvc_fw_reg_bmap hvc_std_bmap; + struct hvc_fw_reg_bmap hvc_std_hyp_bmap; }; struct kvm_arch { diff --git a/arch/arm64/include/uapi/asm/kvm.h b/arch/arm64/include/uapi/asm/kvm.h index d6e099ed14ef..5890cbcd6385 100644 --- a/arch/arm64/include/uapi/asm/kvm.h +++ b/arch/arm64/include/uapi/asm/kvm.h @@ -285,6 +285,10 @@ struct kvm_arm_copy_mte_tags { #define KVM_REG_ARM_STD_BIT_TRNG_V1_0 BIT(0) #define KVM_REG_ARM_STD_BMAP_BIT_MAX 0 /* Last valid bit */ +#define KVM_REG_ARM_STD_HYP_BMAP KVM_REG_ARM_FW_REG(4) +#define KVM_REG_ARM_STD_HYP_BIT_PV_TIME BIT(0) +#define KVM_REG_ARM_STD_HYP_BMAP_BIT_MAX 0 /* Last valid bit */ + /* SVE registers */ #define KVM_REG_ARM64_SVE (0x15 << KVM_REG_ARM_COPROC_SHIFT) diff --git a/arch/arm64/kvm/hypercalls.c b/arch/arm64/kvm/hypercalls.c index f5df7bc61146..b3320adc068c 100644 --- a/arch/arm64/kvm/hypercalls.c +++ b/arch/arm64/kvm/hypercalls.c @@ -84,6 +84,10 @@ bool kvm_hvc_call_supported(struct kvm_vcpu *vcpu, u32 func_id) case ARM_SMCCC_TRNG_RND64: return kvm_arm_fw_reg_feat_enabled(&hvc_desc->hvc_std_bmap, KVM_REG_ARM_STD_BIT_TRNG_V1_0); + case ARM_SMCCC_HV_PV_TIME_FEATURES: + case ARM_SMCCC_HV_PV_TIME_ST: + return kvm_arm_fw_reg_feat_enabled(&hvc_desc->hvc_std_hyp_bmap, + KVM_REG_ARM_STD_HYP_BIT_PV_TIME); default: /* By default, allow the services that aren't listed here */ return true; @@ -109,6 +113,9 @@ int kvm_hvc_call_handler(struct kvm_vcpu *vcpu) break; case ARM_SMCCC_ARCH_FEATURES_FUNC_ID: feature = smccc_get_arg1(vcpu); + if (!kvm_hvc_call_supported(vcpu, feature)) + break; + switch (feature) { case ARM_SMCCC_ARCH_WORKAROUND_1: switch (arm64_get_spectre_v2_state()) { @@ -194,6 +201,7 @@ static const u64 fw_reg_ids[] = { static const u64 fw_reg_bmap_ids[] = { KVM_REG_ARM_STD_BMAP, + KVM_REG_ARM_STD_HYP_BMAP, }; static void kvm_arm_fw_reg_init_hvc(struct hvc_reg_desc *hvc_desc, @@ -212,6 +220,8 @@ void kvm_arm_init_hypercalls(struct kvm *kvm) kvm_arm_fw_reg_init_hvc(hvc_desc, &hvc_desc->hvc_std_bmap, KVM_REG_ARM_STD_BMAP, ARM_SMCCC_STD_FEATURES); + kvm_arm_fw_reg_init_hvc(hvc_desc, &hvc_desc->hvc_std_hyp_bmap, + KVM_REG_ARM_STD_HYP_BMAP, ARM_SMCCC_STD_HYP_FEATURES); } static void kvm_arm_fw_reg_sanitize(struct hvc_fw_reg_bmap *fw_reg_bmap) @@ -259,6 +269,7 @@ void kvm_arm_sanitize_fw_regs(struct kvm *kvm) goto out; kvm_arm_fw_reg_sanitize(&hvc_desc->hvc_std_bmap); + kvm_arm_fw_reg_sanitize(&hvc_desc->hvc_std_hyp_bmap); out: spin_unlock(&hvc_desc->lock); @@ -310,6 +321,9 @@ static int kvm_arm_fw_reg_set_bmap(struct kvm *kvm, case KVM_REG_ARM_STD_BMAP: fw_reg_features = ARM_SMCCC_STD_FEATURES; break; + case KVM_REG_ARM_STD_HYP_BMAP: + fw_reg_features = ARM_SMCCC_STD_HYP_FEATURES; + break; default: ret = -EINVAL; goto out; @@ -432,6 +446,13 @@ int kvm_arm_get_fw_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg) if (ret) return ret; + break; + case KVM_REG_ARM_STD_HYP_BMAP: + ret = kvm_arm_fw_reg_get_bmap(kvm, + &hvc_desc->hvc_std_hyp_bmap, &val); + if (ret) + return ret; + break; default: 
return -ENOENT; @@ -523,6 +544,9 @@ int kvm_arm_set_fw_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg) return 0; case KVM_REG_ARM_STD_BMAP: return kvm_arm_fw_reg_set_bmap(kvm, &hvc_desc->hvc_std_bmap, val); + case KVM_REG_ARM_STD_HYP_BMAP: + return kvm_arm_fw_reg_set_bmap(kvm, + &hvc_desc->hvc_std_hyp_bmap, val); default: return -ENOENT; } diff --git a/arch/arm64/kvm/pvtime.c b/arch/arm64/kvm/pvtime.c index 78a09f7a6637..4fa436dbd0b7 100644 --- a/arch/arm64/kvm/pvtime.c +++ b/arch/arm64/kvm/pvtime.c @@ -37,6 +37,9 @@ long kvm_hypercall_pv_features(struct kvm_vcpu *vcpu) u32 feature = smccc_get_arg1(vcpu); long val = SMCCC_RET_NOT_SUPPORTED; + if (!kvm_hvc_call_supported(vcpu, feature)) + return val; + switch (feature) { case ARM_SMCCC_HV_PV_TIME_FEATURES: case ARM_SMCCC_HV_PV_TIME_ST: diff --git a/include/kvm/arm_hypercalls.h b/include/kvm/arm_hypercalls.h index 8c6300d1cbaf..77c30e335f44 100644 --- a/include/kvm/arm_hypercalls.h +++ b/include/kvm/arm_hypercalls.h @@ -9,6 +9,9 @@ #define ARM_SMCCC_STD_FEATURES \ GENMASK_ULL(KVM_REG_ARM_STD_BMAP_BIT_MAX, 0) +#define ARM_SMCCC_STD_HYP_FEATURES \ + GENMASK_ULL(KVM_REG_ARM_STD_HYP_BMAP_BIT_MAX, 0) + int kvm_hvc_call_handler(struct kvm_vcpu *vcpu); static inline u32 smccc_get_function(struct kvm_vcpu *vcpu) From patchwork Sat Nov 13 01:22:29 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Raghavendra Rao Ananta X-Patchwork-Id: 12617503 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id BB08EC433F5 for ; Sat, 13 Nov 2021 01:23:08 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id A5D606108B for ; Sat, 13 Nov 2021 01:23:08 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S235681AbhKMBZz (ORCPT ); Fri, 12 Nov 2021 20:25:55 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:47686 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S235634AbhKMBZu (ORCPT ); Fri, 12 Nov 2021 20:25:50 -0500 Received: from mail-pg1-x549.google.com (mail-pg1-x549.google.com [IPv6:2607:f8b0:4864:20::549]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 3A7B7C061767 for ; Fri, 12 Nov 2021 17:22:59 -0800 (PST) Received: by mail-pg1-x549.google.com with SMTP id p13-20020a63c14d000000b002da483902b1so5644310pgi.12 for ; Fri, 12 Nov 2021 17:22:59 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20210112; h=date:in-reply-to:message-id:mime-version:references:subject:from:to :cc; bh=Qw/clVNLM0HNZHqaD4lCmZow7BlWKfkgGErSltFYrjA=; b=XK/XfQZyOCv34OdPg13Gv7n46oxO77mh2e+k5qYG7JBdYFUE3ZvOs67O2HdEVwt7ay TDSCi5NN6WlIsUglBQmwVa0xCSQlrTU6V3LC0YtIxzPRDuj9aWXUvgUVV1paxUHYJjML dW9nV9gC0MUKl4mpGfYmdvaDaqnGYQaq0plREFfPfPNxgDWzkl+k1fGvZZEXZGbjmI+J jj9+pXb4NF9LeCdRyZkgfzX1CW9YWXRXGBHMC7YSLjshKl3o+KRs9DqUeOJO9yrdTV4F Q1ZAOxg5FqizoSGbzsMn3fbcyKLdWhKvIidT8om4gB+hh0c/DW3FX69+c23czcDm0V5J UNow== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=x-gm-message-state:date:in-reply-to:message-id:mime-version :references:subject:from:to:cc; bh=Qw/clVNLM0HNZHqaD4lCmZow7BlWKfkgGErSltFYrjA=; b=B9zH1VcJGHhogavZJKLFp00/l3nYVovyVEmxGf5yPWWxDzu650vUg/l5302ph/ZUB2 
Date: Sat, 13 Nov 2021 01:22:29 +0000 In-Reply-To: <20211113012234.1443009-1-rananta@google.com> Message-Id: <20211113012234.1443009-7-rananta@google.com> Mime-Version: 1.0 References: <20211113012234.1443009-1-rananta@google.com> X-Mailer: git-send-email 2.34.0.rc1.387.gb447b232ab-goog Subject: [RFC PATCH v2 06/11] KVM: arm64: Add vendor hypervisor firmware register From: Raghavendra Rao Ananta To: Marc Zyngier , Andrew Jones , James Morse , Alexandru Elisei , Suzuki K Poulose Cc: Paolo Bonzini , Catalin Marinas , Will Deacon , Peter Shier , Ricardo Koller , Oliver Upton , Reiji Watanabe , Jing Zhang , Raghavendra Rao Anata , linux-arm-kernel@lists.infradead.org, kvmarm@lists.cs.columbia.edu, linux-kernel@vger.kernel.org, kvm@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org

Introduce the firmware register to hold the vendor-specific hypervisor service calls (owner value 6) as a bitmap. The bitmap represents the features that will be enabled for the guest, as configured by user-space. Currently, this includes support only for the Precision Time Protocol (PTP), represented by bit-0.
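As a rough illustration (not part of this patch), the sketch below shows how a VMM could restrict the vendor services through the ONE_REG interface before the first KVM_RUN. Only the register and bit names come from this series; the vcpu_fd plumbing and error handling are assumed.

#include <stdint.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>

/*
 * Sketch only: 'vcpu_fd' is an already-created vCPU file descriptor and
 * KVM_CAP_ARM_HVC_FW_REG_BMAP is assumed to be enabled on the VM. Writes
 * must happen before the first KVM_RUN, otherwise KVM returns -EBUSY.
 */
static int hide_ptp_from_guest(int vcpu_fd)
{
	uint64_t bmap;
	struct kvm_one_reg reg = {
		.id   = KVM_REG_ARM_VENDOR_HYP_BMAP,	/* KVM_REG_ARM_FW_REG(5) */
		.addr = (uint64_t)&bmap,
	};

	/* A read returns the upper limit of the supported features. */
	if (ioctl(vcpu_fd, KVM_GET_ONE_REG, &reg))
		return -1;

	/* Bit 0 is KVM_REG_ARM_VENDOR_HYP_BIT_PTP in this series. */
	bmap &= ~(1ULL << 0);

	/* Write back the reduced set of services the guest should see. */
	return ioctl(vcpu_fd, KVM_SET_ONE_REG, &reg);
}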
Signed-off-by: Raghavendra Rao Ananta --- arch/arm64/include/asm/kvm_host.h | 1 + arch/arm64/include/uapi/asm/kvm.h | 4 ++++ arch/arm64/kvm/hypercalls.c | 30 +++++++++++++++++++++++++++++- include/kvm/arm_hypercalls.h | 3 +++ 4 files changed, 37 insertions(+), 1 deletion(-) diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h index e8e540bd1fe5..ef1d10bdf562 100644 --- a/arch/arm64/include/asm/kvm_host.h +++ b/arch/arm64/include/asm/kvm_host.h @@ -114,6 +114,7 @@ struct hvc_reg_desc { struct hvc_fw_reg_bmap hvc_std_bmap; struct hvc_fw_reg_bmap hvc_std_hyp_bmap; + struct hvc_fw_reg_bmap hvc_vendor_hyp_bmap; }; struct kvm_arch { diff --git a/arch/arm64/include/uapi/asm/kvm.h b/arch/arm64/include/uapi/asm/kvm.h index 5890cbcd6385..8468e5d265df 100644 --- a/arch/arm64/include/uapi/asm/kvm.h +++ b/arch/arm64/include/uapi/asm/kvm.h @@ -289,6 +289,10 @@ struct kvm_arm_copy_mte_tags { #define KVM_REG_ARM_STD_HYP_BIT_PV_TIME BIT(0) #define KVM_REG_ARM_STD_HYP_BMAP_BIT_MAX 0 /* Last valid bit */ +#define KVM_REG_ARM_VENDOR_HYP_BMAP KVM_REG_ARM_FW_REG(5) +#define KVM_REG_ARM_VENDOR_HYP_BIT_PTP BIT(0) +#define KVM_REG_ARM_VENDOR_HYP_BMAP_BIT_MAX 0 /* Last valid bit */ + /* SVE registers */ #define KVM_REG_ARM64_SVE (0x15 << KVM_REG_ARM_COPROC_SHIFT) diff --git a/arch/arm64/kvm/hypercalls.c b/arch/arm64/kvm/hypercalls.c index b3320adc068c..e1361029101e 100644 --- a/arch/arm64/kvm/hypercalls.c +++ b/arch/arm64/kvm/hypercalls.c @@ -88,6 +88,9 @@ bool kvm_hvc_call_supported(struct kvm_vcpu *vcpu, u32 func_id) case ARM_SMCCC_HV_PV_TIME_ST: return kvm_arm_fw_reg_feat_enabled(&hvc_desc->hvc_std_hyp_bmap, KVM_REG_ARM_STD_HYP_BIT_PV_TIME); + case ARM_SMCCC_VENDOR_HYP_KVM_PTP_FUNC_ID: + return kvm_arm_fw_reg_feat_enabled(&hvc_desc->hvc_vendor_hyp_bmap, + KVM_REG_ARM_VENDOR_HYP_BIT_PTP); default: /* By default, allow the services that aren't listed here */ return true; @@ -99,6 +102,7 @@ bool kvm_hvc_call_supported(struct kvm_vcpu *vcpu, u32 func_id) int kvm_hvc_call_handler(struct kvm_vcpu *vcpu) { + struct hvc_reg_desc *hvc_desc = &vcpu->kvm->arch.hvc_desc; u32 func_id = smccc_get_function(vcpu); u64 val[4] = {SMCCC_RET_NOT_SUPPORTED}; u32 feature; @@ -173,7 +177,14 @@ int kvm_hvc_call_handler(struct kvm_vcpu *vcpu) break; case ARM_SMCCC_VENDOR_HYP_KVM_FEATURES_FUNC_ID: val[0] = BIT(ARM_SMCCC_KVM_FUNC_FEATURES); - val[0] |= BIT(ARM_SMCCC_KVM_FUNC_PTP); + + /* + * The feature bits exposed to user-space doesn't include + * ARM_SMCCC_KVM_FUNC_FEATURES. However, we expose this to + * the guest as bit-0. Hence, left-shift the user-space + * exposed bitmap by 1 to accommodate this. 
+ */ + val[0] |= hvc_desc->hvc_vendor_hyp_bmap.bmap << 1; break; case ARM_SMCCC_VENDOR_HYP_KVM_PTP_FUNC_ID: kvm_ptp_get_time(vcpu, val); @@ -202,6 +213,7 @@ static const u64 fw_reg_ids[] = { static const u64 fw_reg_bmap_ids[] = { KVM_REG_ARM_STD_BMAP, KVM_REG_ARM_STD_HYP_BMAP, + KVM_REG_ARM_VENDOR_HYP_BMAP, }; static void kvm_arm_fw_reg_init_hvc(struct hvc_reg_desc *hvc_desc, @@ -222,6 +234,8 @@ void kvm_arm_init_hypercalls(struct kvm *kvm) KVM_REG_ARM_STD_BMAP, ARM_SMCCC_STD_FEATURES); kvm_arm_fw_reg_init_hvc(hvc_desc, &hvc_desc->hvc_std_hyp_bmap, KVM_REG_ARM_STD_HYP_BMAP, ARM_SMCCC_STD_HYP_FEATURES); + kvm_arm_fw_reg_init_hvc(hvc_desc, &hvc_desc->hvc_vendor_hyp_bmap, + KVM_REG_ARM_VENDOR_HYP_BMAP, ARM_SMCCC_VENDOR_HYP_FEATURES); } static void kvm_arm_fw_reg_sanitize(struct hvc_fw_reg_bmap *fw_reg_bmap) @@ -270,6 +284,7 @@ void kvm_arm_sanitize_fw_regs(struct kvm *kvm) kvm_arm_fw_reg_sanitize(&hvc_desc->hvc_std_bmap); kvm_arm_fw_reg_sanitize(&hvc_desc->hvc_std_hyp_bmap); + kvm_arm_fw_reg_sanitize(&hvc_desc->hvc_vendor_hyp_bmap); out: spin_unlock(&hvc_desc->lock); @@ -324,6 +339,9 @@ static int kvm_arm_fw_reg_set_bmap(struct kvm *kvm, case KVM_REG_ARM_STD_HYP_BMAP: fw_reg_features = ARM_SMCCC_STD_HYP_FEATURES; break; + case KVM_REG_ARM_VENDOR_HYP_BMAP: + fw_reg_features = ARM_SMCCC_VENDOR_HYP_FEATURES; + break; default: ret = -EINVAL; goto out; @@ -453,6 +471,13 @@ int kvm_arm_get_fw_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg) if (ret) return ret; + break; + case KVM_REG_ARM_VENDOR_HYP_BMAP: + ret = kvm_arm_fw_reg_get_bmap(kvm, + &hvc_desc->hvc_vendor_hyp_bmap, &val); + if (ret) + return ret; + break; default: return -ENOENT; @@ -547,6 +572,9 @@ int kvm_arm_set_fw_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg) case KVM_REG_ARM_STD_HYP_BMAP: return kvm_arm_fw_reg_set_bmap(kvm, &hvc_desc->hvc_std_hyp_bmap, val); + case KVM_REG_ARM_VENDOR_HYP_BMAP: + return kvm_arm_fw_reg_set_bmap(kvm, + &hvc_desc->hvc_vendor_hyp_bmap, val); default: return -ENOENT; } diff --git a/include/kvm/arm_hypercalls.h b/include/kvm/arm_hypercalls.h index 77c30e335f44..94f56562fea8 100644 --- a/include/kvm/arm_hypercalls.h +++ b/include/kvm/arm_hypercalls.h @@ -12,6 +12,9 @@ #define ARM_SMCCC_STD_HYP_FEATURES \ GENMASK_ULL(KVM_REG_ARM_STD_HYP_BMAP_BIT_MAX, 0) +#define ARM_SMCCC_VENDOR_HYP_FEATURES \ + GENMASK_ULL(KVM_REG_ARM_VENDOR_HYP_BMAP_BIT_MAX, 0) + int kvm_hvc_call_handler(struct kvm_vcpu *vcpu); static inline u32 smccc_get_function(struct kvm_vcpu *vcpu) From patchwork Sat Nov 13 01:22:30 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Raghavendra Rao Ananta X-Patchwork-Id: 12617505 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 5C8D5C433EF for ; Sat, 13 Nov 2021 01:23:09 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 45C0E6108B for ; Sat, 13 Nov 2021 01:23:09 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S235762AbhKMBZ4 (ORCPT ); Fri, 12 Nov 2021 20:25:56 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:47712 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S235684AbhKMBZx (ORCPT ); Fri, 12 Nov 2021 20:25:53 -0500 Received: from mail-pg1-x54a.google.com 
Date: Sat, 13 Nov 2021 01:22:30 +0000 In-Reply-To: <20211113012234.1443009-1-rananta@google.com> Message-Id: <20211113012234.1443009-8-rananta@google.com> Mime-Version: 1.0 References: <20211113012234.1443009-1-rananta@google.com> X-Mailer: git-send-email 2.34.0.rc1.387.gb447b232ab-goog Subject: [RFC PATCH v2 07/11] Docs: KVM: Add doc for the bitmap firmware registers From: Raghavendra Rao Ananta To: Marc Zyngier , Andrew Jones , James Morse , Alexandru Elisei , Suzuki K Poulose Cc: Paolo Bonzini , Catalin Marinas , Will Deacon , Peter Shier , Ricardo Koller , Oliver Upton , Reiji Watanabe , Jing Zhang , Raghavendra Rao Anata , linux-arm-kernel@lists.infradead.org, kvmarm@lists.cs.columbia.edu, linux-kernel@vger.kernel.org, kvm@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org

- Add documentation for the capability, KVM_CAP_ARM_HVC_FW_REG_BMAP, in KVM's api.rst. - Add the documentation for the bitmap firmware registers in psci.rst. This includes the details for KVM_REG_ARM_STD_BMAP, KVM_REG_ARM_STD_HYP_BMAP, and KVM_REG_ARM_VENDOR_HYP_BMAP registers. Signed-off-by: Raghavendra Rao Ananta --- Documentation/virt/kvm/api.rst | 23 ++++++++ Documentation/virt/kvm/arm/psci.rst | 89 +++++++++++++++++++++++------ 2 files changed, 95 insertions(+), 17 deletions(-) diff --git a/Documentation/virt/kvm/api.rst b/Documentation/virt/kvm/api.rst index 3b093d6dbe22..7d88567feaa7 100644 --- a/Documentation/virt/kvm/api.rst +++ b/Documentation/virt/kvm/api.rst @@ -6911,6 +6911,29 @@ MAP_SHARED mmap will result in an -EINVAL return.
When enabled the VMM may make use of the ``KVM_ARM_MTE_COPY_TAGS`` ioctl to perform a bulk copy of tags to/from the guest. +7.29 KVM_CAP_ARM_HVC_FW_REG_BMAP +-------------------------------- + +:Architecture: arm64 +:Parameters: none + +This capability indicates that KVM for arm64 supports the pseudo-firmware +register bitmap extension. It must be explicitly enabled. Once enabled, +KVM allows access to the firmware registers that hold the bitmap of the +hypercall services that should be exposed to the guest. + +By default, the registers are set with the upper limit of the features +exposed to the guest. User-space can discover them via the GET_ONE_REG +interface. If unsatisfied with the configuration, it can write back the +bitmap that it sees fit for the guest via the SET_ONE_REG interface. The +registers that are never accessed by user-space (read/write) are +by default cleared just before the vCPU runs. This is to make sure that +the features are not accidentally exposed to the guest without the +consent of user-space. + +Note that the capability has to be enabled before running any vCPU. Also, +the capability cannot be disabled. The VM has to be restarted for that. + 8. Other capabilities. ====================== diff --git a/Documentation/virt/kvm/arm/psci.rst b/Documentation/virt/kvm/arm/psci.rst index d52c2e83b5b8..f6306b91168d 100644 --- a/Documentation/virt/kvm/arm/psci.rst +++ b/Documentation/virt/kvm/arm/psci.rst @@ -1,32 +1,32 @@ .. SPDX-License-Identifier: GPL-2.0 -========================================= -Power State Coordination Interface (PSCI) -========================================= +======================= +ARM Hypercall Interface +======================= -KVM implements the PSCI (Power State Coordination Interface) -specification in order to provide services such as CPU on/off, reset -and power-off to the guest. +KVM handles the hypercall services as requested by the guests. New hypercall +services are regularly made available by the ARM specification or by KVM (as +vendor services) if they make sense from a virtualization point of view. -The PSCI specification is regularly updated to provide new features, -and KVM implements these updates if they make sense from a virtualization -point of view. - -This means that a guest booted on two different versions of KVM can -observe two different "firmware" revisions. This could cause issues if -a given guest is tied to a particular PSCI revision (unlikely), or if -a migration causes a different PSCI version to be exposed out of the -blue to an unsuspecting guest. +This means that a guest booted on two different versions of KVM can observe +two different "firmware" revisions. This could cause issues if a given guest +is tied to a particular version of a hypercall service, or if a migration +causes a different version to be exposed out of the blue to an unsuspecting +guest. In order to remedy this situation, KVM exposes a set of "firmware pseudo-registers" that can be manipulated using the GET/SET_ONE_REG -interface. These registers can be saved/restored by userspace, and set +interface. These registers can be saved/restored by user-space, and set to a convenient value if required. -The following register is defined: +The following registers are defined: * KVM_REG_ARM_PSCI_VERSION: + KVM implements the PSCI (Power State Coordination Interface) + specification in order to provide services such as CPU on/off, reset + and power-off to the guest.
+ - Only valid if the vcpu has the KVM_ARM_VCPU_PSCI_0_2 feature set (and thus has already been initialized) - Returns the current PSCI version on GET_ONE_REG (defaulting to the @@ -74,4 +74,59 @@ The following register is defined: KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_2_NOT_REQUIRED: The workaround is always active on this vCPU or it is not needed. +Contrary to the above registers, the following registers expose the hypercall +services in the form of a feature-bitmap. This bitmap is translated to the +services that are exposed to the guest. There is a register defined per service +call owner, and each can be accessed via the GET/SET_ONE_REG interface. + +A new KVM capability, KVM_CAP_ARM_HVC_FW_REG_BMAP, is introduced to let +user-space know of this extension. User-space has to explicitly enable the capability +to get access to these registers. If the capability is enabled, a 'read' of +these registers will simply expose the upper limit of all features supported +by the corresponding service call owner in the form of a bitmap. If +user-space is unhappy with the arrangement, it can write back the bitmap +that it wishes to expose. + +If a register is not accessed (either read/write), KVM will assume that +user-space is unaware of its existence. In such a case, KVM would simply +clear all the bits of that register just before starting the VM. This way, +no new features are accidentally exposed to the guest. + +The pseudo-firmware bitmap registers are as follows: + +* KVM_REG_ARM_STD_BMAP: + Controls the bitmap of the ARM Standard Secure Service Calls. + + The following bits are accepted: + + KVM_REG_ARM_STD_BIT_TRNG_V1_0: + The bit represents the services offered under v1.0 of the ARM True Random + Number Generator (TRNG) specification, ARM DEN0098. + +* KVM_REG_ARM_STD_HYP_BMAP: + Controls the bitmap of the ARM Standard Hypervisor Service Calls. + + The following bits are accepted: + + KVM_REG_ARM_STD_HYP_BIT_PV_TIME: + The bit represents the Paravirtualized Time service as specified by + ARM DEN0057A. + +* KVM_REG_ARM_VENDOR_HYP_BMAP: + Controls the bitmap of the Vendor-specific Hypervisor Service Calls. + + The following bits are accepted: + + KVM_REG_ARM_VENDOR_HYP_BIT_PTP: + The bit represents the Precision Time Protocol KVM service. + +Errors: + + ======= ============================================================= + -ENOENT Register accessed (read/write) without enabling + KVM_CAP_ARM_HVC_FW_REG_BMAP. + -EBUSY Attempt a 'write' to the register after the VM has started. + -EINVAL Invalid bitmap written to the register. + ======= ============================================================= + ..
[1] https://developer.arm.com/-/media/developer/pdf/ARM_DEN_0070A_Firmware_interfaces_for_mitigating_CVE-2017-5715.pdf From patchwork Sat Nov 13 01:22:31 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Raghavendra Rao Ananta X-Patchwork-Id: 12617507 Date: Sat, 13 Nov 2021 01:22:31 +0000 In-Reply-To: <20211113012234.1443009-1-rananta@google.com> Message-Id: <20211113012234.1443009-9-rananta@google.com> Mime-Version: 1.0 References: <20211113012234.1443009-1-rananta@google.com> X-Mailer: git-send-email 2.34.0.rc1.387.gb447b232ab-goog Subject: [RFC PATCH v2 08/11] Docs: KVM: Rename psci.rst to hypercalls.rst From: Raghavendra Rao Ananta To: Marc Zyngier , Andrew Jones , James Morse , Alexandru Elisei ,
Suzuki K Poulose Cc: Paolo Bonzini , Catalin Marinas , Will Deacon , Peter Shier , Ricardo Koller , Oliver Upton , Reiji Watanabe , Jing Zhang , Raghavendra Rao Anata , linux-arm-kernel@lists.infradead.org, kvmarm@lists.cs.columbia.edu, linux-kernel@vger.kernel.org, kvm@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org Since the doc now covers more of general hypercalls' details, rather than just PSCI, rename the file to a more appropriate name- hypercalls.rst. Signed-off-by: Raghavendra Rao Ananta --- Documentation/virt/kvm/arm/{psci.rst => hypercalls.rst} | 0 1 file changed, 0 insertions(+), 0 deletions(-) rename Documentation/virt/kvm/arm/{psci.rst => hypercalls.rst} (100%) diff --git a/Documentation/virt/kvm/arm/psci.rst b/Documentation/virt/kvm/arm/hypercalls.rst similarity index 100% rename from Documentation/virt/kvm/arm/psci.rst rename to Documentation/virt/kvm/arm/hypercalls.rst From patchwork Sat Nov 13 01:22:32 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Raghavendra Rao Ananta X-Patchwork-Id: 12617509 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 47695C433F5 for ; Sat, 13 Nov 2021 01:23:28 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 28FF560F55 for ; Sat, 13 Nov 2021 01:23:28 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S235933AbhKMB0S (ORCPT ); Fri, 12 Nov 2021 20:26:18 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:47744 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S235803AbhKMBZ7 (ORCPT ); Fri, 12 Nov 2021 20:25:59 -0500 Received: from mail-pg1-x549.google.com (mail-pg1-x549.google.com [IPv6:2607:f8b0:4864:20::549]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 6E532C061766 for ; Fri, 12 Nov 2021 17:23:06 -0800 (PST) Received: by mail-pg1-x549.google.com with SMTP id x14-20020a63cc0e000000b002a5bc462947so5630075pgf.20 for ; Fri, 12 Nov 2021 17:23:06 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20210112; h=date:in-reply-to:message-id:mime-version:references:subject:from:to :cc; bh=1CZoQf4R3Llwe7QOwZRwWahGVK4YcdZ2JDH+KcdbSp8=; b=Lk6+LX2rdro7XyHoAWr+atN/6MF8Do4h2M3iOLUzn1v5EDRAcOH3jLBMoQDpO6JK4P OC5gHkh3PHEAFV4caOBAS3GNIcbVfB/rthVZzy/E/YVF3QFp8cec9PHcjulQGkUTylH7 03MBwSMnP0vf9rH+o7kZKXVgza3mAQt4evdHAyGR/jDtTtiLVjyMQnq3zGpnj95mBRNF 431oBygQgmltw1VBKu1J42xKbnPkXf3N2BGbuceB0muITZP1AoeKy9emJUl9V0+g+vol 6A4diwprhSWKDJ52wdg57H0F2q+sgaxU7qaIeN6xrTM3T9wvhmWKc5yrh0ealjO+iZSW Wi2g== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=x-gm-message-state:date:in-reply-to:message-id:mime-version :references:subject:from:to:cc; bh=1CZoQf4R3Llwe7QOwZRwWahGVK4YcdZ2JDH+KcdbSp8=; b=g/PthtyskKpyfg4qoJNP0l53vpdNfeaZ4jmbo0QroW1YtH3XPgpGGWAiBBYh1dxO3r Lu7HFuMOFaLEgN9rR913YXmpb+P15WQFeSHWeDCVhobC+rB7yd35cVEB6cJnM2Y2VAJx Vcu15ykQaov3f/kIADMzhvZ8btUrWhih6aiwxo1b/3mj5wutw5KOxMl2P3N0DMbCcDiG 5pK7/itaJUKdGnuy69uZ85TNW03vDKQPGC+5JUveRlXZDct81ARZPM3jZusZ5c9RfIv+ f3L/NHrb3ihfTo1teEOKB/hELtLBf8gDk6ipyY+eUFaDV7EfmWc/u37ub7H8II5bIbov sKaQ== X-Gm-Message-State: AOAM530zEzYBZWpGHK2eISEcgJSkIE4TbMOijxzT96AsLHvMrVt52xaQ 9LfIi0GpfnVtKS/oO5WQ9ggqCglxNmy6 X-Google-Smtp-Source: 
ABdhPJyyFvtg2j3LS+2nufHjQcH5Iu/XxqiSnxAytU7b9mbEFemfzUIZ0QHFzKO4nRMgEeLKJYEJO/fMZRdB X-Received: from rananta-virt.c.googlers.com ([fda3:e722:ac3:cc00:7f:e700:c0a8:1bcc]) (user=rananta job=sendgmr) by 2002:a17:90a:284f:: with SMTP id p15mr141461pjf.1.1636766585473; Fri, 12 Nov 2021 17:23:05 -0800 (PST) Date: Sat, 13 Nov 2021 01:22:32 +0000 In-Reply-To: <20211113012234.1443009-1-rananta@google.com> Message-Id: <20211113012234.1443009-10-rananta@google.com> Mime-Version: 1.0 References: <20211113012234.1443009-1-rananta@google.com> X-Mailer: git-send-email 2.34.0.rc1.387.gb447b232ab-goog Subject: [RFC PATCH v2 09/11] tools: Import ARM SMCCC definitions From: Raghavendra Rao Ananta To: Marc Zyngier , Andrew Jones , James Morse , Alexandru Elisei , Suzuki K Poulose Cc: Paolo Bonzini , Catalin Marinas , Will Deacon , Peter Shier , Ricardo Koller , Oliver Upton , Reiji Watanabe , Jing Zhang , Raghavendra Rao Anata , linux-arm-kernel@lists.infradead.org, kvmarm@lists.cs.columbia.edu, linux-kernel@vger.kernel.org, kvm@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org Import the standard SMCCC definitions from include/linux/arm-smccc.h. Signed-off-by: Raghavendra Rao Ananta --- tools/include/linux/arm-smccc.h | 188 ++++++++++++++++++++++++++++++++ 1 file changed, 188 insertions(+) create mode 100644 tools/include/linux/arm-smccc.h diff --git a/tools/include/linux/arm-smccc.h b/tools/include/linux/arm-smccc.h new file mode 100644 index 000000000000..a11c0bbabd5b --- /dev/null +++ b/tools/include/linux/arm-smccc.h @@ -0,0 +1,188 @@ +/* SPDX-License-Identifier: GPL-2.0-only */ +/* + * Copyright (c) 2015, Linaro Limited + */ +#ifndef __LINUX_ARM_SMCCC_H +#define __LINUX_ARM_SMCCC_H + +#include + +/* + * This file provides common defines for ARM SMC Calling Convention as + * specified in + * https://developer.arm.com/docs/den0028/latest + * + * This code is up-to-date with version DEN 0028 C + */ + +#define ARM_SMCCC_STD_CALL _AC(0,U) +#define ARM_SMCCC_FAST_CALL _AC(1,U) +#define ARM_SMCCC_TYPE_SHIFT 31 + +#define ARM_SMCCC_SMC_32 0 +#define ARM_SMCCC_SMC_64 1 +#define ARM_SMCCC_CALL_CONV_SHIFT 30 + +#define ARM_SMCCC_OWNER_MASK 0x3F +#define ARM_SMCCC_OWNER_SHIFT 24 + +#define ARM_SMCCC_FUNC_MASK 0xFFFF + +#define ARM_SMCCC_IS_FAST_CALL(smc_val) \ + ((smc_val) & (ARM_SMCCC_FAST_CALL << ARM_SMCCC_TYPE_SHIFT)) +#define ARM_SMCCC_IS_64(smc_val) \ + ((smc_val) & (ARM_SMCCC_SMC_64 << ARM_SMCCC_CALL_CONV_SHIFT)) +#define ARM_SMCCC_FUNC_NUM(smc_val) ((smc_val) & ARM_SMCCC_FUNC_MASK) +#define ARM_SMCCC_OWNER_NUM(smc_val) \ + (((smc_val) >> ARM_SMCCC_OWNER_SHIFT) & ARM_SMCCC_OWNER_MASK) + +#define ARM_SMCCC_CALL_VAL(type, calling_convention, owner, func_num) \ + (((type) << ARM_SMCCC_TYPE_SHIFT) | \ + ((calling_convention) << ARM_SMCCC_CALL_CONV_SHIFT) | \ + (((owner) & ARM_SMCCC_OWNER_MASK) << ARM_SMCCC_OWNER_SHIFT) | \ + ((func_num) & ARM_SMCCC_FUNC_MASK)) + +#define ARM_SMCCC_OWNER_ARCH 0 +#define ARM_SMCCC_OWNER_CPU 1 +#define ARM_SMCCC_OWNER_SIP 2 +#define ARM_SMCCC_OWNER_OEM 3 +#define ARM_SMCCC_OWNER_STANDARD 4 +#define ARM_SMCCC_OWNER_STANDARD_HYP 5 +#define ARM_SMCCC_OWNER_VENDOR_HYP 6 +#define ARM_SMCCC_OWNER_TRUSTED_APP 48 +#define ARM_SMCCC_OWNER_TRUSTED_APP_END 49 +#define ARM_SMCCC_OWNER_TRUSTED_OS 50 +#define ARM_SMCCC_OWNER_TRUSTED_OS_END 63 + +#define ARM_SMCCC_FUNC_QUERY_CALL_UID 0xff01 + +#define ARM_SMCCC_QUIRK_NONE 0 +#define ARM_SMCCC_QUIRK_QCOM_A6 1 /* Save/restore register a6 */ + +#define ARM_SMCCC_VERSION_1_0 0x10000 +#define ARM_SMCCC_VERSION_1_1 0x10001 
+#define ARM_SMCCC_VERSION_1_2 0x10002 +#define ARM_SMCCC_VERSION_1_3 0x10003 + +#define ARM_SMCCC_1_3_SVE_HINT 0x10000 + +#define ARM_SMCCC_VERSION_FUNC_ID \ + ARM_SMCCC_CALL_VAL(ARM_SMCCC_FAST_CALL, \ + ARM_SMCCC_SMC_32, \ + 0, 0) + +#define ARM_SMCCC_ARCH_FEATURES_FUNC_ID \ + ARM_SMCCC_CALL_VAL(ARM_SMCCC_FAST_CALL, \ + ARM_SMCCC_SMC_32, \ + 0, 1) + +#define ARM_SMCCC_ARCH_SOC_ID \ + ARM_SMCCC_CALL_VAL(ARM_SMCCC_FAST_CALL, \ + ARM_SMCCC_SMC_32, \ + 0, 2) + +#define ARM_SMCCC_ARCH_WORKAROUND_1 \ + ARM_SMCCC_CALL_VAL(ARM_SMCCC_FAST_CALL, \ + ARM_SMCCC_SMC_32, \ + 0, 0x8000) + +#define ARM_SMCCC_ARCH_WORKAROUND_2 \ + ARM_SMCCC_CALL_VAL(ARM_SMCCC_FAST_CALL, \ + ARM_SMCCC_SMC_32, \ + 0, 0x7fff) + +#define ARM_SMCCC_VENDOR_HYP_CALL_UID_FUNC_ID \ + ARM_SMCCC_CALL_VAL(ARM_SMCCC_FAST_CALL, \ + ARM_SMCCC_SMC_32, \ + ARM_SMCCC_OWNER_VENDOR_HYP, \ + ARM_SMCCC_FUNC_QUERY_CALL_UID) + +/* KVM UID value: 28b46fb6-2ec5-11e9-a9ca-4b564d003a74 */ +#define ARM_SMCCC_VENDOR_HYP_UID_KVM_REG_0 0xb66fb428U +#define ARM_SMCCC_VENDOR_HYP_UID_KVM_REG_1 0xe911c52eU +#define ARM_SMCCC_VENDOR_HYP_UID_KVM_REG_2 0x564bcaa9U +#define ARM_SMCCC_VENDOR_HYP_UID_KVM_REG_3 0x743a004dU + +/* KVM "vendor specific" services */ +#define ARM_SMCCC_KVM_FUNC_FEATURES 0 +#define ARM_SMCCC_KVM_FUNC_PTP 1 +#define ARM_SMCCC_KVM_FUNC_FEATURES_2 127 +#define ARM_SMCCC_KVM_NUM_FUNCS 128 + +#define ARM_SMCCC_VENDOR_HYP_KVM_FEATURES_FUNC_ID \ + ARM_SMCCC_CALL_VAL(ARM_SMCCC_FAST_CALL, \ + ARM_SMCCC_SMC_32, \ + ARM_SMCCC_OWNER_VENDOR_HYP, \ + ARM_SMCCC_KVM_FUNC_FEATURES) + +#define SMCCC_ARCH_WORKAROUND_RET_UNAFFECTED 1 + +/* + * ptp_kvm is a feature used for time sync between vm and host. + * ptp_kvm module in guest kernel will get service from host using + * this hypercall ID. + */ +#define ARM_SMCCC_VENDOR_HYP_KVM_PTP_FUNC_ID \ + ARM_SMCCC_CALL_VAL(ARM_SMCCC_FAST_CALL, \ + ARM_SMCCC_SMC_32, \ + ARM_SMCCC_OWNER_VENDOR_HYP, \ + ARM_SMCCC_KVM_FUNC_PTP) + +/* ptp_kvm counter type ID */ +#define KVM_PTP_VIRT_COUNTER 0 +#define KVM_PTP_PHYS_COUNTER 1 + +/* Paravirtualised time calls (defined by ARM DEN0057A) */ +#define ARM_SMCCC_HV_PV_TIME_FEATURES \ + ARM_SMCCC_CALL_VAL(ARM_SMCCC_FAST_CALL, \ + ARM_SMCCC_SMC_64, \ + ARM_SMCCC_OWNER_STANDARD_HYP, \ + 0x20) + +#define ARM_SMCCC_HV_PV_TIME_ST \ + ARM_SMCCC_CALL_VAL(ARM_SMCCC_FAST_CALL, \ + ARM_SMCCC_SMC_64, \ + ARM_SMCCC_OWNER_STANDARD_HYP, \ + 0x21) + +/* TRNG entropy source calls (defined by ARM DEN0098) */ +#define ARM_SMCCC_TRNG_VERSION \ + ARM_SMCCC_CALL_VAL(ARM_SMCCC_FAST_CALL, \ + ARM_SMCCC_SMC_32, \ + ARM_SMCCC_OWNER_STANDARD, \ + 0x50) + +#define ARM_SMCCC_TRNG_FEATURES \ + ARM_SMCCC_CALL_VAL(ARM_SMCCC_FAST_CALL, \ + ARM_SMCCC_SMC_32, \ + ARM_SMCCC_OWNER_STANDARD, \ + 0x51) + +#define ARM_SMCCC_TRNG_GET_UUID \ + ARM_SMCCC_CALL_VAL(ARM_SMCCC_FAST_CALL, \ + ARM_SMCCC_SMC_32, \ + ARM_SMCCC_OWNER_STANDARD, \ + 0x52) + +#define ARM_SMCCC_TRNG_RND32 \ + ARM_SMCCC_CALL_VAL(ARM_SMCCC_FAST_CALL, \ + ARM_SMCCC_SMC_32, \ + ARM_SMCCC_OWNER_STANDARD, \ + 0x53) + +#define ARM_SMCCC_TRNG_RND64 \ + ARM_SMCCC_CALL_VAL(ARM_SMCCC_FAST_CALL, \ + ARM_SMCCC_SMC_64, \ + ARM_SMCCC_OWNER_STANDARD, \ + 0x53) + +/* + * Return codes defined in ARM DEN 0070A + * ARM DEN 0070A is now merged/consolidated into ARM DEN 0028 C + */ +#define SMCCC_RET_SUCCESS 0 +#define SMCCC_RET_NOT_SUPPORTED -1 +#define SMCCC_RET_NOT_REQUIRED -2 +#define SMCCC_RET_INVALID_PARAMETER -3 + +#endif /*__LINUX_ARM_SMCCC_H*/ From patchwork Sat Nov 13 01:22:33 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 
Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Raghavendra Rao Ananta X-Patchwork-Id: 12617511 Date: Sat, 13 Nov 2021 01:22:33 +0000 In-Reply-To: <20211113012234.1443009-1-rananta@google.com> Message-Id: <20211113012234.1443009-11-rananta@google.com> Mime-Version: 1.0 References: <20211113012234.1443009-1-rananta@google.com> X-Mailer: git-send-email 2.34.0.rc1.387.gb447b232ab-goog Subject: [RFC PATCH v2 10/11] selftests: KVM: aarch64: Introduce hypercall ABI test From: Raghavendra Rao Ananta To: Marc Zyngier , Andrew Jones , James Morse , Alexandru Elisei , Suzuki K Poulose Cc: Paolo Bonzini , Catalin Marinas , Will Deacon , Peter Shier , Ricardo Koller , Oliver Upton , Reiji Watanabe , Jing Zhang , Raghavendra Rao Anata , linux-arm-kernel@lists.infradead.org,
kvmarm@lists.cs.columbia.edu, linux-kernel@vger.kernel.org, kvm@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org Introduce a KVM selftest to check the hypercall interface for arm64 platforms. The test validates the user-space's IOCTL interface to read/write the psuedo-firmware registers as well as its effects on the guest upon certain configurations. Signed-off-by: Raghavendra Rao Ananta --- tools/testing/selftests/kvm/.gitignore | 1 + tools/testing/selftests/kvm/Makefile | 1 + .../selftests/kvm/aarch64/hypercalls.c | 367 ++++++++++++++++++ 3 files changed, 369 insertions(+) create mode 100644 tools/testing/selftests/kvm/aarch64/hypercalls.c diff --git a/tools/testing/selftests/kvm/.gitignore b/tools/testing/selftests/kvm/.gitignore index 9d9571d19a29..f3662b10aa7f 100644 --- a/tools/testing/selftests/kvm/.gitignore +++ b/tools/testing/selftests/kvm/.gitignore @@ -2,6 +2,7 @@ /aarch64/arch_timer /aarch64/debug-exceptions /aarch64/get-reg-list +/aarch64/hypercalls /aarch64/psci_test /aarch64/vgic_init /s390x/memop diff --git a/tools/testing/selftests/kvm/Makefile b/tools/testing/selftests/kvm/Makefile index 4ef9fa47597b..4f462d7b2e40 100644 --- a/tools/testing/selftests/kvm/Makefile +++ b/tools/testing/selftests/kvm/Makefile @@ -91,6 +91,7 @@ TEST_GEN_PROGS_x86_64 += system_counter_offset_test TEST_GEN_PROGS_aarch64 += aarch64/arch_timer TEST_GEN_PROGS_aarch64 += aarch64/debug-exceptions TEST_GEN_PROGS_aarch64 += aarch64/get-reg-list +TEST_GEN_PROGS_aarch64 += aarch64/hypercalls TEST_GEN_PROGS_aarch64 += aarch64/psci_test TEST_GEN_PROGS_aarch64 += aarch64/vgic_init TEST_GEN_PROGS_aarch64 += demand_paging_test diff --git a/tools/testing/selftests/kvm/aarch64/hypercalls.c b/tools/testing/selftests/kvm/aarch64/hypercalls.c new file mode 100644 index 000000000000..c89f73109f1d --- /dev/null +++ b/tools/testing/selftests/kvm/aarch64/hypercalls.c @@ -0,0 +1,367 @@ +// SPDX-License-Identifier: GPL-2.0-only + +#include +#include +#include +#include + +#include "processor.h" + +#define FW_REG_ULIMIT_VAL(max_feat_bit) (GENMASK_ULL(max_feat_bit, 0)) + +struct kvm_fw_reg_info { + uint64_t reg; /* Register definition */ + uint64_t max_feat_bit; /* Bit that represents the upper limit of the feature-map */ +}; + +#define FW_REG_INFO(r, bit_max) \ + { \ + .reg = r, \ + .max_feat_bit = bit_max, \ + } + +static const struct kvm_fw_reg_info fw_reg_info[] = { + FW_REG_INFO(KVM_REG_ARM_STD_BMAP, KVM_REG_ARM_STD_BMAP_BIT_MAX), + FW_REG_INFO(KVM_REG_ARM_STD_HYP_BMAP, KVM_REG_ARM_STD_HYP_BMAP_BIT_MAX), + FW_REG_INFO(KVM_REG_ARM_VENDOR_HYP_BMAP, KVM_REG_ARM_VENDOR_HYP_BMAP_BIT_MAX), +}; + +enum test_stage { + TEST_STAGE_REG_IFACE, + TEST_STAGE_HVC_IFACE_FEAT_DISABLED, + TEST_STAGE_HVC_IFACE_FEAT_ENABLED, + TEST_STAGE_END, +}; + +static int stage; + +struct test_hvc_info { + uint32_t func_id; + int64_t arg0; + + void (*test_hvc_disabled)(const struct test_hvc_info *hc_info, + struct arm_smccc_res *res); + void (*test_hvc_enabled)(const struct test_hvc_info *hc_info, + struct arm_smccc_res *res); +}; + +#define TEST_HVC_INFO(f, a0, test_disabled, test_enabled) \ + { \ + .func_id = f, \ + .arg0 = a0, \ + .test_hvc_disabled = test_disabled, \ + .test_hvc_enabled = test_enabled, \ + } + +static void +test_ptp_feat_hvc_disabled(const struct test_hvc_info *hc_info, struct arm_smccc_res *res) +{ + GUEST_ASSERT_3((res->a0 & BIT(ARM_SMCCC_KVM_FUNC_PTP)) == 0, + res->a0, hc_info->func_id, hc_info->arg0); +} + +static void +test_ptp_feat_hvc_enabled(const struct test_hvc_info *hc_info, struct 
arm_smccc_res *res) +{ + GUEST_ASSERT_3((res->a0 & BIT(ARM_SMCCC_KVM_FUNC_PTP)) != 0, + res->a0, hc_info->func_id, hc_info->arg0); +} + +static const struct test_hvc_info hvc_info[] = { + /* KVM_REG_ARM_STD_BMAP: KVM_REG_ARM_STD_BIT_TRNG_V1_0 */ + TEST_HVC_INFO(ARM_SMCCC_TRNG_VERSION, 0, NULL, NULL), + TEST_HVC_INFO(ARM_SMCCC_TRNG_FEATURES, ARM_SMCCC_TRNG_RND64, NULL, NULL), + TEST_HVC_INFO(ARM_SMCCC_TRNG_GET_UUID, 0, NULL, NULL), + TEST_HVC_INFO(ARM_SMCCC_TRNG_RND32, 0, NULL, NULL), + TEST_HVC_INFO(ARM_SMCCC_TRNG_RND64, 0, NULL, NULL), + + /* KVM_REG_ARM_STD_HYP_BMAP: KVM_REG_ARM_STD_HYP_BIT_PV_TIME */ + TEST_HVC_INFO(ARM_SMCCC_ARCH_FEATURES_FUNC_ID, + ARM_SMCCC_HV_PV_TIME_FEATURES, NULL, NULL), + TEST_HVC_INFO(ARM_SMCCC_HV_PV_TIME_FEATURES, + ARM_SMCCC_HV_PV_TIME_ST, NULL, NULL), + TEST_HVC_INFO(ARM_SMCCC_HV_PV_TIME_ST, 0, NULL, NULL), + + /* KVM_REG_ARM_VENDOR_HYP_BMAP: KVM_REG_ARM_VENDOR_HYP_BIT_PTP */ + TEST_HVC_INFO(ARM_SMCCC_VENDOR_HYP_KVM_FEATURES_FUNC_ID, + ARM_SMCCC_VENDOR_HYP_KVM_PTP_FUNC_ID, + test_ptp_feat_hvc_disabled, test_ptp_feat_hvc_enabled), + TEST_HVC_INFO(ARM_SMCCC_VENDOR_HYP_KVM_PTP_FUNC_ID, + KVM_PTP_VIRT_COUNTER, NULL, NULL), +}; + +static void guest_test_hvc(int stage) +{ + unsigned int i; + struct arm_smccc_res res; + + for (i = 0; i < ARRAY_SIZE(hvc_info); i++) { + const struct test_hvc_info *hc_info = &hvc_info[i]; + + memset(&res, 0, sizeof(res)); + smccc_hvc(hc_info->func_id, hc_info->arg0, 0, 0, 0, 0, 0, 0, &res); + + switch (stage) { + case TEST_STAGE_HVC_IFACE_FEAT_DISABLED: + if (hc_info->test_hvc_disabled) + hc_info->test_hvc_disabled(hc_info, &res); + else + GUEST_ASSERT_3(res.a0 == SMCCC_RET_NOT_SUPPORTED, + res.a0, hc_info->func_id, hc_info->arg0); + break; + case TEST_STAGE_HVC_IFACE_FEAT_ENABLED: + if (hc_info->test_hvc_enabled) + hc_info->test_hvc_enabled(hc_info, &res); + else + GUEST_ASSERT_3(res.a0 != SMCCC_RET_NOT_SUPPORTED, + res.a0, hc_info->func_id, hc_info->arg0); + break; + default: + GUEST_ASSERT_1(0, stage); + } + } +} + +static void guest_code(void) +{ + while (stage != TEST_STAGE_END) { + switch (stage) { + case TEST_STAGE_REG_IFACE: + break; + case TEST_STAGE_HVC_IFACE_FEAT_DISABLED: + case TEST_STAGE_HVC_IFACE_FEAT_ENABLED: + guest_test_hvc(stage); + break; + default: + GUEST_ASSERT_1(0, stage); + } + + GUEST_SYNC(stage); + } + + GUEST_DONE(); +} + +static int set_fw_reg(struct kvm_vm *vm, uint64_t id, uint64_t val) +{ + struct kvm_one_reg reg = { + .id = KVM_REG_ARM_FW_REG(id), + .addr = (uint64_t)&val, + }; + + return _vcpu_ioctl(vm, 0, KVM_SET_ONE_REG, ®); +} + +static void get_fw_reg(struct kvm_vm *vm, uint64_t id, uint64_t *addr) +{ + struct kvm_one_reg reg = { + .id = KVM_REG_ARM_FW_REG(id), + .addr = (uint64_t)addr, + }; + + return vcpu_ioctl(vm, 0, KVM_GET_ONE_REG, ®); +} + +struct st_time { + uint32_t rev; + uint32_t attr; + uint64_t st_time; +}; + +#define STEAL_TIME_SIZE ((sizeof(struct st_time) + 63) & ~63) +#define ST_GPA_BASE (1 << 30) + +static void steal_time_init(struct kvm_vm *vm) +{ + uint64_t st_ipa = (ulong)ST_GPA_BASE; + unsigned int gpages; + struct kvm_device_attr dev = { + .group = KVM_ARM_VCPU_PVTIME_CTRL, + .attr = KVM_ARM_VCPU_PVTIME_IPA, + .addr = (uint64_t)&st_ipa, + }; + + gpages = vm_calc_num_guest_pages(VM_MODE_DEFAULT, STEAL_TIME_SIZE); + vm_userspace_mem_region_add(vm, VM_MEM_SRC_ANONYMOUS, ST_GPA_BASE, 1, gpages, 0); + + vcpu_ioctl(vm, 0, KVM_SET_DEVICE_ATTR, &dev); +} + +static void test_fw_regs_first_read(struct kvm_vm *vm) +{ + uint64_t val; + unsigned int i; + + for (i = 0; i < 
ARRAY_SIZE(fw_reg_info); i++) { + const struct kvm_fw_reg_info *reg_info = &fw_reg_info[i]; + + /* First read should be an upper limit of the features supported */ + get_fw_reg(vm, reg_info->reg, &val); + TEST_ASSERT(val == FW_REG_ULIMIT_VAL(reg_info->max_feat_bit), + "Expected all the features to be set for reg: 0x%lx; expected: 0x%llx; read: 0x%lx\n", + reg_info->reg, GENMASK_ULL(reg_info->max_feat_bit, 0), val); + } +} + +static void test_fw_regs_before_vm_start(struct kvm_vm *vm) +{ + uint64_t val; + unsigned int i; + int ret; + + for (i = 0; i < ARRAY_SIZE(fw_reg_info); i++) { + const struct kvm_fw_reg_info *reg_info = &fw_reg_info[i]; + + /* Test 'write' by disabling all the features of the register map */ + ret = set_fw_reg(vm, reg_info->reg, 0); + TEST_ASSERT(ret == 0, + "Failed to clear all the features of reg: 0x%lx; ret: %d\n", + reg_info->reg, errno); + + get_fw_reg(vm, reg_info->reg, &val); + TEST_ASSERT(val == 0, + "Expected all the features to be cleared for reg: 0x%lx\n", reg_info->reg); + + /* + * Test enabling a feature that's not supported. + * Avoid this check if all the bits are occupied. + */ + if (reg_info->max_feat_bit < 63) { + ret = set_fw_reg(vm, reg_info->reg, BIT(reg_info->max_feat_bit + 1)); + TEST_ASSERT(ret != 0 && errno == EINVAL, + "Unexpected behavior or return value (%d) while setting an unsupported feature for reg: 0x%lx\n", + errno, reg_info->reg); + } + } +} + +static void test_fw_regs_after_vm_start(struct kvm_vm *vm) +{ + uint64_t val; + unsigned int i; + int ret; + + for (i = 0; i < ARRAY_SIZE(fw_reg_info); i++) { + const struct kvm_fw_reg_info *reg_info = &fw_reg_info[i]; + + /* + * Before starting the VM, the test clears all the bits. + * Check if that's still the case. + */ + get_fw_reg(vm, reg_info->reg, &val); + TEST_ASSERT(val == 0, + "Expected all the features to be cleared for reg: 0x%lx\n", + reg_info->reg); + + /* + * Test setting the last known value. KVM should allow this + * even if VM has started running. + */ + ret = set_fw_reg(vm, reg_info->reg, 0); + TEST_ASSERT(ret == 0, + "Failed to clear all the features of reg: 0x%lx; ret: %d\n", + reg_info->reg, errno); + + /* + * Set all the features for this register again. KVM shouldn't + * allow this as the VM is running. 
+ */ + ret = set_fw_reg(vm, reg_info->reg, FW_REG_ULIMIT_VAL(reg_info->max_feat_bit)); + TEST_ASSERT(ret != 0 && errno == EBUSY, + "Unexpected behavior or return value (%d) while setting a feature while VM is running for reg: 0x%lx\n", + errno, reg_info->reg); + } +} + +static struct kvm_vm *test_vm_create(void) +{ + struct kvm_vm *vm; + struct kvm_enable_cap cap = { + .cap = KVM_CAP_ARM_HVC_FW_REG_BMAP, + }; + + vm = vm_create_default(0, 0, guest_code); + + vm_enable_cap(vm, &cap); + + ucall_init(vm, NULL); + steal_time_init(vm); + + /* Read all the registers to mark them as 'accessed' */ + test_fw_regs_first_read(vm); + + return vm; +} + +static struct kvm_vm *test_guest_stage(struct kvm_vm *vm) +{ + struct kvm_vm *ret_vm = vm; + + pr_debug("Stage: %d\n", stage); + + switch (stage) { + case TEST_STAGE_REG_IFACE: + test_fw_regs_after_vm_start(vm); + break; + case TEST_STAGE_HVC_IFACE_FEAT_DISABLED: + /* Start a new VM so that all the features are now enabled by default */ + kvm_vm_free(vm); + ret_vm = test_vm_create(); + break; + case TEST_STAGE_HVC_IFACE_FEAT_ENABLED: + break; + default: + TEST_FAIL("Unknown test stage: %d\n", stage); + } + + stage++; + sync_global_to_guest(vm, stage); + + return ret_vm; +} + +static void test_run(void) +{ + struct kvm_vm *vm; + struct ucall uc; + bool guest_done = false; + + vm = test_vm_create(); + + test_fw_regs_before_vm_start(vm); + + while (!guest_done) { + vcpu_run(vm, 0); + + switch (get_ucall(vm, 0, &uc)) { + case UCALL_SYNC: + vm = test_guest_stage(vm); + break; + case UCALL_DONE: + guest_done = true; + break; + case UCALL_ABORT: + TEST_FAIL("%s at %s:%ld\n\tvalues: 0x%lx, %lu; %lu, stage: %u", + (const char *)uc.args[0], __FILE__, uc.args[1], + uc.args[2], uc.args[3], uc.args[4], stage); + break; + default: + TEST_FAIL("Unexpected guest exit\n"); + } + } + + kvm_vm_free(vm); +} + +int main(void) +{ + setbuf(stdout, NULL); + + if (kvm_check_cap(KVM_CAP_ARM_HVC_FW_REG_BMAP) != 1) { + print_skip("ARM64 fw registers bitmap not supported\n"); + exit(KSFT_SKIP); + } + + test_run(); + return 0; +} From patchwork Sat Nov 13 01:22:34 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Raghavendra Rao Ananta X-Patchwork-Id: 12617513 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 5C4A8C433FE for ; Sat, 13 Nov 2021 01:23:34 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 4714460F55 for ; Sat, 13 Nov 2021 01:23:34 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S235829AbhKMB0Y (ORCPT ); Fri, 12 Nov 2021 20:26:24 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:47770 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S235837AbhKMB0C (ORCPT ); Fri, 12 Nov 2021 20:26:02 -0500 Received: from mail-pf1-x449.google.com (mail-pf1-x449.google.com [IPv6:2607:f8b0:4864:20::449]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id E43D1C061203 for ; Fri, 12 Nov 2021 17:23:10 -0800 (PST) Received: by mail-pf1-x449.google.com with SMTP id l8-20020a056a0016c800b0049ffee8cebfso6547930pfc.20 for ; Fri, 12 Nov 2021 17:23:10 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20210112; 
Date: Sat, 13 Nov 2021 01:22:34 +0000 In-Reply-To: <20211113012234.1443009-1-rananta@google.com> Message-Id: <20211113012234.1443009-12-rananta@google.com> Mime-Version: 1.0 References: <20211113012234.1443009-1-rananta@google.com> X-Mailer: git-send-email 2.34.0.rc1.387.gb447b232ab-goog Subject: [RFC PATCH v2 11/11] selftests: KVM: aarch64: Add the bitmap firmware registers to get-reg-list From: Raghavendra Rao Ananta To: Marc Zyngier , Andrew Jones , James Morse , Alexandru Elisei , Suzuki K Poulose Cc: Paolo Bonzini , Catalin Marinas , Will Deacon , Peter Shier , Ricardo Koller , Oliver Upton , Reiji Watanabe , Jing Zhang , Raghavendra Rao Anata , linux-arm-kernel@lists.infradead.org, kvmarm@lists.cs.columbia.edu, linux-kernel@vger.kernel.org, kvm@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org

The bitmap firmware pseudo-registers need special handling, such as enabling the capability KVM_CAP_ARM_HVC_FW_REG_BMAP. Since get-reg-list does not yet support enabling a capability, add an 'enable_capability' field to 'struct reg_sublist' to incorporate this. Also, to avoid disturbing the existing configurations, create a new vcpu_config to hold these bitmap firmware registers.
Signed-off-by: Raghavendra Rao Ananta --- .../selftests/kvm/aarch64/get-reg-list.c | 35 +++++++++++++++++++ 1 file changed, 35 insertions(+) diff --git a/tools/testing/selftests/kvm/aarch64/get-reg-list.c b/tools/testing/selftests/kvm/aarch64/get-reg-list.c index cc898181faab..7479d0ae501e 100644 --- a/tools/testing/selftests/kvm/aarch64/get-reg-list.c +++ b/tools/testing/selftests/kvm/aarch64/get-reg-list.c @@ -40,6 +40,7 @@ static __u64 *blessed_reg, blessed_n; struct reg_sublist { const char *name; long capability; + long enable_capability; int feature; bool finalize; __u64 *regs; @@ -397,6 +398,19 @@ static void check_supported(struct vcpu_config *c) } } +static void enable_capabilities(struct kvm_vm *vm, struct vcpu_config *c) +{ + struct reg_sublist *s; + + for_each_sublist(c, s) { + if (s->enable_capability) { + struct kvm_enable_cap cap = {.cap = s->enable_capability}; + + vm_enable_cap(vm, &cap); + } + } +} + static bool print_list; static bool print_filtered; static bool fixup_core_regs; @@ -414,6 +428,7 @@ static void run_test(struct vcpu_config *c) vm = vm_create(VM_MODE_DEFAULT, DEFAULT_GUEST_PHY_PAGES, O_RDWR); prepare_vcpu_init(c, &init); aarch64_vcpu_add_default(vm, 0, &init, NULL); + enable_capabilities(vm, c); finalize_vcpu(vm, 0, c); reg_list = vcpu_get_reg_list(vm, 0); @@ -1014,6 +1029,12 @@ static __u64 sve_rejects_set[] = { KVM_REG_ARM64_SVE_VLS, }; +static __u64 fw_reg_bmap_set[] = { + KVM_REG_ARM_FW_REG(3), /* KVM_REG_ARM_STD_BMAP */ + KVM_REG_ARM_FW_REG(4), /* KVM_REG_ARM_STD_HYP_BMAP */ + KVM_REG_ARM_FW_REG(5), /* KVM_REG_ARM_VENDOR_HYP_BMAP */ +}; + #define BASE_SUBLIST \ { "base", .regs = base_regs, .regs_n = ARRAY_SIZE(base_regs), } #define VREGS_SUBLIST \ @@ -1025,6 +1046,10 @@ static __u64 sve_rejects_set[] = { { "sve", .capability = KVM_CAP_ARM_SVE, .feature = KVM_ARM_VCPU_SVE, .finalize = true, \ .regs = sve_regs, .regs_n = ARRAY_SIZE(sve_regs), \ .rejects_set = sve_rejects_set, .rejects_set_n = ARRAY_SIZE(sve_rejects_set), } +#define FW_REG_BMAP_SUBLIST \ + { "fw_reg_bmap", .regs = fw_reg_bmap_set, .regs_n = ARRAY_SIZE(fw_reg_bmap_set), \ + .capability = KVM_CAP_ARM_HVC_FW_REG_BMAP, \ + .enable_capability = KVM_CAP_ARM_HVC_FW_REG_BMAP, } static struct vcpu_config vregs_config = { .sublists = { @@ -1057,10 +1082,20 @@ static struct vcpu_config sve_pmu_config = { }, }; +static struct vcpu_config vregs_fw_regs_bmap_config = { + .sublists = { + BASE_SUBLIST, + VREGS_SUBLIST, + FW_REG_BMAP_SUBLIST, + {0}, + }, +}; + static struct vcpu_config *vcpu_configs[] = { &vregs_config, &vregs_pmu_config, &sve_config, &sve_pmu_config, + &vregs_fw_regs_bmap_config, }; static int vcpu_configs_n = ARRAY_SIZE(vcpu_configs);
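Taken together, the series documents roughly the following user-space flow. The sketch below is illustrative only: it assumes kvm_fd, vm_fd and vcpu_fd have already been set up elsewhere, and the helper name and the decision to keep the full Standard-service bitmap are not part of the patches.

#include <stdint.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>

/*
 * Probe and enable the capability, then access the bitmap registers before
 * the first KVM_RUN: registers that are never touched are cleared when the
 * vCPU starts, and writes after the VM has started fail with -EBUSY.
 */
static int configure_hvc_fw_regs(int kvm_fd, int vm_fd, int vcpu_fd)
{
	struct kvm_enable_cap cap = { .cap = KVM_CAP_ARM_HVC_FW_REG_BMAP };
	uint64_t bmap;
	struct kvm_one_reg reg = {
		.id   = KVM_REG_ARM_STD_BMAP,	/* KVM_REG_ARM_FW_REG(3) */
		.addr = (uint64_t)&bmap,
	};

	if (ioctl(kvm_fd, KVM_CHECK_EXTENSION, KVM_CAP_ARM_HVC_FW_REG_BMAP) != 1)
		return -1;	/* bitmap firmware registers not supported */

	if (ioctl(vm_fd, KVM_ENABLE_CAP, &cap))
		return -1;

	/* The first read reports the upper limit of the supported features. */
	if (ioctl(vcpu_fd, KVM_GET_ONE_REG, &reg))
		return -1;

	/* Write back the desired subset; here the full bitmap is kept. */
	return ioctl(vcpu_fd, KVM_SET_ONE_REG, &reg);
}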