From patchwork Sun Apr 2 18:37:30 2023
Date: Sun, 2 Apr 2023 18:37:30 +0000
In-Reply-To: <20230402183735.3011540-1-jingzhangos@google.com>
References: <20230402183735.3011540-1-jingzhangos@google.com>
Message-ID: <20230402183735.3011540-2-jingzhangos@google.com>
Subject: [PATCH v5 1/6] KVM: arm64: Move CPU ID feature registers emulation into a separate file
From: Jing Zhang
To: KVM, KVMARM, ARMLinux, Marc Zyngier, Oliver Upton
Cc: Will Deacon, Paolo Bonzini, James Morse, Alexandru Elisei,
 Suzuki K Poulose, Fuad Tabba, Reiji Watanabe, Raghavendra Rao Ananta,
 Jing Zhang

Create a new file, id_regs.c, for the CPU ID feature register emulation
code, which is moved out of sys_regs.c; tweak the sys_regs code
accordingly.

No functional change intended.

Signed-off-by: Jing Zhang
---
 arch/arm64/kvm/Makefile   |   2 +-
 arch/arm64/kvm/id_regs.c  | 496 ++++++++++++++++++++++++++++++++++++++
 arch/arm64/kvm/sys_regs.c | 461 +++--------------------------------
 arch/arm64/kvm/sys_regs.h |  27 +++
 4 files changed, 559 insertions(+), 427 deletions(-)
 create mode 100644 arch/arm64/kvm/id_regs.c

diff --git a/arch/arm64/kvm/Makefile b/arch/arm64/kvm/Makefile
index c0c050e53157..a6a315fcd81e 100644
--- a/arch/arm64/kvm/Makefile
+++ b/arch/arm64/kvm/Makefile
@@ -13,7 +13,7 @@ obj-$(CONFIG_KVM) += hyp/
 kvm-y += arm.o mmu.o mmio.o psci.o hypercalls.o pvtime.o \
	 inject_fault.o va_layout.o handle_exit.o \
	 guest.o debug.o reset.o sys_regs.o stacktrace.o \
-	 vgic-sys-reg-v3.o fpsimd.o pkvm.o \
+	 vgic-sys-reg-v3.o fpsimd.o pkvm.o id_regs.o \
	 arch_timer.o trng.o vmid.o emulate-nested.o nested.o \
	 vgic/vgic.o vgic/vgic-init.o \
	 vgic/vgic-irqfd.o vgic/vgic-v2.o \
diff --git a/arch/arm64/kvm/id_regs.c b/arch/arm64/kvm/id_regs.c
new file mode 100644
index 000000000000..638c0f8e4cd2
--- /dev/null
+++ b/arch/arm64/kvm/id_regs.c
@@ -0,0 +1,496 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Copyright (C) 2023 - Google LLC
+ * Author: Jing Zhang
+ *
+ * Moved from arch/arm64/kvm/sys_regs.c
+ * Copyright (C) 2012,2013 - ARM Ltd
+ * Author: Marc Zyngier
+ */
+
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+
+#include "sys_regs.h"
+
+static u8 vcpu_pmuver(const struct kvm_vcpu *vcpu)
+{
+	if (kvm_vcpu_has_pmu(vcpu))
+		return vcpu->kvm->arch.dfr0_pmuver.imp;
+
+	return vcpu->kvm->arch.dfr0_pmuver.unimp;
+}
+
+static u8 perfmon_to_pmuver(u8 perfmon)
+{
+	switch (perfmon) {
+	case ID_DFR0_EL1_PerfMon_PMUv3:
+		return ID_AA64DFR0_EL1_PMUVer_IMP;
+	case ID_DFR0_EL1_PerfMon_IMPDEF:
+		return ID_AA64DFR0_EL1_PMUVer_IMP_DEF;
+	default:
+		/* Anything ARMv8.1+ and NI have the same value. For now.
*/ + return perfmon; + } +} + +static u8 pmuver_to_perfmon(u8 pmuver) +{ + switch (pmuver) { + case ID_AA64DFR0_EL1_PMUVer_IMP: + return ID_DFR0_EL1_PerfMon_PMUv3; + case ID_AA64DFR0_EL1_PMUVer_IMP_DEF: + return ID_DFR0_EL1_PerfMon_IMPDEF; + default: + /* Anything ARMv8.1+ and NI have the same value. For now. */ + return pmuver; + } +} + +/* Read a sanitised cpufeature ID register by sys_reg_desc */ +static u64 read_id_reg(const struct kvm_vcpu *vcpu, struct sys_reg_desc const *r) +{ + u32 id = reg_to_encoding(r); + u64 val; + + if (sysreg_visible_as_raz(vcpu, r)) + return 0; + + val = read_sanitised_ftr_reg(id); + + switch (id) { + case SYS_ID_AA64PFR0_EL1: + if (!vcpu_has_sve(vcpu)) + val &= ~ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_SVE); + val &= ~ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_AMU); + val &= ~ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_CSV2); + val |= FIELD_PREP(ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_CSV2), + (u64)vcpu->kvm->arch.pfr0_csv2); + val &= ~ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_CSV3); + val |= FIELD_PREP(ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_CSV3), + (u64)vcpu->kvm->arch.pfr0_csv3); + if (kvm_vgic_global_state.type == VGIC_V3) { + val &= ~ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_GIC); + val |= FIELD_PREP(ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_GIC), 1); + } + break; + case SYS_ID_AA64PFR1_EL1: + if (!kvm_has_mte(vcpu->kvm)) + val &= ~ARM64_FEATURE_MASK(ID_AA64PFR1_EL1_MTE); + + val &= ~ARM64_FEATURE_MASK(ID_AA64PFR1_EL1_SME); + break; + case SYS_ID_AA64ISAR1_EL1: + if (!vcpu_has_ptrauth(vcpu)) + val &= ~(ARM64_FEATURE_MASK(ID_AA64ISAR1_EL1_APA) | + ARM64_FEATURE_MASK(ID_AA64ISAR1_EL1_API) | + ARM64_FEATURE_MASK(ID_AA64ISAR1_EL1_GPA) | + ARM64_FEATURE_MASK(ID_AA64ISAR1_EL1_GPI)); + break; + case SYS_ID_AA64ISAR2_EL1: + if (!vcpu_has_ptrauth(vcpu)) + val &= ~(ARM64_FEATURE_MASK(ID_AA64ISAR2_EL1_APA3) | + ARM64_FEATURE_MASK(ID_AA64ISAR2_EL1_GPA3)); + if (!cpus_have_final_cap(ARM64_HAS_WFXT)) + val &= ~ARM64_FEATURE_MASK(ID_AA64ISAR2_EL1_WFxT); + break; + case SYS_ID_AA64DFR0_EL1: + /* Limit debug to ARMv8.0 */ + val &= ~ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_DebugVer); + val |= FIELD_PREP(ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_DebugVer), 6); + /* Set PMUver to the required version */ + val &= ~ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_PMUVer); + val |= FIELD_PREP(ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_PMUVer), + vcpu_pmuver(vcpu)); + /* Hide SPE from guests */ + val &= ~ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_PMSVer); + break; + case SYS_ID_DFR0_EL1: + val &= ~ARM64_FEATURE_MASK(ID_DFR0_EL1_PerfMon); + val |= FIELD_PREP(ARM64_FEATURE_MASK(ID_DFR0_EL1_PerfMon), + pmuver_to_perfmon(vcpu_pmuver(vcpu))); + break; + case SYS_ID_AA64MMFR2_EL1: + val &= ~ID_AA64MMFR2_EL1_CCIDX_MASK; + break; + case SYS_ID_MMFR4_EL1: + val &= ~ARM64_FEATURE_MASK(ID_MMFR4_EL1_CCIDX); + break; + } + + return val; +} + +/* cpufeature ID register access trap handlers */ + +static bool access_id_reg(struct kvm_vcpu *vcpu, + struct sys_reg_params *p, + const struct sys_reg_desc *r) +{ + if (p->is_write) + return write_to_read_only(vcpu, p, r); + + p->regval = read_id_reg(vcpu, r); + if (vcpu_has_nv(vcpu)) + access_nested_id_reg(vcpu, p, r); + + return true; +} + +/* + * cpufeature ID register user accessors + * + * For now, these registers are immutable for userspace, so no values + * are stored, and for set_id_reg() we don't allow the effective value + * to be changed. 
+ */ +static int get_id_reg(struct kvm_vcpu *vcpu, const struct sys_reg_desc *rd, + u64 *val) +{ + *val = read_id_reg(vcpu, rd); + return 0; +} + +static int set_id_reg(struct kvm_vcpu *vcpu, const struct sys_reg_desc *rd, + u64 val) +{ + /* This is what we mean by invariant: you can't change it. */ + if (val != read_id_reg(vcpu, rd)) + return -EINVAL; + + return 0; +} + +static unsigned int id_visibility(const struct kvm_vcpu *vcpu, + const struct sys_reg_desc *r) +{ + u32 id = reg_to_encoding(r); + + switch (id) { + case SYS_ID_AA64ZFR0_EL1: + if (!vcpu_has_sve(vcpu)) + return REG_RAZ; + break; + } + + return 0; +} + +static unsigned int aa32_id_visibility(const struct kvm_vcpu *vcpu, + const struct sys_reg_desc *r) +{ + /* + * AArch32 ID registers are UNKNOWN if AArch32 isn't implemented at any + * EL. Promote to RAZ/WI in order to guarantee consistency between + * systems. + */ + if (!kvm_supports_32bit_el0()) + return REG_RAZ | REG_USER_WI; + + return id_visibility(vcpu, r); +} + +static int set_id_aa64pfr0_el1(struct kvm_vcpu *vcpu, + const struct sys_reg_desc *rd, + u64 val) +{ + u8 csv2, csv3; + + /* + * Allow AA64PFR0_EL1.CSV2 to be set from userspace as long as + * it doesn't promise more than what is actually provided (the + * guest could otherwise be covered in ectoplasmic residue). + */ + csv2 = cpuid_feature_extract_unsigned_field(val, ID_AA64PFR0_EL1_CSV2_SHIFT); + if (csv2 > 1 || + (csv2 && arm64_get_spectre_v2_state() != SPECTRE_UNAFFECTED)) + return -EINVAL; + + /* Same thing for CSV3 */ + csv3 = cpuid_feature_extract_unsigned_field(val, ID_AA64PFR0_EL1_CSV3_SHIFT); + if (csv3 > 1 || + (csv3 && arm64_get_meltdown_state() != SPECTRE_UNAFFECTED)) + return -EINVAL; + + /* We can only differ with CSV[23], and anything else is an error */ + val ^= read_id_reg(vcpu, rd); + val &= ~(ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_CSV2) | + ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_CSV3)); + if (val) + return -EINVAL; + + vcpu->kvm->arch.pfr0_csv2 = csv2; + vcpu->kvm->arch.pfr0_csv3 = csv3; + + return 0; +} + +static int set_id_aa64dfr0_el1(struct kvm_vcpu *vcpu, + const struct sys_reg_desc *rd, + u64 val) +{ + u8 pmuver, host_pmuver; + bool valid_pmu; + + host_pmuver = kvm_arm_pmu_get_pmuver_limit(); + + /* + * Allow AA64DFR0_EL1.PMUver to be set from userspace as long + * as it doesn't promise more than what the HW gives us. We + * allow an IMPDEF PMU though, only if no PMU is supported + * (KVM backward compatibility handling). 
+ */ + pmuver = FIELD_GET(ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_PMUVer), val); + if ((pmuver != ID_AA64DFR0_EL1_PMUVer_IMP_DEF && pmuver > host_pmuver)) + return -EINVAL; + + valid_pmu = (pmuver != 0 && pmuver != ID_AA64DFR0_EL1_PMUVer_IMP_DEF); + + /* Make sure view register and PMU support do match */ + if (kvm_vcpu_has_pmu(vcpu) != valid_pmu) + return -EINVAL; + + /* We can only differ with PMUver, and anything else is an error */ + val ^= read_id_reg(vcpu, rd); + val &= ~ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_PMUVer); + if (val) + return -EINVAL; + + if (valid_pmu) + vcpu->kvm->arch.dfr0_pmuver.imp = pmuver; + else + vcpu->kvm->arch.dfr0_pmuver.unimp = pmuver; + + return 0; +} + +static int set_id_dfr0_el1(struct kvm_vcpu *vcpu, + const struct sys_reg_desc *rd, + u64 val) +{ + u8 perfmon, host_perfmon; + bool valid_pmu; + + host_perfmon = pmuver_to_perfmon(kvm_arm_pmu_get_pmuver_limit()); + + /* + * Allow DFR0_EL1.PerfMon to be set from userspace as long as + * it doesn't promise more than what the HW gives us on the + * AArch64 side (as everything is emulated with that), and + * that this is a PMUv3. + */ + perfmon = FIELD_GET(ARM64_FEATURE_MASK(ID_DFR0_EL1_PerfMon), val); + if ((perfmon != ID_DFR0_EL1_PerfMon_IMPDEF && perfmon > host_perfmon) || + (perfmon != 0 && perfmon < ID_DFR0_EL1_PerfMon_PMUv3)) + return -EINVAL; + + valid_pmu = (perfmon != 0 && perfmon != ID_DFR0_EL1_PerfMon_IMPDEF); + + /* Make sure view register and PMU support do match */ + if (kvm_vcpu_has_pmu(vcpu) != valid_pmu) + return -EINVAL; + + /* We can only differ with PerfMon, and anything else is an error */ + val ^= read_id_reg(vcpu, rd); + val &= ~ARM64_FEATURE_MASK(ID_DFR0_EL1_PerfMon); + if (val) + return -EINVAL; + + if (valid_pmu) + vcpu->kvm->arch.dfr0_pmuver.imp = perfmon_to_pmuver(perfmon); + else + vcpu->kvm->arch.dfr0_pmuver.unimp = perfmon_to_pmuver(perfmon); + + return 0; +} + +/* sys_reg_desc initialiser for known cpufeature ID registers */ +#define ID_SANITISED(name) { \ + SYS_DESC(SYS_##name), \ + .access = access_id_reg, \ + .get_user = get_id_reg, \ + .set_user = set_id_reg, \ + .visibility = id_visibility, \ +} + +/* sys_reg_desc initialiser for known cpufeature ID registers */ +#define AA32_ID_SANITISED(name) { \ + SYS_DESC(SYS_##name), \ + .access = access_id_reg, \ + .get_user = get_id_reg, \ + .set_user = set_id_reg, \ + .visibility = aa32_id_visibility, \ +} + +/* + * sys_reg_desc initialiser for architecturally unallocated cpufeature ID + * register with encoding Op0=3, Op1=0, CRn=0, CRm=crm, Op2=op2 + * (1 <= crm < 8, 0 <= Op2 < 8). + */ +#define ID_UNALLOCATED(crm, op2) { \ + Op0(3), Op1(0), CRn(0), CRm(crm), Op2(op2), \ + .access = access_id_reg, \ + .get_user = get_id_reg, \ + .set_user = set_id_reg, \ + .visibility = raz_visibility \ +} + +/* + * sys_reg_desc initialiser for known ID registers that we hide from guests. + * For now, these are exposed just like unallocated ID regs: they appear + * RAZ for the guest. + */ +#define ID_HIDDEN(name) { \ + SYS_DESC(SYS_##name), \ + .access = access_id_reg, \ + .get_user = get_id_reg, \ + .set_user = set_id_reg, \ + .visibility = raz_visibility, \ +} + +static const struct sys_reg_desc id_reg_descs[] = { + /* + * ID regs: all ID_SANITISED() entries here must have corresponding + * entries in arm64_ftr_regs[]. 
+ */ + + /* AArch64 mappings of the AArch32 ID registers */ + /* CRm=1 */ + AA32_ID_SANITISED(ID_PFR0_EL1), + AA32_ID_SANITISED(ID_PFR1_EL1), + { SYS_DESC(SYS_ID_DFR0_EL1), .access = access_id_reg, + .get_user = get_id_reg, .set_user = set_id_dfr0_el1, + .visibility = aa32_id_visibility, }, + ID_HIDDEN(ID_AFR0_EL1), + AA32_ID_SANITISED(ID_MMFR0_EL1), + AA32_ID_SANITISED(ID_MMFR1_EL1), + AA32_ID_SANITISED(ID_MMFR2_EL1), + AA32_ID_SANITISED(ID_MMFR3_EL1), + + /* CRm=2 */ + AA32_ID_SANITISED(ID_ISAR0_EL1), + AA32_ID_SANITISED(ID_ISAR1_EL1), + AA32_ID_SANITISED(ID_ISAR2_EL1), + AA32_ID_SANITISED(ID_ISAR3_EL1), + AA32_ID_SANITISED(ID_ISAR4_EL1), + AA32_ID_SANITISED(ID_ISAR5_EL1), + AA32_ID_SANITISED(ID_MMFR4_EL1), + AA32_ID_SANITISED(ID_ISAR6_EL1), + + /* CRm=3 */ + AA32_ID_SANITISED(MVFR0_EL1), + AA32_ID_SANITISED(MVFR1_EL1), + AA32_ID_SANITISED(MVFR2_EL1), + ID_UNALLOCATED(3, 3), + AA32_ID_SANITISED(ID_PFR2_EL1), + ID_HIDDEN(ID_DFR1_EL1), + AA32_ID_SANITISED(ID_MMFR5_EL1), + ID_UNALLOCATED(3, 7), + + /* AArch64 ID registers */ + /* CRm=4 */ + { SYS_DESC(SYS_ID_AA64PFR0_EL1), .access = access_id_reg, + .get_user = get_id_reg, .set_user = set_id_aa64pfr0_el1, }, + ID_SANITISED(ID_AA64PFR1_EL1), + ID_UNALLOCATED(4, 2), + ID_UNALLOCATED(4, 3), + ID_SANITISED(ID_AA64ZFR0_EL1), + ID_HIDDEN(ID_AA64SMFR0_EL1), + ID_UNALLOCATED(4, 6), + ID_UNALLOCATED(4, 7), + + /* CRm=5 */ + { SYS_DESC(SYS_ID_AA64DFR0_EL1), .access = access_id_reg, + .get_user = get_id_reg, .set_user = set_id_aa64dfr0_el1, }, + ID_SANITISED(ID_AA64DFR1_EL1), + ID_UNALLOCATED(5, 2), + ID_UNALLOCATED(5, 3), + ID_HIDDEN(ID_AA64AFR0_EL1), + ID_HIDDEN(ID_AA64AFR1_EL1), + ID_UNALLOCATED(5, 6), + ID_UNALLOCATED(5, 7), + + /* CRm=6 */ + ID_SANITISED(ID_AA64ISAR0_EL1), + ID_SANITISED(ID_AA64ISAR1_EL1), + ID_SANITISED(ID_AA64ISAR2_EL1), + ID_UNALLOCATED(6, 3), + ID_UNALLOCATED(6, 4), + ID_UNALLOCATED(6, 5), + ID_UNALLOCATED(6, 6), + ID_UNALLOCATED(6, 7), + + /* CRm=7 */ + ID_SANITISED(ID_AA64MMFR0_EL1), + ID_SANITISED(ID_AA64MMFR1_EL1), + ID_SANITISED(ID_AA64MMFR2_EL1), + ID_UNALLOCATED(7, 3), + ID_UNALLOCATED(7, 4), + ID_UNALLOCATED(7, 5), + ID_UNALLOCATED(7, 6), + ID_UNALLOCATED(7, 7), +}; + +/** + * emulate_id_reg - Emulate a guest access to an AArch64 CPU ID feature register + * @vcpu: The VCPU pointer + * @params: Decoded system register parameters + * + * Return: true if the ID register access was successful, false otherwise. 
+ */ +int emulate_id_reg(struct kvm_vcpu *vcpu, struct sys_reg_params *params) +{ + const struct sys_reg_desc *r; + + r = find_reg(params, id_reg_descs, ARRAY_SIZE(id_reg_descs)); + + if (likely(r)) { + perform_access(vcpu, params, r); + } else { + print_sys_reg_msg(params, + "Unsupported guest id_reg access at: %lx [%08lx]\n", + *vcpu_pc(vcpu), *vcpu_cpsr(vcpu)); + kvm_inject_undefined(vcpu); + } + + return 1; +} + +int kvm_arm_get_id_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg) +{ + return kvm_sys_reg_get_user(vcpu, reg, + id_reg_descs, ARRAY_SIZE(id_reg_descs)); +} + +int kvm_arm_set_id_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg) +{ + return kvm_sys_reg_set_user(vcpu, reg, + id_reg_descs, ARRAY_SIZE(id_reg_descs)); +} + +bool kvm_arm_check_idreg_table(void) +{ + return check_sysreg_table(id_reg_descs, ARRAY_SIZE(id_reg_descs), false); +} + +int kvm_arm_walk_id_regs(struct kvm_vcpu *vcpu, u64 __user *uind) +{ + const struct sys_reg_desc *i2, *end2; + unsigned int total = 0; + int err; + + i2 = id_reg_descs; + end2 = id_reg_descs + ARRAY_SIZE(id_reg_descs); + + while (i2 != end2) { + err = walk_one_sys_reg(vcpu, i2++, &uind, &total); + if (err) + return err; + } + return total; +} diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c index be7c2598e563..40be494391fd 100644 --- a/arch/arm64/kvm/sys_regs.c +++ b/arch/arm64/kvm/sys_regs.c @@ -53,9 +53,9 @@ static bool read_from_write_only(struct kvm_vcpu *vcpu, return false; } -static bool write_to_read_only(struct kvm_vcpu *vcpu, - struct sys_reg_params *params, - const struct sys_reg_desc *r) +bool write_to_read_only(struct kvm_vcpu *vcpu, + struct sys_reg_params *params, + const struct sys_reg_desc *r) { WARN_ONCE(1, "Unexpected sys_reg write to read-only register\n"); print_sys_reg_instr(params); @@ -1159,163 +1159,11 @@ static bool access_arch_timer(struct kvm_vcpu *vcpu, return true; } -static u8 vcpu_pmuver(const struct kvm_vcpu *vcpu) -{ - if (kvm_vcpu_has_pmu(vcpu)) - return vcpu->kvm->arch.dfr0_pmuver.imp; - - return vcpu->kvm->arch.dfr0_pmuver.unimp; -} - -static u8 perfmon_to_pmuver(u8 perfmon) -{ - switch (perfmon) { - case ID_DFR0_EL1_PerfMon_PMUv3: - return ID_AA64DFR0_EL1_PMUVer_IMP; - case ID_DFR0_EL1_PerfMon_IMPDEF: - return ID_AA64DFR0_EL1_PMUVer_IMP_DEF; - default: - /* Anything ARMv8.1+ and NI have the same value. For now. */ - return perfmon; - } -} - -static u8 pmuver_to_perfmon(u8 pmuver) -{ - switch (pmuver) { - case ID_AA64DFR0_EL1_PMUVer_IMP: - return ID_DFR0_EL1_PerfMon_PMUv3; - case ID_AA64DFR0_EL1_PMUVer_IMP_DEF: - return ID_DFR0_EL1_PerfMon_IMPDEF; - default: - /* Anything ARMv8.1+ and NI have the same value. For now. 
*/ - return pmuver; - } -} - -/* Read a sanitised cpufeature ID register by sys_reg_desc */ -static u64 read_id_reg(const struct kvm_vcpu *vcpu, struct sys_reg_desc const *r) -{ - u32 id = reg_to_encoding(r); - u64 val; - - if (sysreg_visible_as_raz(vcpu, r)) - return 0; - - val = read_sanitised_ftr_reg(id); - - switch (id) { - case SYS_ID_AA64PFR0_EL1: - if (!vcpu_has_sve(vcpu)) - val &= ~ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_SVE); - val &= ~ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_AMU); - val &= ~ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_CSV2); - val |= FIELD_PREP(ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_CSV2), (u64)vcpu->kvm->arch.pfr0_csv2); - val &= ~ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_CSV3); - val |= FIELD_PREP(ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_CSV3), (u64)vcpu->kvm->arch.pfr0_csv3); - if (kvm_vgic_global_state.type == VGIC_V3) { - val &= ~ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_GIC); - val |= FIELD_PREP(ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_GIC), 1); - } - break; - case SYS_ID_AA64PFR1_EL1: - if (!kvm_has_mte(vcpu->kvm)) - val &= ~ARM64_FEATURE_MASK(ID_AA64PFR1_EL1_MTE); - - val &= ~ARM64_FEATURE_MASK(ID_AA64PFR1_EL1_SME); - break; - case SYS_ID_AA64ISAR1_EL1: - if (!vcpu_has_ptrauth(vcpu)) - val &= ~(ARM64_FEATURE_MASK(ID_AA64ISAR1_EL1_APA) | - ARM64_FEATURE_MASK(ID_AA64ISAR1_EL1_API) | - ARM64_FEATURE_MASK(ID_AA64ISAR1_EL1_GPA) | - ARM64_FEATURE_MASK(ID_AA64ISAR1_EL1_GPI)); - break; - case SYS_ID_AA64ISAR2_EL1: - if (!vcpu_has_ptrauth(vcpu)) - val &= ~(ARM64_FEATURE_MASK(ID_AA64ISAR2_EL1_APA3) | - ARM64_FEATURE_MASK(ID_AA64ISAR2_EL1_GPA3)); - if (!cpus_have_final_cap(ARM64_HAS_WFXT)) - val &= ~ARM64_FEATURE_MASK(ID_AA64ISAR2_EL1_WFxT); - break; - case SYS_ID_AA64DFR0_EL1: - /* Limit debug to ARMv8.0 */ - val &= ~ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_DebugVer); - val |= FIELD_PREP(ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_DebugVer), 6); - /* Set PMUver to the required version */ - val &= ~ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_PMUVer); - val |= FIELD_PREP(ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_PMUVer), - vcpu_pmuver(vcpu)); - /* Hide SPE from guests */ - val &= ~ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_PMSVer); - break; - case SYS_ID_DFR0_EL1: - val &= ~ARM64_FEATURE_MASK(ID_DFR0_EL1_PerfMon); - val |= FIELD_PREP(ARM64_FEATURE_MASK(ID_DFR0_EL1_PerfMon), - pmuver_to_perfmon(vcpu_pmuver(vcpu))); - break; - case SYS_ID_AA64MMFR2_EL1: - val &= ~ID_AA64MMFR2_EL1_CCIDX_MASK; - break; - case SYS_ID_MMFR4_EL1: - val &= ~ARM64_FEATURE_MASK(ID_MMFR4_EL1_CCIDX); - break; - } - - return val; -} - -static unsigned int id_visibility(const struct kvm_vcpu *vcpu, - const struct sys_reg_desc *r) -{ - u32 id = reg_to_encoding(r); - - switch (id) { - case SYS_ID_AA64ZFR0_EL1: - if (!vcpu_has_sve(vcpu)) - return REG_RAZ; - break; - } - - return 0; -} - -static unsigned int aa32_id_visibility(const struct kvm_vcpu *vcpu, - const struct sys_reg_desc *r) -{ - /* - * AArch32 ID registers are UNKNOWN if AArch32 isn't implemented at any - * EL. Promote to RAZ/WI in order to guarantee consistency between - * systems. 
- */ - if (!kvm_supports_32bit_el0()) - return REG_RAZ | REG_USER_WI; - - return id_visibility(vcpu, r); -} - -static unsigned int raz_visibility(const struct kvm_vcpu *vcpu, - const struct sys_reg_desc *r) +unsigned int raz_visibility(const struct kvm_vcpu *vcpu, const struct sys_reg_desc *r) { return REG_RAZ; } -/* cpufeature ID register access trap handlers */ - -static bool access_id_reg(struct kvm_vcpu *vcpu, - struct sys_reg_params *p, - const struct sys_reg_desc *r) -{ - if (p->is_write) - return write_to_read_only(vcpu, p, r); - - p->regval = read_id_reg(vcpu, r); - if (vcpu_has_nv(vcpu)) - access_nested_id_reg(vcpu, p, r); - - return true; -} - /* Visibility overrides for SVE-specific control registers */ static unsigned int sve_visibility(const struct kvm_vcpu *vcpu, const struct sys_reg_desc *rd) @@ -1326,144 +1174,6 @@ static unsigned int sve_visibility(const struct kvm_vcpu *vcpu, return REG_HIDDEN; } -static int set_id_aa64pfr0_el1(struct kvm_vcpu *vcpu, - const struct sys_reg_desc *rd, - u64 val) -{ - u8 csv2, csv3; - - /* - * Allow AA64PFR0_EL1.CSV2 to be set from userspace as long as - * it doesn't promise more than what is actually provided (the - * guest could otherwise be covered in ectoplasmic residue). - */ - csv2 = cpuid_feature_extract_unsigned_field(val, ID_AA64PFR0_EL1_CSV2_SHIFT); - if (csv2 > 1 || - (csv2 && arm64_get_spectre_v2_state() != SPECTRE_UNAFFECTED)) - return -EINVAL; - - /* Same thing for CSV3 */ - csv3 = cpuid_feature_extract_unsigned_field(val, ID_AA64PFR0_EL1_CSV3_SHIFT); - if (csv3 > 1 || - (csv3 && arm64_get_meltdown_state() != SPECTRE_UNAFFECTED)) - return -EINVAL; - - /* We can only differ with CSV[23], and anything else is an error */ - val ^= read_id_reg(vcpu, rd); - val &= ~(ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_CSV2) | - ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_CSV3)); - if (val) - return -EINVAL; - - vcpu->kvm->arch.pfr0_csv2 = csv2; - vcpu->kvm->arch.pfr0_csv3 = csv3; - - return 0; -} - -static int set_id_aa64dfr0_el1(struct kvm_vcpu *vcpu, - const struct sys_reg_desc *rd, - u64 val) -{ - u8 pmuver, host_pmuver; - bool valid_pmu; - - host_pmuver = kvm_arm_pmu_get_pmuver_limit(); - - /* - * Allow AA64DFR0_EL1.PMUver to be set from userspace as long - * as it doesn't promise more than what the HW gives us. We - * allow an IMPDEF PMU though, only if no PMU is supported - * (KVM backward compatibility handling). - */ - pmuver = FIELD_GET(ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_PMUVer), val); - if ((pmuver != ID_AA64DFR0_EL1_PMUVer_IMP_DEF && pmuver > host_pmuver)) - return -EINVAL; - - valid_pmu = (pmuver != 0 && pmuver != ID_AA64DFR0_EL1_PMUVer_IMP_DEF); - - /* Make sure view register and PMU support do match */ - if (kvm_vcpu_has_pmu(vcpu) != valid_pmu) - return -EINVAL; - - /* We can only differ with PMUver, and anything else is an error */ - val ^= read_id_reg(vcpu, rd); - val &= ~ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_PMUVer); - if (val) - return -EINVAL; - - if (valid_pmu) - vcpu->kvm->arch.dfr0_pmuver.imp = pmuver; - else - vcpu->kvm->arch.dfr0_pmuver.unimp = pmuver; - - return 0; -} - -static int set_id_dfr0_el1(struct kvm_vcpu *vcpu, - const struct sys_reg_desc *rd, - u64 val) -{ - u8 perfmon, host_perfmon; - bool valid_pmu; - - host_perfmon = pmuver_to_perfmon(kvm_arm_pmu_get_pmuver_limit()); - - /* - * Allow DFR0_EL1.PerfMon to be set from userspace as long as - * it doesn't promise more than what the HW gives us on the - * AArch64 side (as everything is emulated with that), and - * that this is a PMUv3. 
- */ - perfmon = FIELD_GET(ARM64_FEATURE_MASK(ID_DFR0_EL1_PerfMon), val); - if ((perfmon != ID_DFR0_EL1_PerfMon_IMPDEF && perfmon > host_perfmon) || - (perfmon != 0 && perfmon < ID_DFR0_EL1_PerfMon_PMUv3)) - return -EINVAL; - - valid_pmu = (perfmon != 0 && perfmon != ID_DFR0_EL1_PerfMon_IMPDEF); - - /* Make sure view register and PMU support do match */ - if (kvm_vcpu_has_pmu(vcpu) != valid_pmu) - return -EINVAL; - - /* We can only differ with PerfMon, and anything else is an error */ - val ^= read_id_reg(vcpu, rd); - val &= ~ARM64_FEATURE_MASK(ID_DFR0_EL1_PerfMon); - if (val) - return -EINVAL; - - if (valid_pmu) - vcpu->kvm->arch.dfr0_pmuver.imp = perfmon_to_pmuver(perfmon); - else - vcpu->kvm->arch.dfr0_pmuver.unimp = perfmon_to_pmuver(perfmon); - - return 0; -} - -/* - * cpufeature ID register user accessors - * - * For now, these registers are immutable for userspace, so no values - * are stored, and for set_id_reg() we don't allow the effective value - * to be changed. - */ -static int get_id_reg(struct kvm_vcpu *vcpu, const struct sys_reg_desc *rd, - u64 *val) -{ - *val = read_id_reg(vcpu, rd); - return 0; -} - -static int set_id_reg(struct kvm_vcpu *vcpu, const struct sys_reg_desc *rd, - u64 val) -{ - /* This is what we mean by invariant: you can't change it. */ - if (val != read_id_reg(vcpu, rd)) - return -EINVAL; - - return 0; -} - static int get_raz_reg(struct kvm_vcpu *vcpu, const struct sys_reg_desc *rd, u64 *val) { @@ -1648,50 +1358,6 @@ static unsigned int elx2_visibility(const struct kvm_vcpu *vcpu, .visibility = elx2_visibility, \ } -/* sys_reg_desc initialiser for known cpufeature ID registers */ -#define ID_SANITISED(name) { \ - SYS_DESC(SYS_##name), \ - .access = access_id_reg, \ - .get_user = get_id_reg, \ - .set_user = set_id_reg, \ - .visibility = id_visibility, \ -} - -/* sys_reg_desc initialiser for known cpufeature ID registers */ -#define AA32_ID_SANITISED(name) { \ - SYS_DESC(SYS_##name), \ - .access = access_id_reg, \ - .get_user = get_id_reg, \ - .set_user = set_id_reg, \ - .visibility = aa32_id_visibility, \ -} - -/* - * sys_reg_desc initialiser for architecturally unallocated cpufeature ID - * register with encoding Op0=3, Op1=0, CRn=0, CRm=crm, Op2=op2 - * (1 <= crm < 8, 0 <= Op2 < 8). - */ -#define ID_UNALLOCATED(crm, op2) { \ - Op0(3), Op1(0), CRn(0), CRm(crm), Op2(op2), \ - .access = access_id_reg, \ - .get_user = get_id_reg, \ - .set_user = set_id_reg, \ - .visibility = raz_visibility \ -} - -/* - * sys_reg_desc initialiser for known ID registers that we hide from guests. - * For now, these are exposed just like unallocated ID regs: they appear - * RAZ for the guest. - */ -#define ID_HIDDEN(name) { \ - SYS_DESC(SYS_##name), \ - .access = access_id_reg, \ - .get_user = get_id_reg, \ - .set_user = set_id_reg, \ - .visibility = raz_visibility, \ -} - static bool access_sp_el1(struct kvm_vcpu *vcpu, struct sys_reg_params *p, const struct sys_reg_desc *r) @@ -1782,87 +1448,6 @@ static const struct sys_reg_desc sys_reg_descs[] = { { SYS_DESC(SYS_MPIDR_EL1), NULL, reset_mpidr, MPIDR_EL1 }, - /* - * ID regs: all ID_SANITISED() entries here must have corresponding - * entries in arm64_ftr_regs[]. 
- */ - - /* AArch64 mappings of the AArch32 ID registers */ - /* CRm=1 */ - AA32_ID_SANITISED(ID_PFR0_EL1), - AA32_ID_SANITISED(ID_PFR1_EL1), - { SYS_DESC(SYS_ID_DFR0_EL1), .access = access_id_reg, - .get_user = get_id_reg, .set_user = set_id_dfr0_el1, - .visibility = aa32_id_visibility, }, - ID_HIDDEN(ID_AFR0_EL1), - AA32_ID_SANITISED(ID_MMFR0_EL1), - AA32_ID_SANITISED(ID_MMFR1_EL1), - AA32_ID_SANITISED(ID_MMFR2_EL1), - AA32_ID_SANITISED(ID_MMFR3_EL1), - - /* CRm=2 */ - AA32_ID_SANITISED(ID_ISAR0_EL1), - AA32_ID_SANITISED(ID_ISAR1_EL1), - AA32_ID_SANITISED(ID_ISAR2_EL1), - AA32_ID_SANITISED(ID_ISAR3_EL1), - AA32_ID_SANITISED(ID_ISAR4_EL1), - AA32_ID_SANITISED(ID_ISAR5_EL1), - AA32_ID_SANITISED(ID_MMFR4_EL1), - AA32_ID_SANITISED(ID_ISAR6_EL1), - - /* CRm=3 */ - AA32_ID_SANITISED(MVFR0_EL1), - AA32_ID_SANITISED(MVFR1_EL1), - AA32_ID_SANITISED(MVFR2_EL1), - ID_UNALLOCATED(3,3), - AA32_ID_SANITISED(ID_PFR2_EL1), - ID_HIDDEN(ID_DFR1_EL1), - AA32_ID_SANITISED(ID_MMFR5_EL1), - ID_UNALLOCATED(3,7), - - /* AArch64 ID registers */ - /* CRm=4 */ - { SYS_DESC(SYS_ID_AA64PFR0_EL1), .access = access_id_reg, - .get_user = get_id_reg, .set_user = set_id_aa64pfr0_el1, }, - ID_SANITISED(ID_AA64PFR1_EL1), - ID_UNALLOCATED(4,2), - ID_UNALLOCATED(4,3), - ID_SANITISED(ID_AA64ZFR0_EL1), - ID_HIDDEN(ID_AA64SMFR0_EL1), - ID_UNALLOCATED(4,6), - ID_UNALLOCATED(4,7), - - /* CRm=5 */ - { SYS_DESC(SYS_ID_AA64DFR0_EL1), .access = access_id_reg, - .get_user = get_id_reg, .set_user = set_id_aa64dfr0_el1, }, - ID_SANITISED(ID_AA64DFR1_EL1), - ID_UNALLOCATED(5,2), - ID_UNALLOCATED(5,3), - ID_HIDDEN(ID_AA64AFR0_EL1), - ID_HIDDEN(ID_AA64AFR1_EL1), - ID_UNALLOCATED(5,6), - ID_UNALLOCATED(5,7), - - /* CRm=6 */ - ID_SANITISED(ID_AA64ISAR0_EL1), - ID_SANITISED(ID_AA64ISAR1_EL1), - ID_SANITISED(ID_AA64ISAR2_EL1), - ID_UNALLOCATED(6,3), - ID_UNALLOCATED(6,4), - ID_UNALLOCATED(6,5), - ID_UNALLOCATED(6,6), - ID_UNALLOCATED(6,7), - - /* CRm=7 */ - ID_SANITISED(ID_AA64MMFR0_EL1), - ID_SANITISED(ID_AA64MMFR1_EL1), - ID_SANITISED(ID_AA64MMFR2_EL1), - ID_UNALLOCATED(7,3), - ID_UNALLOCATED(7,4), - ID_UNALLOCATED(7,5), - ID_UNALLOCATED(7,6), - ID_UNALLOCATED(7,7), - { SYS_DESC(SYS_SCTLR_EL1), access_vm_reg, reset_val, SCTLR_EL1, 0x00C50078 }, { SYS_DESC(SYS_ACTLR_EL1), access_actlr, reset_actlr, ACTLR_EL1 }, { SYS_DESC(SYS_CPACR_EL1), NULL, reset_val, CPACR_EL1, 0 }, @@ -2540,8 +2125,8 @@ static const struct sys_reg_desc cp15_64_regs[] = { { SYS_DESC(SYS_AARCH32_CNTP_CVAL), access_arch_timer }, }; -static bool check_sysreg_table(const struct sys_reg_desc *table, unsigned int n, - bool is_32) +bool check_sysreg_table(const struct sys_reg_desc *table, unsigned int n, + bool is_32) { unsigned int i; @@ -2566,7 +2151,7 @@ int kvm_handle_cp14_load_store(struct kvm_vcpu *vcpu) return 1; } -static void perform_access(struct kvm_vcpu *vcpu, +void perform_access(struct kvm_vcpu *vcpu, struct sys_reg_params *params, const struct sys_reg_desc *r) { @@ -2941,6 +2526,9 @@ int kvm_handle_sys_reg(struct kvm_vcpu *vcpu) params = esr_sys64_to_params(esr); params.regval = vcpu_get_reg(vcpu, Rt); + if (is_id_reg(reg_to_encoding(¶ms))) + return emulate_id_reg(vcpu, ¶ms); + if (!emulate_sys_reg(vcpu, ¶ms)) return 1; @@ -3160,6 +2748,7 @@ int kvm_sys_reg_get_user(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg, int kvm_arm_sys_reg_get_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg) { void __user *uaddr = (void __user *)(unsigned long)reg->addr; + struct sys_reg_params params; int err; if ((reg->id & KVM_REG_ARM_COPROC_MASK) == KVM_REG_ARM_DEMUX) 
@@ -3169,6 +2758,12 @@ int kvm_arm_sys_reg_get_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg if (err != -ENOENT) return err; + if (!index_to_params(reg->id, ¶ms)) + return -ENOENT; + + if (is_id_reg(reg_to_encoding(¶ms))) + return kvm_arm_get_id_reg(vcpu, reg); + return kvm_sys_reg_get_user(vcpu, reg, sys_reg_descs, ARRAY_SIZE(sys_reg_descs)); } @@ -3204,6 +2799,7 @@ int kvm_sys_reg_set_user(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg, int kvm_arm_sys_reg_set_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg) { void __user *uaddr = (void __user *)(unsigned long)reg->addr; + struct sys_reg_params params; int err; if ((reg->id & KVM_REG_ARM_COPROC_MASK) == KVM_REG_ARM_DEMUX) @@ -3213,6 +2809,12 @@ int kvm_arm_sys_reg_set_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg if (err != -ENOENT) return err; + if (!index_to_params(reg->id, ¶ms)) + return -ENOENT; + + if (is_id_reg(reg_to_encoding(¶ms))) + return kvm_arm_set_id_reg(vcpu, reg); + return kvm_sys_reg_set_user(vcpu, reg, sys_reg_descs, ARRAY_SIZE(sys_reg_descs)); } @@ -3259,10 +2861,10 @@ static bool copy_reg_to_user(const struct sys_reg_desc *reg, u64 __user **uind) return true; } -static int walk_one_sys_reg(const struct kvm_vcpu *vcpu, - const struct sys_reg_desc *rd, - u64 __user **uind, - unsigned int *total) +int walk_one_sys_reg(const struct kvm_vcpu *vcpu, + const struct sys_reg_desc *rd, + u64 __user **uind, + unsigned int *total) { /* * Ignore registers we trap but don't save, @@ -3303,6 +2905,7 @@ unsigned long kvm_arm_num_sys_reg_descs(struct kvm_vcpu *vcpu) { return ARRAY_SIZE(invariant_sys_regs) + num_demux_regs() + + kvm_arm_walk_id_regs(vcpu, (u64 __user *)NULL) + walk_sys_regs(vcpu, (u64 __user *)NULL); } @@ -3318,6 +2921,11 @@ int kvm_arm_copy_sys_reg_indices(struct kvm_vcpu *vcpu, u64 __user *uindices) uindices++; } + err = kvm_arm_walk_id_regs(vcpu, uindices); + if (err < 0) + return err; + uindices += err; + err = walk_sys_regs(vcpu, uindices); if (err < 0) return err; @@ -3332,6 +2940,7 @@ int __init kvm_sys_reg_table_init(void) unsigned int i; /* Make sure tables are unique and in order. */ + valid &= kvm_arm_check_idreg_table(); valid &= check_sysreg_table(sys_reg_descs, ARRAY_SIZE(sys_reg_descs), false); valid &= check_sysreg_table(cp14_regs, ARRAY_SIZE(cp14_regs), true); valid &= check_sysreg_table(cp14_64_regs, ARRAY_SIZE(cp14_64_regs), true); diff --git a/arch/arm64/kvm/sys_regs.h b/arch/arm64/kvm/sys_regs.h index 6b11f2cc7146..2f7750153f8e 100644 --- a/arch/arm64/kvm/sys_regs.h +++ b/arch/arm64/kvm/sys_regs.h @@ -210,6 +210,19 @@ find_reg(const struct sys_reg_params *params, const struct sys_reg_desc table[], return __inline_bsearch((void *)pval, table, num, sizeof(table[0]), match_sys_reg); } +/* + * Return true if the register's (Op0, Op1, CRn, CRm, Op2) is + * (3, 0, 0, crm, op2), where 1<=crm<8, 0<=op2<8. 
+ */
+static inline bool is_id_reg(u32 id)
+{
+	return (sys_reg_Op0(id) == 3 && sys_reg_Op1(id) == 0 &&
+		sys_reg_CRn(id) == 0 && sys_reg_CRm(id) >= 1 &&
+		sys_reg_CRm(id) < 8);
+}
+
+void perform_access(struct kvm_vcpu *vcpu, struct sys_reg_params *params,
+		    const struct sys_reg_desc *r);
 const struct sys_reg_desc *get_reg_by_id(u64 id,
					 const struct sys_reg_desc table[],
					 unsigned int num);
@@ -220,6 +233,20 @@ int kvm_sys_reg_get_user(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg,
			 const struct sys_reg_desc table[], unsigned int num);
 int kvm_sys_reg_set_user(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg,
			 const struct sys_reg_desc table[], unsigned int num);
+bool write_to_read_only(struct kvm_vcpu *vcpu,
+			struct sys_reg_params *params, const struct sys_reg_desc *r);
+unsigned int raz_visibility(const struct kvm_vcpu *vcpu, const struct sys_reg_desc *r);
+bool check_sysreg_table(const struct sys_reg_desc *table, unsigned int n,
+			bool is_32);
+int walk_one_sys_reg(const struct kvm_vcpu *vcpu,
+		     const struct sys_reg_desc *rd,
+		     u64 __user **uind,
+		     unsigned int *total);
+int emulate_id_reg(struct kvm_vcpu *vcpu, struct sys_reg_params *params);
+int kvm_arm_get_id_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg);
+int kvm_arm_set_id_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg);
+bool kvm_arm_check_idreg_table(void);
+int kvm_arm_walk_id_regs(struct kvm_vcpu *vcpu, u64 __user *uind);

 #define AA32(_x) .aarch32_map = AA32_##_x
 #define Op0(_x) .Op0 = _x
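As an aside for readers following along: is_id_reg() above only tests
whether an encoding lands in the architected ID block (Op0=3, Op1=0,
CRn=0, 1 <= CRm < 8). Below is a minimal standalone userspace sketch of
the same check, for illustration only; the shift/mask constants mirror
the arm64 sys_reg() bit layout in arch/arm64/include/asm/sysreg.h, and
the registers tested in main() are just examples.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* arm64 sys_reg() encoding: Op0[20:19] Op1[18:16] CRn[15:12] CRm[11:8] Op2[7:5] */
#define SYS_REG(op0, op1, crn, crm, op2) \
	(((uint32_t)(op0) << 19) | ((op1) << 16) | ((crn) << 12) | \
	 ((crm) << 8) | ((op2) << 5))

#define sys_reg_Op0(id) (((id) >> 19) & 0x3)
#define sys_reg_Op1(id) (((id) >> 16) & 0x7)
#define sys_reg_CRn(id) (((id) >> 12) & 0xf)
#define sys_reg_CRm(id) (((id) >> 8) & 0xf)

/* Same predicate as the kernel's is_id_reg() */
static bool is_id_reg(uint32_t id)
{
	return sys_reg_Op0(id) == 3 && sys_reg_Op1(id) == 0 &&
	       sys_reg_CRn(id) == 0 && sys_reg_CRm(id) >= 1 &&
	       sys_reg_CRm(id) < 8;
}

int main(void)
{
	uint32_t pfr0 = SYS_REG(3, 0, 0, 4, 0);  /* ID_AA64PFR0_EL1: in the ID block */
	uint32_t sctlr = SYS_REG(3, 0, 1, 0, 0); /* SCTLR_EL1: CRn=1, not an ID reg */

	printf("ID_AA64PFR0_EL1 -> %d\n", is_id_reg(pfr0));  /* prints 1 */
	printf("SCTLR_EL1       -> %d\n", is_id_reg(sctlr)); /* prints 0 */
	return 0;
}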
From patchwork Sun Apr 2 18:37:31 2023
Date: Sun, 2 Apr 2023 18:37:31 +0000
In-Reply-To: <20230402183735.3011540-1-jingzhangos@google.com>
References: <20230402183735.3011540-1-jingzhangos@google.com>
Message-ID: <20230402183735.3011540-3-jingzhangos@google.com>
Subject: [PATCH v5 2/6] KVM: arm64: Save ID registers' sanitized value per guest
From: Jing Zhang
To: KVM, KVMARM, ARMLinux, Marc Zyngier, Oliver Upton
Cc: Will Deacon, Paolo Bonzini, James Morse, Alexandru Elisei,
 Suzuki K Poulose, Fuad Tabba, Reiji Watanabe, Raghavendra Rao Ananta,
 Jing Zhang

Introduce id_regs[] in kvm_arch as storage for the guest's ID registers,
and save each ID register's sanitized value in the array at
KVM_CREATE_VM time. Use the saved values whenever the ID registers are
read by the guest or by userspace (via KVM_GET_ONE_REG).

No functional change intended.
Co-developed-by: Reiji Watanabe
Signed-off-by: Reiji Watanabe
Signed-off-by: Jing Zhang
---
 arch/arm64/include/asm/kvm_host.h | 21 +++++++++++++
 arch/arm64/kvm/arm.c              |  1 +
 arch/arm64/kvm/id_regs.c          | 49 +++++++++++++++++++++++++------
 arch/arm64/kvm/sys_regs.c         |  2 +-
 arch/arm64/kvm/sys_regs.h         |  1 +
 5 files changed, 64 insertions(+), 10 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index a8e2c52b44aa..3b8a2cc0c7cd 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -177,6 +177,22 @@ struct kvm_smccc_features {
	unsigned long vendor_hyp_bmap;
 };

+/*
+ * Emulated CPU ID registers per VM. The (Op0, Op1, CRn, CRm, Op2)
+ * encoding of the ID registers saved here is (3, 0, 0, crm, op2),
+ * where 1<=crm<8, 0<=op2<8.
+ *
+ * These emulated idregs are VM-wide, but accessed from the context of a vCPU.
+ * Updating multiple id regs with dependencies needs to be guarded by
+ * kvm_arch.config_lock.
+ */
+#define KVM_ARM_ID_REG_NUM 56
+#define IDREG_IDX(id) (((sys_reg_CRm(id) - 1) << 3) | sys_reg_Op2(id))
+#define IDREG(kvm, id) kvm->arch.idregs.regs[IDREG_IDX(id)]
+struct kvm_idregs {
+	u64 regs[KVM_ARM_ID_REG_NUM];
+};
+
 typedef unsigned int pkvm_handle_t;

 struct kvm_protected_vm {
@@ -250,6 +266,9 @@ struct kvm_arch {
	/* Hypercall features firmware registers' descriptor */
	struct kvm_smccc_features smccc_feat;

+	/* Emulated CPU ID registers */
+	struct kvm_idregs idregs;
+
	/*
	 * For an untrusted host VM, 'pkvm.handle' is used to lookup
	 * the associated pKVM instance in the hypervisor.
@@ -1025,6 +1044,8 @@ long kvm_vm_ioctl_mte_copy_tags(struct kvm *kvm,
 int kvm_vm_ioctl_set_counter_offset(struct kvm *kvm,
				    struct kvm_arm_counter_offset *offset);

+void kvm_arm_init_id_regs(struct kvm *kvm);
+
 /* Guest/host FPSIMD coordination helpers */
 int kvm_arch_vcpu_run_map_fp(struct kvm_vcpu *vcpu);
 void kvm_arch_vcpu_load_fp(struct kvm_vcpu *vcpu);
diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
index 0e5a3ff8cc5a..571e24124a95 100644
--- a/arch/arm64/kvm/arm.c
+++ b/arch/arm64/kvm/arm.c
@@ -165,6 +165,7 @@ int kvm_arch_init_vm(struct kvm *kvm, unsigned long type)

	set_default_spectre(kvm);
	kvm_arm_init_hypercalls(kvm);
+	kvm_arm_init_id_regs(kvm);

	/*
	 * Initialise the default PMUver before there is a chance to
diff --git a/arch/arm64/kvm/id_regs.c b/arch/arm64/kvm/id_regs.c
index 638c0f8e4cd2..77f945c78d47 100644
--- a/arch/arm64/kvm/id_regs.c
+++ b/arch/arm64/kvm/id_regs.c
@@ -52,16 +52,9 @@ static u8 pmuver_to_perfmon(u8 pmuver)
	}
 }

-/* Read a sanitised cpufeature ID register by sys_reg_desc */
-static u64 read_id_reg(const struct kvm_vcpu *vcpu, struct sys_reg_desc const *r)
+u64 kvm_arm_read_id_reg(const struct kvm_vcpu *vcpu, u32 id)
 {
-	u32 id = reg_to_encoding(r);
-	u64 val;
-
-	if (sysreg_visible_as_raz(vcpu, r))
-		return 0;
-
-	val = read_sanitised_ftr_reg(id);
+	u64 val = IDREG(vcpu->kvm, id);

	switch (id) {
	case SYS_ID_AA64PFR0_EL1:
@@ -126,6 +119,14 @@ static u64 read_id_reg(const struct kvm_vcpu *vcpu, struct sys_reg_desc const *r
	return val;
 }

+static u64 read_id_reg(const struct kvm_vcpu *vcpu, struct sys_reg_desc const *r)
+{
+	if (sysreg_visible_as_raz(vcpu, r))
+		return 0;
+
+	return kvm_arm_read_id_reg(vcpu, reg_to_encoding(r));
+}
+
 /* cpufeature ID register access trap handlers */

 static bool access_id_reg(struct kvm_vcpu *vcpu,
@@ -494,3 +495,33 @@ int kvm_arm_walk_id_regs(struct kvm_vcpu *vcpu, u64 __user *uind)
	}
	return total;
 }
+
+/*
+ * Set the guest's ID registers that are defined in id_reg_descs[]
+ * with ID_SANITISED() to the host's sanitized value.
+ */
+void kvm_arm_init_id_regs(struct kvm *kvm)
+{
+	int i;
+	u32 id;
+	u64 val;
+
+	for (i = 0; i < ARRAY_SIZE(id_reg_descs); i++) {
+		id = reg_to_encoding(&id_reg_descs[i]);
+		if (WARN_ON_ONCE(!is_id_reg(id)))
+			/* Shouldn't happen */
+			continue;
+
+		/*
+		 * Some hidden ID registers which are not in arm64_ftr_regs[]
+		 * would cause warnings from read_sanitised_ftr_reg().
+		 * Skip those ID registers to avoid the warnings.
+		 */
+		if (id_reg_descs[i].visibility == raz_visibility)
+			/* Hidden or reserved ID register */
+			continue;
+
+		val = read_sanitised_ftr_reg(id);
+		IDREG(kvm, id) = val;
+	}
+}
diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index 40be494391fd..3af9f85aa976 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -364,7 +364,7 @@ static bool trap_loregion(struct kvm_vcpu *vcpu,
			  struct sys_reg_params *p,
			  const struct sys_reg_desc *r)
 {
-	u64 val = read_sanitised_ftr_reg(SYS_ID_AA64MMFR1_EL1);
+	u64 val = kvm_arm_read_id_reg(vcpu, SYS_ID_AA64MMFR1_EL1);
	u32 sr = reg_to_encoding(r);

	if (!(val & (0xfUL << ID_AA64MMFR1_EL1_LO_SHIFT))) {
diff --git a/arch/arm64/kvm/sys_regs.h b/arch/arm64/kvm/sys_regs.h
index 2f7750153f8e..326ca28c1117 100644
--- a/arch/arm64/kvm/sys_regs.h
+++ b/arch/arm64/kvm/sys_regs.h
@@ -247,6 +247,7 @@ int kvm_arm_get_id_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg);
 int kvm_arm_set_id_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg);
 bool kvm_arm_check_idreg_table(void);
 int kvm_arm_walk_id_regs(struct kvm_vcpu *vcpu, u64 __user *uind);
+u64 kvm_arm_read_id_reg(const struct kvm_vcpu *vcpu, u32 id);

 #define AA32(_x) .aarch32_map = AA32_##_x
 #define Op0(_x) .Op0 = _x
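A small sketch of the IDREG_IDX() arithmetic introduced by this patch
may help: CRm spans 1..7 and Op2 spans 0..7, so the array needs
7 * 8 = 56 slots, which is where KVM_ARM_ID_REG_NUM comes from. The
standalone userspace sketch below takes CRm/Op2 directly rather than
extracting them from a full encoding as the kernel macro does; it is
for illustration only.

#include <stdio.h>

/* Same arithmetic as the kernel's IDREG_IDX(), with CRm/Op2 pre-extracted */
#define IDREG_IDX(crm, op2) ((((crm) - 1) << 3) | (op2))

int main(void)
{
	/* ID_AA64PFR0_EL1 is (3, 0, 0, 4, 0): CRm=4, Op2=0 */
	printf("ID_AA64PFR0_EL1 -> idx %d\n", IDREG_IDX(4, 0)); /* 24 */

	/* Last possible slot: CRm=7, Op2=7 */
	printf("max idx %d (array size 56)\n", IDREG_IDX(7, 7)); /* 55 */
	return 0;
}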
From patchwork Sun Apr 2 18:37:32 2023
Date: Sun, 2 Apr 2023 18:37:32 +0000
In-Reply-To: <20230402183735.3011540-1-jingzhangos@google.com>
References: <20230402183735.3011540-1-jingzhangos@google.com>
Message-ID: <20230402183735.3011540-4-jingzhangos@google.com>
Subject: [PATCH v5 3/6] KVM: arm64: Use per guest ID register for ID_AA64PFR0_EL1.[CSV2|CSV3]
From: Jing Zhang
To: KVM, KVMARM, ARMLinux, Marc Zyngier, Oliver Upton
Cc: Will Deacon, Paolo Bonzini, James Morse, Alexandru Elisei,
 Suzuki K Poulose, Fuad Tabba, Reiji Watanabe, Raghavendra Rao Ananta,
 Jing Zhang

With per guest ID registers, the ID_AA64PFR0_EL1.[CSV2|CSV3] settings
from userspace can be stored in the corresponding ID register itself.

The setting of the CSV bits for protected VMs is removed following the
discussion with Fuad below:
https://lore.kernel.org/all/CA+EHjTwXA9TprX4jeG+-D+c8v9XG+oFdU1o6TSkvVye145_OvA@mail.gmail.com

Besides the removal of the CSV bit setting for protected VMs, no other
functional change is intended.
Signed-off-by: Jing Zhang --- arch/arm64/include/asm/kvm_host.h | 2 -- arch/arm64/kvm/arm.c | 17 ---------------- arch/arm64/kvm/hyp/nvhe/sys_regs.c | 7 ------- arch/arm64/kvm/id_regs.c | 31 ++++++++++++++++++++++-------- 4 files changed, 23 insertions(+), 34 deletions(-) diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h index 3b8a2cc0c7cd..283e1f30cec5 100644 --- a/arch/arm64/include/asm/kvm_host.h +++ b/arch/arm64/include/asm/kvm_host.h @@ -256,8 +256,6 @@ struct kvm_arch { cpumask_var_t supported_cpus; - u8 pfr0_csv2; - u8 pfr0_csv3; struct { u8 imp:4; u8 unimp:4; diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c index 571e24124a95..483c7377e092 100644 --- a/arch/arm64/kvm/arm.c +++ b/arch/arm64/kvm/arm.c @@ -104,22 +104,6 @@ static int kvm_arm_default_max_vcpus(void) return vgic_present ? kvm_vgic_get_max_vcpus() : KVM_MAX_VCPUS; } -static void set_default_spectre(struct kvm *kvm) -{ - /* - * The default is to expose CSV2 == 1 if the HW isn't affected. - * Although this is a per-CPU feature, we make it global because - * asymmetric systems are just a nuisance. - * - * Userspace can override this as long as it doesn't promise - * the impossible. - */ - if (arm64_get_spectre_v2_state() == SPECTRE_UNAFFECTED) - kvm->arch.pfr0_csv2 = 1; - if (arm64_get_meltdown_state() == SPECTRE_UNAFFECTED) - kvm->arch.pfr0_csv3 = 1; -} - /** * kvm_arch_init_vm - initializes a VM data structure * @kvm: pointer to the KVM struct @@ -163,7 +147,6 @@ int kvm_arch_init_vm(struct kvm *kvm, unsigned long type) /* The maximum number of VCPUs is limited by the host's GIC model */ kvm->max_vcpus = kvm_arm_default_max_vcpus(); - set_default_spectre(kvm); kvm_arm_init_hypercalls(kvm); kvm_arm_init_id_regs(kvm); diff --git a/arch/arm64/kvm/hyp/nvhe/sys_regs.c b/arch/arm64/kvm/hyp/nvhe/sys_regs.c index 08d2b004f4b7..edd969a1f36b 100644 --- a/arch/arm64/kvm/hyp/nvhe/sys_regs.c +++ b/arch/arm64/kvm/hyp/nvhe/sys_regs.c @@ -85,19 +85,12 @@ static u64 get_restricted_features_unsigned(u64 sys_reg_val, static u64 get_pvm_id_aa64pfr0(const struct kvm_vcpu *vcpu) { - const struct kvm *kvm = (const struct kvm *)kern_hyp_va(vcpu->kvm); u64 set_mask = 0; u64 allow_mask = PVM_ID_AA64PFR0_ALLOW; set_mask |= get_restricted_features_unsigned(id_aa64pfr0_el1_sys_val, PVM_ID_AA64PFR0_RESTRICT_UNSIGNED); - /* Spectre and Meltdown mitigation in KVM */ - set_mask |= FIELD_PREP(ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_CSV2), - (u64)kvm->arch.pfr0_csv2); - set_mask |= FIELD_PREP(ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_CSV3), - (u64)kvm->arch.pfr0_csv3); - return (id_aa64pfr0_el1_sys_val & allow_mask) | set_mask; } diff --git a/arch/arm64/kvm/id_regs.c b/arch/arm64/kvm/id_regs.c index 77f945c78d47..c6525efe8058 100644 --- a/arch/arm64/kvm/id_regs.c +++ b/arch/arm64/kvm/id_regs.c @@ -61,12 +61,6 @@ u64 kvm_arm_read_id_reg(const struct kvm_vcpu *vcpu, u32 id) if (!vcpu_has_sve(vcpu)) val &= ~ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_SVE); val &= ~ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_AMU); - val &= ~ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_CSV2); - val |= FIELD_PREP(ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_CSV2), - (u64)vcpu->kvm->arch.pfr0_csv2); - val &= ~ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_CSV3); - val |= FIELD_PREP(ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_CSV3), - (u64)vcpu->kvm->arch.pfr0_csv3); if (kvm_vgic_global_state.type == VGIC_V3) { val &= ~ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_GIC); val |= FIELD_PREP(ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_GIC), 1); @@ -201,6 +195,7 @@ static int set_id_aa64pfr0_el1(struct kvm_vcpu *vcpu, u64 val) { u8 
csv2, csv3;
+	u64 sval = val;

 	/*
 	 * Allow AA64PFR0_EL1.CSV2 to be set from userspace as long as
@@ -225,8 +220,7 @@ static int set_id_aa64pfr0_el1(struct kvm_vcpu *vcpu,
 	if (val)
 		return -EINVAL;

-	vcpu->kvm->arch.pfr0_csv2 = csv2;
-	vcpu->kvm->arch.pfr0_csv3 = csv3;
+	IDREG(vcpu->kvm, reg_to_encoding(rd)) = sval;

 	return 0;
 }
@@ -524,4 +518,25 @@ void kvm_arm_init_id_regs(struct kvm *kvm)
 		val = read_sanitised_ftr_reg(id);
 		IDREG(kvm, id) = val;
 	}
+
+	/*
+	 * The default is to expose CSV2 == 1 if the HW isn't affected.
+	 * Although this is a per-CPU feature, we make it global because
+	 * asymmetric systems are just a nuisance.
+	 *
+	 * Userspace can override this as long as it doesn't promise
+	 * the impossible.
+	 */
+	val = IDREG(kvm, SYS_ID_AA64PFR0_EL1);
+
+	if (arm64_get_spectre_v2_state() == SPECTRE_UNAFFECTED) {
+		val &= ~ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_CSV2);
+		val |= FIELD_PREP(ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_CSV2), 1);
+	}
+	if (arm64_get_meltdown_state() == SPECTRE_UNAFFECTED) {
+		val &= ~ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_CSV3);
+		val |= FIELD_PREP(ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_CSV3), 1);
+	}
+
+	IDREG(kvm, SYS_ID_AA64PFR0_EL1) = val;
 }
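Userspace overrides these defaults through KVM's ONE_REG interface. As a sketch of the VMM side (assuming the arm64 KVM UAPI headers, which provide ARM64_SYS_REG(); vcpu_fd and error handling are left to the caller), writing ID_AA64PFR0_EL1 looks roughly like this:

#include <stdint.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>	/* pulls in asm/kvm.h with ARM64_SYS_REG() on arm64 */

/* Sketch only: write the guest's ID_AA64PFR0_EL1 (op0=3, op1=0, CRn=0,
 * CRm=4, op2=0) through the ONE_REG interface. */
static int write_id_aa64pfr0(int vcpu_fd, uint64_t val)
{
	struct kvm_one_reg reg = {
		.id   = ARM64_SYS_REG(3, 0, 0, 4, 0),
		.addr = (uint64_t)&val,
	};

	return ioctl(vcpu_fd, KVM_SET_ONE_REG, &reg);
}

With this series, such a write lands in the per-guest register copy instead of the dedicated pfr0_csv2/pfr0_csv3 fields.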
From patchwork Sun Apr 2 18:37:33 2023

Date: Sun, 2 Apr 2023 18:37:33 +0000
In-Reply-To: <20230402183735.3011540-1-jingzhangos@google.com>
References: <20230402183735.3011540-1-jingzhangos@google.com>
Message-ID: <20230402183735.3011540-5-jingzhangos@google.com>
Subject: [PATCH v5 4/6] KVM: arm64: Use per guest ID register for ID_AA64DFR0_EL1.PMUVer
From: Jing Zhang
To: KVM, KVMARM, ARMLinux, Marc Zyngier, Oliver Upton
Cc: Will Deacon, Paolo Bonzini, James Morse, Alexandru Elisei, Suzuki K Poulose, Fuad Tabba, Reiji Watanabe, Raghavendra Rao Ananta, Jing Zhang

With per-guest ID registers, the PMUVer setting from userspace can be stored in the corresponding ID register. No functional change is intended.

Signed-off-by: Jing Zhang
---
 arch/arm64/include/asm/kvm_host.h | 11 ++++---
 arch/arm64/kvm/arm.c              |  6 ----
 arch/arm64/kvm/id_regs.c          | 51 +++++++++++++++++++++++++------
 include/kvm/arm_pmu.h             |  5 +--
 4 files changed, 50 insertions(+), 23 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index 283e1f30cec5..28448e53bbf3 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -244,6 +244,12 @@ struct kvm_arch {
 #define KVM_ARCH_FLAG_VM_COUNTER_OFFSET		6
 	/* Timer PPIs made immutable */
 #define KVM_ARCH_FLAG_TIMER_PPIS_IMMUTABLE	7
+	/*
+	 * AA64DFR0_EL1.PMUver was set as ID_AA64DFR0_EL1_PMUVer_IMP_DEF
+	 * or DFR0_EL1.PerfMon was set as ID_DFR0_EL1_PerfMon_IMPDEF from
+	 * userspace for VCPUs without PMU.
+ */ +#define KVM_ARCH_FLAG_VCPU_HAS_IMP_DEF_PMU 8 unsigned long flags; @@ -256,11 +262,6 @@ struct kvm_arch { cpumask_var_t supported_cpus; - struct { - u8 imp:4; - u8 unimp:4; - } dfr0_pmuver; - /* Hypercall features firmware registers' descriptor */ struct kvm_smccc_features smccc_feat; diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c index 483c7377e092..48d738148e84 100644 --- a/arch/arm64/kvm/arm.c +++ b/arch/arm64/kvm/arm.c @@ -150,12 +150,6 @@ int kvm_arch_init_vm(struct kvm *kvm, unsigned long type) kvm_arm_init_hypercalls(kvm); kvm_arm_init_id_regs(kvm); - /* - * Initialise the default PMUver before there is a chance to - * create an actual PMU. - */ - kvm->arch.dfr0_pmuver.imp = kvm_arm_pmu_get_pmuver_limit(); - return 0; err_free_cpumask: diff --git a/arch/arm64/kvm/id_regs.c b/arch/arm64/kvm/id_regs.c index c6525efe8058..e92eacb0ad32 100644 --- a/arch/arm64/kvm/id_regs.c +++ b/arch/arm64/kvm/id_regs.c @@ -21,9 +21,12 @@ static u8 vcpu_pmuver(const struct kvm_vcpu *vcpu) { if (kvm_vcpu_has_pmu(vcpu)) - return vcpu->kvm->arch.dfr0_pmuver.imp; + return FIELD_GET(ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_PMUVer), + IDREG(vcpu->kvm, SYS_ID_AA64DFR0_EL1)); + else if (test_bit(KVM_ARCH_FLAG_VCPU_HAS_IMP_DEF_PMU, &vcpu->kvm->arch.flags)) + return ID_AA64DFR0_EL1_PMUVer_IMP_DEF; - return vcpu->kvm->arch.dfr0_pmuver.unimp; + return 0; } static u8 perfmon_to_pmuver(u8 perfmon) @@ -256,10 +259,20 @@ static int set_id_aa64dfr0_el1(struct kvm_vcpu *vcpu, if (val) return -EINVAL; - if (valid_pmu) - vcpu->kvm->arch.dfr0_pmuver.imp = pmuver; - else - vcpu->kvm->arch.dfr0_pmuver.unimp = pmuver; + if (valid_pmu) { + mutex_lock(&vcpu->kvm->arch.config_lock); + IDREG(vcpu->kvm, SYS_ID_AA64DFR0_EL1) &= ~ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_PMUVer); + IDREG(vcpu->kvm, SYS_ID_AA64DFR0_EL1) |= + FIELD_PREP(ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_PMUVer), pmuver); + + IDREG(vcpu->kvm, SYS_ID_DFR0_EL1) &= ~ARM64_FEATURE_MASK(ID_DFR0_EL1_PerfMon); + IDREG(vcpu->kvm, SYS_ID_DFR0_EL1) |= + FIELD_PREP(ARM64_FEATURE_MASK(ID_DFR0_EL1_PerfMon), pmuver_to_perfmon(pmuver)); + mutex_unlock(&vcpu->kvm->arch.config_lock); + } else { + assign_bit(KVM_ARCH_FLAG_VCPU_HAS_IMP_DEF_PMU, &vcpu->kvm->arch.flags, + pmuver == ID_AA64DFR0_EL1_PMUVer_IMP_DEF); + } return 0; } @@ -296,10 +309,20 @@ static int set_id_dfr0_el1(struct kvm_vcpu *vcpu, if (val) return -EINVAL; - if (valid_pmu) - vcpu->kvm->arch.dfr0_pmuver.imp = perfmon_to_pmuver(perfmon); - else - vcpu->kvm->arch.dfr0_pmuver.unimp = perfmon_to_pmuver(perfmon); + if (valid_pmu) { + mutex_lock(&vcpu->kvm->arch.config_lock); + IDREG(vcpu->kvm, SYS_ID_DFR0_EL1) &= ~ARM64_FEATURE_MASK(ID_DFR0_EL1_PerfMon); + IDREG(vcpu->kvm, SYS_ID_DFR0_EL1) |= + FIELD_PREP(ARM64_FEATURE_MASK(ID_DFR0_EL1_PerfMon), perfmon); + + IDREG(vcpu->kvm, SYS_ID_AA64DFR0_EL1) &= ~ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_PMUVer); + IDREG(vcpu->kvm, SYS_ID_AA64DFR0_EL1) |= + FIELD_PREP(ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_PMUVer), perfmon_to_pmuver(perfmon)); + mutex_unlock(&vcpu->kvm->arch.config_lock); + } else { + assign_bit(KVM_ARCH_FLAG_VCPU_HAS_IMP_DEF_PMU, &vcpu->kvm->arch.flags, + perfmon == ID_DFR0_EL1_PerfMon_IMPDEF); + } return 0; } @@ -539,4 +562,12 @@ void kvm_arm_init_id_regs(struct kvm *kvm) } IDREG(kvm, SYS_ID_AA64PFR0_EL1) = val; + + /* + * Initialise the default PMUver before there is a chance to + * create an actual PMU. 
+	 */
+	IDREG(kvm, SYS_ID_AA64DFR0_EL1) &= ~ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_PMUVer);
+	IDREG(kvm, SYS_ID_AA64DFR0_EL1) |=
+		FIELD_PREP(ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_PMUVer), kvm_arm_pmu_get_pmuver_limit());
 }
diff --git a/include/kvm/arm_pmu.h b/include/kvm/arm_pmu.h
index 628775334d5e..856ac59b6821 100644
--- a/include/kvm/arm_pmu.h
+++ b/include/kvm/arm_pmu.h
@@ -92,8 +92,9 @@ void kvm_vcpu_pmu_restore_host(struct kvm_vcpu *vcpu);
 /*
  * Evaluates as true when emulating PMUv3p5, and false otherwise.
  */
-#define kvm_pmu_is_3p5(vcpu)						\
-	(vcpu->kvm->arch.dfr0_pmuver.imp >= ID_AA64DFR0_EL1_PMUVer_V3P5)
+#define kvm_pmu_is_3p5(vcpu)						\
+	(FIELD_GET(ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_PMUVer),		\
+		   IDREG(vcpu->kvm, SYS_ID_AA64DFR0_EL1)) >= ID_AA64DFR0_EL1_PMUVer_V3P5)

 u8 kvm_arm_pmu_get_pmuver_limit(void);
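The reworked kvm_pmu_is_3p5() derives the PMU version from the per-guest register copy with FIELD_GET(). A standalone restatement of that extraction, with the kernel constants re-declared locally (PMUVer sits at ID_AA64DFR0_EL1 bits [11:8], and 6 is the architected PMUv3p5 encoding):

#include <stdint.h>
#include <stdio.h>

/* Local restatements of the kernel definitions for a standalone build. */
#define PMUVER_SHIFT	8
#define PMUVER_MASK	(0xFULL << PMUVER_SHIFT)
#define PMUVER_V3P5	6

/* Field extraction equivalent to FIELD_GET(mask, reg). */
static inline uint64_t field_get(uint64_t mask, unsigned int shift,
				 uint64_t reg)
{
	return (reg & mask) >> shift;
}

int main(void)
{
	uint64_t dfr0 = (uint64_t)PMUVER_V3P5 << PMUVER_SHIFT; /* shadow value */

	printf("PMUv3p5 emulated: %s\n",
	       field_get(PMUVER_MASK, PMUVER_SHIFT, dfr0) >= PMUVER_V3P5 ?
	       "yes" : "no");
	return 0;
}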
From patchwork Sun Apr 2 18:37:34 2023

Date: Sun, 2 Apr 2023 18:37:34 +0000
In-Reply-To: <20230402183735.3011540-1-jingzhangos@google.com>
References: <20230402183735.3011540-1-jingzhangos@google.com>
Message-ID: <20230402183735.3011540-6-jingzhangos@google.com>
Subject: [PATCH v5 5/6] KVM: arm64: Introduce ID register specific descriptor
From: Jing Zhang
To: KVM, KVMARM, ARMLinux, Marc Zyngier, Oliver Upton
Cc: Will Deacon, Paolo Bonzini, James Morse, Alexandru Elisei, Suzuki K Poulose, Fuad Tabba, Reiji Watanabe, Raghavendra Rao Ananta, Jing Zhang

Introduce an ID-feature-register-specific descriptor that carries the ID-register-specific fields and callbacks alongside the corresponding general system register descriptor. No functional change is intended.

Co-developed-by: Reiji Watanabe
Signed-off-by: Reiji Watanabe
Signed-off-by: Jing Zhang
---
 arch/arm64/kvm/id_regs.c  | 233 ++++++++++++++++++++++++++++----------
 arch/arm64/kvm/sys_regs.c |   2 +-
 arch/arm64/kvm/sys_regs.h |   1 +
 3 files changed, 178 insertions(+), 58 deletions(-)

diff --git a/arch/arm64/kvm/id_regs.c b/arch/arm64/kvm/id_regs.c
index e92eacb0ad32..af86001e2686 100644
--- a/arch/arm64/kvm/id_regs.c
+++ b/arch/arm64/kvm/id_regs.c
@@ -18,6 +18,27 @@
 #include "sys_regs.h"

+struct id_reg_desc {
+	const struct sys_reg_desc reg_desc;
+	/*
+	 * ftr_bits points to the feature bits array defined in cpufeature.c for
+	 * writable CPU ID feature register.
+	 */
+	const struct arm64_ftr_bits *ftr_bits;
+	/*
+	 * Only bits with 1 are writable from userspace.
+	 * This mask might not be necessary in the future whenever all ID
+	 * registers are enabled as writable from userspace.
+	 */
+	const u64 writable_mask;
+	/*
+	 * This function returns the KVM sanitised register value.
+	 * The value would be the same as the host kernel sanitised value if
+	 * there is no KVM sanitisation for this id register.
+ */ + u64 (*read_kvm_sanitised_reg)(const struct id_reg_desc *idr); +}; + static u8 vcpu_pmuver(const struct kvm_vcpu *vcpu) { if (kvm_vcpu_has_pmu(vcpu)) @@ -193,6 +214,11 @@ static unsigned int aa32_id_visibility(const struct kvm_vcpu *vcpu, return id_visibility(vcpu, r); } +static u64 general_read_kvm_sanitised_reg(const struct id_reg_desc *idr) +{ + return read_sanitised_ftr_reg(reg_to_encoding(&idr->reg_desc)); +} + static int set_id_aa64pfr0_el1(struct kvm_vcpu *vcpu, const struct sys_reg_desc *rd, u64 val) @@ -328,21 +354,31 @@ static int set_id_dfr0_el1(struct kvm_vcpu *vcpu, } /* sys_reg_desc initialiser for known cpufeature ID registers */ -#define ID_SANITISED(name) { \ - SYS_DESC(SYS_##name), \ - .access = access_id_reg, \ - .get_user = get_id_reg, \ - .set_user = set_id_reg, \ - .visibility = id_visibility, \ +#define ID_SANITISED(name) { \ + .reg_desc = { \ + SYS_DESC(SYS_##name), \ + .access = access_id_reg, \ + .get_user = get_id_reg, \ + .set_user = set_id_reg, \ + .visibility = id_visibility, \ + }, \ + .ftr_bits = NULL, \ + .writable_mask = 0, \ + .read_kvm_sanitised_reg = general_read_kvm_sanitised_reg, \ } /* sys_reg_desc initialiser for known cpufeature ID registers */ -#define AA32_ID_SANITISED(name) { \ - SYS_DESC(SYS_##name), \ - .access = access_id_reg, \ - .get_user = get_id_reg, \ - .set_user = set_id_reg, \ - .visibility = aa32_id_visibility, \ +#define AA32_ID_SANITISED(name) { \ + .reg_desc = { \ + SYS_DESC(SYS_##name), \ + .access = access_id_reg, \ + .get_user = get_id_reg, \ + .set_user = set_id_reg, \ + .visibility = aa32_id_visibility, \ + }, \ + .ftr_bits = NULL, \ + .writable_mask = 0, \ + .read_kvm_sanitised_reg = general_read_kvm_sanitised_reg, \ } /* @@ -350,12 +386,17 @@ static int set_id_dfr0_el1(struct kvm_vcpu *vcpu, * register with encoding Op0=3, Op1=0, CRn=0, CRm=crm, Op2=op2 * (1 <= crm < 8, 0 <= Op2 < 8). */ -#define ID_UNALLOCATED(crm, op2) { \ - Op0(3), Op1(0), CRn(0), CRm(crm), Op2(op2), \ - .access = access_id_reg, \ - .get_user = get_id_reg, \ - .set_user = set_id_reg, \ - .visibility = raz_visibility \ +#define ID_UNALLOCATED(crm, op2) { \ + .reg_desc = { \ + Op0(3), Op1(0), CRn(0), CRm(crm), Op2(op2), \ + .access = access_id_reg, \ + .get_user = get_id_reg, \ + .set_user = set_id_reg, \ + .visibility = raz_visibility \ + }, \ + .ftr_bits = NULL, \ + .writable_mask = 0, \ + .read_kvm_sanitised_reg = NULL, \ } /* @@ -363,15 +404,20 @@ static int set_id_dfr0_el1(struct kvm_vcpu *vcpu, * For now, these are exposed just like unallocated ID regs: they appear * RAZ for the guest. */ -#define ID_HIDDEN(name) { \ - SYS_DESC(SYS_##name), \ - .access = access_id_reg, \ - .get_user = get_id_reg, \ - .set_user = set_id_reg, \ - .visibility = raz_visibility, \ +#define ID_HIDDEN(name) { \ + .reg_desc = { \ + SYS_DESC(SYS_##name), \ + .access = access_id_reg, \ + .get_user = get_id_reg, \ + .set_user = set_id_reg, \ + .visibility = raz_visibility, \ + }, \ + .ftr_bits = NULL, \ + .writable_mask = 0, \ + .read_kvm_sanitised_reg = NULL, \ } -static const struct sys_reg_desc id_reg_descs[] = { +static const struct id_reg_desc id_reg_descs[KVM_ARM_ID_REG_NUM] = { /* * ID regs: all ID_SANITISED() entries here must have corresponding * entries in arm64_ftr_regs[]. 
@@ -381,9 +427,13 @@ static const struct sys_reg_desc id_reg_descs[] = { /* CRm=1 */ AA32_ID_SANITISED(ID_PFR0_EL1), AA32_ID_SANITISED(ID_PFR1_EL1), - { SYS_DESC(SYS_ID_DFR0_EL1), .access = access_id_reg, - .get_user = get_id_reg, .set_user = set_id_dfr0_el1, - .visibility = aa32_id_visibility, }, + { .reg_desc = { + SYS_DESC(SYS_ID_DFR0_EL1), + .access = access_id_reg, + .get_user = get_id_reg, + .set_user = set_id_dfr0_el1, + .visibility = aa32_id_visibility, }, + }, ID_HIDDEN(ID_AFR0_EL1), AA32_ID_SANITISED(ID_MMFR0_EL1), AA32_ID_SANITISED(ID_MMFR1_EL1), @@ -412,8 +462,12 @@ static const struct sys_reg_desc id_reg_descs[] = { /* AArch64 ID registers */ /* CRm=4 */ - { SYS_DESC(SYS_ID_AA64PFR0_EL1), .access = access_id_reg, - .get_user = get_id_reg, .set_user = set_id_aa64pfr0_el1, }, + { .reg_desc = { + SYS_DESC(SYS_ID_AA64PFR0_EL1), + .access = access_id_reg, + .get_user = get_id_reg, + .set_user = set_id_aa64pfr0_el1, }, + }, ID_SANITISED(ID_AA64PFR1_EL1), ID_UNALLOCATED(4, 2), ID_UNALLOCATED(4, 3), @@ -423,8 +477,12 @@ static const struct sys_reg_desc id_reg_descs[] = { ID_UNALLOCATED(4, 7), /* CRm=5 */ - { SYS_DESC(SYS_ID_AA64DFR0_EL1), .access = access_id_reg, - .get_user = get_id_reg, .set_user = set_id_aa64dfr0_el1, }, + { .reg_desc = { + SYS_DESC(SYS_ID_AA64DFR0_EL1), + .access = access_id_reg, + .get_user = get_id_reg, + .set_user = set_id_aa64dfr0_el1, }, + }, ID_SANITISED(ID_AA64DFR1_EL1), ID_UNALLOCATED(5, 2), ID_UNALLOCATED(5, 3), @@ -454,6 +512,17 @@ static const struct sys_reg_desc id_reg_descs[] = { ID_UNALLOCATED(7, 7), }; +static const struct sys_reg_desc *id_params_to_desc(struct sys_reg_params *params) +{ + u32 id; + + id = reg_to_encoding(params); + if (is_id_reg(id)) + return &id_reg_descs[IDREG_IDX(id)].reg_desc; + + return NULL; +} + /** * emulate_id_reg - Emulate a guest access to an AArch64 CPU ID feature register * @vcpu: The VCPU pointer @@ -465,9 +534,9 @@ int emulate_id_reg(struct kvm_vcpu *vcpu, struct sys_reg_params *params) { const struct sys_reg_desc *r; - r = find_reg(params, id_reg_descs, ARRAY_SIZE(id_reg_descs)); + r = id_params_to_desc(params); - if (likely(r)) { + if (r) { perform_access(vcpu, params, r); } else { print_sys_reg_msg(params, @@ -481,32 +550,91 @@ int emulate_id_reg(struct kvm_vcpu *vcpu, struct sys_reg_params *params) int kvm_arm_get_id_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg) { - return kvm_sys_reg_get_user(vcpu, reg, - id_reg_descs, ARRAY_SIZE(id_reg_descs)); + u64 __user *uaddr = (u64 __user *)(unsigned long)reg->addr; + const struct sys_reg_desc *r; + struct sys_reg_params params; + u64 val; + int ret; + + if (!index_to_params(reg->id, ¶ms)) + return -ENOENT; + + r = id_params_to_desc(¶ms); + if (!r) + return -ENOENT; + + if (r->get_user) { + ret = (r->get_user)(vcpu, r, &val); + } else { + ret = 0; + val = IDREG(vcpu->kvm, reg_to_encoding(r)); + } + + if (!ret) + ret = put_user(val, uaddr); + + return ret; } int kvm_arm_set_id_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg) { - return kvm_sys_reg_set_user(vcpu, reg, - id_reg_descs, ARRAY_SIZE(id_reg_descs)); + u64 __user *uaddr = (u64 __user *)(unsigned long)reg->addr; + const struct sys_reg_desc *r; + struct sys_reg_params params; + u64 val; + int ret; + + if (!index_to_params(reg->id, ¶ms)) + return -ENOENT; + + r = id_params_to_desc(¶ms); + if (!r) + return -ENOENT; + + if (get_user(val, uaddr)) + return -EFAULT; + + if (sysreg_user_write_ignore(vcpu, r)) + return 0; + + if (r->set_user) { + ret = (r->set_user)(vcpu, r, val); + } else { + 
WARN_ONCE(1, "ID register set_user callback is NULL\n");
+		ret = 0;
+	}
+
+	return ret;
 }

 bool kvm_arm_check_idreg_table(void)
 {
-	return check_sysreg_table(id_reg_descs, ARRAY_SIZE(id_reg_descs), false);
+	unsigned int i;
+
+	for (i = 0; i < ARRAY_SIZE(id_reg_descs); i++) {
+		const struct sys_reg_desc *r = &id_reg_descs[i].reg_desc;
+
+		if (!is_id_reg(reg_to_encoding(r))) {
+			kvm_err("id_reg table %pS entry %d not set correctly\n",
+				&id_reg_descs[i].reg_desc, i);
+			return false;
+		}
+	}
+
+	return true;
 }

 int kvm_arm_walk_id_regs(struct kvm_vcpu *vcpu, u64 __user *uind)
 {
-	const struct sys_reg_desc *i2, *end2;
+	const struct id_reg_desc *i2, *end2;
 	unsigned int total = 0;
 	int err;

 	i2 = id_reg_descs;
 	end2 = id_reg_descs + ARRAY_SIZE(id_reg_descs);

-	while (i2 != end2) {
-		err = walk_one_sys_reg(vcpu, i2++, &uind, &total);
+	for (; i2 != end2; i2++) {
+		err = walk_one_sys_reg(vcpu, &(i2->reg_desc), &uind, &total);
 		if (err)
 			return err;
 	}
@@ -524,21 +652,12 @@ void kvm_arm_init_id_regs(struct kvm *kvm)
 	u64 val;

 	for (i = 0; i < ARRAY_SIZE(id_reg_descs); i++) {
-		id = reg_to_encoding(&id_reg_descs[i]);
-		if (WARN_ON_ONCE(!is_id_reg(id)))
-			/* Shouldn't happen */
-			continue;
-
-		/*
-		 * Some hidden ID registers which are not in arm64_ftr_regs[]
-		 * would cause warnings from read_sanitised_ftr_reg().
-		 * Skip those ID registers to avoid the warnings.
-		 */
-		if (id_reg_descs[i].visibility == raz_visibility)
-			/* Hidden or reserved ID register */
-			continue;
-
-		val = read_sanitised_ftr_reg(id);
+		id = reg_to_encoding(&id_reg_descs[i].reg_desc);
+
+		val = 0;
+		if (id_reg_descs[i].read_kvm_sanitised_reg)
+			val = id_reg_descs[i].read_kvm_sanitised_reg(&id_reg_descs[i]);
+
 		IDREG(kvm, id) = val;
 	}
diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index 3af9f85aa976..1d7fb0acf154 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -2541,7 +2541,7 @@ int kvm_handle_sys_reg(struct kvm_vcpu *vcpu)
  * Userspace API
  *****************************************************************************/

-static bool index_to_params(u64 id, struct sys_reg_params *params)
+bool index_to_params(u64 id, struct sys_reg_params *params)
 {
 	switch (id & KVM_REG_SIZE_MASK) {
 	case KVM_REG_SIZE_U64:
diff --git a/arch/arm64/kvm/sys_regs.h b/arch/arm64/kvm/sys_regs.h
index 326ca28c1117..15a0a1e2fe99 100644
--- a/arch/arm64/kvm/sys_regs.h
+++ b/arch/arm64/kvm/sys_regs.h
@@ -223,6 +223,7 @@ static inline bool is_id_reg(u32 id)

 void perform_access(struct kvm_vcpu *vcpu, struct sys_reg_params *params,
 		    const struct sys_reg_desc *r);
+bool index_to_params(u64 id, struct sys_reg_params *params);
 const struct sys_reg_desc *get_reg_by_id(u64 id,
 					 const struct sys_reg_desc table[],
 					 unsigned int num);
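The overall shape of the new table can be illustrated outside the kernel: a base descriptor wrapped in a richer per-register descriptor with an optional sanitisation callback, where a NULL callback defaults the register to 0 (as kvm_arm_init_id_regs() now does). Everything in this sketch (names, encodings, the fixed value) is hypothetical:

#include <stdint.h>
#include <stdio.h>

/* Hypothetical, pared-down mirror of the id_reg_desc idea. */
struct base_desc {
	const char *name;
	uint32_t encoding;	/* placeholder encodings, not real ones */
};

struct id_desc {
	struct base_desc desc;
	uint64_t writable_mask;
	/* NULL means "no KVM-specific sanitisation": default to 0. */
	uint64_t (*read_sanitised)(const struct id_desc *d);
};

static uint64_t read_fixed(const struct id_desc *d)
{
	return 0x1100000000000000ULL;	/* e.g. CSV2 = CSV3 = 1 */
}

static const struct id_desc table[] = {
	{ { "ID_AA64PFR0_EL1", 0x400 }, 0, read_fixed },
	{ { "ID_AA64PFR1_EL1", 0x401 }, 0, NULL },
};

int main(void)
{
	for (size_t i = 0; i < sizeof(table) / sizeof(table[0]); i++) {
		uint64_t val = table[i].read_sanitised ?
			       table[i].read_sanitised(&table[i]) : 0;
		printf("%s = %#llx\n", table[i].desc.name,
		       (unsigned long long)val);
	}
	return 0;
}

The kernel table additionally carries ftr_bits and a writable mask per entry, which the next patch feeds into the userspace write path.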
From patchwork Sun Apr 2 18:37:35 2023

Date: Sun, 2 Apr 2023 18:37:35 +0000
In-Reply-To: <20230402183735.3011540-1-jingzhangos@google.com>
References: <20230402183735.3011540-1-jingzhangos@google.com>
Message-ID: <20230402183735.3011540-7-jingzhangos@google.com>
Subject: [PATCH v5 6/6] KVM: arm64: Refactor writings for PMUVer/CSV2/CSV3
From: Jing Zhang
To: KVM, KVMARM, ARMLinux, Marc Zyngier, Oliver Upton
Cc: Will Deacon, Paolo Bonzini, James Morse, Alexandru Elisei, Suzuki K Poulose, Fuad Tabba, Reiji Watanabe, Raghavendra Rao Ananta, Jing Zhang
Refactor the writes to ID_AA64PFR0_EL1.[CSV2|CSV3], ID_AA64DFR0_EL1.PMUVer and ID_DFR0_EL1.PerfMon to use the utilities introduced by the ID register descriptor. No functional change is intended.

Co-developed-by: Reiji Watanabe
Signed-off-by: Reiji Watanabe
Signed-off-by: Jing Zhang
---
 arch/arm64/include/asm/cpufeature.h |   5 +
 arch/arm64/kernel/cpufeature.c      |   8 +-
 arch/arm64/kvm/id_regs.c            | 262 +++++++++++++++++++---------
 arch/arm64/kvm/sys_regs.c           |   3 +-
 arch/arm64/kvm/sys_regs.h           |   2 +-
 5 files changed, 191 insertions(+), 89 deletions(-)

diff --git a/arch/arm64/include/asm/cpufeature.h b/arch/arm64/include/asm/cpufeature.h
index 6bf013fb110d..f17e74afe3e9 100644
--- a/arch/arm64/include/asm/cpufeature.h
+++ b/arch/arm64/include/asm/cpufeature.h
@@ -915,6 +915,7 @@ static inline unsigned int get_vmid_bits(u64 mmfr1)
 	return 8;
 }

+s64 arm64_ftr_safe_value(const struct arm64_ftr_bits *ftrp, s64 new, s64 cur);
 struct arm64_ftr_reg *get_arm64_ftr_reg(u32 sys_id);

 extern struct arm64_ftr_override id_aa64mmfr1_override;
@@ -925,6 +926,10 @@ extern struct arm64_ftr_override id_aa64smfr0_override;
 extern struct arm64_ftr_override id_aa64isar1_override;
 extern struct arm64_ftr_override id_aa64isar2_override;

+extern const struct arm64_ftr_bits ftr_id_dfr0[];
+extern const struct arm64_ftr_bits ftr_id_aa64pfr0[];
+extern const struct arm64_ftr_bits ftr_id_aa64dfr0[];
+
 u32 get_kvm_ipa_limit(void);
 void dump_cpu_features(void);

diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index c331c49a7d19..5b0e3379e5f8 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -225,7 +225,7 @@ static const struct arm64_ftr_bits ftr_id_aa64isar2[] = {
 	ARM64_FTR_END,
 };

-static const struct arm64_ftr_bits ftr_id_aa64pfr0[] = {
+const struct arm64_ftr_bits ftr_id_aa64pfr0[] = {
 	ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64PFR0_EL1_CSV3_SHIFT, 4, 0),
 	ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64PFR0_EL1_CSV2_SHIFT, 4, 0),
 	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64PFR0_EL1_DIT_SHIFT, 4, 0),
@@ -426,7 +426,7 @@ static const struct arm64_ftr_bits ftr_id_mmfr0[] = {
 	ARM64_FTR_END,
 };

-static const struct arm64_ftr_bits ftr_id_aa64dfr0[] = {
+const struct arm64_ftr_bits ftr_id_aa64dfr0[] = {
 	S_ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64DFR0_EL1_DoubleLock_SHIFT, 4, 0),
 	ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64DFR0_EL1_PMSVer_SHIFT, 4, 0),
 	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64DFR0_EL1_CTX_CMPs_SHIFT, 4, 0),
@@ -578,7 +578,7 @@ static const struct arm64_ftr_bits ftr_id_pfr2[] = {
 	ARM64_FTR_END,
 };

-static const struct arm64_ftr_bits ftr_id_dfr0[] = {
+const struct arm64_ftr_bits ftr_id_dfr0[] = {
 	/* [31:28] TraceFilt */
 	S_ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_EXACT, ID_DFR0_EL1_PerfMon_SHIFT, 4, 0),
 	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_DFR0_EL1_MProfDbg_SHIFT, 4, 0),
@@ -791,7 +791,7 @@ static u64 arm64_ftr_set_value(const struct arm64_ftr_bits *ftrp, s64 reg,
 	return reg;
 }
-static s64 arm64_ftr_safe_value(const struct arm64_ftr_bits *ftrp, s64 new, +s64 arm64_ftr_safe_value(const struct arm64_ftr_bits *ftrp, s64 new, s64 cur) { s64 ret = 0; diff --git a/arch/arm64/kvm/id_regs.c b/arch/arm64/kvm/id_regs.c index af86001e2686..395eaf84a0ab 100644 --- a/arch/arm64/kvm/id_regs.c +++ b/arch/arm64/kvm/id_regs.c @@ -39,6 +39,64 @@ struct id_reg_desc { u64 (*read_kvm_sanitised_reg)(const struct id_reg_desc *idr); }; +static struct id_reg_desc id_reg_descs[]; + +/** + * arm64_check_features() - Check if a feature register value constitutes + * a subset of features indicated by @limit. + * + * @ftrp: Pointer to an array of arm64_ftr_bits. It must be terminated by + * an item whose width field is zero. + * @writable_mask: Indicates writable feature bits. + * @val: The feature register value to check + * @limit: The limit value of the feature register + * + * This function will check if each feature field of @val is the "safe" value + * against @limit based on @ftrp[], each of which specifies the target field + * (shift, width), whether or not the field is for a signed value (sign), + * how the field is determined to be "safe" (type), and the safe value + * (safe_val) when type == FTR_EXACT (safe_val won't be used by this + * function when type != FTR_EXACT). Any other fields in arm64_ftr_bits + * won't be used by this function. If a field value in @val is the same + * as the one in @limit, it is always considered the safe value regardless + * of the type. For register fields that are not in writable, only the value + * in @limit is considered the safe value. + * + * Return: 0 if all the fields are safe. Otherwise, return negative errno. + */ +static int arm64_check_features(const struct arm64_ftr_bits *ftrp, + u64 writable_mask, u64 val, u64 limit) +{ + u64 mask = 0; + + for (; ftrp && ftrp->width; ftrp++) { + s64 f_val, f_lim, safe_val; + u64 ftr_mask; + + ftr_mask = arm64_ftr_mask(ftrp); + if ((ftr_mask & writable_mask) != ftr_mask) + continue; + + f_val = arm64_ftr_value(ftrp, val); + f_lim = arm64_ftr_value(ftrp, limit); + mask |= ftr_mask; + + if (f_val == f_lim) + safe_val = f_val; + else + safe_val = arm64_ftr_safe_value(ftrp, f_val, f_lim); + + if (safe_val != f_val) + return -E2BIG; + } + + /* For fields that are not writable, values in limit are the safe values. 
*/ + if ((val & ~mask) != (limit & ~mask)) + return -E2BIG; + + return 0; +} + static u8 vcpu_pmuver(const struct kvm_vcpu *vcpu) { if (kvm_vcpu_has_pmu(vcpu)) @@ -84,7 +142,6 @@ u64 kvm_arm_read_id_reg(const struct kvm_vcpu *vcpu, u32 id) case SYS_ID_AA64PFR0_EL1: if (!vcpu_has_sve(vcpu)) val &= ~ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_SVE); - val &= ~ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_AMU); if (kvm_vgic_global_state.type == VGIC_V3) { val &= ~ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_GIC); val |= FIELD_PREP(ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_GIC), 1); @@ -111,15 +168,10 @@ u64 kvm_arm_read_id_reg(const struct kvm_vcpu *vcpu, u32 id) val &= ~ARM64_FEATURE_MASK(ID_AA64ISAR2_EL1_WFxT); break; case SYS_ID_AA64DFR0_EL1: - /* Limit debug to ARMv8.0 */ - val &= ~ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_DebugVer); - val |= FIELD_PREP(ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_DebugVer), 6); /* Set PMUver to the required version */ val &= ~ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_PMUVer); val |= FIELD_PREP(ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_PMUVer), vcpu_pmuver(vcpu)); - /* Hide SPE from guests */ - val &= ~ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_PMSVer); break; case SYS_ID_DFR0_EL1: val &= ~ARM64_FEATURE_MASK(ID_DFR0_EL1_PerfMon); @@ -178,9 +230,16 @@ static int get_id_reg(struct kvm_vcpu *vcpu, const struct sys_reg_desc *rd, static int set_id_reg(struct kvm_vcpu *vcpu, const struct sys_reg_desc *rd, u64 val) { - /* This is what we mean by invariant: you can't change it. */ - if (val != read_id_reg(vcpu, rd)) - return -EINVAL; + int ret; + int id = reg_to_encoding(rd); + const struct id_reg_desc *idr = &id_reg_descs[IDREG_IDX(id)]; + + ret = arm64_check_features(idr->ftr_bits, idr->writable_mask, val, + idr->read_kvm_sanitised_reg ? idr->read_kvm_sanitised_reg(idr) : 0); + if (ret) + return ret; + + IDREG(vcpu->kvm, id) = val; return 0; } @@ -219,12 +278,39 @@ static u64 general_read_kvm_sanitised_reg(const struct id_reg_desc *idr) return read_sanitised_ftr_reg(reg_to_encoding(&idr->reg_desc)); } +static u64 read_sanitised_id_aa64pfr0_el1(const struct id_reg_desc *idr) +{ + u64 val; + u32 id = reg_to_encoding(&idr->reg_desc); + + val = read_sanitised_ftr_reg(id); + /* + * The default is to expose CSV2 == 1 if the HW isn't affected. + * Although this is a per-CPU feature, we make it global because + * asymmetric systems are just a nuisance. + * + * Userspace can override this as long as it doesn't promise + * the impossible. + */ + if (arm64_get_spectre_v2_state() == SPECTRE_UNAFFECTED) { + val &= ~ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_CSV2); + val |= FIELD_PREP(ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_CSV2), 1); + } + if (arm64_get_meltdown_state() == SPECTRE_UNAFFECTED) { + val &= ~ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_CSV3); + val |= FIELD_PREP(ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_CSV3), 1); + } + + val &= ~ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_AMU); + + return val; +} + static int set_id_aa64pfr0_el1(struct kvm_vcpu *vcpu, const struct sys_reg_desc *rd, u64 val) { u8 csv2, csv3; - u64 sval = val; /* * Allow AA64PFR0_EL1.CSV2 to be set from userspace as long as @@ -232,26 +318,37 @@ static int set_id_aa64pfr0_el1(struct kvm_vcpu *vcpu, * guest could otherwise be covered in ectoplasmic residue). 
*/ csv2 = cpuid_feature_extract_unsigned_field(val, ID_AA64PFR0_EL1_CSV2_SHIFT); - if (csv2 > 1 || - (csv2 && arm64_get_spectre_v2_state() != SPECTRE_UNAFFECTED)) + if (csv2 > 1 || (csv2 && arm64_get_spectre_v2_state() != SPECTRE_UNAFFECTED)) return -EINVAL; /* Same thing for CSV3 */ csv3 = cpuid_feature_extract_unsigned_field(val, ID_AA64PFR0_EL1_CSV3_SHIFT); - if (csv3 > 1 || - (csv3 && arm64_get_meltdown_state() != SPECTRE_UNAFFECTED)) + if (csv3 > 1 || (csv3 && arm64_get_meltdown_state() != SPECTRE_UNAFFECTED)) return -EINVAL; - /* We can only differ with CSV[23], and anything else is an error */ - val ^= read_id_reg(vcpu, rd); - val &= ~(ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_CSV2) | - ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_CSV3)); - if (val) - return -EINVAL; + return set_id_reg(vcpu, rd, val); +} - IDREG(vcpu->kvm, reg_to_encoding(rd)) = sval; +static u64 read_sanitised_id_aa64dfr0_el1(const struct id_reg_desc *idr) +{ + u64 val; + u32 id = reg_to_encoding(&idr->reg_desc); - return 0; + val = read_sanitised_ftr_reg(id); + /* Limit debug to ARMv8.0 */ + val &= ~ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_DebugVer); + val |= FIELD_PREP(ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_DebugVer), 6); + /* + * Initialise the default PMUver before there is a chance to + * create an actual PMU. + */ + val &= ~ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_PMUVer); + val |= FIELD_PREP(ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_PMUVer), + kvm_arm_pmu_get_pmuver_limit()); + /* Hide SPE from guests */ + val &= ~ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_PMSVer); + + return val; } static int set_id_aa64dfr0_el1(struct kvm_vcpu *vcpu, @@ -260,6 +357,7 @@ static int set_id_aa64dfr0_el1(struct kvm_vcpu *vcpu, { u8 pmuver, host_pmuver; bool valid_pmu; + int ret; host_pmuver = kvm_arm_pmu_get_pmuver_limit(); @@ -279,23 +377,25 @@ static int set_id_aa64dfr0_el1(struct kvm_vcpu *vcpu, if (kvm_vcpu_has_pmu(vcpu) != valid_pmu) return -EINVAL; - /* We can only differ with PMUver, and anything else is an error */ - val ^= read_id_reg(vcpu, rd); - val &= ~ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_PMUVer); - if (val) - return -EINVAL; - if (valid_pmu) { mutex_lock(&vcpu->kvm->arch.config_lock); - IDREG(vcpu->kvm, SYS_ID_AA64DFR0_EL1) &= ~ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_PMUVer); - IDREG(vcpu->kvm, SYS_ID_AA64DFR0_EL1) |= - FIELD_PREP(ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_PMUVer), pmuver); + ret = set_id_reg(vcpu, rd, val); + if (ret) { + mutex_unlock(&vcpu->kvm->arch.config_lock); + return ret; + } IDREG(vcpu->kvm, SYS_ID_DFR0_EL1) &= ~ARM64_FEATURE_MASK(ID_DFR0_EL1_PerfMon); IDREG(vcpu->kvm, SYS_ID_DFR0_EL1) |= FIELD_PREP(ARM64_FEATURE_MASK(ID_DFR0_EL1_PerfMon), pmuver_to_perfmon(pmuver)); mutex_unlock(&vcpu->kvm->arch.config_lock); } else { + /* We can only differ with PMUver, and anything else is an error */ + val ^= read_id_reg(vcpu, rd); + val &= ~ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_PMUVer); + if (val) + return -EINVAL; + assign_bit(KVM_ARCH_FLAG_VCPU_HAS_IMP_DEF_PMU, &vcpu->kvm->arch.flags, pmuver == ID_AA64DFR0_EL1_PMUVer_IMP_DEF); } @@ -303,12 +403,29 @@ static int set_id_aa64dfr0_el1(struct kvm_vcpu *vcpu, return 0; } +static u64 read_sanitised_id_dfr0_el1(const struct id_reg_desc *idr) +{ + u64 val; + u32 id = reg_to_encoding(&idr->reg_desc); + + val = read_sanitised_ftr_reg(id); + /* + * Initialise the default PMUver before there is a chance to + * create an actual PMU. 
+ */ + val &= ~ARM64_FEATURE_MASK(ID_DFR0_EL1_PerfMon); + val |= FIELD_PREP(ARM64_FEATURE_MASK(ID_DFR0_EL1_PerfMon), kvm_arm_pmu_get_pmuver_limit()); + + return val; +} + static int set_id_dfr0_el1(struct kvm_vcpu *vcpu, const struct sys_reg_desc *rd, u64 val) { u8 perfmon, host_perfmon; bool valid_pmu; + int ret; host_perfmon = pmuver_to_perfmon(kvm_arm_pmu_get_pmuver_limit()); @@ -329,23 +446,25 @@ static int set_id_dfr0_el1(struct kvm_vcpu *vcpu, if (kvm_vcpu_has_pmu(vcpu) != valid_pmu) return -EINVAL; - /* We can only differ with PerfMon, and anything else is an error */ - val ^= read_id_reg(vcpu, rd); - val &= ~ARM64_FEATURE_MASK(ID_DFR0_EL1_PerfMon); - if (val) - return -EINVAL; - if (valid_pmu) { mutex_lock(&vcpu->kvm->arch.config_lock); - IDREG(vcpu->kvm, SYS_ID_DFR0_EL1) &= ~ARM64_FEATURE_MASK(ID_DFR0_EL1_PerfMon); - IDREG(vcpu->kvm, SYS_ID_DFR0_EL1) |= - FIELD_PREP(ARM64_FEATURE_MASK(ID_DFR0_EL1_PerfMon), perfmon); + ret = set_id_reg(vcpu, rd, val); + if (ret) { + mutex_unlock(&vcpu->kvm->arch.config_lock); + return ret; + } IDREG(vcpu->kvm, SYS_ID_AA64DFR0_EL1) &= ~ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_PMUVer); IDREG(vcpu->kvm, SYS_ID_AA64DFR0_EL1) |= FIELD_PREP(ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_PMUVer), perfmon_to_pmuver(perfmon)); mutex_unlock(&vcpu->kvm->arch.config_lock); } else { + /* We can only differ with PerfMon, and anything else is an error */ + val ^= read_id_reg(vcpu, rd); + val &= ~ARM64_FEATURE_MASK(ID_DFR0_EL1_PerfMon); + if (val) + return -EINVAL; + assign_bit(KVM_ARCH_FLAG_VCPU_HAS_IMP_DEF_PMU, &vcpu->kvm->arch.flags, perfmon == ID_DFR0_EL1_PerfMon_IMPDEF); } @@ -354,14 +473,16 @@ static int set_id_dfr0_el1(struct kvm_vcpu *vcpu, } /* sys_reg_desc initialiser for known cpufeature ID registers */ +#define SYS_DESC_SANITISED(name) { \ + SYS_DESC(SYS_##name), \ + .access = access_id_reg, \ + .get_user = get_id_reg, \ + .set_user = set_id_reg, \ + .visibility = id_visibility, \ +} + #define ID_SANITISED(name) { \ - .reg_desc = { \ - SYS_DESC(SYS_##name), \ - .access = access_id_reg, \ - .get_user = get_id_reg, \ - .set_user = set_id_reg, \ - .visibility = id_visibility, \ - }, \ + .reg_desc = SYS_DESC_SANITISED(name), \ .ftr_bits = NULL, \ .writable_mask = 0, \ .read_kvm_sanitised_reg = general_read_kvm_sanitised_reg, \ @@ -417,7 +538,7 @@ static int set_id_dfr0_el1(struct kvm_vcpu *vcpu, .read_kvm_sanitised_reg = NULL, \ } -static const struct id_reg_desc id_reg_descs[KVM_ARM_ID_REG_NUM] = { +static struct id_reg_desc id_reg_descs[KVM_ARM_ID_REG_NUM] = { /* * ID regs: all ID_SANITISED() entries here must have corresponding * entries in arm64_ftr_regs[]. 
@@ -433,6 +554,9 @@ static const struct id_reg_desc id_reg_descs[KVM_ARM_ID_REG_NUM] = { .get_user = get_id_reg, .set_user = set_id_dfr0_el1, .visibility = aa32_id_visibility, }, + .ftr_bits = ftr_id_dfr0, + .writable_mask = ID_DFR0_EL1_PerfMon_MASK, + .read_kvm_sanitised_reg = read_sanitised_id_dfr0_el1, }, ID_HIDDEN(ID_AFR0_EL1), AA32_ID_SANITISED(ID_MMFR0_EL1), @@ -467,6 +591,9 @@ static const struct id_reg_desc id_reg_descs[KVM_ARM_ID_REG_NUM] = { .access = access_id_reg, .get_user = get_id_reg, .set_user = set_id_aa64pfr0_el1, }, + .ftr_bits = ftr_id_aa64pfr0, + .writable_mask = ID_AA64PFR0_EL1_CSV2_MASK | ID_AA64PFR0_EL1_CSV3_MASK, + .read_kvm_sanitised_reg = read_sanitised_id_aa64pfr0_el1, }, ID_SANITISED(ID_AA64PFR1_EL1), ID_UNALLOCATED(4, 2), @@ -482,6 +609,9 @@ static const struct id_reg_desc id_reg_descs[KVM_ARM_ID_REG_NUM] = { .access = access_id_reg, .get_user = get_id_reg, .set_user = set_id_aa64dfr0_el1, }, + .ftr_bits = ftr_id_aa64dfr0, + .writable_mask = ID_AA64DFR0_EL1_PMUVer_MASK, + .read_kvm_sanitised_reg = read_sanitised_id_aa64dfr0_el1, }, ID_SANITISED(ID_AA64DFR1_EL1), ID_UNALLOCATED(5, 2), @@ -607,7 +737,7 @@ int kvm_arm_set_id_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg) return ret; } -bool kvm_arm_check_idreg_table(void) +bool kvm_arm_idreg_table_init(void) { unsigned int i; @@ -641,10 +771,7 @@ int kvm_arm_walk_id_regs(struct kvm_vcpu *vcpu, u64 __user *uind) return total; } -/* - * Set the guest's ID registers that are defined in id_reg_descs[] - * with ID_SANITISED() to the host's sanitized value. - */ +/* Initialize the guest's ID registers with KVM sanitised values. */ void kvm_arm_init_id_regs(struct kvm *kvm) { int i; @@ -660,33 +787,4 @@ void kvm_arm_init_id_regs(struct kvm *kvm) IDREG(kvm, id) = val; } - - /* - * The default is to expose CSV2 == 1 if the HW isn't affected. - * Although this is a per-CPU feature, we make it global because - * asymmetric systems are just a nuisance. - * - * Userspace can override this as long as it doesn't promise - * the impossible. - */ - val = IDREG(kvm, SYS_ID_AA64PFR0_EL1); - - if (arm64_get_spectre_v2_state() == SPECTRE_UNAFFECTED) { - val &= ~ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_CSV2); - val |= FIELD_PREP(ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_CSV2), 1); - } - if (arm64_get_meltdown_state() == SPECTRE_UNAFFECTED) { - val &= ~ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_CSV3); - val |= FIELD_PREP(ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_CSV3), 1); - } - - IDREG(kvm, SYS_ID_AA64PFR0_EL1) = val; - - /* - * Initialise the default PMUver before there is a chance to - * create an actual PMU. - */ - IDREG(kvm, SYS_ID_AA64DFR0_EL1) &= ~ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_PMUVer); - IDREG(kvm, SYS_ID_AA64DFR0_EL1) |= - FIELD_PREP(ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_PMUVer), kvm_arm_pmu_get_pmuver_limit()); } diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c index 1d7fb0acf154..27cff8022754 100644 --- a/arch/arm64/kvm/sys_regs.c +++ b/arch/arm64/kvm/sys_regs.c @@ -2940,14 +2940,13 @@ int __init kvm_sys_reg_table_init(void) unsigned int i; /* Make sure tables are unique and in order. 
*/ - valid &= kvm_arm_check_idreg_table(); valid &= check_sysreg_table(sys_reg_descs, ARRAY_SIZE(sys_reg_descs), false); valid &= check_sysreg_table(cp14_regs, ARRAY_SIZE(cp14_regs), true); valid &= check_sysreg_table(cp14_64_regs, ARRAY_SIZE(cp14_64_regs), true); valid &= check_sysreg_table(cp15_regs, ARRAY_SIZE(cp15_regs), true); valid &= check_sysreg_table(cp15_64_regs, ARRAY_SIZE(cp15_64_regs), true); valid &= check_sysreg_table(invariant_sys_regs, ARRAY_SIZE(invariant_sys_regs), false); - + valid &= kvm_arm_idreg_table_init(); if (!valid) return -EINVAL; diff --git a/arch/arm64/kvm/sys_regs.h b/arch/arm64/kvm/sys_regs.h index 15a0a1e2fe99..df8d26df93ec 100644 --- a/arch/arm64/kvm/sys_regs.h +++ b/arch/arm64/kvm/sys_regs.h @@ -246,7 +246,7 @@ int walk_one_sys_reg(const struct kvm_vcpu *vcpu, int emulate_id_reg(struct kvm_vcpu *vcpu, struct sys_reg_params *params); int kvm_arm_get_id_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg); int kvm_arm_set_id_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg); -bool kvm_arm_check_idreg_table(void); +bool kvm_arm_idreg_table_init(void); int kvm_arm_walk_id_regs(struct kvm_vcpu *vcpu, u64 __user *uind); u64 kvm_arm_read_id_reg(const struct kvm_vcpu *vcpu, u32 id);
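The core of the new userspace write path is the arm64_check_features() loop added in id_regs.c. A hypothetical, self-contained analogue of its logic is sketched below: every writable field of the requested value must be "safe" against the host limit, and all bits outside the writable fields must match the limit exactly. For brevity this sketch hard-codes a "lower is safe" comparison, whereas the real helper consults arm64_ftr_safe_value() so each field follows its own policy:

#include <stdint.h>
#include <stdio.h>

struct ftr_field {
	unsigned int shift;
	unsigned int width;
};

static uint64_t field_mask(const struct ftr_field *f)
{
	return ((1ULL << f->width) - 1) << f->shift;
}

static int check_features(const struct ftr_field *fields, size_t n,
			  uint64_t writable, uint64_t val, uint64_t limit)
{
	uint64_t covered = 0;

	for (size_t i = 0; i < n; i++) {
		uint64_t mask = field_mask(&fields[i]);

		if ((mask & writable) != mask)
			continue;	/* not writable: handled below */

		covered |= mask;
		/* "lower safe": the requested field must not exceed the limit */
		if ((val & mask) > (limit & mask))
			return -1;
	}

	/* Everything outside the writable fields must match the limit. */
	return (val & ~covered) == (limit & ~covered) ? 0 : -1;
}

int main(void)
{
	/* One writable field, CSV2-like, at bits [59:56]; limit exposes 1. */
	const struct ftr_field fields[] = { { 56, 4 } };
	uint64_t writable = 0xFULL << 56;
	uint64_t limit = 1ULL << 56;

	printf("field=1 ok? %d\n",
	       check_features(fields, 1, writable, 1ULL << 56, limit));
	printf("field=2 ok? %d\n",
	       check_features(fields, 1, writable, 2ULL << 56, limit));
	return 0;
}

Running this prints 0 for the first check and -1 for the second, mirroring why set_id_reg() returns -E2BIG when userspace promises more than the host can deliver.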