From patchwork Fri Jun  2 00:51:13 2023
X-Patchwork-Submitter: Jing Zhang
X-Patchwork-Id: 13264623
Date: Fri,  2 Jun 2023 00:51:13 +0000
In-Reply-To: <20230602005118.2899664-1-jingzhangos@google.com>
References: <20230602005118.2899664-1-jingzhangos@google.com>
Message-ID: <20230602005118.2899664-2-jingzhangos@google.com>
Subject: [PATCH v11 1/5] KVM: arm64: Save ID registers' sanitized value per guest
From: Jing Zhang
To: KVM, KVMARM, ARMLinux, Marc Zyngier, Oliver Upton
Cc: Will Deacon, Paolo Bonzini, James Morse,
 Alexandru Elisei, Suzuki K Poulose, Fuad Tabba, Reiji Watanabe,
 Raghavendra Rao Ananta, Jing Zhang

Introduce id_regs[] in kvm_arch as storage for the guest's ID registers,
and save the ID registers' sanitized values in the array at KVM_CREATE_VM.
Use the saved values when the ID registers are read by the guest or by
userspace (via KVM_GET_ONE_REG).

No functional change intended.

Co-developed-by: Reiji Watanabe
Signed-off-by: Reiji Watanabe
Signed-off-by: Jing Zhang
---
 arch/arm64/include/asm/kvm_host.h | 20 +++++++++
 arch/arm64/kvm/arm.c              |  1 +
 arch/arm64/kvm/sys_regs.c         | 72 +++++++++++++++++++++++++------
 arch/arm64/kvm/sys_regs.h         |  7 +++
 4 files changed, 87 insertions(+), 13 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index 7e7e19ef6993..069606170c82 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -178,6 +178,21 @@ struct kvm_smccc_features {
 	unsigned long vendor_hyp_bmap;
 };
 
+/*
+ * Emulated CPU ID registers per VM
+ * (Op0, Op1, CRn, CRm, Op2) of the ID registers to be saved in it
+ * is (3, 0, 0, crm, op2), where 1<=crm<8, 0<=op2<8.
+ *
+ * These emulated idregs are VM-wide, but accessed from the context of a vCPU.
+ * Atomic access to multiple idregs is guarded by kvm_arch.config_lock.
+ */
+#define IDREG_IDX(id)		(((sys_reg_CRm(id) - 1) << 3) | sys_reg_Op2(id))
+#define IDREG(kvm, id)		((kvm)->arch.idregs.regs[IDREG_IDX(id)])
+#define KVM_ARM_ID_REG_NUM	(IDREG_IDX(sys_reg(3, 0, 0, 7, 7)) + 1)
+struct kvm_idregs {
+	u64 regs[KVM_ARM_ID_REG_NUM];
+};
+
 typedef unsigned int pkvm_handle_t;
 
 struct kvm_protected_vm {
@@ -253,6 +268,9 @@ struct kvm_arch {
 	struct kvm_smccc_features smccc_feat;
 	struct maple_tree smccc_filter;
 
+	/* Emulated CPU ID registers */
+	struct kvm_idregs idregs;
+
 	/*
 	 * For an untrusted host VM, 'pkvm.handle' is used to lookup
 	 * the associated pKVM instance in the hypervisor.
@@ -1045,6 +1063,8 @@ int kvm_vm_ioctl_mte_copy_tags(struct kvm *kvm,
 int kvm_vm_ioctl_set_counter_offset(struct kvm *kvm,
 				    struct kvm_arm_counter_offset *offset);
 
+void kvm_arm_init_id_regs(struct kvm *kvm);
+
 /* Guest/host FPSIMD coordination helpers */
 int kvm_arch_vcpu_run_map_fp(struct kvm_vcpu *vcpu);
 void kvm_arch_vcpu_load_fp(struct kvm_vcpu *vcpu);
diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
index 14391826241c..774656a0718d 100644
--- a/arch/arm64/kvm/arm.c
+++ b/arch/arm64/kvm/arm.c
@@ -163,6 +163,7 @@ int kvm_arch_init_vm(struct kvm *kvm, unsigned long type)
 
 	set_default_spectre(kvm);
 	kvm_arm_init_hypercalls(kvm);
+	kvm_arm_init_id_regs(kvm);
 
 	/*
 	 * Initialise the default PMUver before there is a chance to
diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index 71b12094d613..40a9315015af 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -41,6 +41,7 @@
  * 64bit interface.
  */
 
+static u64 kvm_arm_read_id_reg(const struct kvm_vcpu *vcpu, u32 encoding);
 static u64 sys_reg_to_index(const struct sys_reg_desc *reg);
 
 static bool read_from_write_only(struct kvm_vcpu *vcpu,
@@ -364,7 +365,7 @@ static bool trap_loregion(struct kvm_vcpu *vcpu,
			  struct sys_reg_params *p,
			  const struct sys_reg_desc *r)
 {
-	u64 val = read_sanitised_ftr_reg(SYS_ID_AA64MMFR1_EL1);
+	u64 val = kvm_arm_read_id_reg(vcpu, SYS_ID_AA64MMFR1_EL1);
 	u32 sr = reg_to_encoding(r);
 
 	if (!(val & (0xfUL << ID_AA64MMFR1_EL1_LO_SHIFT))) {
@@ -1208,18 +1209,11 @@ static u8 pmuver_to_perfmon(u8 pmuver)
 	}
 }
 
-/* Read a sanitised cpufeature ID register by sys_reg_desc */
-static u64 read_id_reg(const struct kvm_vcpu *vcpu, struct sys_reg_desc const *r)
+static u64 kvm_arm_read_id_reg(const struct kvm_vcpu *vcpu, u32 encoding)
 {
-	u32 id = reg_to_encoding(r);
-	u64 val;
+	u64 val = IDREG(vcpu->kvm, encoding);
 
-	if (sysreg_visible_as_raz(vcpu, r))
-		return 0;
-
-	val = read_sanitised_ftr_reg(id);
-
-	switch (id) {
+	switch (encoding) {
 	case SYS_ID_AA64PFR0_EL1:
 		if (!vcpu_has_sve(vcpu))
 			val &= ~ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_SVE);
@@ -1280,6 +1274,26 @@ static u64 read_id_reg(const struct kvm_vcpu *vcpu, struct sys_reg_desc const *r)
 	return val;
 }
 
+/* Read a sanitised cpufeature ID register by sys_reg_desc */
+static u64 read_id_reg(const struct kvm_vcpu *vcpu, struct sys_reg_desc const *r)
+{
+	if (sysreg_visible_as_raz(vcpu, r))
+		return 0;
+
+	return kvm_arm_read_id_reg(vcpu, reg_to_encoding(r));
+}
+
+/*
+ * Return true if the register's (Op0, Op1, CRn, CRm, Op2) is
+ * (3, 0, 0, crm, op2), where 1<=crm<8, 0<=op2<8.
+ */
+static inline bool is_id_reg(u32 id)
+{
+	return (sys_reg_Op0(id) == 3 && sys_reg_Op1(id) == 0 &&
+		sys_reg_CRn(id) == 0 && sys_reg_CRm(id) >= 1 &&
+		sys_reg_CRm(id) < 8);
+}
+
 static unsigned int id_visibility(const struct kvm_vcpu *vcpu,
				  const struct sys_reg_desc *r)
 {
@@ -2237,6 +2251,8 @@ static const struct sys_reg_desc sys_reg_descs[] = {
 	EL2_REG(SP_EL2, NULL, reset_unknown, 0),
 };
 
+static const struct sys_reg_desc *first_idreg;
+
 static bool trap_dbgdidr(struct kvm_vcpu *vcpu,
			 struct sys_reg_params *p,
			 const struct sys_reg_desc *r)
@@ -2244,8 +2260,8 @@ static bool trap_dbgdidr(struct kvm_vcpu *vcpu,
 	if (p->is_write) {
		return ignore_write(vcpu, p);
 	} else {
-		u64 dfr = read_sanitised_ftr_reg(SYS_ID_AA64DFR0_EL1);
-		u64 pfr = read_sanitised_ftr_reg(SYS_ID_AA64PFR0_EL1);
+		u64 dfr = kvm_arm_read_id_reg(vcpu, SYS_ID_AA64DFR0_EL1);
+		u64 pfr = kvm_arm_read_id_reg(vcpu, SYS_ID_AA64PFR0_EL1);
		u32 el3 = !!cpuid_feature_extract_unsigned_field(pfr, ID_AA64PFR0_EL1_EL3_SHIFT);
 
		p->regval = ((((dfr >> ID_AA64DFR0_EL1_WRPs_SHIFT) & 0xf) << 28) |
@@ -3343,8 +3359,32 @@ int kvm_arm_copy_sys_reg_indices(struct kvm_vcpu *vcpu, u64 __user *uindices)
 	return write_demux_regids(uindices);
 }
 
+/*
+ * Set the guest's ID registers with ID_SANITISED() to the host's sanitized value.
+ */
+void kvm_arm_init_id_regs(struct kvm *kvm)
+{
+	const struct sys_reg_desc *idreg = first_idreg;
+	u32 id = reg_to_encoding(idreg);
+
+	/* Initialize all idregs */
+	while (is_id_reg(id)) {
+		/*
+		 * Some hidden ID registers which are not in arm64_ftr_regs[]
+		 * would cause warnings from read_sanitised_ftr_reg().
+		 * Skip those ID registers to avoid the warnings.
+		 */
+		if (idreg->visibility != raz_visibility)
+			IDREG(kvm, id) = read_sanitised_ftr_reg(id);
+
+		idreg++;
+		id = reg_to_encoding(idreg);
+	}
+}
+
 int __init kvm_sys_reg_table_init(void)
 {
+	struct sys_reg_params params;
 	bool valid = true;
 	unsigned int i;
@@ -3363,5 +3403,11 @@ int __init kvm_sys_reg_table_init(void)
 	for (i = 0; i < ARRAY_SIZE(invariant_sys_regs); i++)
		invariant_sys_regs[i].reset(NULL, &invariant_sys_regs[i]);
 
+	/* Find the first idreg (SYS_ID_PFR0_EL1) in sys_reg_descs. */
+	params = encoding_to_params(SYS_ID_PFR0_EL1);
+	first_idreg = find_reg(&params, sys_reg_descs, ARRAY_SIZE(sys_reg_descs));
+	if (!first_idreg)
+		return -EINVAL;
+
 	return 0;
 }
diff --git a/arch/arm64/kvm/sys_regs.h b/arch/arm64/kvm/sys_regs.h
index 6b11f2cc7146..eba10de2e7ae 100644
--- a/arch/arm64/kvm/sys_regs.h
+++ b/arch/arm64/kvm/sys_regs.h
@@ -27,6 +27,13 @@ struct sys_reg_params {
 	bool is_write;
 };
 
+#define encoding_to_params(reg)					\
+	((struct sys_reg_params){ .Op0 = sys_reg_Op0(reg),	\
+				  .Op1 = sys_reg_Op1(reg),	\
+				  .CRn = sys_reg_CRn(reg),	\
+				  .CRm = sys_reg_CRm(reg),	\
+				  .Op2 = sys_reg_Op2(reg) })
+
 #define esr_sys64_to_params(esr)				\
 	((struct sys_reg_params){ .Op0 = ((esr) >> 20) & 3,	\
				  .Op1 = ((esr) >> 14) & 0x7,	\
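A quick way to sanity-check the IDREG_IDX() arithmetic this patch introduces:
the standalone sketch below (plain C, outside the kernel; idreg_idx() is a
re-derivation of the macro, not the kernel code itself) flattens the same
(CRm, Op2) space, with 1<=crm<8 and 0<=op2<8, into the 56-entry array.

	#include <stdio.h>

	/* Re-derivation of IDREG_IDX(): CRm selects a block of 8 slots,
	 * Op2 selects the slot within the block. */
	static unsigned int idreg_idx(unsigned int crm, unsigned int op2)
	{
		return ((crm - 1) << 3) | op2;
	}

	int main(void)
	{
		/* ID_AA64PFR0_EL1 is encoded as (3, 0, 0, 4, 0): CRm = 4, Op2 = 0 */
		printf("ID_AA64PFR0_EL1 -> index %u\n", idreg_idx(4, 0));  /* 24 */

		/* The last encoding, (3, 0, 0, 7, 7), fixes the array size */
		printf("KVM_ARM_ID_REG_NUM = %u\n", idreg_idx(7, 7) + 1);  /* 56 */
		return 0;
	}
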
From patchwork Fri Jun  2 00:51:14 2023
X-Patchwork-Submitter: Jing Zhang
X-Patchwork-Id: 13264624
Date: Fri,  2 Jun 2023 00:51:14 +0000
In-Reply-To: <20230602005118.2899664-1-jingzhangos@google.com>
References: <20230602005118.2899664-1-jingzhangos@google.com>
Message-ID: <20230602005118.2899664-3-jingzhangos@google.com>
Subject: [PATCH v11 2/5] KVM: arm64: Use per guest ID register for ID_AA64PFR0_EL1.[CSV2|CSV3]
From: Jing Zhang
To: KVM, KVMARM, ARMLinux, Marc Zyngier, Oliver Upton
Cc: Will Deacon, Paolo Bonzini, James Morse, Alexandru Elisei,
 Suzuki K Poulose, Fuad Tabba, Reiji Watanabe, Raghavendra Rao Ananta,
 Jing Zhang

With per guest ID registers, the ID_AA64PFR0_EL1.[CSV2|CSV3] settings from
userspace can be stored in the corresponding ID register itself.

The setting of the CSV bits for protected VMs is removed according to the
discussion from Fuad below:
https://lore.kernel.org/all/CA+EHjTwXA9TprX4jeG+-D+c8v9XG+oFdU1o6TSkvVye145_OvA@mail.gmail.com

Besides the removal of the CSV bits setting for protected VMs and the use of
kvm_arch.config_lock to guard VM-scope idreg accesses, no other functional
change is intended.

Signed-off-by: Jing Zhang
---
 arch/arm64/include/asm/kvm_host.h |  2 --
 arch/arm64/kvm/arm.c              | 17 ---------
 arch/arm64/kvm/sys_regs.c         | 57 +++++++++++++++++++++++++------
 3 files changed, 47 insertions(+), 29 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index 069606170c82..8a2fde6c04c4 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -257,8 +257,6 @@ struct kvm_arch {
 
 	cpumask_var_t supported_cpus;
 
-	u8 pfr0_csv2;
-	u8 pfr0_csv3;
 	struct {
		u8 imp:4;
		u8 unimp:4;
diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
index 774656a0718d..5114521ace60 100644
--- a/arch/arm64/kvm/arm.c
+++ b/arch/arm64/kvm/arm.c
@@ -102,22 +102,6 @@ static int kvm_arm_default_max_vcpus(void)
 	return vgic_present ? kvm_vgic_get_max_vcpus() : KVM_MAX_VCPUS;
 }
 
-static void set_default_spectre(struct kvm *kvm)
-{
-	/*
-	 * The default is to expose CSV2 == 1 if the HW isn't affected.
-	 * Although this is a per-CPU feature, we make it global because
-	 * asymmetric systems are just a nuisance.
-	 *
-	 * Userspace can override this as long as it doesn't promise
-	 * the impossible.
- */ - if (arm64_get_spectre_v2_state() == SPECTRE_UNAFFECTED) - kvm->arch.pfr0_csv2 = 1; - if (arm64_get_meltdown_state() == SPECTRE_UNAFFECTED) - kvm->arch.pfr0_csv3 = 1; -} - /** * kvm_arch_init_vm - initializes a VM data structure * @kvm: pointer to the KVM struct @@ -161,7 +145,6 @@ int kvm_arch_init_vm(struct kvm *kvm, unsigned long type) /* The maximum number of VCPUs is limited by the host's GIC model */ kvm->max_vcpus = kvm_arm_default_max_vcpus(); - set_default_spectre(kvm); kvm_arm_init_hypercalls(kvm); kvm_arm_init_id_regs(kvm); diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c index 40a9315015af..f043811a6725 100644 --- a/arch/arm64/kvm/sys_regs.c +++ b/arch/arm64/kvm/sys_regs.c @@ -1218,10 +1218,6 @@ static u64 kvm_arm_read_id_reg(const struct kvm_vcpu *vcpu, u32 encoding) if (!vcpu_has_sve(vcpu)) val &= ~ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_SVE); val &= ~ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_AMU); - val &= ~ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_CSV2); - val |= FIELD_PREP(ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_CSV2), (u64)vcpu->kvm->arch.pfr0_csv2); - val &= ~ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_CSV3); - val |= FIELD_PREP(ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_CSV3), (u64)vcpu->kvm->arch.pfr0_csv3); if (kvm_vgic_global_state.type == VGIC_V3) { val &= ~ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_GIC); val |= FIELD_PREP(ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_GIC), 1); @@ -1359,6 +1355,7 @@ static int set_id_aa64pfr0_el1(struct kvm_vcpu *vcpu, const struct sys_reg_desc *rd, u64 val) { + u64 new_val = val; u8 csv2, csv3; /* @@ -1384,9 +1381,7 @@ static int set_id_aa64pfr0_el1(struct kvm_vcpu *vcpu, if (val) return -EINVAL; - vcpu->kvm->arch.pfr0_csv2 = csv2; - vcpu->kvm->arch.pfr0_csv3 = csv3; - + IDREG(vcpu->kvm, reg_to_encoding(rd)) = new_val; return 0; } @@ -1472,9 +1467,9 @@ static int set_id_dfr0_el1(struct kvm_vcpu *vcpu, /* * cpufeature ID register user accessors * - * For now, these registers are immutable for userspace, so no values - * are stored, and for set_id_reg() we don't allow the effective value - * to be changed. + * For now, only some registers or some part of registers are mutable for + * userspace. For those registers immutable for userspace, in set_id_reg() + * we don't allow the effective value to be changed. 
*/ static int get_id_reg(struct kvm_vcpu *vcpu, const struct sys_reg_desc *rd, u64 *val) @@ -3177,6 +3172,9 @@ int kvm_sys_reg_get_user(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg, if (!r || sysreg_hidden_user(vcpu, r)) return -ENOENT; + if (is_id_reg(reg_to_encoding(r))) + mutex_lock(&vcpu->kvm->arch.config_lock); + if (r->get_user) { ret = (r->get_user)(vcpu, r, &val); } else { @@ -3184,6 +3182,9 @@ int kvm_sys_reg_get_user(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg, ret = 0; } + if (is_id_reg(reg_to_encoding(r))) + mutex_unlock(&vcpu->kvm->arch.config_lock); + if (!ret) ret = put_user(val, uaddr); @@ -3221,9 +3222,20 @@ int kvm_sys_reg_set_user(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg, if (!r || sysreg_hidden_user(vcpu, r)) return -ENOENT; + /* Only allow userspace to change the idregs before VM running */ + if (is_id_reg(reg_to_encoding(r)) && kvm_vm_has_ran_once(vcpu->kvm)) { + if (val == read_id_reg(vcpu, r)) + return 0; + return -EBUSY; + } + if (sysreg_user_write_ignore(vcpu, r)) return 0; + /* ID regs are global to the VM and cannot be updated concurrently */ + if (is_id_reg(reg_to_encoding(r))) + mutex_lock(&vcpu->kvm->arch.config_lock); + if (r->set_user) { ret = (r->set_user)(vcpu, r, val); } else { @@ -3231,6 +3243,9 @@ int kvm_sys_reg_set_user(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg, ret = 0; } + if (is_id_reg(reg_to_encoding(r))) + mutex_unlock(&vcpu->kvm->arch.config_lock); + return ret; } @@ -3366,6 +3381,7 @@ void kvm_arm_init_id_regs(struct kvm *kvm) { const struct sys_reg_desc *idreg = first_idreg; u32 id = reg_to_encoding(idreg); + u64 val; /* Initialize all idregs */ while (is_id_reg(id)) { @@ -3380,6 +3396,27 @@ void kvm_arm_init_id_regs(struct kvm *kvm) idreg++; id = reg_to_encoding(idreg); } + + /* + * The default is to expose CSV2 == 1 if the HW isn't affected. + * Although this is a per-CPU feature, we make it global because + * asymmetric systems are just a nuisance. + * + * Userspace can override this as long as it doesn't promise + * the impossible. 
+	 */
+	val = IDREG(kvm, SYS_ID_AA64PFR0_EL1);
+
+	if (arm64_get_spectre_v2_state() == SPECTRE_UNAFFECTED) {
+		val &= ~ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_CSV2);
+		val |= FIELD_PREP(ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_CSV2), 1);
+	}
+	if (arm64_get_meltdown_state() == SPECTRE_UNAFFECTED) {
+		val &= ~ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_CSV3);
+		val |= FIELD_PREP(ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_CSV3), 1);
+	}
+
+	IDREG(kvm, SYS_ID_AA64PFR0_EL1) = val;
 }
 
 int __init kvm_sys_reg_table_init(void)
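For context, this is roughly what the new behaviour looks like from the
userspace side. The sketch below is illustrative, not taken from the series:
it assumes a vcpu fd obtained through the usual KVM_CREATE_VM/KVM_CREATE_VCPU
sequence, uses the stock KVM_GET_ONE_REG/KVM_SET_ONE_REG ioctls, and relies on
CSV2 occupying bits [59:56] of ID_AA64PFR0_EL1; error handling is elided.

	#include <stdint.h>
	#include <sys/ioctl.h>
	#include <linux/kvm.h>

	/* Hypothetical helper: clear the guest's view of ID_AA64PFR0_EL1.CSV2. */
	static int clear_csv2(int vcpu_fd)
	{
		uint64_t val;
		struct kvm_one_reg reg = {
			.id   = ARM64_SYS_REG(3, 0, 0, 4, 0),	/* ID_AA64PFR0_EL1 */
			.addr = (uint64_t)&val,
		};

		if (ioctl(vcpu_fd, KVM_GET_ONE_REG, &reg) < 0)
			return -1;

		val &= ~(0xfULL << 56);		/* CSV2 is ID_AA64PFR0_EL1[59:56] */

		/*
		 * With this patch the write only succeeds before the VM has run;
		 * afterwards KVM returns -EBUSY unless the value is unchanged.
		 */
		return ioctl(vcpu_fd, KVM_SET_ONE_REG, &reg);
	}
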
From patchwork Fri Jun  2 00:51:15 2023
X-Patchwork-Submitter: Jing Zhang
X-Patchwork-Id: 13264625
Date: Fri,  2 Jun 2023 00:51:15 +0000
In-Reply-To: <20230602005118.2899664-1-jingzhangos@google.com>
References: <20230602005118.2899664-1-jingzhangos@google.com>
Message-ID: <20230602005118.2899664-4-jingzhangos@google.com>
Subject: [PATCH v11 3/5] KVM: arm64: Use per guest ID register for ID_AA64DFR0_EL1.PMUVer
From: Jing Zhang
To: KVM, KVMARM, ARMLinux, Marc Zyngier, Oliver Upton
Cc: Will Deacon, Paolo Bonzini, James Morse, Alexandru Elisei,
 Suzuki K Poulose, Fuad Tabba, Reiji Watanabe, Raghavendra Rao Ananta,
 Jing Zhang

With per guest ID registers, the PMUver settings from userspace can be
stored in the corresponding ID register.

No functional change intended.

Signed-off-by: Jing Zhang
---
 arch/arm64/include/asm/kvm_host.h | 12 ++++----
 arch/arm64/kvm/arm.c              |  6 ----
 arch/arm64/kvm/sys_regs.c         | 50 +++++++++++++++++++++++--------
 include/kvm/arm_pmu.h             |  9 ++++--
 4 files changed, 52 insertions(+), 25 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index 8a2fde6c04c4..7b0f43373dbe 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -246,6 +246,13 @@ struct kvm_arch {
 #define KVM_ARCH_FLAG_TIMER_PPIS_IMMUTABLE		7
 	/* SMCCC filter initialized for the VM */
 #define KVM_ARCH_FLAG_SMCCC_FILTER_CONFIGURED		8
+	/*
+	 * AA64DFR0_EL1.PMUver was set as ID_AA64DFR0_EL1_PMUVer_IMP_DEF
+	 * or DFR0_EL1.PerfMon was set as ID_DFR0_EL1_PerfMon_IMPDEF from
+	 * userspace for VCPUs without PMU.
+	 */
+#define KVM_ARCH_FLAG_VCPU_HAS_IMP_DEF_PMU		9
+
 	unsigned long flags;
 
 	/*
@@ -257,11 +264,6 @@ struct kvm_arch {
 
 	cpumask_var_t supported_cpus;
 
-	struct {
-		u8 imp:4;
-		u8 unimp:4;
-	} dfr0_pmuver;
-
 	/* Hypercall features firmware registers' descriptor */
 	struct kvm_smccc_features smccc_feat;
 	struct maple_tree smccc_filter;
diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
index 5114521ace60..ca18c09ccf82 100644
--- a/arch/arm64/kvm/arm.c
+++ b/arch/arm64/kvm/arm.c
@@ -148,12 +148,6 @@ int kvm_arch_init_vm(struct kvm *kvm, unsigned long type)
 	kvm_arm_init_hypercalls(kvm);
 	kvm_arm_init_id_regs(kvm);
 
-	/*
-	 * Initialise the default PMUver before there is a chance to
-	 * create an actual PMU.
- */ - kvm->arch.dfr0_pmuver.imp = kvm_arm_pmu_get_pmuver_limit(); - return 0; err_free_cpumask: diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c index f043811a6725..0179df50fcf5 100644 --- a/arch/arm64/kvm/sys_regs.c +++ b/arch/arm64/kvm/sys_regs.c @@ -1178,9 +1178,12 @@ static bool access_arch_timer(struct kvm_vcpu *vcpu, static u8 vcpu_pmuver(const struct kvm_vcpu *vcpu) { if (kvm_vcpu_has_pmu(vcpu)) - return vcpu->kvm->arch.dfr0_pmuver.imp; + return FIELD_GET(ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_PMUVer), + IDREG(vcpu->kvm, SYS_ID_AA64DFR0_EL1)); + else if (test_bit(KVM_ARCH_FLAG_VCPU_HAS_IMP_DEF_PMU, &vcpu->kvm->arch.flags)) + return ID_AA64DFR0_EL1_PMUVer_IMP_DEF; - return vcpu->kvm->arch.dfr0_pmuver.unimp; + return 0; } static u8 perfmon_to_pmuver(u8 perfmon) @@ -1209,6 +1212,26 @@ static u8 pmuver_to_perfmon(u8 pmuver) } } +static void pmuver_update(struct kvm_vcpu *vcpu, u8 pmuver, bool valid_pmu) +{ + u64 val; + + if (valid_pmu) { + val = IDREG(vcpu->kvm, SYS_ID_AA64DFR0_EL1); + val &= ~ID_AA64DFR0_EL1_PMUVer_MASK; + val |= FIELD_PREP(ID_AA64DFR0_EL1_PMUVer_MASK, pmuver); + IDREG(vcpu->kvm, SYS_ID_AA64DFR0_EL1) = val; + + val = IDREG(vcpu->kvm, SYS_ID_DFR0_EL1); + val &= ~ID_DFR0_EL1_PerfMon_MASK; + val |= FIELD_PREP(ID_DFR0_EL1_PerfMon_MASK, pmuver_to_perfmon(pmuver)); + IDREG(vcpu->kvm, SYS_ID_DFR0_EL1) = val; + } else { + assign_bit(KVM_ARCH_FLAG_VCPU_HAS_IMP_DEF_PMU, &vcpu->kvm->arch.flags, + pmuver == ID_AA64DFR0_EL1_PMUVer_IMP_DEF); + } +} + static u64 kvm_arm_read_id_reg(const struct kvm_vcpu *vcpu, u32 encoding) { u64 val = IDREG(vcpu->kvm, encoding); @@ -1416,11 +1439,7 @@ static int set_id_aa64dfr0_el1(struct kvm_vcpu *vcpu, if (val) return -EINVAL; - if (valid_pmu) - vcpu->kvm->arch.dfr0_pmuver.imp = pmuver; - else - vcpu->kvm->arch.dfr0_pmuver.unimp = pmuver; - + pmuver_update(vcpu, pmuver, valid_pmu); return 0; } @@ -1456,11 +1475,7 @@ static int set_id_dfr0_el1(struct kvm_vcpu *vcpu, if (val) return -EINVAL; - if (valid_pmu) - vcpu->kvm->arch.dfr0_pmuver.imp = perfmon_to_pmuver(perfmon); - else - vcpu->kvm->arch.dfr0_pmuver.unimp = perfmon_to_pmuver(perfmon); - + pmuver_update(vcpu, perfmon_to_pmuver(perfmon), valid_pmu); return 0; } @@ -3417,6 +3432,17 @@ void kvm_arm_init_id_regs(struct kvm *kvm) } IDREG(kvm, SYS_ID_AA64PFR0_EL1) = val; + /* + * Initialise the default PMUver before there is a chance to + * create an actual PMU. + */ + val = IDREG(kvm, SYS_ID_AA64DFR0_EL1); + + val &= ~ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_PMUVer); + val |= FIELD_PREP(ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_PMUVer), + kvm_arm_pmu_get_pmuver_limit()); + + IDREG(kvm, SYS_ID_AA64DFR0_EL1) = val; } int __init kvm_sys_reg_table_init(void) diff --git a/include/kvm/arm_pmu.h b/include/kvm/arm_pmu.h index 1a6a695ca67a..5300d91b1e9b 100644 --- a/include/kvm/arm_pmu.h +++ b/include/kvm/arm_pmu.h @@ -92,8 +92,13 @@ void kvm_vcpu_pmu_restore_host(struct kvm_vcpu *vcpu); /* * Evaluates as true when emulating PMUv3p5, and false otherwise. 
  */
-#define kvm_pmu_is_3p5(vcpu)						\
-	(vcpu->kvm->arch.dfr0_pmuver.imp >= ID_AA64DFR0_EL1_PMUVer_V3P5)
+#define kvm_pmu_is_3p5(vcpu) ({						\
+	u64 val = IDREG(vcpu->kvm, SYS_ID_AA64DFR0_EL1);		\
+	u8 v;								\
+									\
+	v = FIELD_GET(ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_PMUVer), val);\
+	v >= ID_AA64DFR0_EL1_PMUVer_V3P5;				\
+})
 
 u8 kvm_arm_pmu_get_pmuver_limit(void);
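The PMUVer plumbing above is just masked field arithmetic. As a standalone
illustration (the shift, mask and V3P5 value are restated here from the
architecture, not pulled from kernel headers), the round trip performed by
pmuver_update() and kvm_pmu_is_3p5() looks like this:

	#include <stdint.h>
	#include <stdio.h>

	#define PMUVER_SHIFT	8	/* ID_AA64DFR0_EL1.PMUVer is bits [11:8] */
	#define PMUVER_MASK	(0xfULL << PMUVER_SHIFT)
	#define PMUVER_V3P5	6	/* ID_AA64DFR0_EL1_PMUVer_V3P5 */

	int main(void)
	{
		uint64_t dfr0 = 0;

		/* pmuver_update(): clear the field, then insert the new version */
		dfr0 &= ~PMUVER_MASK;
		dfr0 |= ((uint64_t)PMUVER_V3P5 << PMUVER_SHIFT) & PMUVER_MASK;

		/* kvm_pmu_is_3p5(): extract the field and compare */
		uint8_t v = (dfr0 & PMUVER_MASK) >> PMUVER_SHIFT;
		printf("PMUv3p5 emulated: %s\n", v >= PMUVER_V3P5 ? "yes" : "no");
		return 0;
	}
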
From patchwork Fri Jun  2 00:51:16 2023
X-Patchwork-Submitter: Jing Zhang
X-Patchwork-Id: 13264626
Date: Fri,  2 Jun 2023 00:51:16 +0000
In-Reply-To: <20230602005118.2899664-1-jingzhangos@google.com>
References: <20230602005118.2899664-1-jingzhangos@google.com>
Message-ID: <20230602005118.2899664-5-jingzhangos@google.com>
Subject: [PATCH v11 4/5] KVM: arm64: Reuse fields of sys_reg_desc for idreg
From: Jing Zhang
To: KVM, KVMARM, ARMLinux, Marc Zyngier, Oliver Upton
Cc: Will Deacon, Paolo Bonzini, James Morse, Alexandru Elisei,
 Suzuki K Poulose, Fuad Tabba, Reiji Watanabe, Raghavendra Rao Ananta,
 Jing Zhang

sys_reg_desc::{reset, val} are presently unused for ID register descriptors.
Repurpose these fields to support user-configurable ID registers. Use the
::reset() function pointer to return the sanitised value of a given ID
register, optionally with KVM-specific feature sanitisation. Additionally,
keep a mask of writable register fields in ::val.

Signed-off-by: Jing Zhang
---
 arch/arm64/kvm/sys_regs.c | 101 +++++++++++++++++++++++++++-----------
 arch/arm64/kvm/sys_regs.h |  15 ++++--
 2 files changed, 82 insertions(+), 34 deletions(-)

diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index 0179df50fcf5..1a534e0fc4ca 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -541,10 +541,11 @@ static int get_bvr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *rd,
 	return 0;
 }
 
-static void reset_bvr(struct kvm_vcpu *vcpu,
+static u64 reset_bvr(struct kvm_vcpu *vcpu,
		      const struct sys_reg_desc *rd)
 {
 	vcpu->arch.vcpu_debug_state.dbg_bvr[rd->CRm] = rd->val;
+	return rd->val;
 }
 
 static bool trap_bcr(struct kvm_vcpu *vcpu,
@@ -577,10 +578,11 @@ static int get_bcr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *rd,
 	return 0;
 }
 
-static void reset_bcr(struct kvm_vcpu *vcpu,
+static u64 reset_bcr(struct kvm_vcpu *vcpu,
		      const struct sys_reg_desc *rd)
 {
 	vcpu->arch.vcpu_debug_state.dbg_bcr[rd->CRm] = rd->val;
+	return rd->val;
 }
 
 static bool trap_wvr(struct kvm_vcpu *vcpu,
@@ -614,10 +616,11 @@ static int get_wvr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *rd,
 	return 0;
 }
 
-static void reset_wvr(struct kvm_vcpu *vcpu,
+static u64 reset_wvr(struct kvm_vcpu *vcpu,
		      const struct sys_reg_desc *rd)
 {
 	vcpu->arch.vcpu_debug_state.dbg_wvr[rd->CRm] = rd->val;
+	return rd->val;
 }
 
 static bool trap_wcr(struct kvm_vcpu *vcpu,
@@ -650,25 +653,28 @@ static int get_wcr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *rd,
 	return 0;
 }
 
-static void reset_wcr(struct kvm_vcpu *vcpu,
+static u64 reset_wcr(struct kvm_vcpu *vcpu,
		      const struct sys_reg_desc *rd)
 {
 	vcpu->arch.vcpu_debug_state.dbg_wcr[rd->CRm] = rd->val;
+	return rd->val;
 }
 
-static void reset_amair_el1(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r)
+static u64 reset_amair_el1(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r)
 {
 	u64 amair = read_sysreg(amair_el1);
 	vcpu_write_sys_reg(vcpu, amair, AMAIR_EL1);
+	return amair;
 }
 
-static void reset_actlr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r)
+static u64 reset_actlr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r)
 {
 	u64 actlr = read_sysreg(actlr_el1);
 	vcpu_write_sys_reg(vcpu, actlr, ACTLR_EL1);
+	return actlr;
 }
 
-static void reset_mpidr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r)
+static u64 reset_mpidr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r)
 {
 	u64 mpidr;
 
@@ -682,7 +688,10 @@ static void reset_mpidr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r)
 	mpidr = (vcpu->vcpu_id & 0x0f) << MPIDR_LEVEL_SHIFT(0);
 	mpidr |= ((vcpu->vcpu_id >> 4) & 0xff) << MPIDR_LEVEL_SHIFT(1);
 	mpidr |= ((vcpu->vcpu_id >> 12) & 0xff) <<
MPIDR_LEVEL_SHIFT(2); - vcpu_write_sys_reg(vcpu, (1ULL << 31) | mpidr, MPIDR_EL1); + mpidr |= (1ULL << 31); + vcpu_write_sys_reg(vcpu, mpidr, MPIDR_EL1); + + return mpidr; } static unsigned int pmu_visibility(const struct kvm_vcpu *vcpu, @@ -694,13 +703,13 @@ static unsigned int pmu_visibility(const struct kvm_vcpu *vcpu, return REG_HIDDEN; } -static void reset_pmu_reg(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r) +static u64 reset_pmu_reg(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r) { u64 n, mask = BIT(ARMV8_PMU_CYCLE_IDX); /* No PMU available, any PMU reg may UNDEF... */ if (!kvm_arm_support_pmu_v3()) - return; + return 0; n = read_sysreg(pmcr_el0) >> ARMV8_PMU_PMCR_N_SHIFT; n &= ARMV8_PMU_PMCR_N_MASK; @@ -709,33 +718,41 @@ static void reset_pmu_reg(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r) reset_unknown(vcpu, r); __vcpu_sys_reg(vcpu, r->reg) &= mask; + + return __vcpu_sys_reg(vcpu, r->reg); } -static void reset_pmevcntr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r) +static u64 reset_pmevcntr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r) { reset_unknown(vcpu, r); __vcpu_sys_reg(vcpu, r->reg) &= GENMASK(31, 0); + + return __vcpu_sys_reg(vcpu, r->reg); } -static void reset_pmevtyper(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r) +static u64 reset_pmevtyper(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r) { reset_unknown(vcpu, r); __vcpu_sys_reg(vcpu, r->reg) &= ARMV8_PMU_EVTYPE_MASK; + + return __vcpu_sys_reg(vcpu, r->reg); } -static void reset_pmselr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r) +static u64 reset_pmselr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r) { reset_unknown(vcpu, r); __vcpu_sys_reg(vcpu, r->reg) &= ARMV8_PMU_COUNTER_MASK; + + return __vcpu_sys_reg(vcpu, r->reg); } -static void reset_pmcr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r) +static u64 reset_pmcr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r) { u64 pmcr; /* No PMU available, PMCR_EL0 may UNDEF... */ if (!kvm_arm_support_pmu_v3()) - return; + return 0; /* Only preserve PMCR_EL0.N, and reset the rest to 0 */ pmcr = read_sysreg(pmcr_el0) & (ARMV8_PMU_PMCR_N_MASK << ARMV8_PMU_PMCR_N_SHIFT); @@ -743,6 +760,8 @@ static void reset_pmcr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r) pmcr |= ARMV8_PMU_PMCR_LC; __vcpu_sys_reg(vcpu, r->reg) = pmcr; + + return __vcpu_sys_reg(vcpu, r->reg); } static bool check_pmu_access_disabled(struct kvm_vcpu *vcpu, u64 flags) @@ -1232,6 +1251,11 @@ static void pmuver_update(struct kvm_vcpu *vcpu, u8 pmuver, bool valid_pmu) } } +static u64 general_read_kvm_sanitised_reg(struct kvm_vcpu *vcpu, const struct sys_reg_desc *rd) +{ + return read_sanitised_ftr_reg(reg_to_encoding(rd)); +} + static u64 kvm_arm_read_id_reg(const struct kvm_vcpu *vcpu, u32 encoding) { u64 val = IDREG(vcpu->kvm, encoding); @@ -1540,7 +1564,7 @@ static bool access_clidr(struct kvm_vcpu *vcpu, struct sys_reg_params *p, * Fabricate a CLIDR_EL1 value instead of using the real value, which can vary * by the physical CPU which the vcpu currently resides in. 
*/ -static void reset_clidr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r) +static u64 reset_clidr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r) { u64 ctr_el0 = read_sanitised_ftr_reg(SYS_CTR_EL0); u64 clidr; @@ -1588,6 +1612,8 @@ static void reset_clidr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r) clidr |= 2 << CLIDR_TTYPE_SHIFT(loc); __vcpu_sys_reg(vcpu, r->reg) = clidr; + + return __vcpu_sys_reg(vcpu, r->reg); } static int set_clidr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *rd, @@ -1687,6 +1713,17 @@ static unsigned int elx2_visibility(const struct kvm_vcpu *vcpu, .visibility = elx2_visibility, \ } +/* + * Since reset() callback and field val are not used for idregs, they will be + * used for specific purposes for idregs. + * The reset() would return KVM sanitised register value. The value would be the + * same as the host kernel sanitised value if there is no KVM sanitisation. + * The val would be used as a mask indicating writable fields for the idreg. + * Only bits with 1 are writable from userspace. This mask might not be + * necessary in the future whenever all ID registers are enabled as writable + * from userspace. + */ + /* sys_reg_desc initialiser for known cpufeature ID registers */ #define ID_SANITISED(name) { \ SYS_DESC(SYS_##name), \ @@ -1694,6 +1731,8 @@ static unsigned int elx2_visibility(const struct kvm_vcpu *vcpu, .get_user = get_id_reg, \ .set_user = set_id_reg, \ .visibility = id_visibility, \ + .reset = general_read_kvm_sanitised_reg,\ + .val = 0, \ } /* sys_reg_desc initialiser for known cpufeature ID registers */ @@ -1703,6 +1742,8 @@ static unsigned int elx2_visibility(const struct kvm_vcpu *vcpu, .get_user = get_id_reg, \ .set_user = set_id_reg, \ .visibility = aa32_id_visibility, \ + .reset = general_read_kvm_sanitised_reg,\ + .val = 0, \ } /* @@ -1715,7 +1756,9 @@ static unsigned int elx2_visibility(const struct kvm_vcpu *vcpu, .access = access_id_reg, \ .get_user = get_id_reg, \ .set_user = set_id_reg, \ - .visibility = raz_visibility \ + .visibility = raz_visibility, \ + .reset = NULL, \ + .val = 0, \ } /* @@ -1729,6 +1772,8 @@ static unsigned int elx2_visibility(const struct kvm_vcpu *vcpu, .get_user = get_id_reg, \ .set_user = set_id_reg, \ .visibility = raz_visibility, \ + .reset = NULL, \ + .val = 0, \ } static bool access_sp_el1(struct kvm_vcpu *vcpu, @@ -3067,19 +3112,21 @@ id_to_sys_reg_desc(struct kvm_vcpu *vcpu, u64 id, */ #define FUNCTION_INVARIANT(reg) \ - static void get_##reg(struct kvm_vcpu *v, \ + static u64 get_##reg(struct kvm_vcpu *v, \ const struct sys_reg_desc *r) \ { \ ((struct sys_reg_desc *)r)->val = read_sysreg(reg); \ + return ((struct sys_reg_desc *)r)->val; \ } FUNCTION_INVARIANT(midr_el1) FUNCTION_INVARIANT(revidr_el1) FUNCTION_INVARIANT(aidr_el1) -static void get_ctr_el0(struct kvm_vcpu *v, const struct sys_reg_desc *r) +static u64 get_ctr_el0(struct kvm_vcpu *v, const struct sys_reg_desc *r) { ((struct sys_reg_desc *)r)->val = read_sanitised_ftr_reg(SYS_CTR_EL0); + return ((struct sys_reg_desc *)r)->val; } /* ->val is filled in by kvm_sys_reg_table_init() */ @@ -3389,9 +3436,7 @@ int kvm_arm_copy_sys_reg_indices(struct kvm_vcpu *vcpu, u64 __user *uindices) return write_demux_regids(uindices); } -/* - * Set the guest's ID registers with ID_SANITISED() to the host's sanitized value. - */ +/* Initialize the guest's ID registers with KVM sanitised values. 
 */
 void kvm_arm_init_id_regs(struct kvm *kvm)
 {
 	const struct sys_reg_desc *idreg = first_idreg;
@@ -3400,13 +3445,11 @@ void kvm_arm_init_id_regs(struct kvm *kvm)
 
 	/* Initialize all idregs */
 	while (is_id_reg(id)) {
-		/*
-		 * Some hidden ID registers which are not in arm64_ftr_regs[]
-		 * would cause warnings from read_sanitised_ftr_reg().
-		 * Skip those ID registers to avoid the warnings.
-		 */
-		if (idreg->visibility != raz_visibility)
-			IDREG(kvm, id) = read_sanitised_ftr_reg(id);
+		val = 0;
+		/* Read KVM sanitised register value if available */
+		if (idreg->reset)
+			val = idreg->reset(NULL, idreg);
+		IDREG(kvm, id) = val;
 
		idreg++;
		id = reg_to_encoding(idreg);
diff --git a/arch/arm64/kvm/sys_regs.h b/arch/arm64/kvm/sys_regs.h
index eba10de2e7ae..c65c129b3500 100644
--- a/arch/arm64/kvm/sys_regs.h
+++ b/arch/arm64/kvm/sys_regs.h
@@ -71,13 +71,16 @@ struct sys_reg_desc {
		       struct sys_reg_params *,
		       const struct sys_reg_desc *);
 
-	/* Initialization for vcpu. */
-	void (*reset)(struct kvm_vcpu *, const struct sys_reg_desc *);
+	/*
+	 * Initialization for vcpu. Return initialized value, or KVM
+	 * sanitized value for ID registers.
+	 */
+	u64 (*reset)(struct kvm_vcpu *, const struct sys_reg_desc *);
 
 	/* Index into sys_reg[], or 0 if we don't need to save it. */
 	int reg;
 
-	/* Value (usually reset value) */
+	/* Value (usually reset value), or write mask for idregs */
 	u64 val;
 
 	/* Custom get/set_user functions, fallback to generic if NULL */
@@ -130,19 +133,21 @@ static inline bool read_zero(struct kvm_vcpu *vcpu,
 }
 
 /* Reset functions */
-static inline void reset_unknown(struct kvm_vcpu *vcpu,
+static inline u64 reset_unknown(struct kvm_vcpu *vcpu,
				const struct sys_reg_desc *r)
 {
 	BUG_ON(!r->reg);
 	BUG_ON(r->reg >= NR_SYS_REGS);
 	__vcpu_sys_reg(vcpu, r->reg) = 0x1de7ec7edbadc0deULL;
+	return __vcpu_sys_reg(vcpu, r->reg);
 }
 
-static inline void reset_val(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r)
+static inline u64 reset_val(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r)
 {
 	BUG_ON(!r->reg);
 	BUG_ON(r->reg >= NR_SYS_REGS);
 	__vcpu_sys_reg(vcpu, r->reg) = r->val;
+	return __vcpu_sys_reg(vcpu, r->reg);
 }
 
 static inline unsigned int sysreg_visibility(const struct kvm_vcpu *vcpu,
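To make the repurposing concrete: after this patch every ID-register
descriptor carries its own limit-producing hook, and the VM-creation loop
reduces to the shape sketched below. This is a condensed restatement with
hypothetical local names (struct desc, init_id_regs), not a drop-in for the
kernel code.

	#include <stdint.h>
	#include <stddef.h>

	struct desc {
		/* returns the sanitised limit; NULL for hidden/RAZ idregs */
		uint64_t (*reset)(const struct desc *d);
		uint64_t val;	/* mask of fields writable from userspace */
	};

	static void init_id_regs(uint64_t *idregs, const struct desc *descs,
				 size_t n)
	{
		for (size_t i = 0; i < n; i++) {
			/* descriptors without a reset hook default to 0 */
			idregs[i] = descs[i].reset ? descs[i].reset(&descs[i]) : 0;
		}
	}
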
From patchwork Fri Jun  2 00:51:17 2023
X-Patchwork-Submitter: Jing Zhang
X-Patchwork-Id: 13264627
Date: Fri,  2 Jun 2023 00:51:17 +0000
In-Reply-To: <20230602005118.2899664-1-jingzhangos@google.com>
References: <20230602005118.2899664-1-jingzhangos@google.com>
Message-ID: <20230602005118.2899664-6-jingzhangos@google.com>
Subject: [PATCH v11 5/5] KVM: arm64: Refactor writings for PMUVer/CSV2/CSV3
From: Jing Zhang
To: KVM, KVMARM, ARMLinux, Marc Zyngier, Oliver Upton
Cc: Will Deacon, Paolo Bonzini, James Morse, Alexandru Elisei,
 Suzuki K Poulose, Fuad Tabba, Reiji Watanabe, Raghavendra Rao Ananta,
 Jing Zhang

Refactor the writes to ID_AA64PFR0_EL1.[CSV2|CSV3], ID_AA64DFR0_EL1.PMUVer
and ID_DFR0_EL1.PerfMon to use the new utilities specific to ID registers.
Signed-off-by: Jing Zhang --- arch/arm64/include/asm/cpufeature.h | 1 + arch/arm64/kernel/cpufeature.c | 2 +- arch/arm64/kvm/sys_regs.c | 291 +++++++++++++++++++--------- 3 files changed, 203 insertions(+), 91 deletions(-) diff --git a/arch/arm64/include/asm/cpufeature.h b/arch/arm64/include/asm/cpufeature.h index 6bf013fb110d..dc769c2eb7a4 100644 --- a/arch/arm64/include/asm/cpufeature.h +++ b/arch/arm64/include/asm/cpufeature.h @@ -915,6 +915,7 @@ static inline unsigned int get_vmid_bits(u64 mmfr1) return 8; } +s64 arm64_ftr_safe_value(const struct arm64_ftr_bits *ftrp, s64 new, s64 cur); struct arm64_ftr_reg *get_arm64_ftr_reg(u32 sys_id); extern struct arm64_ftr_override id_aa64mmfr1_override; diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c index 7d7128c65161..3317a7b6deac 100644 --- a/arch/arm64/kernel/cpufeature.c +++ b/arch/arm64/kernel/cpufeature.c @@ -798,7 +798,7 @@ static u64 arm64_ftr_set_value(const struct arm64_ftr_bits *ftrp, s64 reg, return reg; } -static s64 arm64_ftr_safe_value(const struct arm64_ftr_bits *ftrp, s64 new, +s64 arm64_ftr_safe_value(const struct arm64_ftr_bits *ftrp, s64 new, s64 cur) { s64 ret = 0; diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c index 1a534e0fc4ca..50d4e25f42d3 100644 --- a/arch/arm64/kvm/sys_regs.c +++ b/arch/arm64/kvm/sys_regs.c @@ -41,6 +41,7 @@ * 64bit interface. */ +static int set_id_reg(struct kvm_vcpu *vcpu, const struct sys_reg_desc *rd, u64 val); static u64 kvm_arm_read_id_reg(const struct kvm_vcpu *vcpu, u32 encoding); static u64 sys_reg_to_index(const struct sys_reg_desc *reg); @@ -1194,6 +1195,86 @@ static bool access_arch_timer(struct kvm_vcpu *vcpu, return true; } +static s64 kvm_arm64_ftr_safe_value(u32 id, const struct arm64_ftr_bits *ftrp, + s64 new, s64 cur) +{ + struct arm64_ftr_bits kvm_ftr = *ftrp; + + /* Some features have different safe value type in KVM than host features */ + switch (id) { + case SYS_ID_AA64DFR0_EL1: + if (kvm_ftr.shift == ID_AA64DFR0_EL1_PMUVer_SHIFT) + kvm_ftr.type = FTR_LOWER_SAFE; + break; + case SYS_ID_DFR0_EL1: + if (kvm_ftr.shift == ID_DFR0_EL1_PerfMon_SHIFT) + kvm_ftr.type = FTR_LOWER_SAFE; + break; + } + + return arm64_ftr_safe_value(&kvm_ftr, new, cur); +} + +/** + * arm64_check_features() - Check if a feature register value constitutes + * a subset of features indicated by the idreg's KVM sanitised limit. + * + * This function will check if each feature field of @val is the "safe" value + * against idreg's KVM sanitised limit return from reset() callback. + * If a field value in @val is the same as the one in limit, it is always + * considered the safe value regardless For register fields that are not in + * writable, only the value in limit is considered the safe value. + * + * Return: 0 if all the fields are safe. Otherwise, return negative errno. + */ +static int arm64_check_features(struct kvm_vcpu *vcpu, + const struct sys_reg_desc *rd, + u64 val) +{ + const struct arm64_ftr_reg *ftr_reg; + const struct arm64_ftr_bits *ftrp = NULL; + u32 id = reg_to_encoding(rd); + u64 writable_mask = rd->val; + u64 limit = 0; + u64 mask = 0; + + /* For hidden and unallocated idregs without reset, only val = 0 is allowed. 
*/ + if (rd->reset) { + limit = rd->reset(vcpu, rd); + ftr_reg = get_arm64_ftr_reg(id); + if (!ftr_reg) + return -EINVAL; + ftrp = ftr_reg->ftr_bits; + } + + for (; ftrp && ftrp->width; ftrp++) { + s64 f_val, f_lim, safe_val; + u64 ftr_mask; + + ftr_mask = arm64_ftr_mask(ftrp); + if ((ftr_mask & writable_mask) != ftr_mask) + continue; + + f_val = arm64_ftr_value(ftrp, val); + f_lim = arm64_ftr_value(ftrp, limit); + mask |= ftr_mask; + + if (f_val == f_lim) + safe_val = f_val; + else + safe_val = kvm_arm64_ftr_safe_value(id, ftrp, f_val, f_lim); + + if (safe_val != f_val) + return -E2BIG; + } + + /* For fields that are not writable, values in limit are the safe values. */ + if ((val & ~mask) != (limit & ~mask)) + return -E2BIG; + + return 0; +} + static u8 vcpu_pmuver(const struct kvm_vcpu *vcpu) { if (kvm_vcpu_has_pmu(vcpu)) @@ -1231,9 +1312,17 @@ static u8 pmuver_to_perfmon(u8 pmuver) } } -static void pmuver_update(struct kvm_vcpu *vcpu, u8 pmuver, bool valid_pmu) +static int pmuver_update(struct kvm_vcpu *vcpu, + const struct sys_reg_desc *rd, + u64 val, + u8 pmuver, + bool valid_pmu) { - u64 val; + int ret; + + ret = set_id_reg(vcpu, rd, val); + if (ret) + return ret; if (valid_pmu) { val = IDREG(vcpu->kvm, SYS_ID_AA64DFR0_EL1); @@ -1249,6 +1338,8 @@ static void pmuver_update(struct kvm_vcpu *vcpu, u8 pmuver, bool valid_pmu) assign_bit(KVM_ARCH_FLAG_VCPU_HAS_IMP_DEF_PMU, &vcpu->kvm->arch.flags, pmuver == ID_AA64DFR0_EL1_PMUVer_IMP_DEF); } + + return 0; } static u64 general_read_kvm_sanitised_reg(struct kvm_vcpu *vcpu, const struct sys_reg_desc *rd) @@ -1264,7 +1355,6 @@ static u64 kvm_arm_read_id_reg(const struct kvm_vcpu *vcpu, u32 encoding) case SYS_ID_AA64PFR0_EL1: if (!vcpu_has_sve(vcpu)) val &= ~ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_SVE); - val &= ~ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_AMU); if (kvm_vgic_global_state.type == VGIC_V3) { val &= ~ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_GIC); val |= FIELD_PREP(ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_GIC), 1); @@ -1291,15 +1381,10 @@ static u64 kvm_arm_read_id_reg(const struct kvm_vcpu *vcpu, u32 encoding) val &= ~ARM64_FEATURE_MASK(ID_AA64ISAR2_EL1_WFxT); break; case SYS_ID_AA64DFR0_EL1: - /* Limit debug to ARMv8.0 */ - val &= ~ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_DebugVer); - val |= FIELD_PREP(ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_DebugVer), 6); /* Set PMUver to the required version */ val &= ~ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_PMUVer); val |= FIELD_PREP(ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_PMUVer), vcpu_pmuver(vcpu)); - /* Hide SPE from guests */ - val &= ~ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_PMSVer); break; case SYS_ID_DFR0_EL1: val &= ~ARM64_FEATURE_MASK(ID_DFR0_EL1_PerfMon); @@ -1398,38 +1483,56 @@ static unsigned int sve_visibility(const struct kvm_vcpu *vcpu, return REG_HIDDEN; } -static int set_id_aa64pfr0_el1(struct kvm_vcpu *vcpu, - const struct sys_reg_desc *rd, - u64 val) +static u64 read_sanitised_id_aa64pfr0_el1(struct kvm_vcpu *vcpu, + const struct sys_reg_desc *rd) { - u64 new_val = val; - u8 csv2, csv3; + u64 val; + u32 id = reg_to_encoding(rd); + val = read_sanitised_ftr_reg(id); /* - * Allow AA64PFR0_EL1.CSV2 to be set from userspace as long as - * it doesn't promise more than what is actually provided (the - * guest could otherwise be covered in ectoplasmic residue). + * The default is to expose CSV2 == 1 if the HW isn't affected. + * Although this is a per-CPU feature, we make it global because + * asymmetric systems are just a nuisance. + * + * Userspace can override this as long as it doesn't promise + * the impossible. 
*/ - csv2 = cpuid_feature_extract_unsigned_field(val, ID_AA64PFR0_EL1_CSV2_SHIFT); - if (csv2 > 1 || - (csv2 && arm64_get_spectre_v2_state() != SPECTRE_UNAFFECTED)) - return -EINVAL; + if (arm64_get_spectre_v2_state() == SPECTRE_UNAFFECTED) { + val &= ~ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_CSV2); + val |= FIELD_PREP(ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_CSV2), 1); + } + if (arm64_get_meltdown_state() == SPECTRE_UNAFFECTED) { + val &= ~ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_CSV3); + val |= FIELD_PREP(ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_CSV3), 1); + } - /* Same thing for CSV3 */ - csv3 = cpuid_feature_extract_unsigned_field(val, ID_AA64PFR0_EL1_CSV3_SHIFT); - if (csv3 > 1 || - (csv3 && arm64_get_meltdown_state() != SPECTRE_UNAFFECTED)) - return -EINVAL; + val &= ~ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_AMU); - /* We can only differ with CSV[23], and anything else is an error */ - val ^= read_id_reg(vcpu, rd); - val &= ~(ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_CSV2) | - ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_CSV3)); - if (val) - return -EINVAL; + return val; +} - IDREG(vcpu->kvm, reg_to_encoding(rd)) = new_val; - return 0; +static u64 read_sanitised_id_aa64dfr0_el1(struct kvm_vcpu *vcpu, + const struct sys_reg_desc *rd) +{ + u64 val; + u32 id = reg_to_encoding(rd); + + val = read_sanitised_ftr_reg(id); + /* Limit debug to ARMv8.0 */ + val &= ~ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_DebugVer); + val |= FIELD_PREP(ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_DebugVer), 6); + /* + * Initialise the default PMUver before there is a chance to + * create an actual PMU. + */ + val &= ~ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_PMUVer); + val |= FIELD_PREP(ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_PMUVer), + kvm_arm_pmu_get_pmuver_limit()); + /* Hide SPE from guests */ + val &= ~ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_PMSVer); + + return val; } static int set_id_aa64dfr0_el1(struct kvm_vcpu *vcpu, @@ -1457,14 +1560,35 @@ static int set_id_aa64dfr0_el1(struct kvm_vcpu *vcpu, if (kvm_vcpu_has_pmu(vcpu) != valid_pmu) return -EINVAL; - /* We can only differ with PMUver, and anything else is an error */ - val ^= read_id_reg(vcpu, rd); - val &= ~ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_PMUVer); - if (val) - return -EINVAL; + if (!valid_pmu) { + /* + * Ignore the PMUVer field in @val. The PMUVer would be determined + * by arch flags bit KVM_ARCH_FLAG_VCPU_HAS_IMP_DEF_PMU, + */ + pmuver = FIELD_GET(ID_AA64DFR0_EL1_PMUVer_MASK, + IDREG(vcpu->kvm, SYS_ID_AA64DFR0_EL1)); + val &= ~ID_AA64DFR0_EL1_PMUVer_MASK; + val |= FIELD_PREP(ID_AA64DFR0_EL1_PMUVer_MASK, pmuver); + } - pmuver_update(vcpu, pmuver, valid_pmu); - return 0; + return pmuver_update(vcpu, rd, val, pmuver, valid_pmu); +} + +static u64 read_sanitised_id_dfr0_el1(struct kvm_vcpu *vcpu, + const struct sys_reg_desc *rd) +{ + u64 val; + u32 id = reg_to_encoding(rd); + + val = read_sanitised_ftr_reg(id); + /* + * Initialise the default PMUver before there is a chance to + * create an actual PMU. + */ + val &= ~ARM64_FEATURE_MASK(ID_DFR0_EL1_PerfMon); + val |= FIELD_PREP(ARM64_FEATURE_MASK(ID_DFR0_EL1_PerfMon), kvm_arm_pmu_get_pmuver_limit()); + + return val; } static int set_id_dfr0_el1(struct kvm_vcpu *vcpu, @@ -1493,14 +1617,18 @@ static int set_id_dfr0_el1(struct kvm_vcpu *vcpu, if (kvm_vcpu_has_pmu(vcpu) != valid_pmu) return -EINVAL; - /* We can only differ with PerfMon, and anything else is an error */ - val ^= read_id_reg(vcpu, rd); - val &= ~ARM64_FEATURE_MASK(ID_DFR0_EL1_PerfMon); - if (val) - return -EINVAL; + if (!valid_pmu) { + /* + * Ignore the PerfMon field in @val. 
The PerfMon would be determined + * by arch flags bit KVM_ARCH_FLAG_VCPU_HAS_IMP_DEF_PMU, + */ + perfmon = FIELD_GET(ID_DFR0_EL1_PerfMon_MASK, + IDREG(vcpu->kvm, SYS_ID_DFR0_EL1)); + val &= ~ID_DFR0_EL1_PerfMon_MASK; + val |= FIELD_PREP(ID_DFR0_EL1_PerfMon_MASK, perfmon); + } - pmuver_update(vcpu, perfmon_to_pmuver(perfmon), valid_pmu); - return 0; + return pmuver_update(vcpu, rd, val, perfmon_to_pmuver(perfmon), valid_pmu); } /* @@ -1520,11 +1648,14 @@ static int get_id_reg(struct kvm_vcpu *vcpu, const struct sys_reg_desc *rd, static int set_id_reg(struct kvm_vcpu *vcpu, const struct sys_reg_desc *rd, u64 val) { - /* This is what we mean by invariant: you can't change it. */ - if (val != read_id_reg(vcpu, rd)) - return -EINVAL; + u32 id = reg_to_encoding(rd); + int ret = 0; - return 0; + ret = arm64_check_features(vcpu, rd, val); + if (!ret) + IDREG(vcpu->kvm, id) = val; + + return ret; } static int get_raz_reg(struct kvm_vcpu *vcpu, const struct sys_reg_desc *rd, @@ -1875,9 +2006,13 @@ static const struct sys_reg_desc sys_reg_descs[] = { /* CRm=1 */ AA32_ID_SANITISED(ID_PFR0_EL1), AA32_ID_SANITISED(ID_PFR1_EL1), - { SYS_DESC(SYS_ID_DFR0_EL1), .access = access_id_reg, - .get_user = get_id_reg, .set_user = set_id_dfr0_el1, - .visibility = aa32_id_visibility, }, + { SYS_DESC(SYS_ID_DFR0_EL1), + .access = access_id_reg, + .get_user = get_id_reg, + .set_user = set_id_dfr0_el1, + .visibility = aa32_id_visibility, + .reset = read_sanitised_id_dfr0_el1, + .val = ID_DFR0_EL1_PerfMon_MASK, }, ID_HIDDEN(ID_AFR0_EL1), AA32_ID_SANITISED(ID_MMFR0_EL1), AA32_ID_SANITISED(ID_MMFR1_EL1), @@ -1906,8 +2041,12 @@ static const struct sys_reg_desc sys_reg_descs[] = { /* AArch64 ID registers */ /* CRm=4 */ - { SYS_DESC(SYS_ID_AA64PFR0_EL1), .access = access_id_reg, - .get_user = get_id_reg, .set_user = set_id_aa64pfr0_el1, }, + { SYS_DESC(SYS_ID_AA64PFR0_EL1), + .access = access_id_reg, + .get_user = get_id_reg, + .set_user = set_id_reg, + .reset = read_sanitised_id_aa64pfr0_el1, + .val = ID_AA64PFR0_EL1_CSV2_MASK | ID_AA64PFR0_EL1_CSV3_MASK, }, ID_SANITISED(ID_AA64PFR1_EL1), ID_UNALLOCATED(4,2), ID_UNALLOCATED(4,3), @@ -1917,8 +2056,12 @@ static const struct sys_reg_desc sys_reg_descs[] = { ID_UNALLOCATED(4,7), /* CRm=5 */ - { SYS_DESC(SYS_ID_AA64DFR0_EL1), .access = access_id_reg, - .get_user = get_id_reg, .set_user = set_id_aa64dfr0_el1, }, + { SYS_DESC(SYS_ID_AA64DFR0_EL1), + .access = access_id_reg, + .get_user = get_id_reg, + .set_user = set_id_aa64dfr0_el1, + .reset = read_sanitised_id_aa64dfr0_el1, + .val = ID_AA64DFR0_EL1_PMUVer_MASK, }, ID_SANITISED(ID_AA64DFR1_EL1), ID_UNALLOCATED(5,2), ID_UNALLOCATED(5,3), @@ -3454,38 +3597,6 @@ void kvm_arm_init_id_regs(struct kvm *kvm) idreg++; id = reg_to_encoding(idreg); } - - /* - * The default is to expose CSV2 == 1 if the HW isn't affected. - * Although this is a per-CPU feature, we make it global because - * asymmetric systems are just a nuisance. - * - * Userspace can override this as long as it doesn't promise - * the impossible. 
- */ - val = IDREG(kvm, SYS_ID_AA64PFR0_EL1); - - if (arm64_get_spectre_v2_state() == SPECTRE_UNAFFECTED) { - val &= ~ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_CSV2); - val |= FIELD_PREP(ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_CSV2), 1); - } - if (arm64_get_meltdown_state() == SPECTRE_UNAFFECTED) { - val &= ~ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_CSV3); - val |= FIELD_PREP(ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_CSV3), 1); - } - - IDREG(kvm, SYS_ID_AA64PFR0_EL1) = val; - /* - * Initialise the default PMUver before there is a chance to - * create an actual PMU. - */ - val = IDREG(kvm, SYS_ID_AA64DFR0_EL1); - - val &= ~ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_PMUVer); - val |= FIELD_PREP(ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_PMUVer), - kvm_arm_pmu_get_pmuver_limit()); - - IDREG(kvm, SYS_ID_AA64DFR0_EL1) = val; } int __init kvm_sys_reg_table_init(void)
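Taking the series as a whole, the acceptance rule that arm64_check_features()
enforces can be restated in a few lines. The sketch below deliberately
simplifies: it treats every field as an unsigned 4-bit FTR_LOWER_SAFE field,
whereas the real code walks arm64_ftr_bits and honours signedness, field
width, and the KVM-specific overrides for PMUVer/PerfMon.

	#include <stdint.h>
	#include <stdbool.h>

	/*
	 * val:      the value userspace wants to set
	 * limit:    the KVM sanitised limit returned by the ::reset() hook
	 * writable: the descriptor's ::val mask of writable fields
	 */
	static bool id_reg_value_ok(uint64_t val, uint64_t limit,
				    uint64_t writable)
	{
		for (unsigned int shift = 0; shift < 64; shift += 4) {
			uint64_t mask = 0xfULL << shift;
			uint8_t f_val = (val & mask) >> shift;
			uint8_t f_lim = (limit & mask) >> shift;

			if (!(mask & writable)) {
				/* non-writable fields must match the limit */
				if (f_val != f_lim)
					return false;	/* -E2BIG in the patch */
			} else if (f_val > f_lim) {
				/* writable fields may only claim less than HW */
				return false;
			}
		}
		return true;
	}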