From patchwork Fri Oct 20 21:40:47 2023
X-Patchwork-Submitter: Raghavendra Rao Ananta
X-Patchwork-Id: 13431211
Date: Fri, 20 Oct 2023 21:40:47 +0000
In-Reply-To: <20231020214053.2144305-1-rananta@google.com>
References: <20231020214053.2144305-1-rananta@google.com>
Message-ID: <20231020214053.2144305-8-rananta@google.com>
Subject: [PATCH v8 07/13] KVM: arm64: PMU: Allow userspace to limit PMCR_EL0.N
 for the guest
From: Raghavendra Rao Ananta
To: Oliver Upton, Marc Zyngier
Cc: Alexandru Elisei, James Morse, Suzuki K Poulose, Paolo Bonzini,
 Zenghui Yu, Shaoqin Huang, Jing Zhang, Reiji Watanabe, Colton Lewis,
 Raghavendra Rao Ananta, linux-arm-kernel@lists.infradead.org,
 kvmarm@lists.linux.dev, linux-kernel@vger.kernel.org, kvm@vger.kernel.org

From: Reiji Watanabe

KVM does not yet support userspace modifying PMCR_EL0.N (with the
previous patch, KVM ignores what is written by userspace). Add support
for userspace to limit PMCR_EL0.N.

Disallow userspace from setting PMCR_EL0.N to a value greater than the
host value, as KVM doesn't support more event counters than what the
host HW implements. Also, make this register immutable after the VM has
started running. To maintain the existing expectations, instead of
returning an error, KVM returns success for these two cases.

Finally, ignore writes to read-only bits that are cleared on vCPU
reset, and RES{0,1} bits (including writable bits that KVM doesn't
support yet), as those bits shouldn't be modified (at least with the
current KVM).

Signed-off-by: Reiji Watanabe
Signed-off-by: Raghavendra Rao Ananta
---
 arch/arm64/kvm/sys_regs.c | 57 +++++++++++++++++++++++++++++++++++++--
 1 file changed, 55 insertions(+), 2 deletions(-)

diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index 2e5d497596ef8..a2c5f210b3d6b 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -1176,6 +1176,59 @@ static int get_pmcr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r,
 	return 0;
 }
 
+static int set_pmcr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r,
+		    u64 val)
+{
+	struct kvm *kvm = vcpu->kvm;
+	u64 new_n, mutable_mask;
+
+	mutex_lock(&kvm->arch.config_lock);
+
+	/*
+	 * Make PMCR immutable once the VM has started running, but
+	 * do not return an error to meet the existing expectations.
+	 */
+	if (kvm_vm_has_ran_once(vcpu->kvm)) {
+		mutex_unlock(&kvm->arch.config_lock);
+		return 0;
+	}
+
+	new_n = (val >> ARMV8_PMU_PMCR_N_SHIFT) & ARMV8_PMU_PMCR_N_MASK;
+	if (new_n != kvm->arch.pmcr_n) {
+		u8 pmcr_n_limit = kvm_arm_pmu_get_max_counters(kvm);
+
+		/*
+		 * The vCPU can't have more counters than the PMU hardware
+		 * implements. Ignore this error to maintain compatibility
+		 * with the existing KVM behavior.
+		 */
+		if (new_n <= pmcr_n_limit)
+			kvm->arch.pmcr_n = new_n;
+	}
+	mutex_unlock(&kvm->arch.config_lock);
+
+	/*
+	 * Ignore writes to RES0 bits, read only bits that are cleared on
+	 * vCPU reset, and writable bits that KVM doesn't support yet.
+	 * (i.e. only PMCR.N and bits [7:0] are mutable from userspace)
+	 * The LP bit is RES0 when FEAT_PMUv3p5 is not supported on the vCPU.
+	 * But, we leave the bit as it is here, as the vCPU's PMUver might
+	 * be changed later (NOTE: the bit will be cleared on first vCPU run
+	 * if necessary).
+	 */
+	mutable_mask = (ARMV8_PMU_PMCR_MASK |
+			(ARMV8_PMU_PMCR_N_MASK << ARMV8_PMU_PMCR_N_SHIFT));
+	val &= mutable_mask;
+	val |= (__vcpu_sys_reg(vcpu, r->reg) & ~mutable_mask);
+
+	/* The LC bit is RES1 when AArch32 is not supported */
+	if (!kvm_supports_32bit_el0())
+		val |= ARMV8_PMU_PMCR_LC;
+
+	__vcpu_sys_reg(vcpu, r->reg) = val;
+	return 0;
+}
+
 /* Silly macro to expand the DBG{BCR,BVR,WVR,WCR}n_EL1 registers in one go */
 #define DBG_BCR_BVR_WCR_WVR_EL1(n)					\
 	{ SYS_DESC(SYS_DBGBVRn_EL1(n)),					\
@@ -2309,8 +2362,8 @@ static const struct sys_reg_desc sys_reg_descs[] = {
 	{ SYS_DESC(SYS_CTR_EL0), access_ctr },
 	{ SYS_DESC(SYS_SVCR), undef_access },
 
-	{ PMU_SYS_REG(PMCR_EL0), .access = access_pmcr,
-	  .reset = reset_pmcr, .reg = PMCR_EL0, .get_user = get_pmcr },
+	{ PMU_SYS_REG(PMCR_EL0), .access = access_pmcr, .reset = reset_pmcr,
+	  .reg = PMCR_EL0, .get_user = get_pmcr, .set_user = set_pmcr },
 	{ PMU_SYS_REG(PMCNTENSET_EL0), .access = access_pmcnten,
 	  .reg = PMCNTENSET_EL0, .get_user = get_pmcnten,
 	  .set_user = set_pmcnten },
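
For context, below is a minimal userspace sketch of how the new set_user
hook could be exercised through KVM_SET_ONE_REG. The PMCR_EL0 encoding
(op0=3, op1=3, CRn=9, CRm=12, op2=0) and the N field layout (bits [15:11])
come from the Arm architecture; the helper name and the elided VM/vCPU
setup (including enabling the KVM_ARM_VCPU_PMU_V3 vCPU feature) are
illustrative assumptions, not part of this patch:

/*
 * Hypothetical userspace sketch (not part of this patch): limit the
 * number of guest PMU event counters by writing PMCR_EL0.N before the
 * first KVM_RUN. Assumes a vCPU fd created with the KVM_ARM_VCPU_PMU_V3
 * feature; VM setup and most error handling are elided.
 */
#include <stdint.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>

/* PMCR_EL0 is op0=3, op1=3, CRn=9, CRm=12, op2=0 */
#define PMCR_EL0_ID	ARM64_SYS_REG(3, 3, 9, 12, 0)
#define PMCR_N_SHIFT	11	/* PMCR_EL0.N is bits [15:11] */
#define PMCR_N_MASK	(0x1fULL << PMCR_N_SHIFT)

static int limit_guest_counters(int vcpu_fd, uint64_t n)
{
	uint64_t pmcr;
	struct kvm_one_reg reg = {
		.id	= PMCR_EL0_ID,
		.addr	= (uint64_t)&pmcr,
	};

	/* Read the current value so the other fields are preserved. */
	if (ioctl(vcpu_fd, KVM_GET_ONE_REG, &reg) < 0)
		return -1;

	pmcr &= ~PMCR_N_MASK;
	pmcr |= n << PMCR_N_SHIFT;

	/*
	 * Per this patch, a write with N above the host limit, or one
	 * issued after the VM has run, is silently ignored rather than
	 * rejected.
	 */
	return ioctl(vcpu_fd, KVM_SET_ONE_REG, &reg);
}

Because out-of-range or late writes succeed silently, userspace that
cares can read PMCR_EL0 back with KVM_GET_ONE_REG afterwards to confirm
whether the requested N value was actually accepted.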