From patchwork Fri Jul 28 18:19:05 2023
X-Patchwork-Submitter: Reiji Watanabe <reijiw@google.com>
X-Patchwork-Id: 13332274
Date: Fri, 28 Jul 2023 11:19:05 -0700
In-Reply-To: <20230728181907.1759513-1-reijiw@google.com>
References: <20230728181907.1759513-1-reijiw@google.com>
Message-ID: <20230728181907.1759513-4-reijiw@google.com>
Subject: [PATCH v2 3/5] KVM: arm64: PMU: Avoid inappropriate use of host's PMUVer
From: Reiji Watanabe <reijiw@google.com>
To: Marc Zyngier, Oliver Upton, kvmarm@lists.linux.dev
Cc: kvm@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
 James Morse, Alexandru Elisei, Zenghui Yu, Suzuki K Poulose,
 Jing Zhang, Raghavendra Rao Anata, Reiji Watanabe

Avoid using the PMUVer of the host's PMU hardware to determine the
PMU event mask, except in one case, as the value of the host's PMUVer
may differ from the value of ID_AA64DFR0_EL1.PMUVer for the guest.

The exception is when using the PMUVer to determine the valid range
of events for KVM_ARM_VCPU_PMU_V3_FILTER, as KVM has always allowed
userspace to specify any event that is valid for the PMU hardware,
regardless of the guest's ID_AA64DFR0_EL1.PMUVer. KVM will still use
the valid range of events based on the guest's ID_AA64DFR0_EL1.PMUVer,
though, to effectively filter the events that the guest attempts to
program.

Signed-off-by: Reiji Watanabe <reijiw@google.com>
---
 arch/arm64/kvm/pmu-emul.c | 22 ++++++++++++++++------
 1 file changed, 16 insertions(+), 6 deletions(-)

diff --git a/arch/arm64/kvm/pmu-emul.c b/arch/arm64/kvm/pmu-emul.c
index 6fb5c59948a8..f0cbc9024bb7 100644
--- a/arch/arm64/kvm/pmu-emul.c
+++ b/arch/arm64/kvm/pmu-emul.c
@@ -36,12 +36,8 @@ static struct kvm_pmc *kvm_vcpu_idx_to_pmc(struct kvm_vcpu *vcpu, int cnt_idx)
 	return &vcpu->arch.pmu.pmc[cnt_idx];
 }
 
-static u32 kvm_pmu_event_mask(struct kvm *kvm)
+static u32 __kvm_pmu_event_mask(unsigned int pmuver)
 {
-	unsigned int pmuver;
-
-	pmuver = kvm->arch.arm_pmu->pmuver;
-
 	switch (pmuver) {
 	case ID_AA64DFR0_EL1_PMUVer_IMP:
 		return GENMASK(9, 0);
@@ -56,6 +52,14 @@ static u32 kvm_pmu_event_mask(struct kvm *kvm)
 	}
 }
 
+static u32 kvm_pmu_event_mask(struct kvm *kvm)
+{
+	u64 dfr0 = IDREG(kvm, SYS_ID_AA64DFR0_EL1);
+	u8 pmuver = SYS_FIELD_GET(ID_AA64DFR0_EL1, PMUVer, dfr0);
+
+	return __kvm_pmu_event_mask(pmuver);
+}
+
 /**
  * kvm_pmc_is_64bit - determine if counter is 64bit
  * @pmc: counter context
@@ -947,11 +951,17 @@ int kvm_arm_pmu_v3_set_attr(struct kvm_vcpu *vcpu, struct kvm_device_attr *attr)
 		return 0;
 	}
 	case KVM_ARM_VCPU_PMU_V3_FILTER: {
+		u8 pmuver = kvm_arm_pmu_get_pmuver_limit();
 		struct kvm_pmu_event_filter __user *uaddr;
 		struct kvm_pmu_event_filter filter;
 		int nr_events;
 
-		nr_events = kvm_pmu_event_mask(kvm) + 1;
+		/*
+		 * Allow userspace to specify an event filter for the entire
+		 * event range supported by PMUVer of the hardware, rather
+		 * than the guest's PMUVer for KVM backward compatibility.
+		 */
+		nr_events = __kvm_pmu_event_mask(pmuver) + 1;
 
 		uaddr = (struct kvm_pmu_event_filter __user *)(long)attr->addr;
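
For context, here is a minimal userspace sketch (not part of the patch) of
how the KVM_ARM_VCPU_PMU_V3_FILTER attribute handled above is programmed.
The deny_pmu_events() helper and the vcpu_fd argument are illustrative
names only; the struct, attribute, and ioctl names come from the KVM UAPI.

#include <sys/ioctl.h>
#include <linux/kvm.h>

/*
 * Deny a contiguous range of raw PMU events for one vCPU. vcpu_fd is
 * assumed to be a file descriptor returned by KVM_CREATE_VCPU with the
 * PMUv3 vCPU feature enabled.
 */
static int deny_pmu_events(int vcpu_fd, __u16 base_event, __u16 nevents)
{
	struct kvm_pmu_event_filter filter = {
		.base_event = base_event,
		.nevents    = nevents,
		.action     = KVM_PMU_EVENT_DENY,
	};
	struct kvm_device_attr attr = {
		.group = KVM_ARM_VCPU_PMU_V3_CTRL,
		.attr  = KVM_ARM_VCPU_PMU_V3_FILTER,
		.addr  = (__u64)(unsigned long)&filter,
	};

	/*
	 * With this patch, the range accepted here is bounded by the
	 * hardware's PMUVer (kvm_arm_pmu_get_pmuver_limit()), while the
	 * events the guest can actually program remain filtered against
	 * the guest's ID_AA64DFR0_EL1.PMUVer.
	 */
	return ioctl(vcpu_fd, KVM_SET_DEVICE_ATTR, &attr);
}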