From patchwork Mon Jun 17 19:01:04 2019
X-Patchwork-Submitter: Andrew Murray
X-Patchwork-Id: 11000273
From: Andrew Murray <andrew.murray@arm.com>
To: Christoffer Dall, Marc Zyngier
Subject: [PATCH v10 4/5] KVM: arm/arm64: remove pmc->bitmask
Date: Mon, 17 Jun 2019 20:01:04 +0100
Message-Id: <20190617190105.4662-5-andrew.murray@arm.com>
In-Reply-To: <20190617190105.4662-1-andrew.murray@arm.com>
References: <20190617190105.4662-1-andrew.murray@arm.com>
Cc: Suzuki K Pouloze, James Morse, kvmarm@lists.cs.columbia.edu, linux-arm-kernel@lists.infradead.org, Julien Thierry

We currently use pmc->bitmask to determine the width of the pmc. However,
it is superfluous: the pmc index already tells us whether the pmc is a
cycle counter or an event counter, and the architecture clearly defines
the widths of these counters.

Let's remove the bitmask to simplify the code.

Signed-off-by: Andrew Murray <andrew.murray@arm.com>
---
 include/kvm/arm_pmu.h |  1 -
 virt/kvm/arm/pmu.c    | 30 ++++++++++++++++++++----------
 2 files changed, 20 insertions(+), 11 deletions(-)

diff --git a/include/kvm/arm_pmu.h b/include/kvm/arm_pmu.h
index b73f31baca52..2f0e28dc5a9e 100644
--- a/include/kvm/arm_pmu.h
+++ b/include/kvm/arm_pmu.h
@@ -28,7 +28,6 @@ struct kvm_pmc {
 	u8 idx;	/* index into the pmu->pmc array */
 	struct perf_event *perf_event;
-	u64 bitmask;
 };
 
 struct kvm_pmu {
 	int irq_num;
diff --git a/virt/kvm/arm/pmu.c b/virt/kvm/arm/pmu.c
index ae1e886d4a1a..0588ad0ddb77 100644
--- a/virt/kvm/arm/pmu.c
+++ b/virt/kvm/arm/pmu.c
@@ -25,6 +25,18 @@
 #include <kvm/arm_vgic.h>
 
 static void kvm_pmu_create_perf_event(struct kvm_vcpu *vcpu, u64 select_idx);
+
+/**
+ * kvm_pmu_idx_is_64bit - determine if select_idx is a 64bit counter
+ * @vcpu: The vcpu pointer
+ * @select_idx: The counter index
+ */
+static bool kvm_pmu_idx_is_64bit(struct kvm_vcpu *vcpu, u64 select_idx)
+{
+	return (select_idx == ARMV8_PMU_CYCLE_IDX &&
+		__vcpu_sys_reg(vcpu, PMCR_EL0) & ARMV8_PMU_PMCR_LC);
+}
+
 /**
  * kvm_pmu_get_counter_value - get PMU counter value
  * @vcpu: The vcpu pointer
@@ -47,7 +59,10 @@ u64 kvm_pmu_get_counter_value(struct kvm_vcpu *vcpu, u64 select_idx)
 		counter += perf_event_read_value(pmc->perf_event, &enabled,
 						 &running);
 
-	return counter & pmc->bitmask;
+	if (!kvm_pmu_idx_is_64bit(vcpu, select_idx))
+		counter = lower_32_bits(counter);
+
+	return counter;
 }
 
 /**
@@ -113,7 +128,6 @@ void kvm_pmu_vcpu_reset(struct kvm_vcpu *vcpu)
 	for (i = 0; i < ARMV8_PMU_MAX_COUNTERS; i++) {
 		kvm_pmu_stop_counter(vcpu, &pmu->pmc[i]);
 		pmu->pmc[i].idx = i;
-		pmu->pmc[i].bitmask = 0xffffffffUL;
 	}
 }
 
@@ -348,8 +362,6 @@ void kvm_pmu_software_increment(struct kvm_vcpu *vcpu, u64 val)
  */
 void kvm_pmu_handle_pmcr(struct kvm_vcpu *vcpu, u64 val)
 {
-	struct kvm_pmu *pmu = &vcpu->arch.pmu;
-	struct kvm_pmc *pmc;
 	u64 mask;
 	int i;
 
@@ -368,11 +380,6 @@ void kvm_pmu_handle_pmcr(struct kvm_vcpu *vcpu, u64 val)
 		for (i = 0; i < ARMV8_PMU_CYCLE_IDX; i++)
 			kvm_pmu_set_counter_value(vcpu, i, 0);
 	}
-
-	if (val & ARMV8_PMU_PMCR_LC) {
-		pmc = &pmu->pmc[ARMV8_PMU_CYCLE_IDX];
-		pmc->bitmask = 0xffffffffffffffffUL;
-	}
 }
 
 static bool kvm_pmu_counter_is_enabled(struct kvm_vcpu *vcpu, u64 select_idx)
@@ -420,7 +427,10 @@ static void kvm_pmu_create_perf_event(struct kvm_vcpu *vcpu, u64 select_idx)
 
 	counter = kvm_pmu_get_counter_value(vcpu, select_idx);
 	/* The initial sample period (overflow count) of an event. */
-	attr.sample_period = (-counter) & pmc->bitmask;
+	if (kvm_pmu_idx_is_64bit(vcpu, pmc->idx))
+		attr.sample_period = (-counter) & GENMASK(63, 0);
+	else
+		attr.sample_period = (-counter) & GENMASK(31, 0);
 
 	event = perf_event_create_kernel_counter(&attr, -1, current,
 						 kvm_pmu_perf_overflow, pmc);
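
As a footnote to the patch: the rule it encodes is that an event counter is
always 32 bits wide, while the cycle counter is 64 bits wide only when
PMCR_EL0.LC is set, so the width is a pure function of the counter index and
PMCR_EL0 and never needs to be cached per counter. Below is a minimal
user-space sketch of that rule, not kernel code; the constants mirror the
kernel's ARMV8_PMU_CYCLE_IDX (31) and ARMV8_PMU_PMCR_LC (bit 6), and
read_counter() is a hypothetical stand-in for kvm_pmu_get_counter_value():

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    #define CYCLE_IDX  31          /* stand-in for ARMV8_PMU_CYCLE_IDX */
    #define PMCR_LC    (1U << 6)   /* stand-in for ARMV8_PMU_PMCR_LC */

    /* Width derived from (index, PMCR.LC) alone; no cached bitmask. */
    static bool idx_is_64bit(uint64_t idx, uint32_t pmcr)
    {
            return idx == CYCLE_IDX && (pmcr & PMCR_LC);
    }

    /* Hypothetical stand-in for kvm_pmu_get_counter_value(). */
    static uint64_t read_counter(uint64_t raw, uint64_t idx, uint32_t pmcr)
    {
            /* The cast plays the role of lower_32_bits(). */
            return idx_is_64bit(idx, pmcr) ? raw : (uint32_t)raw;
    }

    int main(void)
    {
            uint64_t raw = 0x1234567890ULL;

            /* Event counter: truncated to 32 bits regardless of LC. */
            printf("event: %#llx\n",
                   (unsigned long long)read_counter(raw, 0, PMCR_LC));
            /* Cycle counter with LC set: full 64 bits retained. */
            printf("cycle: %#llx\n",
                   (unsigned long long)read_counter(raw, CYCLE_IDX, PMCR_LC));
            return 0;
    }

Compiled as an ordinary C program, this prints 0x34567890 for the event
counter and 0x1234567890 for the cycle counter, which is exactly the
truncation kvm_pmu_get_counter_value now applies via lower_32_bits().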