From patchwork Mon Jul 6 02:17:37 2015
X-Patchwork-Submitter: Shannon Zhao
X-Patchwork-Id: 6719211
From: shannon.zhao@linaro.org
To: kvmarm@lists.cs.columbia.edu
Cc: linux-arm-kernel@lists.infradead.org, kvm@vger.kernel.org,
 christoffer.dall@linaro.org, will.deacon@arm.com, marc.zyngier@arm.com,
 alex.bennee@linaro.org, shannon.zhao@linaro.org, zhaoshenglong@huawei.com
Subject: [PATCH 07/18] KVM: ARM64: PMU: Add perf event map and introduce perf
 event creating function
Date: Mon, 6 Jul 2015 10:17:37 +0800
Message-Id: <1436149068-3784-8-git-send-email-shannon.zhao@linaro.org>
In-Reply-To: <1436149068-3784-1-git-send-email-shannon.zhao@linaro.org>
References: <1436149068-3784-1-git-send-email-shannon.zhao@linaro.org>
X-Mailing-List: kvm@vger.kernel.org

From: Shannon Zhao <shannon.zhao@linaro.org>

When a tool such as perf runs on the host, it passes an event type and an
event id within that type to the kernel; the kernel maps them to a hardware
event number and writes that number to the PMU PMEVTYPER<n>_EL0 register.
When we trap and emulate guest accesses to the PMU registers, we do the
reverse: we take the event number the guest writes and map it back to a perf
event type and id.

First check whether the event to be set is the same as the one the counter is
currently monitoring; if so, there is nothing to do. Otherwise, stop the
counter monitoring the current event and look up the perf event type and id
for the new one. Configure the perf_event attr according to the bits of the
written data, setting exclude_host to 1 so the event counts only while the
guest runs, then call the perf_event API to create the corresponding event
and save the event pointer.
Signed-off-by: Shannon Zhao <shannon.zhao@linaro.org>
---
 include/kvm/arm_pmu.h |   4 ++
 virt/kvm/arm/pmu.c    | 173 ++++++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 177 insertions(+)

diff --git a/include/kvm/arm_pmu.h b/include/kvm/arm_pmu.h
index 27d14ca..1050b24 100644
--- a/include/kvm/arm_pmu.h
+++ b/include/kvm/arm_pmu.h
@@ -45,9 +45,13 @@ struct kvm_pmu {
 
 #ifdef CONFIG_KVM_ARM_PMU
 void kvm_pmu_vcpu_reset(struct kvm_vcpu *vcpu);
+void kvm_pmu_set_counter_event_type(struct kvm_vcpu *vcpu, unsigned long data,
+				    unsigned long select_idx);
 void kvm_pmu_init(struct kvm_vcpu *vcpu);
 #else
 static inline void kvm_pmu_vcpu_reset(struct kvm_vcpu *vcpu) {}
+static inline void kvm_pmu_set_counter_event_type(struct kvm_vcpu *vcpu,
+			unsigned long data, unsigned long select_idx) {}
 static inline void kvm_pmu_init(struct kvm_vcpu *vcpu) {}
 #endif
diff --git a/virt/kvm/arm/pmu.c b/virt/kvm/arm/pmu.c
index dc252d0..50a3c82 100644
--- a/virt/kvm/arm/pmu.c
+++ b/virt/kvm/arm/pmu.c
@@ -18,8 +18,68 @@
 #include <linux/cpu.h>
 #include <linux/kvm.h>
 #include <linux/kvm_host.h>
+#include <linux/perf_event.h>
 #include <kvm/arm_pmu.h>
 
+/* PMU HW events mapping. */
+static struct kvm_pmu_hw_event_map {
+	unsigned eventsel;
+	unsigned event_type;
+} kvm_pmu_hw_events[] = {
+	[0] = { 0x11, PERF_COUNT_HW_CPU_CYCLES },
+	[1] = { 0x08, PERF_COUNT_HW_INSTRUCTIONS },
+	[2] = { 0x04, PERF_COUNT_HW_CACHE_REFERENCES },
+	[3] = { 0x03, PERF_COUNT_HW_CACHE_MISSES },
+	[4] = { 0x10, PERF_COUNT_HW_BRANCH_MISSES },
+};
+
+/* PMU HW cache events mapping. */
+static struct kvm_pmu_hw_cache_event_map {
+	unsigned eventsel;
+	unsigned cache_type;
+	unsigned cache_op;
+	unsigned cache_result;
+} kvm_pmu_hw_cache_events[] = {
+	[0] = { 0x04, PERF_COUNT_HW_CACHE_L1D, PERF_COUNT_HW_CACHE_OP_READ,
+		PERF_COUNT_HW_CACHE_RESULT_ACCESS },
+	[1] = { 0x03, PERF_COUNT_HW_CACHE_L1D, PERF_COUNT_HW_CACHE_OP_READ,
+		PERF_COUNT_HW_CACHE_RESULT_MISS },
+	[2] = { 0x04, PERF_COUNT_HW_CACHE_L1D, PERF_COUNT_HW_CACHE_OP_WRITE,
+		PERF_COUNT_HW_CACHE_RESULT_ACCESS },
+	[3] = { 0x03, PERF_COUNT_HW_CACHE_L1D, PERF_COUNT_HW_CACHE_OP_WRITE,
+		PERF_COUNT_HW_CACHE_RESULT_MISS },
+	[4] = { 0x12, PERF_COUNT_HW_CACHE_BPU, PERF_COUNT_HW_CACHE_OP_READ,
+		PERF_COUNT_HW_CACHE_RESULT_ACCESS },
+	[5] = { 0x10, PERF_COUNT_HW_CACHE_BPU, PERF_COUNT_HW_CACHE_OP_READ,
+		PERF_COUNT_HW_CACHE_RESULT_MISS },
+	[6] = { 0x12, PERF_COUNT_HW_CACHE_BPU, PERF_COUNT_HW_CACHE_OP_WRITE,
+		PERF_COUNT_HW_CACHE_RESULT_ACCESS },
+	[7] = { 0x10, PERF_COUNT_HW_CACHE_BPU, PERF_COUNT_HW_CACHE_OP_WRITE,
+		PERF_COUNT_HW_CACHE_RESULT_MISS },
+};
+
+/**
+ * kvm_pmu_stop_counter - stop PMU counter for the selected counter
+ * @vcpu: The vcpu pointer
+ * @select_idx: The counter index
+ *
+ * If this counter has been configured to monitor some event, disable and
+ * release it.
+ */
+static void kvm_pmu_stop_counter(struct kvm_vcpu *vcpu,
+				 unsigned long select_idx)
+{
+	struct kvm_pmu *pmu = &vcpu->arch.pmu;
+	struct kvm_pmc *pmc = &pmu->pmc[select_idx];
+
+	if (pmc->perf_event) {
+		perf_event_disable(pmc->perf_event);
+		perf_event_release_kernel(pmc->perf_event);
+	}
+	pmc->perf_event = NULL;
+	pmc->eventsel = 0xff;
+}
+
 /**
  * kvm_pmu_vcpu_reset - reset pmu state for cpu
  * @vcpu: The vcpu pointer
@@ -27,12 +87,125 @@
  */
 void kvm_pmu_vcpu_reset(struct kvm_vcpu *vcpu)
 {
+	int i;
 	struct kvm_pmu *pmu = &vcpu->arch.pmu;
 
+	for (i = 0; i < ARMV8_MAX_COUNTERS; i++)
+		kvm_pmu_stop_counter(vcpu, i);
+	pmu->overflow_status = 0;
 	pmu->irq_pending = false;
 }
 
 /**
+ * kvm_pmu_find_hw_event - find hardware event
+ * @pmu: The pmu pointer
+ * @event_select: The number of the selected event type
+ *
+ * Based on the number of the selected event type, find out whether it
+ * belongs to PERF_TYPE_HARDWARE. If so, return the corresponding event id.
+ */
+static unsigned kvm_pmu_find_hw_event(struct kvm_pmu *pmu,
+				      unsigned long event_select)
+{
+	int i;
+
+	for (i = 0; i < ARRAY_SIZE(kvm_pmu_hw_events); i++)
+		if (kvm_pmu_hw_events[i].eventsel == event_select)
+			break;
+
+	if (i == ARRAY_SIZE(kvm_pmu_hw_events))
+		return PERF_COUNT_HW_MAX;
+
+	return kvm_pmu_hw_events[i].event_type;
+}
+
+/**
+ * kvm_pmu_find_hw_cache_event - find hardware cache event
+ * @pmu: The pmu pointer
+ * @event_select: The number of the selected event type
+ *
+ * Based on the number of the selected event type, find out whether it
+ * belongs to PERF_TYPE_HW_CACHE. If so, return the corresponding event id.
+ */
+static unsigned kvm_pmu_find_hw_cache_event(struct kvm_pmu *pmu,
+					    unsigned long event_select)
+{
+	int i;
+	unsigned config;
+
+	for (i = 0; i < ARRAY_SIZE(kvm_pmu_hw_cache_events); i++)
+		if (kvm_pmu_hw_cache_events[i].eventsel == event_select)
+			break;
+
+	if (i == ARRAY_SIZE(kvm_pmu_hw_cache_events))
+		return PERF_COUNT_HW_CACHE_MAX;
+
+	config = (kvm_pmu_hw_cache_events[i].cache_type & 0xff)
+		 | ((kvm_pmu_hw_cache_events[i].cache_op & 0xff) << 8)
+		 | ((kvm_pmu_hw_cache_events[i].cache_result & 0xff) << 16);
+
+	return config;
+}
+
+/**
+ * kvm_pmu_set_counter_event_type - set selected counter to monitor some event
+ * @vcpu: The vcpu pointer
+ * @data: The data guest writes to PMXEVTYPER_EL0
+ * @select_idx: The number of the selected counter
+ *
+ * First check whether the event to be set is the same as the one the counter
+ * is currently monitoring; if so, there is nothing to do. Otherwise, stop the
+ * counter monitoring the current event and look up the perf event type and id
+ * for the new one. Configure the perf_event attr according to the bits of
+ * data, setting exclude_host to 1 so the event counts only for the guest,
+ * then call the perf_event API to create the corresponding event and save the
+ * event pointer.
+ */
+void kvm_pmu_set_counter_event_type(struct kvm_vcpu *vcpu, unsigned long data,
+				    unsigned long select_idx)
+{
+	struct kvm_pmu *pmu = &vcpu->arch.pmu;
+	struct kvm_pmc *pmc = &pmu->pmc[select_idx];
+	struct perf_event *event;
+	struct perf_event_attr attr;
+	unsigned config, type = PERF_TYPE_RAW;
+
+	if ((data & ARMV8_EVTYPE_EVENT) == pmc->eventsel)
+		return;
+
+	kvm_pmu_stop_counter(vcpu, select_idx);
+	pmc->eventsel = data & ARMV8_EVTYPE_EVENT;
+
+	config = kvm_pmu_find_hw_event(pmu, pmc->eventsel);
+	if (config != PERF_COUNT_HW_MAX) {
+		type = PERF_TYPE_HARDWARE;
+	} else {
+		config = kvm_pmu_find_hw_cache_event(pmu, pmc->eventsel);
+		if (config != PERF_COUNT_HW_CACHE_MAX)
+			type = PERF_TYPE_HW_CACHE;
+	}
+
+	if (type == PERF_TYPE_RAW)
+		config = pmc->eventsel;
+
+	attr.type = type;
+	attr.size = sizeof(attr);
+	attr.pinned = true;
+	attr.exclude_user = data & ARMV8_EXCLUDE_EL0 ? 1 : 0;
+	attr.exclude_kernel = data & ARMV8_EXCLUDE_EL1 ? 1 : 0;
+	attr.exclude_hv = data & ARMV8_INCLUDE_EL2 ? 0 : 1;
+	attr.exclude_host = 1;
+	attr.config = config;
+	attr.sample_period = (-pmc->counter) & (((u64)1 << 32) - 1);
+
+	event = perf_event_create_kernel_counter(&attr, -1, current, NULL, pmc);
+	if (IS_ERR(event)) {
+		kvm_err("kvm: pmu event creation failed %ld\n",
+			PTR_ERR(event));
+		return;
+	}
+	pmc->perf_event = event;
+}
+
+/**
  * kvm_pmu_init - Initialize global PMU state for per vcpu
  * @vcpu: The vcpu pointer
  *