From patchwork Wed Feb 24 14:33:37 2016
X-Patchwork-Submitter: Shannon Zhao <shannon.zhao@linaro.org>
X-Patchwork-Id: 8408491
From: Shannon Zhao <shannon.zhao@linaro.org>
To: kvmarm@lists.cs.columbia.edu, marc.zyngier@arm.com,
	christoffer.dall@linaro.org
Cc: linux-arm-kernel@lists.infradead.org, kvm@vger.kernel.org,
	will.deacon@arm.com, shannon.zhao@linaro.org, peter.huangpeng@huawei.com,
	hangaohuai@huawei.com, zhaoshenglong@huawei.com
Subject: [PATCH v14 15/20] KVM: ARM64: Add PMU overflow interrupt routing
Date: Wed, 24 Feb 2016 22:33:37 +0800
Message-Id: <1456324417-18992-1-git-send-email-shannon.zhao@linaro.org>
X-Mailer: git-send-email 2.1.0
In-Reply-To: <1456290520-10012-16-git-send-email-zhaoshenglong@huawei.com>
References: <1456290520-10012-16-git-send-email-zhaoshenglong@huawei.com>
X-Mailing-List: kvm@vger.kernel.org

When calling perf_event_create_kernel_counter() to create a perf_event,
assign an overflow handler. Then, when the perf event overflows, set the
corresponding bit of the guest PMOVSSET register. If this counter is enabled
and its interrupt is enabled as well, kick the vcpu to sync the interrupt.

On VM entry, if a counter has overflowed and the interrupt level has changed,
inject the interrupt with the corresponding level. On VM exit, sync the
interrupt level as well if it has changed.

Signed-off-by: Shannon Zhao <shannon.zhao@linaro.org>
Reviewed-by: Marc Zyngier <marc.zyngier@arm.com>
Reviewed-by: Andrew Jones <drjones@redhat.com>
Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
---
 arch/arm/kvm/arm.c    |  5 ++++
 include/kvm/arm_pmu.h |  5 ++++
 virt/kvm/arm/pmu.c    | 69 ++++++++++++++++++++++++++++++++++++++++++++++++++-
 3 files changed, 78 insertions(+), 1 deletion(-)

diff --git a/arch/arm/kvm/arm.c b/arch/arm/kvm/arm.c
index dda1959..5c133ac 100644
--- a/arch/arm/kvm/arm.c
+++ b/arch/arm/kvm/arm.c
@@ -28,6 +28,7 @@
 #include <linux/sched.h>
 #include <linux/kvm.h>
 #include <trace/events/kvm.h>
+#include <kvm/arm_pmu.h>
 
 #define CREATE_TRACE_POINTS
 #include "trace.h"
@@ -577,6 +578,7 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu, struct kvm_run *run)
		 * non-preemptible context.
		 */
		preempt_disable();
+		kvm_pmu_flush_hwstate(vcpu);
		kvm_timer_flush_hwstate(vcpu);
		kvm_vgic_flush_hwstate(vcpu);
 
@@ -593,6 +595,7 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu, struct kvm_run *run)
		if (ret <= 0 || need_new_vmid_gen(vcpu->kvm) ||
		    vcpu->arch.power_off || vcpu->arch.pause) {
			local_irq_enable();
+			kvm_pmu_sync_hwstate(vcpu);
			kvm_timer_sync_hwstate(vcpu);
			kvm_vgic_sync_hwstate(vcpu);
			preempt_enable();
@@ -641,6 +644,8 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu, struct kvm_run *run)
		kvm_guest_exit();
		trace_kvm_exit(ret, kvm_vcpu_trap_get_class(vcpu), *vcpu_pc(vcpu));
 
+		kvm_pmu_sync_hwstate(vcpu);
+
		/*
		 * We must sync the timer state before the vgic state so that
		 * the vgic can properly sample the updated state of the
diff --git a/include/kvm/arm_pmu.h b/include/kvm/arm_pmu.h
index 8bc92d1..9c184ed 100644
--- a/include/kvm/arm_pmu.h
+++ b/include/kvm/arm_pmu.h
@@ -35,6 +35,7 @@ struct kvm_pmu {
	int irq_num;
	struct kvm_pmc pmc[ARMV8_PMU_MAX_COUNTERS];
	bool ready;
+	bool irq_level;
 };
 
 #define kvm_arm_pmu_v3_ready(v)	((v)->arch.pmu.ready)
@@ -44,6 +45,8 @@ u64 kvm_pmu_valid_counter_mask(struct kvm_vcpu *vcpu);
 void kvm_pmu_disable_counter(struct kvm_vcpu *vcpu, u64 val);
 void kvm_pmu_enable_counter(struct kvm_vcpu *vcpu, u64 val);
 void kvm_pmu_overflow_set(struct kvm_vcpu *vcpu, u64 val);
+void kvm_pmu_flush_hwstate(struct kvm_vcpu *vcpu);
+void kvm_pmu_sync_hwstate(struct kvm_vcpu *vcpu);
 void kvm_pmu_software_increment(struct kvm_vcpu *vcpu, u64 val);
 void kvm_pmu_handle_pmcr(struct kvm_vcpu *vcpu, u64 val);
 void kvm_pmu_set_counter_event_type(struct kvm_vcpu *vcpu, u64 data,
@@ -67,6 +70,8 @@ static inline u64 kvm_pmu_valid_counter_mask(struct kvm_vcpu *vcpu)
 static inline void kvm_pmu_disable_counter(struct kvm_vcpu *vcpu, u64 val) {}
 static inline void kvm_pmu_enable_counter(struct kvm_vcpu *vcpu, u64 val) {}
 static inline void kvm_pmu_overflow_set(struct kvm_vcpu *vcpu, u64 val) {}
+static inline void kvm_pmu_flush_hwstate(struct kvm_vcpu *vcpu) {}
+static inline void kvm_pmu_sync_hwstate(struct kvm_vcpu *vcpu) {}
 static inline void kvm_pmu_software_increment(struct kvm_vcpu *vcpu, u64 val) {}
 static inline void kvm_pmu_handle_pmcr(struct kvm_vcpu *vcpu, u64 val) {}
 static inline void kvm_pmu_set_counter_event_type(struct kvm_vcpu *vcpu,
diff --git a/virt/kvm/arm/pmu.c b/virt/kvm/arm/pmu.c
index cda869c..74e858c 100644
--- a/virt/kvm/arm/pmu.c
+++ b/virt/kvm/arm/pmu.c
@@ -21,6 +21,7 @@
 #include <linux/perf_event.h>
 #include <asm/kvm_emulate.h>
 #include <kvm/arm_pmu.h>
+#include <kvm/arm_vgic.h>
 
 /**
  * kvm_pmu_get_counter_value - get PMU counter value
@@ -180,6 +181,71 @@ void kvm_pmu_overflow_set(struct kvm_vcpu *vcpu, u64 val)
	kvm_vcpu_kick(vcpu);
 }
 
+static void kvm_pmu_update_state(struct kvm_vcpu *vcpu)
+{
+	struct kvm_pmu *pmu = &vcpu->arch.pmu;
+	bool overflow;
+
+	if (!kvm_arm_pmu_v3_ready(vcpu))
+		return;
+
+	overflow = !!kvm_pmu_overflow_status(vcpu);
+	if (pmu->irq_level != overflow) {
+		pmu->irq_level = overflow;
+		kvm_vgic_inject_irq(vcpu->kvm, vcpu->vcpu_id,
+				    pmu->irq_num, overflow);
+	}
+}
+
+/**
+ * kvm_pmu_flush_hwstate - flush pmu state to cpu
+ * @vcpu: The vcpu pointer
+ *
+ * Check if the PMU has overflowed while we were running in the host, and
+ * inject an interrupt if that was the case.
+ */
+void kvm_pmu_flush_hwstate(struct kvm_vcpu *vcpu)
+{
+	kvm_pmu_update_state(vcpu);
+}
+
+/**
+ * kvm_pmu_sync_hwstate - sync pmu state from cpu
+ * @vcpu: The vcpu pointer
+ *
+ * Check if the PMU has overflowed while we were running in the guest, and
+ * inject an interrupt if that was the case.
+ */
+void kvm_pmu_sync_hwstate(struct kvm_vcpu *vcpu)
+{
+	kvm_pmu_update_state(vcpu);
+}
+
+static inline struct kvm_vcpu *kvm_pmc_to_vcpu(struct kvm_pmc *pmc)
+{
+	struct kvm_pmu *pmu;
+	struct kvm_vcpu_arch *vcpu_arch;
+
+	pmc -= pmc->idx;
+	pmu = container_of(pmc, struct kvm_pmu, pmc[0]);
+	vcpu_arch = container_of(pmu, struct kvm_vcpu_arch, pmu);
+	return container_of(vcpu_arch, struct kvm_vcpu, arch);
+}
+
+/**
+ * When perf event overflows, call kvm_pmu_overflow_set to set overflow status.
+ */
+static void kvm_pmu_perf_overflow(struct perf_event *perf_event,
+				  struct perf_sample_data *data,
+				  struct pt_regs *regs)
+{
+	struct kvm_pmc *pmc = perf_event->overflow_handler_context;
+	struct kvm_vcpu *vcpu = kvm_pmc_to_vcpu(pmc);
+	int idx = pmc->idx;
+
+	kvm_pmu_overflow_set(vcpu, BIT(idx));
+}
+
 /**
  * kvm_pmu_software_increment - do software increment
  * @vcpu: The vcpu pointer
@@ -291,7 +357,8 @@ void kvm_pmu_set_counter_event_type(struct kvm_vcpu *vcpu, u64 data,
	/* The initial sample period (overflow count) of an event. */
	attr.sample_period = (-counter) & pmc->bitmask;
 
-	event = perf_event_create_kernel_counter(&attr, -1, current, NULL, pmc);
+	event = perf_event_create_kernel_counter(&attr, -1, current,
+						 kvm_pmu_perf_overflow, pmc);
	if (IS_ERR(event)) {
		pr_err_once("kvm: pmu event creation failed %ld\n",
			    PTR_ERR(event));
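
A note for readers on the pointer gymnastics in kvm_pmc_to_vcpu() above: the
overflow callback only receives a perf_event, whose overflow_handler_context
is the struct kvm_pmc, so the vcpu has to be recovered from the counter
itself. Because pmc[] is embedded in struct kvm_pmu, which is embedded in
struct kvm_vcpu_arch, which is embedded in struct kvm_vcpu, stepping back to
pmc[0] via the counter's own index and walking up with container_of() does
the job. Below is a minimal userspace sketch of the same technique; the
struct definitions and names are illustrative stand-ins, not the kernel's
(the sketch also collapses one level of nesting for brevity):

#include <stddef.h>
#include <stdio.h>

/* Userspace stand-in for the kernel's container_of() macro. */
#define container_of(ptr, type, member) \
	((type *)((char *)(ptr) - offsetof(type, member)))

struct pmc { int idx; };                  /* stands in for struct kvm_pmc  */
struct pmu { struct pmc pmc[4]; };        /* stands in for struct kvm_pmu  */
struct vcpu { int id; struct pmu pmu; };  /* stands in for struct kvm_vcpu */

/*
 * Same recovery as kvm_pmc_to_vcpu(): step back from &pmu->pmc[idx] to
 * &pmu->pmc[0] using the counter's own index, then walk up through the
 * enclosing structures.
 */
static struct vcpu *pmc_to_vcpu(struct pmc *pmc)
{
	struct pmu *pmu;

	pmc -= pmc->idx;
	pmu = container_of(pmc, struct pmu, pmc[0]);
	return container_of(pmu, struct vcpu, pmu);
}

int main(void)
{
	struct vcpu v = { .id = 7 };
	int i;

	for (i = 0; i < 4; i++)
		v.pmu.pmc[i].idx = i;

	/* Recover the owning vcpu from an interior counter pointer. */
	printf("vcpu id: %d\n", pmc_to_vcpu(&v.pmu.pmc[2])->id);
	return 0;
}

The kernel version has one extra container_of() hop (kvm_pmu ->
kvm_vcpu_arch -> kvm_vcpu), but the idea is identical, and it is what lets
kvm_pmu_perf_overflow() find the right vcpu to set the PMOVSSET bit on
without storing an explicit back-pointer in every counter.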