From patchwork Tue Nov 20 14:15:22 2018
X-Patchwork-Submitter: Andrew Murray
X-Patchwork-Id: 10690779
From: Andrew Murray
To: Christoffer Dall, Marc Zyngier, Catalin Marinas, Will Deacon, Mark Rutland
Cc: Julien Thierry, Suzuki K Poulose, kvmarm@lists.cs.columbia.edu, linux-arm-kernel@lists.infradead.org
Subject: [PATCH v2 4/4] arm64: KVM: Enable support for :G/:H perf event modifiers
Date: Tue, 20 Nov 2018 14:15:22 +0000
Message-Id: <1542723322-42536-5-git-send-email-andrew.murray@arm.com>
In-Reply-To: <1542723322-42536-1-git-send-email-andrew.murray@arm.com>
References: <1542723322-42536-1-git-send-email-andrew.murray@arm.com>

Enable and disable event counters as appropriate when entering and exiting
the guest to support guest-only and host-only event counting. For both VHE
and non-VHE we switch the counters between host and guest at EL2; EL2
itself is filtered out by the PMU when the :G modifier is used.

The PMU may be running when we change which counters are enabled; however,
we avoid adding an isb here because we can rely on existing context
synchronisation events: the isb in kvm_arm_vhe_guest_exit for VHE, and the
eret from the hvc in kvm_call_hyp for non-VHE.
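For background, the :G and :H perf tool modifiers correspond to the
exclude_host and exclude_guest fields of struct perf_event_attr (e.g.
'perf stat -e cycles:G' counts cycles only while a guest is running). The
sketch below is illustrative only and is not part of this patch; it shows
the syscall-level equivalent of the guest-only counting that this series
enables on arm64:

#include <linux/perf_event.h>
#include <sys/syscall.h>
#include <unistd.h>
#include <string.h>

/* Illustrative sketch (not part of this patch): open a CPU cycle counter
 * that excludes host execution, i.e. the equivalent of the ":G" modifier. */
static int open_guest_only_cycles(int cpu)
{
	struct perf_event_attr attr;

	memset(&attr, 0, sizeof(attr));
	attr.size = sizeof(attr);
	attr.type = PERF_TYPE_HARDWARE;
	attr.config = PERF_COUNT_HW_CPU_CYCLES;
	attr.exclude_host = 1;	/* count only while a guest is running */

	/* pid = -1 with a valid cpu: system-wide counting on that CPU */
	return syscall(__NR_perf_event_open, &attr, -1, cpu, -1, 0);
}

On the host, honouring exclude_host/exclude_guest for these counters relies
on the plumbing added earlier in this series.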
Signed-off-by: Andrew Murray
---
 arch/arm64/kvm/hyp/switch.c | 38 ++++++++++++++++++++++++++++++++++++++
 1 file changed, 38 insertions(+)

diff --git a/arch/arm64/kvm/hyp/switch.c b/arch/arm64/kvm/hyp/switch.c
index d496ef5..ebf0aac 100644
--- a/arch/arm64/kvm/hyp/switch.c
+++ b/arch/arm64/kvm/hyp/switch.c
@@ -373,6 +373,32 @@ static bool __hyp_text __hyp_switch_fpsimd(struct kvm_vcpu *vcpu)
 	return true;
 }
 
+static bool __hyp_text __pmu_switch_to_guest(struct kvm_cpu_context *host_ctxt)
+{
+	u32 host_only = host_ctxt->events_host_only;
+	u32 guest_only = host_ctxt->events_guest_only;
+
+	if (host_only)
+		write_sysreg(host_only, pmcntenclr_el0);
+
+	if (guest_only)
+		write_sysreg(guest_only, pmcntenset_el0);
+
+	return (host_only || guest_only);
+}
+
+static void __hyp_text __pmu_switch_to_host(struct kvm_cpu_context *host_ctxt)
+{
+	u32 host_only = host_ctxt->events_host_only;
+	u32 guest_only = host_ctxt->events_guest_only;
+
+	if (guest_only)
+		write_sysreg(guest_only, pmcntenclr_el0);
+
+	if (host_only)
+		write_sysreg(host_only, pmcntenset_el0);
+}
+
 /*
  * Return true when we were able to fixup the guest exit and should return to
  * the guest, false when we should restore the host state and return to the
@@ -488,12 +514,15 @@ int kvm_vcpu_run_vhe(struct kvm_vcpu *vcpu)
 {
 	struct kvm_cpu_context *host_ctxt;
 	struct kvm_cpu_context *guest_ctxt;
+	bool pmu_switch_needed;
 	u64 exit_code;
 
 	host_ctxt = vcpu->arch.host_cpu_context;
 	host_ctxt->__hyp_running_vcpu = vcpu;
 	guest_ctxt = &vcpu->arch.ctxt;
 
+	pmu_switch_needed = __pmu_switch_to_guest(host_ctxt);
+
 	sysreg_save_host_state_vhe(host_ctxt);
 
 	__activate_traps(vcpu);
@@ -524,6 +553,9 @@ int kvm_vcpu_run_vhe(struct kvm_vcpu *vcpu)
 
 	__debug_switch_to_host(vcpu);
 
+	if (pmu_switch_needed)
+		__pmu_switch_to_host(host_ctxt);
+
 	return exit_code;
 }
 
@@ -532,6 +564,7 @@ int __hyp_text __kvm_vcpu_run_nvhe(struct kvm_vcpu *vcpu)
 {
 	struct kvm_cpu_context *host_ctxt;
 	struct kvm_cpu_context *guest_ctxt;
+	bool pmu_switch_needed;
 	u64 exit_code;
 
 	vcpu = kern_hyp_va(vcpu);
@@ -540,6 +573,8 @@ int __hyp_text __kvm_vcpu_run_nvhe(struct kvm_vcpu *vcpu)
 	host_ctxt->__hyp_running_vcpu = vcpu;
 	guest_ctxt = &vcpu->arch.ctxt;
 
+	pmu_switch_needed = __pmu_switch_to_guest(host_ctxt);
+
 	__sysreg_save_state_nvhe(host_ctxt);
 
 	__activate_traps(vcpu);
@@ -586,6 +621,9 @@ int __hyp_text __kvm_vcpu_run_nvhe(struct kvm_vcpu *vcpu)
 	 */
 	__debug_switch_to_host(vcpu);
 
+	if (pmu_switch_needed)
+		__pmu_switch_to_host(host_ctxt);
+
 	return exit_code;
 }