From patchwork Mon Jul 17 14:27:09 2017
X-Patchwork-Submitter: Christoffer Dall
X-Patchwork-Id: 9845311
From: Christoffer Dall
To: kvmarm@lists.cs.columbia.edu, linux-arm-kernel@lists.infradead.org
Cc: Marc Zyngier, Christoffer Dall, kvm@vger.kernel.org
Subject: [RFC PATCH v2 10/19] KVM: arm/arm64: Move timer save/restore out of the hyp code
Date: Mon, 17 Jul 2017 16:27:09 +0200
Message-Id: <20170717142718.13853-11-cdall@linaro.org>
In-Reply-To: <20170717142718.13853-1-cdall@linaro.org>
References: <20170717142718.13853-1-cdall@linaro.org>

As we are about to be lazy with saving and restoring the timer registers,
we prepare by moving all possible timer configuration logic out of the hyp
code.

All virtual timer registers can be programmed from EL1, and since the arch
timer is always a level-triggered interrupt, we can safely do this with
interrupts disabled in the host kernel on the way to the guest without
taking vtimer interrupts in the host kernel (yet).

The downside is that the cntvoff register can only be programmed from hyp
mode, so we jump into hyp mode and back to program it. This is also safe,
because the host kernel doesn't use the virtual timer in the KVM code. It
may add a small performance penalty, but only until following commits
where we move this operation to vcpu load/put.
Signed-off-by: Christoffer Dall
---
 arch/arm/include/asm/kvm_asm.h   |  2 ++
 arch/arm/include/asm/kvm_hyp.h   |  4 +--
 arch/arm/kvm/hyp/switch.c        |  7 ++--
 arch/arm64/include/asm/kvm_asm.h |  2 ++
 arch/arm64/include/asm/kvm_hyp.h |  4 +--
 arch/arm64/kvm/hyp/switch.c      |  6 ++--
 virt/kvm/arm/arch_timer.c        | 40 ++++++++++++++++++++++
 virt/kvm/arm/hyp/timer-sr.c      | 74 +++++++++++++++++-----------------------
 8 files changed, 87 insertions(+), 52 deletions(-)

diff --git a/arch/arm/include/asm/kvm_asm.h b/arch/arm/include/asm/kvm_asm.h
index 14d68a4..36dd296 100644
--- a/arch/arm/include/asm/kvm_asm.h
+++ b/arch/arm/include/asm/kvm_asm.h
@@ -68,6 +68,8 @@ extern void __kvm_tlb_flush_vmid_ipa(struct kvm *kvm, phys_addr_t ipa);
 extern void __kvm_tlb_flush_vmid(struct kvm *kvm);
 extern void __kvm_tlb_flush_local_vmid(struct kvm_vcpu *vcpu);
 
+extern void __kvm_timer_set_cntvoff(u32 cntvoff_low, u32 cntvoff_high);
+
 extern int __kvm_vcpu_run(struct kvm_vcpu *vcpu);
 
 extern void __init_stage2_translation(void);
diff --git a/arch/arm/include/asm/kvm_hyp.h b/arch/arm/include/asm/kvm_hyp.h
index 14b5903..ab20ffa 100644
--- a/arch/arm/include/asm/kvm_hyp.h
+++ b/arch/arm/include/asm/kvm_hyp.h
@@ -98,8 +98,8 @@
 #define cntvoff_el2             CNTVOFF
 #define cnthctl_el2             CNTHCTL
 
-void __timer_save_state(struct kvm_vcpu *vcpu);
-void __timer_restore_state(struct kvm_vcpu *vcpu);
+void __timer_enable_traps(struct kvm_vcpu *vcpu);
+void __timer_disable_traps(struct kvm_vcpu *vcpu);
 
 void __vgic_v2_save_state(struct kvm_vcpu *vcpu);
 void __vgic_v2_restore_state(struct kvm_vcpu *vcpu);
diff --git a/arch/arm/kvm/hyp/switch.c b/arch/arm/kvm/hyp/switch.c
index ebd2dd4..330c9ce 100644
--- a/arch/arm/kvm/hyp/switch.c
+++ b/arch/arm/kvm/hyp/switch.c
@@ -174,7 +174,7 @@ int __hyp_text __kvm_vcpu_run(struct kvm_vcpu *vcpu)
 	__activate_vm(vcpu);
 
 	__vgic_restore_state(vcpu);
-	__timer_restore_state(vcpu);
+	__timer_enable_traps(vcpu);
 	__sysreg_restore_state(guest_ctxt);
 	__banked_restore_state(guest_ctxt);
 
@@ -191,7 +191,8 @@ int __hyp_text __kvm_vcpu_run(struct kvm_vcpu *vcpu)
 
 	__banked_save_state(guest_ctxt);
 	__sysreg_save_state(guest_ctxt);
-	__timer_save_state(vcpu);
+	__timer_disable_traps(vcpu);
+
 	__vgic_save_state(vcpu);
 
 	__deactivate_traps(vcpu);
@@ -237,7 +238,7 @@ void __hyp_text __noreturn __hyp_panic(int cause)
 
 		vcpu = (struct kvm_vcpu *)read_sysreg(HTPIDR);
 		host_ctxt = kern_hyp_va(vcpu->arch.host_cpu_context);
-		__timer_save_state(vcpu);
+		__timer_disable_traps(vcpu);
 		__deactivate_traps(vcpu);
 		__deactivate_vm(vcpu);
 		__banked_restore_state(host_ctxt);
diff --git a/arch/arm64/include/asm/kvm_asm.h b/arch/arm64/include/asm/kvm_asm.h
index 26a64d0..ab4d0a9 100644
--- a/arch/arm64/include/asm/kvm_asm.h
+++ b/arch/arm64/include/asm/kvm_asm.h
@@ -55,6 +55,8 @@ extern void __kvm_tlb_flush_vmid_ipa(struct kvm *kvm, phys_addr_t ipa);
 extern void __kvm_tlb_flush_vmid(struct kvm *kvm);
 extern void __kvm_tlb_flush_local_vmid(struct kvm_vcpu *vcpu);
 
+extern void __kvm_timer_set_cntvoff(u32 cntvoff_low, u32 cntvoff_high);
+
 extern int __kvm_vcpu_run(struct kvm_vcpu *vcpu);
 
 extern u64 __vgic_v3_get_ich_vtr_el2(void);
diff --git a/arch/arm64/include/asm/kvm_hyp.h b/arch/arm64/include/asm/kvm_hyp.h
index 4572a9b..08d3bb6 100644
--- a/arch/arm64/include/asm/kvm_hyp.h
+++ b/arch/arm64/include/asm/kvm_hyp.h
@@ -129,8 +129,8 @@ void __vgic_v3_save_state(struct kvm_vcpu *vcpu);
 void __vgic_v3_restore_state(struct kvm_vcpu *vcpu);
 int __vgic_v3_perform_cpuif_access(struct kvm_vcpu *vcpu);
 
-void __timer_save_state(struct kvm_vcpu *vcpu);
-void __timer_restore_state(struct kvm_vcpu *vcpu);
+void __timer_enable_traps(struct kvm_vcpu *vcpu);
+void __timer_disable_traps(struct kvm_vcpu *vcpu);
 
 void __sysreg_save_host_state(struct kvm_cpu_context *ctxt);
 void __sysreg_restore_host_state(struct kvm_cpu_context *ctxt);
diff --git a/arch/arm64/kvm/hyp/switch.c b/arch/arm64/kvm/hyp/switch.c
index 945e79c..4994f4b 100644
--- a/arch/arm64/kvm/hyp/switch.c
+++ b/arch/arm64/kvm/hyp/switch.c
@@ -298,7 +298,7 @@ int __hyp_text __kvm_vcpu_run(struct kvm_vcpu *vcpu)
 	__activate_vm(vcpu);
 
 	__vgic_restore_state(vcpu);
-	__timer_restore_state(vcpu);
+	__timer_enable_traps(vcpu);
 
 	/*
 	 * We must restore the 32-bit state before the sysregs, thanks
@@ -368,7 +368,7 @@ int __hyp_text __kvm_vcpu_run(struct kvm_vcpu *vcpu)
 
 	__sysreg_save_guest_state(guest_ctxt);
 	__sysreg32_save_state(vcpu);
-	__timer_save_state(vcpu);
+	__timer_disable_traps(vcpu);
 	__vgic_save_state(vcpu);
 
 	__deactivate_traps(vcpu);
@@ -436,7 +436,7 @@ void __hyp_text __noreturn __hyp_panic(void)
 
 		vcpu = (struct kvm_vcpu *)read_sysreg(tpidr_el2);
 		host_ctxt = kern_hyp_va(vcpu->arch.host_cpu_context);
-		__timer_save_state(vcpu);
+		__timer_disable_traps(vcpu);
 		__deactivate_traps(vcpu);
 		__deactivate_vm(vcpu);
 		__sysreg_restore_host_state(host_ctxt);
diff --git a/virt/kvm/arm/arch_timer.c b/virt/kvm/arm/arch_timer.c
index 7f87099..4254f88 100644
--- a/virt/kvm/arm/arch_timer.c
+++ b/virt/kvm/arm/arch_timer.c
@@ -276,6 +276,20 @@ static void phys_timer_emulate(struct kvm_vcpu *vcpu,
 	soft_timer_start(&timer->phys_timer, kvm_timer_compute_delta(timer_ctx));
 }
 
+static void timer_save_state(struct kvm_vcpu *vcpu)
+{
+	struct arch_timer_cpu *timer = &vcpu->arch.timer_cpu;
+	struct arch_timer_context *vtimer = vcpu_vtimer(vcpu);
+
+	if (timer->enabled) {
+		vtimer->cnt_ctl = read_sysreg_el0(cntv_ctl);
+		vtimer->cnt_cval = read_sysreg_el0(cntv_cval);
+	}
+
+	/* Disable the virtual timer */
+	write_sysreg_el0(0, cntv_ctl);
+}
+
 /*
  * Schedule the background timer before calling kvm_vcpu_block, so that this
  * thread is removed from its waitqueue and made runnable when there's a timer
@@ -312,6 +326,18 @@ void kvm_timer_schedule(struct kvm_vcpu *vcpu)
 	soft_timer_start(&timer->bg_timer, kvm_timer_earliest_exp(vcpu));
 }
 
+static void timer_restore_state(struct kvm_vcpu *vcpu)
+{
+	struct arch_timer_cpu *timer = &vcpu->arch.timer_cpu;
+	struct arch_timer_context *vtimer = vcpu_vtimer(vcpu);
+
+	if (timer->enabled) {
+		write_sysreg_el0(vtimer->cnt_cval, cntv_cval);
+		isb();
+		write_sysreg_el0(vtimer->cnt_ctl, cntv_ctl);
+	}
+}
+
 void kvm_timer_unschedule(struct kvm_vcpu *vcpu)
 {
 	struct arch_timer_cpu *timer = &vcpu->arch.timer_cpu;
@@ -320,6 +346,13 @@ void kvm_timer_unschedule(struct kvm_vcpu *vcpu)
 	timer->armed = false;
 }
 
+static void set_cntvoff(u64 cntvoff)
+{
+	u32 low = cntvoff & GENMASK(31, 0);
+	u32 high = (cntvoff >> 32) & GENMASK(31, 0);
+	kvm_call_hyp(__kvm_timer_set_cntvoff, low, high);
+}
+
 static void kvm_timer_flush_hwstate_vgic(struct kvm_vcpu *vcpu)
 {
 	struct arch_timer_context *vtimer = vcpu_vtimer(vcpu);
@@ -423,6 +456,7 @@ static void kvm_timer_flush_hwstate_user(struct kvm_vcpu *vcpu)
 void kvm_timer_flush_hwstate(struct kvm_vcpu *vcpu)
 {
 	struct arch_timer_cpu *timer = &vcpu->arch.timer_cpu;
+	struct arch_timer_context *vtimer = vcpu_vtimer(vcpu);
 
 	if (unlikely(!timer->enabled))
 		return;
@@ -436,6 +470,9 @@ void kvm_timer_flush_hwstate(struct kvm_vcpu *vcpu)
 		kvm_timer_flush_hwstate_user(vcpu);
 	else
 		kvm_timer_flush_hwstate_vgic(vcpu);
+
+	set_cntvoff(vtimer->cntvoff);
+	timer_restore_state(vcpu);
 }
 
 /**
@@ -455,6 +492,9 @@ void kvm_timer_sync_hwstate(struct kvm_vcpu *vcpu)
 	 */
 	soft_timer_cancel(&timer->phys_timer, NULL);
 
+	timer_save_state(vcpu);
+	set_cntvoff(0);
+
 	/*
 	 * The guest could have modified the timer registers or the timer
 	 * could have expired, update the timer state.
diff --git a/virt/kvm/arm/hyp/timer-sr.c b/virt/kvm/arm/hyp/timer-sr.c
index 4734915..a6c3b10 100644
--- a/virt/kvm/arm/hyp/timer-sr.c
+++ b/virt/kvm/arm/hyp/timer-sr.c
@@ -21,58 +21,48 @@
 
 #include <asm/kvm_hyp.h>
 
-/* vcpu is already in the HYP VA space */
-void __hyp_text __timer_save_state(struct kvm_vcpu *vcpu)
+void __hyp_text __kvm_timer_set_cntvoff(u32 cntvoff_low, u32 cntvoff_high)
+{
+	u64 cntvoff = (u64)cntvoff_high << 32 | cntvoff_low;
+	write_sysreg(cntvoff, cntvoff_el2);
+}
+
+void __hyp_text enable_phys_timer(void)
 {
-	struct arch_timer_cpu *timer = &vcpu->arch.timer_cpu;
-	struct arch_timer_context *vtimer = vcpu_vtimer(vcpu);
 	u64 val;
 
-	if (timer->enabled) {
-		vtimer->cnt_ctl = read_sysreg_el0(cntv_ctl);
-		vtimer->cnt_cval = read_sysreg_el0(cntv_cval);
-	}
+	/* Allow physical timer/counter access for the host */
+	val = read_sysreg(cnthctl_el2);
+	val |= CNTHCTL_EL1PCTEN | CNTHCTL_EL1PCEN;
+	write_sysreg(val, cnthctl_el2);
+}
 
-	/* Disable the virtual timer */
-	write_sysreg_el0(0, cntv_ctl);
+void __hyp_text disable_phys_timer(void)
+{
+	u64 val;
 
 	/*
+	 * Disallow physical timer access for the guest
+	 * Physical counter access is allowed
+	 */
+	val = read_sysreg(cnthctl_el2);
+	val &= ~CNTHCTL_EL1PCEN;
+	val |= CNTHCTL_EL1PCTEN;
+	write_sysreg(val, cnthctl_el2);
+}
+
+void __hyp_text __timer_disable_traps(struct kvm_vcpu *vcpu)
+{
+	/*
 	 * We don't need to do this for VHE since the host kernel runs in EL2
 	 * with HCR_EL2.TGE ==1, which makes those bits have no impact.
 	 */
-	if (!has_vhe()) {
-		/* Allow physical timer/counter access for the host */
-		val = read_sysreg(cnthctl_el2);
-		val |= CNTHCTL_EL1PCTEN | CNTHCTL_EL1PCEN;
-		write_sysreg(val, cnthctl_el2);
-	}
-
-	/* Clear cntvoff for the host */
-	write_sysreg(0, cntvoff_el2);
+	if (!has_vhe())
+		enable_phys_timer();
 }
 
-void __hyp_text __timer_restore_state(struct kvm_vcpu *vcpu)
+void __hyp_text __timer_enable_traps(struct kvm_vcpu *vcpu)
 {
-	struct arch_timer_cpu *timer = &vcpu->arch.timer_cpu;
-	struct arch_timer_context *vtimer = vcpu_vtimer(vcpu);
-	u64 val;
-
-	/* Those bits are already configured at boot on VHE-system */
-	if (!has_vhe()) {
-		/*
-		 * Disallow physical timer access for the guest
-		 * Physical counter access is allowed
-		 */
-		val = read_sysreg(cnthctl_el2);
-		val &= ~CNTHCTL_EL1PCEN;
-		val |= CNTHCTL_EL1PCTEN;
-		write_sysreg(val, cnthctl_el2);
-	}
-
-	if (timer->enabled) {
-		write_sysreg(vtimer->cntvoff, cntvoff_el2);
-		write_sysreg_el0(vtimer->cnt_cval, cntv_cval);
-		isb();
-		write_sysreg_el0(vtimer->cnt_ctl, cntv_ctl);
-	}
+	if (!has_vhe())
+		disable_phys_timer();
}
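
A note for readers on the set_cntvoff()/__kvm_timer_set_cntvoff() pairing
above: the 64-bit CNTVOFF value is split into two u32 halves on the caller
side and recombined in hyp so that the same prototype also works on 32-bit
ARM, where hyp call arguments are passed in 32-bit registers. What follows
is a minimal standalone sketch (not kernel code and not part of the patch)
that mirrors the split in set_cntvoff() and the recombination in
__kvm_timer_set_cntvoff(), just to show the round trip is lossless; the
kernel's GENMASK(31, 0) is expanded by hand so it builds in userspace.

#include <assert.h>
#include <stdint.h>
#include <stdio.h>

/* Stand-in for the kernel's GENMASK(31, 0): the low 32 bits set. */
#define MASK_LOW_32 0xffffffffULL

/* Mirrors set_cntvoff(): split a 64-bit offset into two u32 arguments. */
static void split_cntvoff(uint64_t cntvoff, uint32_t *low, uint32_t *high)
{
        *low = cntvoff & MASK_LOW_32;
        *high = (cntvoff >> 32) & MASK_LOW_32;
}

/* Mirrors __kvm_timer_set_cntvoff(): rebuild the 64-bit value in hyp. */
static uint64_t join_cntvoff(uint32_t low, uint32_t high)
{
        return (uint64_t)high << 32 | low;
}

int main(void)
{
        uint64_t cntvoff = 0x0123456789abcdefULL;
        uint32_t low, high;

        split_cntvoff(cntvoff, &low, &high);
        assert(join_cntvoff(low, high) == cntvoff);
        printf("low=0x%08x high=0x%08x -> 0x%016llx\n",
               low, high, (unsigned long long)join_cntvoff(low, high));
        return 0;
}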