From patchwork Thu Oct 12 10:41:17 2017
X-Patchwork-Submitter: Christoffer Dall
X-Patchwork-Id: 10001655
From: Christoffer Dall <christoffer.dall@linaro.org>
To: kvmarm@lists.cs.columbia.edu, linux-arm-kernel@lists.infradead.org
Cc: Marc Zyngier, Christoffer Dall, Shih-Wei Li, kvm@vger.kernel.org
Subject: [PATCH 13/37] KVM: arm64: Introduce VHE-specific kvm_vcpu_run
Date: Thu, 12 Oct 2017 12:41:17 +0200
Message-Id: <20171012104141.26902-14-christoffer.dall@linaro.org>
In-Reply-To: <20171012104141.26902-1-christoffer.dall@linaro.org>
References: <20171012104141.26902-1-christoffer.dall@linaro.org>

So far this is just a copy of the legacy non-VHE switch function, where
we only change the existing calls to has_vhe() in both the original and
new functions.

Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
---
 arch/arm/include/asm/kvm_asm.h   |  4 +++
 arch/arm64/include/asm/kvm_asm.h |  2 ++
 arch/arm64/kvm/hyp/switch.c      | 57 ++++++++++++++++++++++++++++++++++++++++
 virt/kvm/arm/arm.c               |  5 +++-
 4 files changed, 67 insertions(+), 1 deletion(-)

diff --git a/arch/arm/include/asm/kvm_asm.h b/arch/arm/include/asm/kvm_asm.h
index 36dd296..1a7bc5f 100644
--- a/arch/arm/include/asm/kvm_asm.h
+++ b/arch/arm/include/asm/kvm_asm.h
@@ -70,8 +70,12 @@ extern void __kvm_tlb_flush_local_vmid(struct kvm_vcpu *vcpu);
 
 extern void __kvm_timer_set_cntvoff(u32 cntvoff_low, u32 cntvoff_high);
 
+/* no VHE on 32-bit :( */
+static inline int kvm_vcpu_run(struct kvm_vcpu *vcpu) { return 0; }
+
 extern int __kvm_vcpu_run(struct kvm_vcpu *vcpu);
 
+
 extern void __init_stage2_translation(void);
 
 extern u64 __vgic_v3_get_ich_vtr_el2(void);
diff --git a/arch/arm64/include/asm/kvm_asm.h b/arch/arm64/include/asm/kvm_asm.h
index 7e48a39..2eb5b23 100644
--- a/arch/arm64/include/asm/kvm_asm.h
+++ b/arch/arm64/include/asm/kvm_asm.h
@@ -57,6 +57,8 @@ extern void __kvm_tlb_flush_local_vmid(struct kvm_vcpu *vcpu);
 
 extern void __kvm_timer_set_cntvoff(u32 cntvoff_low, u32 cntvoff_high);
 
+extern int kvm_vcpu_run(struct kvm_vcpu *vcpu);
+
 extern int __kvm_vcpu_run(struct kvm_vcpu *vcpu);
 
 extern u64 __vgic_v3_get_ich_vtr_el2(void);
diff --git a/arch/arm64/kvm/hyp/switch.c b/arch/arm64/kvm/hyp/switch.c
index ed30af5..8a0f38f 100644
--- a/arch/arm64/kvm/hyp/switch.c
+++ b/arch/arm64/kvm/hyp/switch.c
@@ -319,6 +319,63 @@ static bool __hyp_text fixup_guest_exit(struct kvm_vcpu *vcpu, u64 *exit_code)
 	return false;
 }
 
+/* Switch to the guest for VHE systems running in EL2 */
+int kvm_vcpu_run(struct kvm_vcpu *vcpu)
+{
+	struct kvm_cpu_context *host_ctxt;
+	struct kvm_cpu_context *guest_ctxt;
+	u64 exit_code;
+
+	vcpu = kern_hyp_va(vcpu);
+
+	host_ctxt = kern_hyp_va(vcpu->arch.host_cpu_context);
+	host_ctxt->__hyp_running_vcpu = vcpu;
+	guest_ctxt = &vcpu->arch.ctxt;
+
+	__sysreg_save_host_state(host_ctxt);
+
+	__activate_traps(vcpu);
+	__activate_vm(vcpu);
+
+	__vgic_restore_state(vcpu);
+	__timer_enable_traps(vcpu);
+
+	/*
+	 * We must restore the 32-bit state before the sysregs, thanks
+	 * to erratum #852523 (Cortex-A57) or #853709 (Cortex-A72).
+	 */
+	__sysreg32_restore_state(vcpu);
+	__sysreg_restore_guest_state(guest_ctxt);
+	__debug_switch_to_guest(vcpu);
+
+	/* Jump in the fire! */
+again:
+	exit_code = __guest_enter(vcpu, host_ctxt);
+	/* And we're baaack! */
+
+	if (fixup_guest_exit(vcpu, &exit_code))
+		goto again;
+
+	__sysreg_save_guest_state(guest_ctxt);
+	__sysreg32_save_state(vcpu);
+	__timer_disable_traps(vcpu);
+	__vgic_save_state(vcpu);
+
+	__deactivate_traps(vcpu);
+	__deactivate_vm(vcpu);
+
+	__sysreg_restore_host_state(host_ctxt);
+
+	/*
+	 * This must come after restoring the host sysregs, since a non-VHE
+	 * system may enable SPE here and make use of the TTBRs.
+	 */
+	__debug_switch_to_host(vcpu);
+
+	return exit_code;
+}
+
+/* Switch to the guest for legacy non-VHE systems */
 int __hyp_text __kvm_vcpu_run(struct kvm_vcpu *vcpu)
 {
 	struct kvm_cpu_context *host_ctxt;
diff --git a/virt/kvm/arm/arm.c b/virt/kvm/arm/arm.c
index cf121b2..b11647a 100644
--- a/virt/kvm/arm/arm.c
+++ b/virt/kvm/arm/arm.c
@@ -706,7 +706,10 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu, struct kvm_run *run)
 		trace_kvm_entry(*vcpu_pc(vcpu));
 		guest_enter_irqoff();
 
-		ret = kvm_call_hyp(__kvm_vcpu_run, vcpu);
+		if (has_vhe())
+			ret = kvm_vcpu_run(vcpu);
+		else
+			ret = kvm_call_hyp(__kvm_vcpu_run, vcpu);
 
 		vcpu->mode = OUTSIDE_GUEST_MODE;
 		vcpu->stat.exits++;
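
[Editor's note] For readers following the series, here is a minimal, self-contained
sketch of the dispatch this patch adds to kvm_arch_vcpu_ioctl_run(): on VHE systems
the kernel already runs at EL2, so the world-switch function can be called directly,
while non-VHE systems must reach EL2 through a hypercall (kvm_call_hyp() in the real
code). This is not kernel code; struct kvm_vcpu and the function bodies below are
illustrative stubs, and only the if/else shape mirrors the hunk in virt/kvm/arm/arm.c.

	#include <stdbool.h>
	#include <stdio.h>

	/* Illustrative stand-in for the real vcpu structure. */
	struct kvm_vcpu { int id; };

	static bool has_vhe(void)
	{
		/* In the kernel this reflects a CPU capability check; hard-coded here. */
		return true;
	}

	/* VHE path: runs in the host kernel, which is already at EL2. */
	static int kvm_vcpu_run(struct kvm_vcpu *vcpu)
	{
		printf("VHE world switch for vcpu %d\n", vcpu->id);
		return 0;
	}

	/* non-VHE path: the kernel reaches this via kvm_call_hyp(). */
	static int __kvm_vcpu_run(struct kvm_vcpu *vcpu)
	{
		printf("non-VHE (hyp) world switch for vcpu %d\n", vcpu->id);
		return 0;
	}

	int main(void)
	{
		struct kvm_vcpu vcpu = { .id = 0 };
		int ret;

		/* Same dispatch shape as the arm.c hunk above. */
		if (has_vhe())
			ret = kvm_vcpu_run(&vcpu);
		else
			ret = __kvm_vcpu_run(&vcpu);	/* kvm_call_hyp(...) in the kernel */

		return ret;
	}

The point of the split is that the VHE variant can later drop the hyp-specific
indirections (kern_hyp_va(), __hyp_text) and be optimised independently of the
legacy non-VHE path.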