From patchwork Thu Feb 15 21:03:05 2018
X-Patchwork-Submitter: Christoffer Dall
X-Patchwork-Id: 10223665
From: Christoffer Dall
To: kvmarm@lists.cs.columbia.edu, linux-arm-kernel@lists.infradead.org
Cc: Andrew Jones, kvm@vger.kernel.org, Marc Zyngier, Tomasz Nowicki,
 Julien Grall, Yury Norov, Christoffer Dall, Dave Martin, Shih-Wei Li
Subject: [PATCH v4 13/40] KVM: arm64: Introduce VHE-specific kvm_vcpu_run
Date: Thu, 15 Feb 2018 22:03:05 +0100
Message-Id: <20180215210332.8648-14-christoffer.dall@linaro.org>
In-Reply-To: <20180215210332.8648-1-christoffer.dall@linaro.org>
References: <20180215210332.8648-1-christoffer.dall@linaro.org>

So far this is mostly (see below) a copy of the legacy non-VHE switch
function, but we will start reworking these functions in separate
directions to work on VHE and non-VHE in the most optimal way in later
patches.

The only difference after this patch between the VHE and non-VHE run
functions is that we omit the branch-predictor variant-2 hardening for
QC Falkor CPUs, because this workaround is specific to a series of
non-VHE ARMv8.0 CPUs.
Reviewed-by: Marc Zyngier
Signed-off-by: Christoffer Dall
---

Notes:
    Changes since v3:
     - Added BUG() to 32-bit ARM VHE run function
     - Omitted QC Falkor BP Hardening functionality from VHE-specific
       function
    
    Changes since v2:
     - Reworded commit message
    
    Changes since v1:
     - Rename kvm_vcpu_run to kvm_vcpu_run_vhe and rename __kvm_vcpu_run
       to __kvm_vcpu_run_nvhe
     - Removed stray whitespace line

 arch/arm/include/asm/kvm_asm.h   |  5 ++-
 arch/arm/kvm/hyp/switch.c        |  2 +-
 arch/arm64/include/asm/kvm_asm.h |  4 ++-
 arch/arm64/kvm/hyp/switch.c      | 66 +++++++++++++++++++++++++++++++++++++++-
 virt/kvm/arm/arm.c               |  5 ++-
 5 files changed, 77 insertions(+), 5 deletions(-)

diff --git a/arch/arm/include/asm/kvm_asm.h b/arch/arm/include/asm/kvm_asm.h
index 36dd2962a42d..5a953ecb0d78 100644
--- a/arch/arm/include/asm/kvm_asm.h
+++ b/arch/arm/include/asm/kvm_asm.h
@@ -70,7 +70,10 @@ extern void __kvm_tlb_flush_local_vmid(struct kvm_vcpu *vcpu);
 
 extern void __kvm_timer_set_cntvoff(u32 cntvoff_low, u32 cntvoff_high);
 
-extern int __kvm_vcpu_run(struct kvm_vcpu *vcpu);
+/* no VHE on 32-bit :( */
+static inline int kvm_vcpu_run_vhe(struct kvm_vcpu *vcpu) { BUG(); return 0; }
+
+extern int __kvm_vcpu_run_nvhe(struct kvm_vcpu *vcpu);
 
 extern void __init_stage2_translation(void);
 
diff --git a/arch/arm/kvm/hyp/switch.c b/arch/arm/kvm/hyp/switch.c
index e86679daddff..aac025783ee8 100644
--- a/arch/arm/kvm/hyp/switch.c
+++ b/arch/arm/kvm/hyp/switch.c
@@ -154,7 +154,7 @@ static bool __hyp_text __populate_fault_info(struct kvm_vcpu *vcpu)
 	return true;
 }
 
-int __hyp_text __kvm_vcpu_run(struct kvm_vcpu *vcpu)
+int __hyp_text __kvm_vcpu_run_nvhe(struct kvm_vcpu *vcpu)
 {
 	struct kvm_cpu_context *host_ctxt;
 	struct kvm_cpu_context *guest_ctxt;
 
diff --git a/arch/arm64/include/asm/kvm_asm.h b/arch/arm64/include/asm/kvm_asm.h
index 6b626750b0a1..0be2747a6c5f 100644
--- a/arch/arm64/include/asm/kvm_asm.h
+++ b/arch/arm64/include/asm/kvm_asm.h
@@ -58,7 +58,9 @@ extern void __kvm_tlb_flush_local_vmid(struct kvm_vcpu *vcpu);
 
 extern void __kvm_timer_set_cntvoff(u32 cntvoff_low, u32 cntvoff_high);
 
-extern int __kvm_vcpu_run(struct kvm_vcpu *vcpu);
+extern int kvm_vcpu_run_vhe(struct kvm_vcpu *vcpu);
+
+extern int __kvm_vcpu_run_nvhe(struct kvm_vcpu *vcpu);
 
 extern u64 __vgic_v3_get_ich_vtr_el2(void);
 extern u64 __vgic_v3_read_vmcr(void);
 
diff --git a/arch/arm64/kvm/hyp/switch.c b/arch/arm64/kvm/hyp/switch.c
index d2c0b1ae3216..b6126af539b6 100644
--- a/arch/arm64/kvm/hyp/switch.c
+++ b/arch/arm64/kvm/hyp/switch.c
@@ -362,7 +362,71 @@ static bool __hyp_text fixup_guest_exit(struct kvm_vcpu *vcpu, u64 *exit_code)
 	return false;
 }
 
-int __hyp_text __kvm_vcpu_run(struct kvm_vcpu *vcpu)
+/* Switch to the guest for VHE systems running in EL2 */
+int kvm_vcpu_run_vhe(struct kvm_vcpu *vcpu)
+{
+	struct kvm_cpu_context *host_ctxt;
+	struct kvm_cpu_context *guest_ctxt;
+	bool fp_enabled;
+	u64 exit_code;
+
+	vcpu = kern_hyp_va(vcpu);
+
+	host_ctxt = kern_hyp_va(vcpu->arch.host_cpu_context);
+	host_ctxt->__hyp_running_vcpu = vcpu;
+	guest_ctxt = &vcpu->arch.ctxt;
+
+	__sysreg_save_host_state(host_ctxt);
+
+	__activate_traps(vcpu);
+	__activate_vm(vcpu);
+
+	__vgic_restore_state(vcpu);
+	__timer_enable_traps(vcpu);
+
+	/*
+	 * We must restore the 32-bit state before the sysregs, thanks
+	 * to erratum #852523 (Cortex-A57) or #853709 (Cortex-A72).
+	 */
+	__sysreg32_restore_state(vcpu);
+	__sysreg_restore_guest_state(guest_ctxt);
+	__debug_switch_to_guest(vcpu);
+
+	do {
+		/* Jump in the fire! */
+		exit_code = __guest_enter(vcpu, host_ctxt);
+
+		/* And we're baaack! */
+	} while (fixup_guest_exit(vcpu, &exit_code));
+
+	fp_enabled = __fpsimd_enabled();
+
+	__sysreg_save_guest_state(guest_ctxt);
+	__sysreg32_save_state(vcpu);
+	__timer_disable_traps(vcpu);
+	__vgic_save_state(vcpu);
+
+	__deactivate_traps(vcpu);
+	__deactivate_vm(vcpu);
+
+	__sysreg_restore_host_state(host_ctxt);
+
+	if (fp_enabled) {
+		__fpsimd_save_state(&guest_ctxt->gp_regs.fp_regs);
+		__fpsimd_restore_state(&host_ctxt->gp_regs.fp_regs);
+	}
+
+	/*
+	 * This must come after restoring the host sysregs, since a non-VHE
+	 * system may enable SPE here and make use of the TTBRs.
+	 */
+	__debug_switch_to_host(vcpu);
+
+	return exit_code;
+}
+
+/* Switch to the guest for legacy non-VHE systems */
+int __hyp_text __kvm_vcpu_run_nvhe(struct kvm_vcpu *vcpu)
 {
 	struct kvm_cpu_context *host_ctxt;
 	struct kvm_cpu_context *guest_ctxt;
 
diff --git a/virt/kvm/arm/arm.c b/virt/kvm/arm/arm.c
index 2062d9357971..5bd879c78951 100644
--- a/virt/kvm/arm/arm.c
+++ b/virt/kvm/arm/arm.c
@@ -736,7 +736,10 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu, struct kvm_run *run)
 	if (has_vhe())
 		kvm_arm_vhe_guest_enter();
 
-	ret = kvm_call_hyp(__kvm_vcpu_run, vcpu);
+	if (has_vhe())
+		ret = kvm_vcpu_run_vhe(vcpu);
+	else
+		ret = kvm_call_hyp(__kvm_vcpu_run_nvhe, vcpu);
 
 	if (has_vhe())
 		kvm_arm_vhe_guest_exit();