From patchwork Thu Dec 7 17:06:18 2017
X-Patchwork-Submitter: Christoffer Dall <christoffer.dall@linaro.org>
X-Patchwork-Id: 10100297
From: Christoffer Dall <christoffer.dall@linaro.org>
To: kvmarm@lists.cs.columbia.edu, linux-arm-kernel@lists.infradead.org
Cc: Marc Zyngier, Andrew Jones, Christoffer Dall, Shih-Wei Li, kvm@vger.kernel.org
Subject: [PATCH v2 24/36] KVM: arm64: Prepare to handle traps on remaining deferred EL1 sysregs
Date: Thu, 7 Dec 2017 18:06:18 +0100
Message-Id: <20171207170630.592-25-christoffer.dall@linaro.org>
X-Mailer: git-send-email 2.14.2
In-Reply-To: <20171207170630.592-1-christoffer.dall@linaro.org>
References: <20171207170630.592-1-christoffer.dall@linaro.org>

Handle accesses during traps to any remaining EL1 registers which can be
deferred to vcpu_load and vcpu_put: either access them directly on the
physical CPU when the latest value is stored there, or synchronize the
in-memory representation with the CPU state.
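For each of these registers the deferred access reduces to the same shape;
here is a simplified sketch of the idea, using VBAR_EL1 as the example (the
actual accessor added below additionally truncates the value for 32-bit EL1
guests):

	static inline unsigned long vcpu_get_vbar(struct kvm_vcpu *vcpu)
	{
		/*
		 * With VHE, vcpu_load may have put the guest's EL1 state
		 * onto the physical CPU; read the live register in that
		 * case, and fall back to the in-memory copy maintained by
		 * the world-switch code otherwise.
		 */
		if (vcpu->arch.sysregs_loaded_on_cpu)
			return read_sysreg_el1(vbar);

		return vcpu_sys_reg(vcpu, VBAR_EL1);
	}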
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
---
 arch/arm/include/asm/kvm_emulate.h   | 16 ++++++++++++
 arch/arm/include/asm/kvm_host.h      |  2 ++
 arch/arm64/include/asm/kvm_emulate.h | 49 +++++++++++++++++++++++++-----------
 arch/arm64/include/asm/kvm_host.h    |  2 ++
 arch/arm64/kvm/inject_fault.c        | 19 ++++++++++----
 arch/arm64/kvm/sys_regs.c            |  6 ++++-
 virt/kvm/arm/aarch32.c               | 22 +++++++++++++---
 7 files changed, 93 insertions(+), 23 deletions(-)

diff --git a/arch/arm/include/asm/kvm_emulate.h b/arch/arm/include/asm/kvm_emulate.h
index d5e1b8bf6422..47efa953460a 100644
--- a/arch/arm/include/asm/kvm_emulate.h
+++ b/arch/arm/include/asm/kvm_emulate.h
@@ -55,6 +55,22 @@ static inline void vcpu_set_reg(struct kvm_vcpu *vcpu, u8 reg_num,
 	*vcpu_reg(vcpu, reg_num) = val;
 }
 
+/* Set the SPSR for the current mode */
+static inline void vcpu_set_spsr(struct kvm_vcpu *vcpu, unsigned long val)
+{
+	*vcpu_spsr(vcpu) = val;
+}
+
+static inline unsigned long vcpu_get_vbar(struct kvm_vcpu *vcpu)
+{
+	return vcpu_cp15(vcpu, c12_VBAR);
+}
+
+static inline u32 vcpu_get_c1_sctlr(struct kvm_vcpu *vcpu)
+{
+	return vcpu_cp15(vcpu, c1_SCTLR);
+}
+
 bool kvm_condition_valid32(const struct kvm_vcpu *vcpu);
 void kvm_skip_instr32(struct kvm_vcpu *vcpu, bool is_wide_instr);
 void kvm_inject_undef32(struct kvm_vcpu *vcpu);
diff --git a/arch/arm/include/asm/kvm_host.h b/arch/arm/include/asm/kvm_host.h
index 8fce576199e0..997c0568bfa3 100644
--- a/arch/arm/include/asm/kvm_host.h
+++ b/arch/arm/include/asm/kvm_host.h
@@ -203,6 +203,8 @@ struct kvm_vcpu_stat {
 
 #define vcpu_cp15(v,r)	(v)->arch.ctxt.cp15[r]
 
+#define vcpu_sysregs_loaded(_v)	(false)
+
 int kvm_vcpu_preferred_target(struct kvm_vcpu_init *init);
 unsigned long kvm_arm_num_regs(struct kvm_vcpu *vcpu);
 int kvm_arm_copy_reg_indices(struct kvm_vcpu *vcpu, u64 __user *indices);
diff --git a/arch/arm64/include/asm/kvm_emulate.h b/arch/arm64/include/asm/kvm_emulate.h
index 635137e6ed1c..3f765b9de94d 100644
--- a/arch/arm64/include/asm/kvm_emulate.h
+++ b/arch/arm64/include/asm/kvm_emulate.h
@@ -26,6 +26,7 @@
 #include
 #include
+#include
 #include
 #include
 #include
@@ -77,11 +78,6 @@ static inline unsigned long *vcpu_pc(const struct kvm_vcpu *vcpu)
 	return (unsigned long *)&vcpu_gp_regs(vcpu)->regs.pc;
 }
 
-static inline unsigned long *vcpu_elr_el1(const struct kvm_vcpu *vcpu)
-{
-	return (unsigned long *)&vcpu_gp_regs(vcpu)->elr_el1;
-}
-
 static inline unsigned long *vcpu_cpsr(const struct kvm_vcpu *vcpu)
 {
 	return (unsigned long *)&vcpu_gp_regs(vcpu)->regs.pstate;
@@ -92,6 +88,40 @@ static inline bool vcpu_mode_is_32bit(const struct kvm_vcpu *vcpu)
 	return !!(*vcpu_cpsr(vcpu) & PSR_MODE32_BIT);
 }
 
+/* Set the SPSR for the current mode */
+static inline void vcpu_set_spsr(struct kvm_vcpu *vcpu, u64 val)
+{
+	if (vcpu_mode_is_32bit(vcpu))
+		*vcpu_spsr32(vcpu) = val;
+
+	if (vcpu->arch.sysregs_loaded_on_cpu)
+		write_sysreg_el1(val, spsr);
+	else
+		vcpu_gp_regs(vcpu)->spsr[KVM_SPSR_EL1] = val;
+}
+
+static inline unsigned long vcpu_get_vbar(struct kvm_vcpu *vcpu)
+{
+	unsigned long vbar;
+
+	if (vcpu->arch.sysregs_loaded_on_cpu)
+		vbar = read_sysreg_el1(vbar);
+	else
+		vbar = vcpu_sys_reg(vcpu, VBAR_EL1);
+
+	if (vcpu_el1_is_32bit(vcpu))
+		return lower_32_bits(vbar);
+	return vbar;
+}
+
+static inline u32 vcpu_get_c1_sctlr(struct kvm_vcpu *vcpu)
+{
+	if (vcpu_sysregs_loaded(vcpu))
+		return lower_32_bits(read_sysreg_el1(sctlr));
+	else
+		return vcpu_cp15(vcpu, c1_SCTLR);
+}
+
 static inline bool kvm_condition_valid(const struct kvm_vcpu *vcpu)
 {
 	if (vcpu_mode_is_32bit(vcpu))
@@ -131,15 +161,6 @@ static inline void vcpu_set_reg(struct kvm_vcpu *vcpu, u8 reg_num,
 	vcpu_gp_regs(vcpu)->regs.regs[reg_num] = val;
 }
 
-/* Get vcpu SPSR for current mode */
-static inline unsigned long *vcpu_spsr(const struct kvm_vcpu *vcpu)
-{
-	if (vcpu_mode_is_32bit(vcpu))
-		return vcpu_spsr32(vcpu);
-
-	return (unsigned long *)&vcpu_gp_regs(vcpu)->spsr[KVM_SPSR_EL1];
-}
-
 static inline bool vcpu_mode_priv(const struct kvm_vcpu *vcpu)
 {
 	u32 mode;
diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index f6afe685a280..992c19816893 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -294,6 +294,8 @@ struct kvm_vcpu_arch {
 #define vcpu_cp14(v,r)	((v)->arch.ctxt.copro[(r)])
 #define vcpu_cp15(v,r)	((v)->arch.ctxt.copro[(r)])
 
+#define vcpu_sysregs_loaded(_v)	((_v)->arch.sysregs_loaded_on_cpu)
+
 struct kvm_vm_stat {
 	ulong remote_tlb_flush;
 };
diff --git a/arch/arm64/kvm/inject_fault.c b/arch/arm64/kvm/inject_fault.c
index 2d38ede2eff0..1d941e8e8102 100644
--- a/arch/arm64/kvm/inject_fault.c
+++ b/arch/arm64/kvm/inject_fault.c
@@ -23,6 +23,7 @@
 #include
 #include
+#include
 #include
 
 #define PSTATE_FAULT_BITS_64	(PSR_MODE_EL1h | PSR_A_BIT | PSR_F_BIT | \
@@ -33,6 +34,14 @@
 #define LOWER_EL_AArch64_VECTOR	0x400
 #define LOWER_EL_AArch32_VECTOR	0x600
 
+static void vcpu_set_elr_el1(struct kvm_vcpu *vcpu, u64 val)
+{
+	if (vcpu->arch.sysregs_loaded_on_cpu)
+		write_sysreg_el1(val, elr);
+	else
+		vcpu_gp_regs(vcpu)->elr_el1 = val;
+}
+
 enum exception_type {
 	except_type_sync	= 0,
 	except_type_irq		= 0x80,
@@ -58,7 +67,7 @@ static u64 get_except_vector(struct kvm_vcpu *vcpu, enum exception_type type)
 		exc_offset = LOWER_EL_AArch32_VECTOR;
 	}
 
-	return vcpu_sys_reg(vcpu, VBAR_EL1) + exc_offset + type;
+	return vcpu_get_vbar(vcpu) + exc_offset + type;
 }
 
 static void inject_abt64(struct kvm_vcpu *vcpu, bool is_iabt, unsigned long addr)
@@ -67,11 +76,11 @@ static void inject_abt64(struct kvm_vcpu *vcpu, bool is_iabt, unsigned long addr)
 	bool is_aarch32 = vcpu_mode_is_32bit(vcpu);
 	u32 esr = 0;
 
-	*vcpu_elr_el1(vcpu) = *vcpu_pc(vcpu);
+	vcpu_set_elr_el1(vcpu, *vcpu_pc(vcpu));
 	*vcpu_pc(vcpu) = get_except_vector(vcpu, except_type_sync);
 
 	*vcpu_cpsr(vcpu) = PSTATE_FAULT_BITS_64;
-	*vcpu_spsr(vcpu) = cpsr;
+	vcpu_set_spsr(vcpu, cpsr);
 
 	vcpu_sys_reg(vcpu, FAR_EL1) = addr;
 
@@ -102,11 +111,11 @@ static void inject_undef64(struct kvm_vcpu *vcpu)
 	unsigned long cpsr = *vcpu_cpsr(vcpu);
 	u32 esr = (ESR_ELx_EC_UNKNOWN << ESR_ELx_EC_SHIFT);
 
-	*vcpu_elr_el1(vcpu) = *vcpu_pc(vcpu);
+	vcpu_set_elr_el1(vcpu, *vcpu_pc(vcpu));
 	*vcpu_pc(vcpu) = get_except_vector(vcpu, except_type_sync);
 
 	*vcpu_cpsr(vcpu) = PSTATE_FAULT_BITS_64;
-	*vcpu_spsr(vcpu) = cpsr;
+	vcpu_set_spsr(vcpu, cpsr);
 
 	/*
 	 * Build an unknown exception, depending on the instruction
diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index 80adbec933de..6109dc8d5fb7 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -87,12 +87,16 @@ static u32 cache_levels;
 static u32 get_ccsidr(u32 csselr)
 {
 	u32 ccsidr;
+	u32 csselr_preserve;
 
-	/* Make sure noone else changes CSSELR during this! */
+	/* Make sure no one else changes CSSELR during this and preserve any
+	 * existing value in the CSSELR! */
 	local_irq_disable();
+	csselr_preserve = read_sysreg(csselr_el1);
 	write_sysreg(csselr, csselr_el1);
 	isb();
 	ccsidr = read_sysreg(ccsidr_el1);
+	write_sysreg(csselr_preserve, csselr_el1);
 	local_irq_enable();
 
 	return ccsidr;
diff --git a/virt/kvm/arm/aarch32.c b/virt/kvm/arm/aarch32.c
index 8bc479fa37e6..67b62ff79b6f 100644
--- a/virt/kvm/arm/aarch32.c
+++ b/virt/kvm/arm/aarch32.c
@@ -166,7 +166,7 @@ static void prepare_fault32(struct kvm_vcpu *vcpu, u32 mode, u32 vect_offset)
 	unsigned long new_spsr_value = *vcpu_cpsr(vcpu);
 	bool is_thumb = (new_spsr_value & COMPAT_PSR_T_BIT);
 	u32 return_offset = return_offsets[vect_offset >> 2][is_thumb];
-	u32 sctlr = vcpu_cp15(vcpu, c1_SCTLR);
+	u32 sctlr = vcpu_get_c1_sctlr(vcpu);
 
 	cpsr = mode | COMPAT_PSR_I_BIT;
 
@@ -178,14 +178,14 @@ static void prepare_fault32(struct kvm_vcpu *vcpu, u32 mode, u32 vect_offset)
 	*vcpu_cpsr(vcpu) = cpsr;
 
 	/* Note: These now point to the banked copies */
-	*vcpu_spsr(vcpu) = new_spsr_value;
+	vcpu_set_spsr(vcpu, new_spsr_value);
 	*vcpu_reg32(vcpu, 14) = *vcpu_pc(vcpu) + return_offset;
 
 	/* Branch to exception vector */
 	if (sctlr & (1 << 13))
 		vect_offset += 0xffff0000;
 	else /* always have security exceptions */
-		vect_offset += vcpu_cp15(vcpu, c12_VBAR);
+		vect_offset += vcpu_get_vbar(vcpu);
 	*vcpu_pc(vcpu) = vect_offset;
 }
 
@@ -206,6 +206,19 @@ static void inject_abt32(struct kvm_vcpu *vcpu, bool is_pabt,
 	u32 *far, *fsr;
 	bool is_lpae;
 
+	/*
+	 * The emulation code here is going to modify several system
+	 * registers, so on arm64 with VHE we want to load them into memory
+	 * and store them back into registers, ensuring that we observe the
+	 * most recent values and that we expose the right values back to the
+	 * guest.
+	 *
+	 * We disable preemption to avoid racing with another vcpu_put/load
+	 * operation.
+	 */
+	preempt_disable();
+	kvm_vcpu_put_sysregs(vcpu);
+
 	if (is_pabt) {
 		vect_offset = 12;
 		far = &vcpu_cp15(vcpu, c6_IFAR);
@@ -226,6 +239,9 @@
 		*fsr = 1 << 9 | 0x34;
 	else
 		*fsr = 0x14;
+
+	kvm_vcpu_load_sysregs(vcpu);
+	preempt_enable();
 }
 
 void kvm_inject_dabt32(struct kvm_vcpu *vcpu, unsigned long addr)
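
Note for reviewers: the bracketing added to inject_abt32() follows the
general pattern below (a minimal sketch; kvm_vcpu_put_sysregs() and
kvm_vcpu_load_sysregs() are the helpers introduced earlier in this series):

	preempt_disable();		/* don't race with vcpu_put/vcpu_load */
	kvm_vcpu_put_sysregs(vcpu);	/* sync hardware state into memory */

	/* ... emulation reads/writes the in-memory sysreg copies ... */

	kvm_vcpu_load_sysregs(vcpu);	/* put possibly-modified state back */
	preempt_enable();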