From patchwork Sat Dec 26 21:56:29 2015
From: Mario Smarduch <m.smarduch@samsung.com>
To: kvmarm@lists.cs.columbia.edu, christoffer.dall@linaro.org, marc.zyngier@arm.com
Cc: linux-arm-kernel@lists.infradead.org, kvm@vger.kernel.org, Mario Smarduch
Subject: [PATCH v6 5/6] arm/arm64: KVM: Introduce armv8 fp/simd vcpu fields and helpers
Date: Sat, 26 Dec 2015 13:56:29 -0800
Message-id: <1451166989-3754-1-git-send-email-m.smarduch@samsung.com>
X-Mailer: git-send-email 1.9.1
List-Id: linux-arm-kernel@lists.infradead.org
Similar to armv7, add helper functions to enable access to the fp/simd
registers on guest entry: save the guest's fpexc32_el2 on vcpu_put and
check whether the guest is 32-bit, save guest and restore host registers
from the host kernel, and check whether the fp/simd registers are dirty.
Lastly, add the cptr_el2 vcpu field.

Signed-off-by: Mario Smarduch <m.smarduch@samsung.com>
---
 arch/arm/include/asm/kvm_emulate.h   | 12 ++++++++++++
 arch/arm64/include/asm/kvm_asm.h     |  5 +++++
 arch/arm64/include/asm/kvm_emulate.h | 26 ++++++++++++++++++++++++--
 arch/arm64/include/asm/kvm_host.h    | 12 +++++++++++-
 arch/arm64/kvm/hyp/hyp-entry.S       | 26 ++++++++++++++++++++++++++
 5 files changed, 78 insertions(+), 3 deletions(-)

diff --git a/arch/arm/include/asm/kvm_emulate.h b/arch/arm/include/asm/kvm_emulate.h
index d4d9da1..a434dc5 100644
--- a/arch/arm/include/asm/kvm_emulate.h
+++ b/arch/arm/include/asm/kvm_emulate.h
@@ -284,6 +284,12 @@ static inline bool vcpu_vfp_isdirty(struct kvm_vcpu *vcpu)
 {
 	return !(vcpu->arch.hcptr & (HCPTR_TCP(10) | HCPTR_TCP(11)));
 }
+
+static inline bool vcpu_guest_is_32bit(struct kvm_vcpu *vcpu)
+{
+	return true;
+}
+static inline void vcpu_save_fpexc(struct kvm_vcpu *vcpu) {}
 #else
 static inline void vcpu_trap_vfp_enable(struct kvm_vcpu *vcpu)
 {
@@ -295,6 +301,12 @@ static inline bool vcpu_vfp_isdirty(struct kvm_vcpu *vcpu)
 {
 	return false;
 }
+
+static inline bool vcpu_guest_is_32bit(struct kvm_vcpu *vcpu)
+{
+	return true;
+}
+static inline void vcpu_save_fpexc(struct kvm_vcpu *vcpu) {}
 #endif
 
 #endif /* __ARM_KVM_EMULATE_H__ */
diff --git a/arch/arm64/include/asm/kvm_asm.h b/arch/arm64/include/asm/kvm_asm.h
index 52b777b..ddae814 100644
--- a/arch/arm64/include/asm/kvm_asm.h
+++ b/arch/arm64/include/asm/kvm_asm.h
@@ -48,6 +48,11 @@ extern u64 __vgic_v3_get_ich_vtr_el2(void);
 
 extern u32 __kvm_get_mdcr_el2(void);
 
+extern void __fpsimd_prepare_fpexc32(void);
+extern void __fpsimd_save_fpexc32(struct kvm_vcpu *vcpu);
+extern void __fpsimd_save_state(struct user_fpsimd_state *);
+extern void __fpsimd_restore_state(struct user_fpsimd_state *);
+
 #endif
 
 #endif /* __ARM_KVM_ASM_H__ */
diff --git a/arch/arm64/include/asm/kvm_emulate.h b/arch/arm64/include/asm/kvm_emulate.h
index ffe8ccf..f8203c7 100644
--- a/arch/arm64/include/asm/kvm_emulate.h
+++ b/arch/arm64/include/asm/kvm_emulate.h
@@ -299,12 +299,34 @@ static inline unsigned long vcpu_data_host_to_guest(struct kvm_vcpu *vcpu,
 	return data;		/* Leave LE untouched */
 }
 
-static inline void vcpu_trap_vfp_enable(struct kvm_vcpu *vcpu) {}
+static inline bool vcpu_guest_is_32bit(struct kvm_vcpu *vcpu)
+{
+	return !(vcpu->arch.hcr_el2 & HCR_RW);
+}
+
+static inline void vcpu_trap_vfp_enable(struct kvm_vcpu *vcpu)
+{
+	/* For a 32-bit guest, enable access to the fp/simd registers */
+	if (vcpu_guest_is_32bit(vcpu))
+		vcpu_prepare_fpexc();
+
+	vcpu->arch.cptr_el2 = CPTR_EL2_TTA | CPTR_EL2_TFP;
+}
+
 static inline void vcpu_restore_host_fpexc(struct kvm_vcpu *vcpu) {}
 
 static inline bool vcpu_vfp_isdirty(struct kvm_vcpu *vcpu)
 {
-	return false;
+	return !(vcpu->arch.cptr_el2 & CPTR_EL2_TFP);
+}
+
+static inline void vcpu_restore_host_vfp_state(struct kvm_vcpu *vcpu)
+{
+	struct kvm_cpu_context *host_ctxt = vcpu->arch.host_cpu_context;
+	struct kvm_cpu_context *guest_ctxt = &vcpu->arch.ctxt;
+
+	__fpsimd_save_state(&guest_ctxt->gp_regs.fp_regs);
+	__fpsimd_restore_state(&host_ctxt->gp_regs.fp_regs);
 }
 
 #endif /* __ARM64_KVM_EMULATE_H__ */
diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index bfe4d4e..5d0c256 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -26,6 +26,7 @@
 #include
 #include
 #include
+#include
 
 #define __KVM_HAVE_ARCH_INTC_INITIALIZED
 
@@ -180,6 +181,7 @@ struct kvm_vcpu_arch {
 	/* HYP configuration */
 	u64 hcr_el2;
 	u32 mdcr_el2;
+	u32 cptr_el2;
 
 	/* Exception Information */
 	struct kvm_vcpu_fault_info fault;
@@ -338,7 +340,15 @@ static inline void kvm_arch_sync_events(struct kvm *kvm) {}
 static inline void kvm_arch_vcpu_uninit(struct kvm_vcpu *vcpu) {}
 static inline void kvm_arch_sched_in(struct kvm_vcpu *vcpu, int cpu) {}
 
-static inline void vcpu_restore_host_vfp_state(struct kvm_vcpu *vcpu) {}
+static inline void vcpu_prepare_fpexc(void)
+{
+	kvm_call_hyp(__fpsimd_prepare_fpexc32);
+}
+
+static inline void vcpu_save_fpexc(struct kvm_vcpu *vcpu)
+{
+	kvm_call_hyp(__fpsimd_save_fpexc32, vcpu);
+}
 
 void kvm_arm_init_debug(void);
 void kvm_arm_setup_debug(struct kvm_vcpu *vcpu);
diff --git a/arch/arm64/kvm/hyp/hyp-entry.S b/arch/arm64/kvm/hyp/hyp-entry.S
index 93e8d983..a9235e2 100644
--- a/arch/arm64/kvm/hyp/hyp-entry.S
+++ b/arch/arm64/kvm/hyp/hyp-entry.S
@@ -164,6 +164,32 @@ ENTRY(__hyp_do_panic)
 	eret
 ENDPROC(__hyp_do_panic)
 
+/**
+ * void __fpsimd_prepare_fpexc32(void) -
+ *	We may be entering the guest with CPTR_EL2.TFP set to trap all
+ *	floating point register accesses to EL2; however, the ARM manual
+ *	clearly states that traps are only taken to EL2 if the operation
+ *	would not otherwise trap to EL1. Therefore, for 32-bit guests,
+ *	always set FPEXC.EN to prevent traps to EL1 when setting the TFP
+ *	bit.
+ */
+ENTRY(__fpsimd_prepare_fpexc32)
+	mov	x2, #(1 << 30)
+	msr	fpexc32_el2, x2
+	ret
+ENDPROC(__fpsimd_prepare_fpexc32)
+
+/**
+ * void __fpsimd_save_fpexc32(struct kvm_vcpu *vcpu) -
+ *	This function saves the guest FPEXC to its vcpu context; it is
+ *	called from vcpu_put.
+ */
+ENTRY(__fpsimd_save_fpexc32)
+	kern_hyp_va x0
+	mrs	x2, fpexc32_el2
+	str	x2, [x0, #VCPU_FPEXC32_EL2]
+	ret
+ENDPROC(__fpsimd_save_fpexc32)
+
 .macro invalid_vector	label, target = __hyp_panic
 	.align	2
\label: