From patchwork Fri Oct 30 21:56:33 2015
X-Patchwork-Submitter: Mario Smarduch
X-Patchwork-Id: 7529461
From: Mario Smarduch
To: kvmarm@lists.cs.columbia.edu, christoffer.dall@linaro.org, marc.zyngier@arm.com
Subject: [PATCH 3/3] KVM/arm64: enable enhanced armv8 fp/simd lazy switch
Date: Fri, 30 Oct 2015 14:56:33 -0700
Message-id: <1446242193-8424-4-git-send-email-m.smarduch@samsung.com>
In-reply-to: <1446242193-8424-1-git-send-email-m.smarduch@samsung.com>
References: <1446242193-8424-1-git-send-email-m.smarduch@samsung.com>
Cc: linux-arm-kernel@lists.infradead.org, kvm@vger.kernel.org, antonios.motakis@huawei.com, Mario Smarduch

This patch enables the arm64 lazy fp/simd switch, similar to the arm version
described in the second patch. Change from the previous version: the restore
function is moved to the host.

Signed-off-by: Mario Smarduch
---
 arch/arm64/include/asm/kvm_host.h |  2 +-
 arch/arm64/kernel/asm-offsets.c   |  1 +
 arch/arm64/kvm/hyp.S              | 37 +++++++++++++++++++++++++++++++------
 3 files changed, 33 insertions(+), 7 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index 26a2347..dcecf92 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -251,11 +251,11 @@ static inline void kvm_arch_hardware_unsetup(void) {}
 static inline void kvm_arch_sync_events(struct kvm *kvm) {}
 static inline void kvm_arch_vcpu_uninit(struct kvm_vcpu *vcpu) {}
 static inline void kvm_arch_sched_in(struct kvm_vcpu *vcpu, int cpu) {}
-static inline void kvm_restore_host_vfp_state(struct kvm_vcpu *vcpu) {}
 
 void kvm_arm_init_debug(void);
 void kvm_arm_setup_debug(struct kvm_vcpu *vcpu);
 void kvm_arm_clear_debug(struct kvm_vcpu *vcpu);
 void kvm_arm_reset_debug_ptr(struct kvm_vcpu *vcpu);
+void kvm_restore_host_vfp_state(struct kvm_vcpu *vcpu);
 
 #endif /* __ARM64_KVM_HOST_H__ */
diff --git a/arch/arm64/kernel/asm-offsets.c b/arch/arm64/kernel/asm-offsets.c
index 8d89cf8..c9c5242 100644
--- a/arch/arm64/kernel/asm-offsets.c
+++ b/arch/arm64/kernel/asm-offsets.c
@@ -124,6 +124,7 @@ int main(void)
   DEFINE(VCPU_HCR_EL2,		offsetof(struct kvm_vcpu, arch.hcr_el2));
   DEFINE(VCPU_MDCR_EL2,		offsetof(struct kvm_vcpu, arch.mdcr_el2));
   DEFINE(VCPU_IRQ_LINES,	offsetof(struct kvm_vcpu, arch.irq_lines));
+  DEFINE(VCPU_VFP_DIRTY,	offsetof(struct kvm_vcpu, arch.vfp_dirty));
   DEFINE(VCPU_HOST_CONTEXT,	offsetof(struct kvm_vcpu, arch.host_cpu_context));
   DEFINE(VCPU_HOST_DEBUG_STATE, offsetof(struct kvm_vcpu, arch.host_debug_state));
   DEFINE(VCPU_TIMER_CNTV_CTL,	offsetof(struct kvm_vcpu, arch.timer_cpu.cntv_ctl));
diff --git a/arch/arm64/kvm/hyp.S b/arch/arm64/kvm/hyp.S
index e583613..ed2c4cf 100644
--- a/arch/arm64/kvm/hyp.S
+++ b/arch/arm64/kvm/hyp.S
@@ -36,6 +36,28 @@
 #define CPU_SYSREG_OFFSET(x)	(CPU_SYSREGS + 8*x)
 
 	.text
+
+/**
+ * void kvm_restore_host_vfp_state(struct vcpu *vcpu) - Executes lazy
+ *	fp/simd switch, saves the guest, restores host. Called from host
+ *	mode, placed outside of hyp section.
+ */
+ENTRY(kvm_restore_host_vfp_state)
+	push	xzr, lr
+
+	add	x2, x0, #VCPU_CONTEXT
+	mov	w3, #0
+	strb	w3, [x0, #VCPU_VFP_DIRTY]
+
+	bl __save_fpsimd
+
+	ldr	x2, [x0, #VCPU_HOST_CONTEXT]
+	bl __restore_fpsimd
+
+	pop	xzr, lr
+	ret
+ENDPROC(kvm_restore_host_vfp_state)
+
 .pushsection .hyp.text, "ax"
 .align PAGE_SHIFT
@@ -482,7 +504,11 @@
 99:
 	msr	hcr_el2, x2
 	mov	x2, #CPTR_EL2_TTA
+
+	ldrb	w3, [x0, #VCPU_VFP_DIRTY]
+	tbnz	w3, #0, 98f
 	orr	x2, x2, #CPTR_EL2_TFP
+98:
 	msr	cptr_el2, x2
 
 	mov	x2, #(1 << 15)	// Trap CP15 Cr=15
@@ -669,14 +695,12 @@ __restore_debug:
 	ret
 
 __save_fpsimd:
-	skip_fpsimd_state x3, 1f
 	save_fpsimd
-1:	ret
+	ret
 
 __restore_fpsimd:
-	skip_fpsimd_state x3, 1f
 	restore_fpsimd
-1:	ret
+	ret
 
 switch_to_guest_fpsimd:
 	push	x4, lr
@@ -688,6 +712,9 @@ switch_to_guest_fpsimd:
 
 	mrs	x0, tpidr_el2
 
+	mov	w2, #1
+	strb	w2, [x0, #VCPU_VFP_DIRTY]
+
 	ldr	x2, [x0, #VCPU_HOST_CONTEXT]
 	kern_hyp_va x2
 	bl __save_fpsimd
@@ -763,7 +790,6 @@ __kvm_vcpu_return:
 	add	x2, x0, #VCPU_CONTEXT
 
 	save_guest_regs
-	bl __save_fpsimd
 	bl __save_sysregs
 
 	skip_debug_state x3, 1f
@@ -784,7 +810,6 @@ __kvm_vcpu_return:
 	kern_hyp_va x2
 
 	bl __restore_sysregs
-	bl __restore_fpsimd
 
 	/* Clear FPSIMD and Trace trapping */
 	msr	cptr_el2, xzr
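
[Note for context, not part of the patch: the host-side flow this series aims
at can be sketched roughly as below. The vfp_dirty flag and the
kvm_restore_host_vfp_state() helper come from this series; the function name
vcpu_put_fpsimd_lazy() and its placement in the vcpu put path are assumptions
used only for illustration.]

/*
 * Rough sketch of the lazy fp/simd policy on the host side.
 * Assumes vcpu->arch.vfp_dirty is set by the hyp trap handler
 * (switch_to_guest_fpsimd) the first time the guest touches
 * fp/simd, and cleared again by kvm_restore_host_vfp_state().
 */
static void vcpu_put_fpsimd_lazy(struct kvm_vcpu *vcpu)
{
	/*
	 * The guest never used fp/simd while it ran: CPTR_EL2.TFP stayed
	 * set, the host register file was never clobbered, so there is
	 * nothing to save or restore when the vcpu is put.
	 */
	if (!vcpu->arch.vfp_dirty)
		return;

	/*
	 * The guest did use fp/simd: hyp already saved the host registers
	 * when it cleared the trap. Save the guest state and restore the
	 * host state now, from host mode, instead of on every world switch.
	 */
	kvm_restore_host_vfp_state(vcpu);
}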