From patchwork Tue Oct 6 02:33:14 2015
X-Patchwork-Submitter: Mario Smarduch
X-Patchwork-Id: 7333101
From: Mario Smarduch <m.smarduch@samsung.com>
To: kvmarm@lists.cs.columbia.edu, christoffer.dall@linaro.org,
 marc.zyngier@arm.com
Cc: linux-arm-kernel@lists.infradead.org, kvm@vger.kernel.org,
 antonios.motakis@huawei.com
Subject: [PATCH v2 2/2] enable armv8 fp/simd lazy switch
Date: Mon, 05 Oct 2015 19:33:14 -0700
Message-id: <1444098794-19244-3-git-send-email-m.smarduch@samsung.com>
In-reply-to: <1444098794-19244-1-git-send-email-m.smarduch@samsung.com>
References: <1444098794-19244-1-git-send-email-m.smarduch@samsung.com>

This patch enables the arm64 lazy fp/simd switch. It removes the CONFIG_ARM
constraint and follows the same approach as the armv7 version, found here:

https://lists.cs.columbia.edu/pipermail/kvmarm/2015-September/016567.html

To summarize: provided the guest accesses the fp/simd unit, we limit the
number of fp/simd context switches to two per vCPU execution schedule.

Signed-off-by: Mario Smarduch <m.smarduch@samsung.com>
---
 arch/arm/kvm/arm.c               |  2 --
 arch/arm64/include/asm/kvm_asm.h |  1 +
 arch/arm64/kvm/hyp.S             | 59 +++++++++++++++++++++++++++----------
 3 files changed, 41 insertions(+), 21 deletions(-)
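Below is a minimal C sketch of the life cycle this enables. It assumes the
trap handler and the vcpu_put-time hook wired up by patch 1/2 of this
series, and the printf calls are stand-ins for the real hyp save/restore
paths, so treat it as an illustration rather than kernel code:

#include <stdio.h>

struct kvm_vcpu { struct { int vfp_lazy; } arch; };

/* First guest fp/simd access traps to EL2 (CPTR_EL2.TFP): save the host
 * regs, load the guest regs, and disable further fp/simd trapping. */
static void hyp_switch_to_guest_fpsimd(struct kvm_vcpu *vcpu)
{
	vcpu->arch.vfp_lazy = 1;		/* switch #1 of the schedule */
	printf("fp/simd: host -> guest\n");
}

/* When the vCPU is scheduled out, undo the switch only if it happened. */
static void kvm_switch_fp_regs(struct kvm_vcpu *vcpu)
{
	if (vcpu->arch.vfp_lazy == 1) {
		/* kvm_call_hyp(__kvm_restore_host_vfp_state, vcpu); */
		printf("fp/simd: guest -> host\n");	/* switch #2 */
		vcpu->arch.vfp_lazy = 0;
	}
}

int main(void)
{
	struct kvm_vcpu vcpu = { { 0 } };

	hyp_switch_to_guest_fpsimd(&vcpu);	/* first guest fp/simd access */
	/* further guest entries/exits cost no fp/simd switches */
	kvm_switch_fp_regs(&vcpu);		/* vCPU scheduled out */
	return 0;
}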
diff --git a/arch/arm/kvm/arm.c b/arch/arm/kvm/arm.c
index 1b1f9e9..fe609f1 100644
--- a/arch/arm/kvm/arm.c
+++ b/arch/arm/kvm/arm.c
@@ -112,12 +112,10 @@ void kvm_arch_check_processor_compat(void *rtn)
  */
 static void kvm_switch_fp_regs(struct kvm_vcpu *vcpu)
 {
-#ifdef CONFIG_ARM
 	if (vcpu->arch.vfp_lazy == 1) {
 		kvm_call_hyp(__kvm_restore_host_vfp_state, vcpu);
 		vcpu->arch.vfp_lazy = 0;
 	}
-#endif
 }
 
 /**
diff --git a/arch/arm64/include/asm/kvm_asm.h b/arch/arm64/include/asm/kvm_asm.h
index 5e37710..83dcac5 100644
--- a/arch/arm64/include/asm/kvm_asm.h
+++ b/arch/arm64/include/asm/kvm_asm.h
@@ -117,6 +117,7 @@ extern char __kvm_hyp_vector[];
 extern void __kvm_flush_vm_context(void);
 extern void __kvm_tlb_flush_vmid_ipa(struct kvm *kvm, phys_addr_t ipa);
 extern void __kvm_tlb_flush_vmid(struct kvm *kvm);
+extern void __kvm_restore_host_vfp_state(struct kvm_vcpu *vcpu);
 
 extern int __kvm_vcpu_run(struct kvm_vcpu *vcpu);
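For context, a hedged sketch of the presumed caller: the hook comes from
patch 1/2 of this series (linked above) and is not part of this diff, so
the exact call site is an assumption here. The point is that dropping the
CONFIG_ARM guard lets armv7 and armv8 share one vcpu_put-time path:

struct kvm_vcpu;				/* opaque in this sketch */
void kvm_switch_fp_regs(struct kvm_vcpu *vcpu);	/* arch/arm/kvm/arm.c above */

void kvm_arch_vcpu_put(struct kvm_vcpu *vcpu)
{
	/* No-op unless the guest touched fp/simd (vfp_lazy == 1). */
	kvm_switch_fp_regs(vcpu);
}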
diff --git a/arch/arm64/kvm/hyp.S b/arch/arm64/kvm/hyp.S
index e583613..ea99f66 100644
--- a/arch/arm64/kvm/hyp.S
+++ b/arch/arm64/kvm/hyp.S
@@ -385,14 +385,6 @@
 	tbz	\tmp, #KVM_ARM64_DEBUG_DIRTY_SHIFT, \target
 .endm
 
-/*
- * Branch to target if CPTR_EL2.TFP bit is set (VFP/SIMD trapping enabled)
- */
-.macro skip_fpsimd_state tmp, target
-	mrs	\tmp, cptr_el2
-	tbnz	\tmp, #CPTR_EL2_TFP_SHIFT, \target
-.endm
-
 .macro compute_debug_state target
 	// Compute debug state: If any of KDE, MDE or KVM_ARM64_DEBUG_DIRTY
 	// is set, we do a full save/restore cycle and disable trapping.
@@ -433,10 +425,6 @@
 	mrs	x5, ifsr32_el2
 	stp	x4, x5, [x3]
 
-	skip_fpsimd_state x8, 2f
-	mrs	x6, fpexc32_el2
-	str	x6, [x3, #16]
-2:
 	skip_debug_state x8, 1f
 	mrs	x7, dbgvcr32_el2
 	str	x7, [x3, #24]
@@ -481,8 +469,15 @@
 	isb
 99:
 	msr	hcr_el2, x2
-	mov	x2, #CPTR_EL2_TTA
+
+	mov	x2, #0
+	ldr	w3, [x0, #VCPU_VFP_LAZY]
+	tbnz	w3, #0, 98f
+	orr	x2, x2, #CPTR_EL2_TFP
+98:
+	orr	x2, x2, #CPTR_EL2_TTA
+
 	msr	cptr_el2, x2
 
 	mov	x2, #(1 << 15)	// Trap CP15 Cr=15
@@ -669,14 +664,12 @@ __restore_debug:
 	ret
 
 __save_fpsimd:
-	skip_fpsimd_state x3, 1f
 	save_fpsimd
-1:	ret
+	ret
 
 __restore_fpsimd:
-	skip_fpsimd_state x3, 1f
 	restore_fpsimd
-1:	ret
+	ret
 
 switch_to_guest_fpsimd:
 	push	x4, lr
@@ -688,6 +681,9 @@ switch_to_guest_fpsimd:
 
 	mrs	x0, tpidr_el2
 
+	mov	w2, #1
+	str	w2, [x0, #VCPU_VFP_LAZY]
+
 	ldr	x2, [x0, #VCPU_HOST_CONTEXT]
 	kern_hyp_va x2
 	bl __save_fpsimd
@@ -763,7 +759,6 @@ __kvm_vcpu_return:
 	add	x2, x0, #VCPU_CONTEXT
 	save_guest_regs
 
-	bl __save_fpsimd
 	bl __save_sysregs
 
 	skip_debug_state x3, 1f
@@ -784,7 +779,6 @@ __kvm_vcpu_return:
 	kern_hyp_va x2
 
 	bl __restore_sysregs
-	bl __restore_fpsimd
 
 	/* Clear FPSIMD and Trace trapping */
 	msr	cptr_el2, xzr
@@ -863,6 +857,33 @@ ENTRY(__kvm_flush_vm_context)
 	ret
 ENDPROC(__kvm_flush_vm_context)
 
+/**
+ * __kvm_restore_host_vfp_state() - save guest, restore host VFP/SIMD registers
+ * @vcpu:	pointer to vcpu structure.
+ *
+ */
+ENTRY(__kvm_restore_host_vfp_state)
+	push	x4, lr
+
+	kern_hyp_va x0
+	add	x2, x0, #VCPU_CONTEXT
+
+	// Load Guest HCR, determine if guest is 32 or 64 bit
+	ldr	x3, [x0, #VCPU_HCR_EL2]
+	tbnz	x3, #HCR_RW_SHIFT, 1f
+	mrs	x4, fpexc32_el2
+	str	x4, [x2, #CPU_SYSREG_OFFSET(FPEXC32_EL2)]
+1:
+	bl __save_fpsimd
+
+	ldr	x2, [x0, #VCPU_HOST_CONTEXT]
+	kern_hyp_va x2
+	bl __restore_fpsimd
+
+	pop	x4, lr
+	ret
+ENDPROC(__kvm_restore_host_vfp_state)
+
 __kvm_hyp_panic:
 	// Guess the context by looking at VTTBR:
 	// If zero, then we're already a host.
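For reference, the CPTR_EL2 value computed around the 98: label above, as a
standalone C sketch. The bit positions follow the arm64 CPTR_EL2 layout (TTA
is bit 20, TFP is bit 10, per arch/arm64/include/asm/kvm_arm.h); this mirrors
the assembly for illustration rather than replacing it:

#include <stdio.h>

#define CPTR_EL2_TTA	(1UL << 20)	/* trap trace register accesses */
#define CPTR_EL2_TFP	(1UL << 10)	/* trap fp/simd accesses to EL2 */

/* Trap fp/simd only while the guest has not touched the unit yet. */
static unsigned long compute_cptr_el2(int vfp_lazy)
{
	unsigned long cptr = 0;

	if (!vfp_lazy)
		cptr |= CPTR_EL2_TFP;	/* first guest access will trap */
	cptr |= CPTR_EL2_TTA;		/* trace trapping is always on */
	return cptr;
}

int main(void)
{
	printf("clean: %#lx, lazy: %#lx\n",
	       compute_cptr_el2(0), compute_cptr_el2(1));
	return 0;
}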