From patchwork Thu Oct 12 10:41:16 2017
X-Patchwork-Submitter: Christoffer Dall
X-Patchwork-Id: 10001601
From: Christoffer Dall
To: kvmarm@lists.cs.columbia.edu, linux-arm-kernel@lists.infradead.org
Cc: Marc Zyngier, Christoffer Dall, Shih-Wei Li, kvm@vger.kernel.org
Subject: [PATCH 12/37] KVM: arm64: Factor out fault info population and gic workarounds
Date: Thu, 12 Oct 2017 12:41:16 +0200
Message-Id: <20171012104141.26902-13-christoffer.dall@linaro.org>
In-Reply-To: <20171012104141.26902-1-christoffer.dall@linaro.org>
References: <20171012104141.26902-1-christoffer.dall@linaro.org>
X-Mailer: git-send-email 2.9.0

The current world-switch function has functionality to detect a number
of cases where we need to fix up some part of the exit condition and
possibly run the guest again, before having restored the host state.
This includes populating missing fault info, emulating GICv2 CPU
interface accesses when mapped at unaligned addresses, and emulating
the GICv3 CPU interface on systems that need it.

We are about to add an alternative switch function for VHE systems,
which will still need the same early fixup logic. Factor this logic
out into a separate function that can be shared by both switch
functions.

No functional change.

Signed-off-by: Christoffer Dall
---
 arch/arm64/kvm/hyp/switch.c | 91 ++++++++++++++++++++++++++-------------------
 1 file changed, 52 insertions(+), 39 deletions(-)

diff --git a/arch/arm64/kvm/hyp/switch.c b/arch/arm64/kvm/hyp/switch.c
index e270cba..ed30af5 100644
--- a/arch/arm64/kvm/hyp/switch.c
+++ b/arch/arm64/kvm/hyp/switch.c
@@ -258,50 +258,24 @@ static void __hyp_text __skip_instr(struct kvm_vcpu *vcpu)
 	write_sysreg_el2(*vcpu_pc(vcpu), elr);
 }
 
-int __hyp_text __kvm_vcpu_run(struct kvm_vcpu *vcpu)
+/*
+ * Return true when we were able to fixup the guest exit and should return to
+ * the guest, false when we should restore the host state and return to the
+ * main run loop.
+ */
+static bool __hyp_text fixup_guest_exit(struct kvm_vcpu *vcpu, u64 *exit_code)
 {
-	struct kvm_cpu_context *host_ctxt;
-	struct kvm_cpu_context *guest_ctxt;
-	u64 exit_code;
-
-	vcpu = kern_hyp_va(vcpu);
-
-	host_ctxt = kern_hyp_va(vcpu->arch.host_cpu_context);
-	host_ctxt->__hyp_running_vcpu = vcpu;
-	guest_ctxt = &vcpu->arch.ctxt;
-
-	__sysreg_save_host_state(host_ctxt);
-
-	__activate_traps(vcpu);
-	__activate_vm(vcpu);
-
-	__vgic_restore_state(vcpu);
-	__timer_enable_traps(vcpu);
-
-	/*
-	 * We must restore the 32-bit state before the sysregs, thanks
-	 * to erratum #852523 (Cortex-A57) or #853709 (Cortex-A72).
-	 */
-	__sysreg32_restore_state(vcpu);
-	__sysreg_restore_guest_state(guest_ctxt);
-	__debug_switch_to_guest(vcpu);
-
-	/* Jump in the fire! */
-again:
-	exit_code = __guest_enter(vcpu, host_ctxt);
-	/* And we're baaack! */
-
 	/*
 	 * We're using the raw exception code in order to only process
 	 * the trap if no SError is pending. We will come back to the
 	 * same PC once the SError has been injected, and replay the
 	 * trapping instruction.
 	 */
-	if (exit_code == ARM_EXCEPTION_TRAP && !__populate_fault_info(vcpu))
-		goto again;
+	if (*exit_code == ARM_EXCEPTION_TRAP && !__populate_fault_info(vcpu))
+		return true;
 
 	if (static_branch_unlikely(&vgic_v2_cpuif_trap) &&
-	    exit_code == ARM_EXCEPTION_TRAP) {
+	    *exit_code == ARM_EXCEPTION_TRAP) {
 		bool valid;
 
 		valid = kvm_vcpu_trap_get_class(vcpu) == ESR_ELx_EC_DABT_LOW &&
@@ -315,13 +289,13 @@ int __hyp_text __kvm_vcpu_run(struct kvm_vcpu *vcpu)
 
 			if (ret == 1) {
 				__skip_instr(vcpu);
-				goto again;
+				return true;
 			}
 
 			if (ret == -1) {
 				/* Promote an illegal access to an SError */
 				__skip_instr(vcpu);
-				exit_code = ARM_EXCEPTION_EL1_SERROR;
+				*exit_code = ARM_EXCEPTION_EL1_SERROR;
 			}
 
 			/* 0 falls through to be handler out of EL2 */
@@ -329,19 +303,58 @@ int __hyp_text __kvm_vcpu_run(struct kvm_vcpu *vcpu)
 	}
 
 	if (static_branch_unlikely(&vgic_v3_cpuif_trap) &&
-	    exit_code == ARM_EXCEPTION_TRAP &&
+	    *exit_code == ARM_EXCEPTION_TRAP &&
 	    (kvm_vcpu_trap_get_class(vcpu) == ESR_ELx_EC_SYS64 ||
 	     kvm_vcpu_trap_get_class(vcpu) == ESR_ELx_EC_CP15_32)) {
 		int ret = __vgic_v3_perform_cpuif_access(vcpu);
 
 		if (ret == 1) {
 			__skip_instr(vcpu);
-			goto again;
+			return true;
 		}
 
 		/* 0 falls through to be handled out of EL2 */
 	}
 
+	return false;
+}
+
+int __hyp_text __kvm_vcpu_run(struct kvm_vcpu *vcpu)
+{
+	struct kvm_cpu_context *host_ctxt;
+	struct kvm_cpu_context *guest_ctxt;
+	u64 exit_code;
+
+	vcpu = kern_hyp_va(vcpu);
+
+	host_ctxt = kern_hyp_va(vcpu->arch.host_cpu_context);
+	host_ctxt->__hyp_running_vcpu = vcpu;
+	guest_ctxt = &vcpu->arch.ctxt;
+
+	__sysreg_save_host_state(host_ctxt);
+
+	__activate_traps(vcpu);
+	__activate_vm(vcpu);
+
+	__vgic_restore_state(vcpu);
+	__timer_enable_traps(vcpu);
+
+	/*
+	 * We must restore the 32-bit state before the sysregs, thanks
+	 * to erratum #852523 (Cortex-A57) or #853709 (Cortex-A72).
+	 */
+	__sysreg32_restore_state(vcpu);
+	__sysreg_restore_guest_state(guest_ctxt);
+	__debug_switch_to_guest(vcpu);
+
+	/* Jump in the fire! */
+again:
+	exit_code = __guest_enter(vcpu, host_ctxt);
+	/* And we're baaack! */
+
+	if (fixup_guest_exit(vcpu, &exit_code))
+		goto again;
+
 	__sysreg_save_guest_state(guest_ctxt);
 	__sysreg32_save_state(vcpu);
 	__timer_disable_traps(vcpu);
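
[Editorial note: as an illustration of where this refactoring is heading, below
is a minimal sketch of how a VHE-specific switch function could reuse
fixup_guest_exit(). This is not part of the patch: the function name
kvm_vcpu_run_vhe and the elided save/restore steps are assumptions about the
rest of the series, not code from it.]

/*
 * Hypothetical sketch only: a VHE switch function sharing the early
 * fixup logic factored out above.
 */
int kvm_vcpu_run_vhe(struct kvm_vcpu *vcpu)
{
	struct kvm_cpu_context *host_ctxt;
	struct kvm_cpu_context *guest_ctxt;
	u64 exit_code;

	/*
	 * Assumption: with VHE the host kernel runs at EL2, so no
	 * kern_hyp_va() address translation is needed here.
	 */
	host_ctxt = vcpu->arch.host_cpu_context;
	host_ctxt->__hyp_running_vcpu = vcpu;
	guest_ctxt = &vcpu->arch.ctxt;

	/* ... VHE-specific context save, trap, vgic and timer setup ... */

	do {
		/* Enter the guest; loop while the exit is fixed up here. */
		exit_code = __guest_enter(vcpu, host_ctxt);
	} while (fixup_guest_exit(vcpu, &exit_code));

	/* ... VHE-specific restore of the host context ... */

	return exit_code;
}

Because fixup_guest_exit() returns true exactly when the old code would have
taken the "goto again" path, a do/while loop expresses the same re-entry
control flow without the label.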