From patchwork Fri Nov 10 19:54:09 2017
X-Patchwork-Submitter: Dongjiu Geng
X-Patchwork-Id: 10052993
From: Dongjiu Geng
Subject: [PATCH v8 6/7] arm64: kvm: Set Virtual SError Exception Syndrome for guest
Date: Sat, 11 Nov 2017 03:54:09 +0800
Message-ID: <1510343650-23659-7-git-send-email-gengdongjiu@huawei.com>
In-Reply-To: <1510343650-23659-1-git-send-email-gengdongjiu@huawei.com>
References: <1510343650-23659-1-git-send-email-gengdongjiu@huawei.com>
X-Mailing-List: kvm@vger.kernel.org

The RAS Extensions add a VSESR_EL2 register, which provides the syndrome
value reported to software on taking a virtual SError interrupt exception.
This patch adds support for specifying that syndrome.

With the RAS Extensions, an all-zero syndrome value for SError means
'RAS error: Uncategorized' rather than 'no valid ISS', so we cannot use it
as a default. Instead, set an IMPLEMENTATION DEFINED syndrome by default.

We also need to let userspace specify a valid syndrome value, because in
some cases the error recovery is driven by userspace.

On the guest/host world switch, restore this value to VSESR_EL2 only when
HCR_EL2.VSE is set. The value does not need to be saved on exit, as it is
stale once the guest has taken the SError.
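As background, the syndrome handling here reduces to a little bit
arithmetic: VSESR_EL2 carries the ESR_ELx ISS bits [24:0], bit 24
(ESR_ELx_ISV, acting as IDS for SErrors) marks the syndrome as
IMPLEMENTATION DEFINED, and a userspace-supplied value is masked with
ESR_ELx_ISS_MASK before being stored. A minimal standalone sketch of
those values (the two constants mirror the kernel's ESR_ELx definitions;
the program itself is illustrative only, not kernel code):

    #include <stdint.h>
    #include <stdio.h>

    #define ESR_ELx_ISV       (UINT64_C(1) << 24)   /* IDS for SError: impdef ISS */
    #define ESR_ELx_ISS_MASK  UINT64_C(0x01ffffff)  /* ESR_ELx[24:0] */

    int main(void)
    {
            /* Default syndrome KVM pends: IDS = 1, remaining ISS all-zeros,
             * i.e. "IMPLEMENTATION DEFINED", not "RAS error: Uncategorized". */
            uint64_t dflt = ESR_ELx_ISV;

            /* A userspace-specified syndrome is truncated to VSESR_EL2[24:0],
             * as kvm_arm_set_sei_esr() does below. */
            uint64_t user_val = UINT64_C(0x72000123);  /* example input */
            uint64_t vsesr = user_val & ESR_ELx_ISS_MASK;

            printf("default VSESR_EL2 value: %#llx\n", (unsigned long long)dflt);
            printf("masked user value:       %#llx\n", (unsigned long long)vsesr);
            return 0;
    }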
Signed-off-by: Dongjiu Geng
Signed-off-by: Quanming Wu
[Set an impdef ESR for Virtual-SError]
Signed-off-by: James Morse
---
 arch/arm64/include/asm/kvm_emulate.h | 10 ++++++++++
 arch/arm64/include/asm/kvm_host.h    |  1 +
 arch/arm64/include/asm/sysreg.h      |  3 +++
 arch/arm64/kvm/guest.c               | 11 ++++++++++-
 arch/arm64/kvm/hyp/switch.c          | 16 ++++++++++++++++
 arch/arm64/kvm/inject_fault.c        | 13 ++++++++++++-
 6 files changed, 52 insertions(+), 2 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_emulate.h b/arch/arm64/include/asm/kvm_emulate.h
index 555b28b..73c84d0 100644
--- a/arch/arm64/include/asm/kvm_emulate.h
+++ b/arch/arm64/include/asm/kvm_emulate.h
@@ -155,6 +155,16 @@ static inline u32 kvm_vcpu_get_hsr(const struct kvm_vcpu *vcpu)
 	return vcpu->arch.fault.esr_el2;
 }
 
+static inline u32 kvm_vcpu_get_vsesr(const struct kvm_vcpu *vcpu)
+{
+	return vcpu->arch.fault.vsesr_el2;
+}
+
+static inline void kvm_vcpu_set_vsesr(struct kvm_vcpu *vcpu, unsigned long val)
+{
+	vcpu->arch.fault.vsesr_el2 = val;
+}
+
 static inline int kvm_vcpu_get_condition(const struct kvm_vcpu *vcpu)
 {
 	u32 esr = kvm_vcpu_get_hsr(vcpu);
diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index 769cc58..53d1d81 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -88,6 +88,7 @@ struct kvm_vcpu_fault_info {
 	u32 esr_el2;		/* Hyp Syndrom Register */
 	u64 far_el2;		/* Hyp Fault Address Register */
 	u64 hpfar_el2;		/* Hyp IPA Fault Address Register */
+	u32 vsesr_el2;		/* Virtual SError Exception Syndrome Register */
 };
 
 /*
diff --git a/arch/arm64/include/asm/sysreg.h b/arch/arm64/include/asm/sysreg.h
index 47b967d..3b035cc 100644
--- a/arch/arm64/include/asm/sysreg.h
+++ b/arch/arm64/include/asm/sysreg.h
@@ -86,6 +86,9 @@
 #define REG_PSTATE_PAN_IMM		sys_reg(0, 0, 4, 0, 4)
 #define REG_PSTATE_UAO_IMM		sys_reg(0, 0, 4, 0, 3)
 
+/* virtual SError exception syndrome register */
+#define REG_VSESR_EL2			sys_reg(3, 4, 5, 2, 3)
+
 #define SET_PSTATE_PAN(x) __emit_inst(0xd5000000 | REG_PSTATE_PAN_IMM |	\
 				      (!!x)<<8 | 0x1f)
 #define SET_PSTATE_UAO(x) __emit_inst(0xd5000000 | REG_PSTATE_UAO_IMM |	\
diff --git a/arch/arm64/kvm/guest.c b/arch/arm64/kvm/guest.c
index 738ae90..ffad42b 100644
--- a/arch/arm64/kvm/guest.c
+++ b/arch/arm64/kvm/guest.c
@@ -279,7 +279,16 @@ int kvm_arch_vcpu_ioctl_set_sregs(struct kvm_vcpu *vcpu,
 
 int kvm_arm_set_sei_esr(struct kvm_vcpu *vcpu, u32 *syndrome)
 {
-	return -EINVAL;
+	u64 reg = *syndrome;
+
+	/* inject virtual system Error or asynchronous abort */
+	kvm_inject_vabt(vcpu);
+
+	if (reg)
+		/* set vsesr_el2[24:0] with the value that userspace specified */
+		kvm_vcpu_set_vsesr(vcpu, reg & ESR_ELx_ISS_MASK);
+
+	return 0;
 }
 
 int __attribute_const__ kvm_target_cpu(void)
diff --git a/arch/arm64/kvm/hyp/switch.c b/arch/arm64/kvm/hyp/switch.c
index c6f17c7..06a71d2 100644
--- a/arch/arm64/kvm/hyp/switch.c
+++ b/arch/arm64/kvm/hyp/switch.c
@@ -67,6 +67,14 @@ static hyp_alternate_select(__activate_traps_arch, __activate_traps_nvhe,
 			    __activate_traps_vhe,
 			    ARM64_HAS_VIRT_HOST_EXTN);
 
+static void __hyp_text __sysreg_set_vsesr(struct kvm_vcpu *vcpu, u64 value)
+{
+	if (cpus_have_const_cap(ARM64_HAS_RAS_EXTN) &&
+	    (value & HCR_VSE))
+		write_sysreg_s(kvm_vcpu_get_vsesr(vcpu), REG_VSESR_EL2);
+}
+
+
 static void __hyp_text __activate_traps(struct kvm_vcpu *vcpu)
 {
 	u64 val;
@@ -86,6 +94,14 @@ static void __hyp_text __activate_traps(struct kvm_vcpu *vcpu)
 		isb();
 	}
 	write_sysreg(val, hcr_el2);
+
+	/*
+	 * If the virtual SError interrupt is taken to EL1 using AArch64,
+	 * then VSESR_EL2 provides the syndrome value reported in the ISS
+	 * field of ESR_EL1.
+	 */
+	__sysreg_set_vsesr(vcpu, val);
+
 	/* Trap on AArch32 cp15 c15 accesses (EL1 or EL0) */
 	write_sysreg(1 << 15, hstr_el2);
 	/*
diff --git a/arch/arm64/kvm/inject_fault.c b/arch/arm64/kvm/inject_fault.c
index 3556715..fb94b5e 100644
--- a/arch/arm64/kvm/inject_fault.c
+++ b/arch/arm64/kvm/inject_fault.c
@@ -246,14 +246,25 @@ void kvm_inject_undefined(struct kvm_vcpu *vcpu)
 	inject_undef64(vcpu);
 }
 
+static void pend_guest_serror(struct kvm_vcpu *vcpu, u64 esr)
+{
+	kvm_vcpu_set_vsesr(vcpu, esr);
+	vcpu_set_hcr(vcpu, vcpu_get_hcr(vcpu) | HCR_VSE);
+}
+
 /**
  * kvm_inject_vabt - inject an async abort / SError into the guest
  * @vcpu: The VCPU to receive the exception
  *
  * It is assumed that this code is called from the VCPU thread and that the
  * VCPU therefore is not currently executing guest code.
+ *
+ * Systems with the RAS Extensions specify an imp-def ESR (ISV/IDS = 1) with
+ * the remaining ISS all-zeros so that this error is not interpreted as an
+ * uncategorized RAS error. Without the RAS Extensions we can't specify an ESR
+ * value, so the CPU generates an imp-def value.
  */
 void kvm_inject_vabt(struct kvm_vcpu *vcpu)
 {
-	vcpu_set_hcr(vcpu, vcpu_get_hcr(vcpu) | HCR_VSE);
+	pend_guest_serror(vcpu, ESR_ELx_ISV);
 }
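Putting the pieces together: pend_guest_serror() only records the pending
virtual SError and its syndrome in the vcpu's shadow state, and
__activate_traps() conditionally restores VSESR_EL2 on guest entry. The
standalone sketch below models that flow with simplified stand-in types
(struct vcpu_model and its fields are illustrative, not the kernel's real
structures; the constants match the architectural bit positions):

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    #define HCR_VSE          (UINT64_C(1) << 8)    /* virtual SError pending */
    #define ESR_ELx_ISV      (UINT64_C(1) << 24)   /* impdef syndrome (IDS) */

    /* Stand-in for struct kvm_vcpu: only the state this patch touches. */
    struct vcpu_model {
            uint64_t hcr_el2;      /* shadow HCR_EL2 */
            uint64_t vsesr_el2;    /* shadow VSESR_EL2 */
            bool     has_ras_extn;
    };

    /* Models pend_guest_serror(): stash the syndrome and mark the virtual
     * SError pending; nothing is written to hardware at this point. */
    static void pend_serror(struct vcpu_model *v, uint64_t esr)
    {
            v->vsesr_el2 = esr;
            v->hcr_el2 |= HCR_VSE;
    }

    /* Models __sysreg_set_vsesr() on guest entry: VSESR_EL2 only needs a
     * valid value while HCR_EL2.VSE is set, and is never saved on exit
     * because it is stale once the guest has taken the SError. */
    static void world_switch_entry(const struct vcpu_model *v)
    {
            if (v->has_ras_extn && (v->hcr_el2 & HCR_VSE))
                    printf("write VSESR_EL2 = %#llx\n",
                           (unsigned long long)v->vsesr_el2);
    }

    int main(void)
    {
            struct vcpu_model v = { .has_ras_extn = true };

            pend_serror(&v, ESR_ELx_ISV);  /* default impdef syndrome */
            world_switch_entry(&v);        /* conditional VSESR_EL2 restore */
            return 0;
    }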