From patchwork Tue Dec 22 08:08:10 2015
From: Shannon Zhao
Subject: [PATCH v8 15/20] KVM: ARM64: Add a helper to forward trap to guest EL1
Date: Tue, 22 Dec 2015 16:08:10 +0800
Message-ID: <1450771695-11948-16-git-send-email-zhaoshenglong@huawei.com>
In-Reply-To: <1450771695-11948-1-git-send-email-zhaoshenglong@huawei.com>
References: <1450771695-11948-1-git-send-email-zhaoshenglong@huawei.com>
X-Mailing-List: kvm@vger.kernel.org

From: Shannon Zhao

This helper forwards the trap caused by MRS/MSR for AArch64, and by
MCR/MRC, MCRR/MRRC CP15 accesses for AArch32, to the guest's EL1.
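The new AArch64 offsets follow the ARMv8 vector table layout for
synchronous exceptions relative to VBAR_EL1. As a sketch of the mapping
the helper applies (the macro names here are illustrative, not from the
patch; the offsets themselves are architectural):

	/* Illustrative names; the offsets are the ARMv8 VBAR_EL1 layout. */
	#define SYNC_CUR_EL_SP0		0x000	/* current EL, SP_EL0: EL1t */
	#define SYNC_CUR_EL_SPX		0x200	/* current EL, SP_ELx: EL1h */
	#define SYNC_LOWER_EL_A64	0x400	/* lower EL, AArch64: EL0t */
	#define SYNC_LOWER_EL_A32	0x600	/* lower EL, AArch32 */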
Signed-off-by: Shannon Zhao
---
 arch/arm64/include/asm/kvm_emulate.h |  1 +
 arch/arm64/kvm/inject_fault.c        | 52 +++++++++++++++++++++++++++++++++++-
 2 files changed, 52 insertions(+), 1 deletion(-)

diff --git a/arch/arm64/include/asm/kvm_emulate.h b/arch/arm64/include/asm/kvm_emulate.h
index 3066328..88b2958 100644
--- a/arch/arm64/include/asm/kvm_emulate.h
+++ b/arch/arm64/include/asm/kvm_emulate.h
@@ -36,6 +36,7 @@ unsigned long *vcpu_spsr32(const struct kvm_vcpu *vcpu);
 bool kvm_condition_valid32(const struct kvm_vcpu *vcpu);
 void kvm_skip_instr32(struct kvm_vcpu *vcpu, bool is_wide_instr);
 
+void kvm_forward_trap_to_el1(struct kvm_vcpu *vcpu);
 void kvm_inject_undefined(struct kvm_vcpu *vcpu);
 void kvm_inject_dabt(struct kvm_vcpu *vcpu, unsigned long addr);
 void kvm_inject_pabt(struct kvm_vcpu *vcpu, unsigned long addr);
diff --git a/arch/arm64/kvm/inject_fault.c b/arch/arm64/kvm/inject_fault.c
index 648112e..052ef25 100644
--- a/arch/arm64/kvm/inject_fault.c
+++ b/arch/arm64/kvm/inject_fault.c
@@ -27,7 +27,10 @@
 #define PSTATE_FAULT_BITS_64	(PSR_MODE_EL1h | PSR_A_BIT | PSR_F_BIT | \
				 PSR_I_BIT | PSR_D_BIT)
 
-#define EL1_EXCEPT_SYNC_OFFSET	0x200
+#define EL1_EXCEPT_BAD_SYNC_OFFSET	0x0
+#define EL1_EXCEPT_SYNC_OFFSET		0x200
+#define EL0_EXCEPT_SYNC_OFFSET_64	0x400
+#define EL0_EXCEPT_SYNC_OFFSET_32	0x600
 
 static void prepare_fault32(struct kvm_vcpu *vcpu, u32 mode, u32 vect_offset)
 {
@@ -201,3 +204,50 @@ void kvm_inject_undefined(struct kvm_vcpu *vcpu)
 	else
 		inject_undef64(vcpu);
 }
+
+/**
+ * kvm_forward_trap_to_el1 - forward access trap to the guest EL1
+ *
+ * It is assumed that this code is called from the VCPU thread and that the
+ * VCPU therefore is not currently executing guest code.
+ */
+void kvm_forward_trap_to_el1(struct kvm_vcpu *vcpu)
+{
+	unsigned long cpsr;
+	u32 esr = vcpu->arch.fault.esr_el2;
+	u32 esr_ec = (esr & ESR_ELx_EC_MASK) >> ESR_ELx_EC_SHIFT;
+
+	if (esr_ec == ESR_ELx_EC_SYS64) {
+		u64 exc_offset;
+
+		cpsr = *vcpu_cpsr(vcpu);
+		*vcpu_spsr(vcpu) = cpsr;
+		*vcpu_elr_el1(vcpu) = *vcpu_pc(vcpu);
+
+		*vcpu_cpsr(vcpu) = PSTATE_FAULT_BITS_64;
+
+		switch (cpsr & (PSR_MODE_MASK | PSR_MODE32_BIT)) {
+		case PSR_MODE_EL0t:
+			exc_offset = EL0_EXCEPT_SYNC_OFFSET_64;
+			break;
+		case PSR_MODE_EL1t:
+			exc_offset = EL1_EXCEPT_BAD_SYNC_OFFSET;
+			break;
+		case PSR_MODE_EL1h:
+			exc_offset = EL1_EXCEPT_SYNC_OFFSET;
+			break;
+		default:
+			exc_offset = EL0_EXCEPT_SYNC_OFFSET_32;
+		}
+
+		*vcpu_pc(vcpu) = vcpu_sys_reg(vcpu, VBAR_EL1) + exc_offset;
+
+		if (kvm_vcpu_trap_il_is32bit(vcpu))
+			esr |= ESR_ELx_IL;
+
+		vcpu_sys_reg(vcpu, ESR_EL1) = esr;
+	} else if (esr_ec == ESR_ELx_EC_CP15_32 ||
+		   esr_ec == ESR_ELx_EC_CP15_64) {
+		prepare_fault32(vcpu, COMPAT_PSR_MODE_UND, 4);
+	}
+}
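
As a usage sketch (hypothetical; access_example_reg and
guest_el0_access_allowed are made-up names, not part of this series), a
sys_reg access handler could hand such a trap to the guest's own EL1:

	/* Hypothetical predicate, stands in for a real permission check. */
	static bool guest_el0_access_allowed(struct kvm_vcpu *vcpu);

	static bool access_example_reg(struct kvm_vcpu *vcpu,
				       struct sys_reg_params *p,
				       const struct sys_reg_desc *r)
	{
		if (!guest_el0_access_allowed(vcpu)) {
			/*
			 * Let the guest's EL1 handle the access: the helper
			 * has already pointed the guest PC at the guest's
			 * own vector table, so return false so that the
			 * trapped instruction is not skipped.
			 */
			kvm_forward_trap_to_el1(vcpu);
			return false;
		}

		/* ... emulate the access here ... */
		return true;
	}

The return value follows the sys_reg handler convention that returning
true makes KVM skip the trapped instruction, which must not happen once
the PC has been redirected to the guest vector.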