From patchwork Tue Dec 22 08:08:10 2015
X-Patchwork-Submitter: Shannon Zhao
X-Patchwork-Id: 7902611
From: Shannon Zhao
Subject: [PATCH v8 15/20] KVM: ARM64: Add a helper to forward trap to guest EL1
Date: Tue, 22 Dec 2015 16:08:10 +0800
Message-ID: <1450771695-11948-16-git-send-email-zhaoshenglong@huawei.com>
In-Reply-To: <1450771695-11948-1-git-send-email-zhaoshenglong@huawei.com>
References: <1450771695-11948-1-git-send-email-zhaoshenglong@huawei.com>
X-Mailer: git-send-email 1.9.0.msysgit.0
Cc: wei@redhat.com, hangaohuai@huawei.com, kvm@vger.kernel.org,
    will.deacon@arm.com, peter.huangpeng@huawei.com, shannon.zhao@linaro.org,
    zhaoshenglong@huawei.com, linux-arm-kernel@lists.infradead.org,
    cov@codeaurora.org

From: Shannon Zhao

This helper forwards the trap caused by MRS/MSR accesses for AArch64 and by
MCR/MRC, MCRR/MRRC CP15 accesses for AArch32 to the guest EL1.
Signed-off-by: Shannon Zhao
---
 arch/arm64/include/asm/kvm_emulate.h |  1 +
 arch/arm64/kvm/inject_fault.c        | 52 +++++++++++++++++++++++++++++++++++-
 2 files changed, 52 insertions(+), 1 deletion(-)

diff --git a/arch/arm64/include/asm/kvm_emulate.h b/arch/arm64/include/asm/kvm_emulate.h
index 3066328..88b2958 100644
--- a/arch/arm64/include/asm/kvm_emulate.h
+++ b/arch/arm64/include/asm/kvm_emulate.h
@@ -36,6 +36,7 @@ unsigned long *vcpu_spsr32(const struct kvm_vcpu *vcpu);
 bool kvm_condition_valid32(const struct kvm_vcpu *vcpu);
 void kvm_skip_instr32(struct kvm_vcpu *vcpu, bool is_wide_instr);
 
+void kvm_forward_trap_to_el1(struct kvm_vcpu *vcpu);
 void kvm_inject_undefined(struct kvm_vcpu *vcpu);
 void kvm_inject_dabt(struct kvm_vcpu *vcpu, unsigned long addr);
 void kvm_inject_pabt(struct kvm_vcpu *vcpu, unsigned long addr);
diff --git a/arch/arm64/kvm/inject_fault.c b/arch/arm64/kvm/inject_fault.c
index 648112e..052ef25 100644
--- a/arch/arm64/kvm/inject_fault.c
+++ b/arch/arm64/kvm/inject_fault.c
@@ -27,7 +27,10 @@
 
 #define PSTATE_FAULT_BITS_64	(PSR_MODE_EL1h | PSR_A_BIT | PSR_F_BIT | \
 				 PSR_I_BIT | PSR_D_BIT)
-#define EL1_EXCEPT_SYNC_OFFSET	0x200
+#define EL1_EXCEPT_BAD_SYNC_OFFSET	0x0
+#define EL1_EXCEPT_SYNC_OFFSET		0x200
+#define EL0_EXCEPT_SYNC_OFFSET_64	0x400
+#define EL0_EXCEPT_SYNC_OFFSET_32	0x600
 
 static void prepare_fault32(struct kvm_vcpu *vcpu, u32 mode, u32 vect_offset)
 {
@@ -201,3 +204,50 @@ void kvm_inject_undefined(struct kvm_vcpu *vcpu)
 	else
 		inject_undef64(vcpu);
 }
+
+/**
+ * kvm_forward_trap_to_el1 - forward access trap to the guest EL1
+ *
+ * It is assumed that this code is called from the VCPU thread and that the
+ * VCPU therefore is not currently executing guest code.
+ */
+void kvm_forward_trap_to_el1(struct kvm_vcpu *vcpu)
+{
+	unsigned long cpsr;
+	u32 esr = vcpu->arch.fault.esr_el2;
+	u32 esr_ec = (esr & ESR_ELx_EC_MASK) >> ESR_ELx_EC_SHIFT;
+
+	if (esr_ec == ESR_ELx_EC_SYS64) {
+		u64 exc_offset;
+
+		cpsr = *vcpu_cpsr(vcpu);
+		*vcpu_spsr(vcpu) = cpsr;
+		*vcpu_elr_el1(vcpu) = *vcpu_pc(vcpu);
+
+		*vcpu_cpsr(vcpu) = PSTATE_FAULT_BITS_64;
+
+		switch (cpsr & (PSR_MODE_MASK | PSR_MODE32_BIT)) {
+		case PSR_MODE_EL0t:
+			exc_offset = EL0_EXCEPT_SYNC_OFFSET_64;
+			break;
+		case PSR_MODE_EL1t:
+			exc_offset = EL1_EXCEPT_BAD_SYNC_OFFSET;
+			break;
+		case PSR_MODE_EL1h:
+			exc_offset = EL1_EXCEPT_SYNC_OFFSET;
+			break;
+		default:
+			exc_offset = EL0_EXCEPT_SYNC_OFFSET_32;
+		}
+
+		*vcpu_pc(vcpu) = vcpu_sys_reg(vcpu, VBAR_EL1) + exc_offset;
+
+		if (kvm_vcpu_trap_il_is32bit(vcpu))
+			esr |= ESR_ELx_IL;
+
+		vcpu_sys_reg(vcpu, ESR_EL1) = esr;
+	} else if (esr_ec == ESR_ELx_EC_CP15_32 ||
+		   esr_ec == ESR_ELx_EC_CP15_64) {
+		prepare_fault32(vcpu, COMPAT_PSR_MODE_UND, 4);
+	}
+}
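
For illustration only (not part of this patch): a sys-reg access handler that
refuses to emulate a trapped register could bounce the trap into the guest's
own vector table with the new helper. The handler name deny_and_forward below
is hypothetical, and the sketch assumes the sys_regs.c infrastructure only
skips the trapped instruction when the handler returns true, so returning
false leaves the PC that kvm_forward_trap_to_el1() just redirected pointing at
the guest's VBAR_EL1 entry.

/*
 * Hypothetical caller sketch: deny the access and let the guest's own EL1
 * handle the trap instead of emulating the register in the host.
 */
static bool deny_and_forward(struct kvm_vcpu *vcpu,
			     struct sys_reg_params *p,
			     const struct sys_reg_desc *r)
{
	/* Point PC at VBAR_EL1 + offset and set up SPSR_EL1/ELR_EL1/ESR_EL1. */
	kvm_forward_trap_to_el1(vcpu);

	/* Do not skip the trapped instruction; the guest will re-handle it. */
	return false;
}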