From patchwork Thu May 19 13:41:55 2022
X-Patchwork-Submitter: Will Deacon
X-Patchwork-Id: 12855070
From: Will Deacon
To: kvmarm@lists.cs.columbia.edu
Cc: Will Deacon, Ard Biesheuvel, Sean Christopherson, Alexandru Elisei,
    Andy Lutomirski, Catalin Marinas, James Morse, Chao Peng,
    Quentin Perret, Suzuki K Poulose, Michael Roth, Mark Rutland,
    Fuad Tabba, Oliver Upton, Marc Zyngier, kernel-team@android.com,
    kvm@vger.kernel.org, linux-arm-kernel@lists.infradead.org
Subject: [PATCH 80/89] KVM: arm64: Refactor enter_exception64()
Date: Thu, 19 May 2022 14:41:55 +0100
Message-Id: <20220519134204.5379-81-will@kernel.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20220519134204.5379-1-will@kernel.org>
References: <20220519134204.5379-1-will@kernel.org>

From: Quentin Perret

In order to simplify the injection of exceptions in the host in pkvm
context, let's factor out of enter_exception64() the code that computes
the exception offset from VBAR_EL1 and the code that computes the new
CPSR.
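For illustration, the vector-offset selection that moves into
get_except64_offset() can be exercised standalone. The sketch below is
not the kernel code itself: the PSR_* and *_VECTOR constants are local
stand-ins mirroring the arm64 definitions, and the branches elided
between the two hunks of the diff are assumed to follow the existing
enter_exception64() logic.

#include <stdint.h>
#include <stdio.h>

/* Stand-ins mirroring the AArch64 vector table layout (illustrative). */
#define PSR_MODE_EL0t			0x00000000UL
#define PSR_MODE_EL1h			0x00000005UL
#define PSR_MODE_MASK			0x0000000fUL
#define PSR_MODE32_BIT			0x00000010UL
#define PSR_MODE_THREAD_BIT		0x00000001UL

#define CURRENT_EL_SP_EL0_VECTOR	0x000
#define CURRENT_EL_SP_ELx_VECTOR	0x200
#define LOWER_EL_AArch64_VECTOR		0x400
#define LOWER_EL_AArch32_VECTOR		0x600

enum exception_type {
	except_type_sync	= 0,
	except_type_irq		= 0x80,
	except_type_fiq		= 0x100,
	except_type_serror	= 0x180,
};

static unsigned long get_except64_offset(unsigned long psr,
					 unsigned long target_mode,
					 enum exception_type type)
{
	uint64_t mode = psr & (PSR_MODE_MASK | PSR_MODE32_BIT);
	uint64_t exc_offset;

	if (mode == target_mode)
		/* Taken from the target EL itself: SP_ELx vectors. */
		exc_offset = CURRENT_EL_SP_ELx_VECTOR;
	else if ((mode | PSR_MODE_THREAD_BIT) == target_mode)
		/* Same EL, but running on the thread (SP_EL0) stack. */
		exc_offset = CURRENT_EL_SP_EL0_VECTOR;
	else if (mode == PSR_MODE_EL0t)	/* assumed: elided in the hunk */
		exc_offset = LOWER_EL_AArch64_VECTOR;
	else
		exc_offset = LOWER_EL_AArch32_VECTOR;

	return exc_offset + type;
}

int main(void)
{
	/* A sync exception from AArch64 EL0 to EL1h: 0x400 + 0x0 = 0x400. */
	printf("0x%lx\n", get_except64_offset(PSR_MODE_EL0t, PSR_MODE_EL1h,
					      except_type_sync));
	return 0;
}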
Signed-off-by: Quentin Perret
---
 arch/arm64/include/asm/kvm_emulate.h |  5 ++
 arch/arm64/kvm/hyp/exception.c       | 89 ++++++++++++++++------------
 2 files changed, 57 insertions(+), 37 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_emulate.h b/arch/arm64/include/asm/kvm_emulate.h
index 2a79c861b8e0..8b6c391bbee8 100644
--- a/arch/arm64/include/asm/kvm_emulate.h
+++ b/arch/arm64/include/asm/kvm_emulate.h
@@ -41,6 +41,11 @@ void kvm_inject_vabt(struct kvm_vcpu *vcpu);
 void kvm_inject_dabt(struct kvm_vcpu *vcpu, unsigned long addr);
 void kvm_inject_pabt(struct kvm_vcpu *vcpu, unsigned long addr);
 
+unsigned long get_except64_offset(unsigned long psr, unsigned long target_mode,
+				  enum exception_type type);
+unsigned long get_except64_cpsr(unsigned long old, bool has_mte,
+				unsigned long sctlr, unsigned long mode);
+
 void kvm_vcpu_wfi(struct kvm_vcpu *vcpu);
 
 static inline int kvm_vcpu_enable_ptrauth(struct kvm_vcpu *vcpu)
diff --git a/arch/arm64/kvm/hyp/exception.c b/arch/arm64/kvm/hyp/exception.c
index c5d009715402..14a80b0e2f91 100644
--- a/arch/arm64/kvm/hyp/exception.c
+++ b/arch/arm64/kvm/hyp/exception.c
@@ -60,31 +60,12 @@ static void __vcpu_write_spsr_und(struct kvm_vcpu *vcpu, u64 val)
 	vcpu->arch.ctxt.spsr_und = val;
 }
 
-/*
- * This performs the exception entry at a given EL (@target_mode), stashing PC
- * and PSTATE into ELR and SPSR respectively, and compute the new PC/PSTATE.
- * The EL passed to this function *must* be a non-secure, privileged mode with
- * bit 0 being set (PSTATE.SP == 1).
- *
- * When an exception is taken, most PSTATE fields are left unchanged in the
- * handler. However, some are explicitly overridden (e.g. M[4:0]). Luckily all
- * of the inherited bits have the same position in the AArch64/AArch32 SPSR_ELx
- * layouts, so we don't need to shuffle these for exceptions from AArch32 EL0.
- *
- * For the SPSR_ELx layout for AArch64, see ARM DDI 0487E.a page C5-429.
- * For the SPSR_ELx layout for AArch32, see ARM DDI 0487E.a page C5-426.
- *
- * Here we manipulate the fields in order of the AArch64 SPSR_ELx layout, from
- * MSB to LSB.
- */
-static void enter_exception64(struct kvm_vcpu *vcpu, unsigned long target_mode,
-			      enum exception_type type)
+unsigned long get_except64_offset(unsigned long psr, unsigned long target_mode,
+				  enum exception_type type)
 {
-	unsigned long sctlr, vbar, old, new, mode;
+	u64 mode = psr & (PSR_MODE_MASK | PSR_MODE32_BIT);
 	u64 exc_offset;
 
-	mode = *vcpu_cpsr(vcpu) & (PSR_MODE_MASK | PSR_MODE32_BIT);
-
 	if (mode == target_mode)
 		exc_offset = CURRENT_EL_SP_ELx_VECTOR;
 	else if ((mode | PSR_MODE_THREAD_BIT) == target_mode)
@@ -94,28 +75,32 @@ static void enter_exception64(struct kvm_vcpu *vcpu, unsigned long target_mode,
 	else
 		exc_offset = LOWER_EL_AArch32_VECTOR;
 
-	switch (target_mode) {
-	case PSR_MODE_EL1h:
-		vbar = __vcpu_read_sys_reg(vcpu, VBAR_EL1);
-		sctlr = __vcpu_read_sys_reg(vcpu, SCTLR_EL1);
-		__vcpu_write_sys_reg(vcpu, *vcpu_pc(vcpu), ELR_EL1);
-		break;
-	default:
-		/* Don't do that */
-		BUG();
-	}
-
-	*vcpu_pc(vcpu) = vbar + exc_offset + type;
+	return exc_offset + type;
+}
 
-	old = *vcpu_cpsr(vcpu);
-	new = 0;
+/*
+ * When an exception is taken, most PSTATE fields are left unchanged in the
+ * handler. However, some are explicitly overridden (e.g. M[4:0]). Luckily all
+ * of the inherited bits have the same position in the AArch64/AArch32 SPSR_ELx
+ * layouts, so we don't need to shuffle these for exceptions from AArch32 EL0.
+ *
+ * For the SPSR_ELx layout for AArch64, see ARM DDI 0487E.a page C5-429.
+ * For the SPSR_ELx layout for AArch32, see ARM DDI 0487E.a page C5-426.
+ *
+ * Here we manipulate the fields in order of the AArch64 SPSR_ELx layout, from
+ * MSB to LSB.
+ */
+unsigned long get_except64_cpsr(unsigned long old, bool has_mte,
+				unsigned long sctlr, unsigned long target_mode)
+{
+	u64 new = 0;
 
 	new |= (old & PSR_N_BIT);
 	new |= (old & PSR_Z_BIT);
 	new |= (old & PSR_C_BIT);
 	new |= (old & PSR_V_BIT);
 
-	if (kvm_has_mte(vcpu->kvm))
+	if (has_mte)
 		new |= PSR_TCO_BIT;
 
 	new |= (old & PSR_DIT_BIT);
@@ -151,6 +136,36 @@ static void enter_exception64(struct kvm_vcpu *vcpu, unsigned long target_mode,
 
 	new |= target_mode;
 
+	return new;
+}
+
+/*
+ * This performs the exception entry at a given EL (@target_mode), stashing PC
+ * and PSTATE into ELR and SPSR respectively, and compute the new PC/PSTATE.
+ * The EL passed to this function *must* be a non-secure, privileged mode with
+ * bit 0 being set (PSTATE.SP == 1).
+ */
+static void enter_exception64(struct kvm_vcpu *vcpu, unsigned long target_mode,
+			      enum exception_type type)
+{
+	u64 offset = get_except64_offset(*vcpu_cpsr(vcpu), target_mode, type);
+	unsigned long sctlr, vbar, old, new;
+
+	switch (target_mode) {
+	case PSR_MODE_EL1h:
+		vbar = __vcpu_read_sys_reg(vcpu, VBAR_EL1);
+		sctlr = __vcpu_read_sys_reg(vcpu, SCTLR_EL1);
+		__vcpu_write_sys_reg(vcpu, *vcpu_pc(vcpu), ELR_EL1);
+		break;
+	default:
+		/* Don't do that */
+		BUG();
+	}
+
+	*vcpu_pc(vcpu) = vbar + offset;
+
+	old = *vcpu_cpsr(vcpu);
+	new = get_except64_cpsr(old, kvm_has_mte(vcpu->kvm), sctlr, target_mode);
 	*vcpu_cpsr(vcpu) = new;
 	__vcpu_write_spsr(vcpu, old);
 }
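As a usage note: because the new helpers take plain scalars rather than a
struct kvm_vcpu, a caller that only has saved register values (such as
the pkvm host exception-injection path this change prepares for) can
drive them directly. The sketch below is hypothetical, not code from
this series; the pointer-based signature is an assumption used purely to
show the dataflow, and only get_except64_offset() and
get_except64_cpsr() come from this patch.

/*
 * Hypothetical caller sketch: mirror what enter_exception64() does,
 * but against caller-provided state instead of a vcpu.
 */
static void inject_el1h_sync(unsigned long *pc, unsigned long *cpsr,
			     unsigned long *elr, unsigned long *spsr,
			     unsigned long vbar, unsigned long sctlr,
			     bool has_mte)
{
	unsigned long old = *cpsr;

	*elr = *pc;	/* stash the return PC, as ELR_EL1 would be */
	*spsr = old;	/* stash the old PSTATE, as SPSR_EL1 would be */

	*pc = vbar + get_except64_offset(old, PSR_MODE_EL1h, except_type_sync);
	*cpsr = get_except64_cpsr(old, has_mte, sctlr, PSR_MODE_EL1h);
}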