From patchwork Tue Mar 25 18:48:26 2025
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Mark Brown
X-Patchwork-Id: 14029391
From: Mark Brown
Date: Tue, 25 Mar 2025 18:48:26 +0000
Subject: [PATCH 6.1 12/12] KVM: arm64: Eagerly switch ZCR_EL{1,2}
Message-Id: <20250325-stable-sve-6-1-v1-12-83259d427d84@kernel.org>
References: <20250325-stable-sve-6-1-v1-0-83259d427d84@kernel.org>
In-Reply-To: <20250325-stable-sve-6-1-v1-0-83259d427d84@kernel.org>
To: Catalin Marinas, Will Deacon, Marc Zyngier, James Morse,
 Suzuki K Poulose, Oliver Upton, Oleg Nesterov
Cc: linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org,
 kvmarm@lists.linux.dev, kvmarm@lists.cs.columbia.edu, Mark Brown,
 Mark Rutland, Fuad Tabba
X-Mailer: b4 0.15-dev-1b0d6

From: Mark Rutland

[ Upstream commit 59419f10045bc955d2229819c7cf7a8b0b9c5b59 ]

In non-protected KVM modes, while the guest FPSIMD/SVE/SME state is live
on the CPU, the host's active SVE VL may differ from the guest's maximum
SVE VL:

* For VHE hosts, when a VM uses NV, ZCR_EL2 contains a value constrained
  by the guest hypervisor, which may be less than or equal to that
  guest's maximum VL.

  Note: in this case the value of ZCR_EL1 is immaterial due to E2H.

* For nVHE/hVHE hosts, ZCR_EL1 contains a value written by the guest,
  which may be less than or greater than the guest's maximum VL.

  Note: in this case hyp code traps host SVE usage and lazily restores
  ZCR_EL2 to the host's maximum VL, which may be greater than the
  guest's maximum VL.

This can be the case between exiting a guest and kvm_arch_vcpu_put_fp().
If a softirq is taken during this period and the softirq handler tries
to use kernel-mode NEON, then the kernel will fail to save the guest's
FPSIMD/SVE state, and will pend a SIGKILL for the current thread.

This happens because kvm_arch_vcpu_ctxsync_fp() binds the guest's live
FPSIMD/SVE state with the guest's maximum SVE VL, and
fpsimd_save_user_state() verifies that the live SVE VL is as expected
before attempting to save the register state:

|	if (WARN_ON(sve_get_vl() != vl)) {
|		force_signal_inject(SIGKILL, SI_KERNEL, 0, 0);
|		return;
|	}

Fix this and make this a bit easier to reason about by always eagerly
switching ZCR_EL{1,2} at hyp during guest<->host transitions. With this
happening, there's no need to trap host SVE usage, and the nVHE/hVHE
__deactivate_cptr_traps() logic can be simplified to enable host access
to all present FPSIMD/SVE/SME features.

In protected nVHE/hVHE modes, the host's state is always saved/restored
by hyp, and the guest's state is saved prior to exit to the host, so
from the host's PoV the guest never has live FPSIMD/SVE/SME state, and
the host's ZCR_EL1 is never clobbered by hyp.
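
The VL mismatch above is easiest to see numerically. Below is a minimal
userspace sketch (illustration only, not kernel code) of the
VL <-> ZCR_ELx.LEN arithmetic involved; the helper mirrors the kernel's
sve_vq_from_vl(), and the concrete VLs are made-up example values:

| /* A vector quantum (VQ) is 16 bytes; ZCR_ELx.LEN is programmed as VQ - 1. */
| #include <stdio.h>
|
| #define SVE_VQ_BYTES	16
|
| static unsigned int sve_vq_from_vl(unsigned int vl)
| {
| 	return vl / SVE_VQ_BYTES;
| }
|
| int main(void)
| {
| 	/* Made-up example: 512-bit host maximum, 256-bit guest maximum. */
| 	unsigned int host_max_vl = 64, guest_max_vl = 32;
|
| 	printf("host:  VL %2u bytes -> ZCR_EL2.LEN %u\n",
| 	       host_max_vl, sve_vq_from_vl(host_max_vl) - 1);
| 	printf("guest: VL %2u bytes -> ZCR_EL2.LEN %u\n",
| 	       guest_max_vl, sve_vq_from_vl(guest_max_vl) - 1);
| 	return 0;
| }

With these example values the host and guest want LEN values of 3 and 1
respectively; between guest exit and kvm_arch_vcpu_put_fp() the live VL
can still be the guest's while the fpsimd code expects another, tripping
the WARN_ON()/SIGKILL path quoted above.
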
Fixes: 8c8010d69c132273 ("KVM: arm64: Save/restore SVE state for nVHE")
Fixes: 2e3cf82063a00ea0 ("KVM: arm64: nv: Ensure correct VL is loaded before saving SVE state")
Signed-off-by: Mark Rutland
Reviewed-by: Mark Brown
Tested-by: Mark Brown
Cc: Catalin Marinas
Cc: Fuad Tabba
Cc: Marc Zyngier
Cc: Oliver Upton
Cc: Will Deacon
Reviewed-by: Oliver Upton
Link: https://lore.kernel.org/r/20250210195226.1215254-9-mark.rutland@arm.com
Signed-off-by: Marc Zyngier
[ v6.6 lacks pKVM saving of host SVE state, pull in discovery of
  maximum host VL separately -- broonie ]
Signed-off-by: Mark Brown
---
 arch/arm64/include/asm/kvm_host.h       |  1 +
 arch/arm64/include/asm/kvm_hyp.h        |  1 +
 arch/arm64/kvm/fpsimd.c                 | 19 ++++++------
 arch/arm64/kvm/hyp/entry.S              |  5 +++
 arch/arm64/kvm/hyp/include/hyp/switch.h | 55 +++++++++++++++++++++++++++++++++
 arch/arm64/kvm/hyp/nvhe/hyp-main.c      |  8 ++---
 arch/arm64/kvm/hyp/nvhe/pkvm.c          |  2 ++
 arch/arm64/kvm/hyp/nvhe/switch.c        | 30 +++++++++++-------
 arch/arm64/kvm/hyp/vhe/switch.c         |  4 +++
 arch/arm64/kvm/reset.c                  |  3 ++
 10 files changed, 103 insertions(+), 25 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index c13a0d5907e8756cbbf458847403bab78de7947c..0935f9849510471ab29c1bcc5fa584557a852dfe 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -67,6 +67,7 @@ enum kvm_mode kvm_get_mode(void);
 DECLARE_STATIC_KEY_FALSE(userspace_irqchip_in_use);
 
 extern unsigned int kvm_sve_max_vl;
+extern unsigned int kvm_host_sve_max_vl;
 int kvm_arm_init_sve(void);
 
 u32 __attribute_const__ kvm_target_cpu(void);
diff --git a/arch/arm64/include/asm/kvm_hyp.h b/arch/arm64/include/asm/kvm_hyp.h
index aa7fa2a08f0604af5b25f2eb0f334f4a716b4431..1d0bb7624a1c073b5deffff7d851e581a6433c45 100644
--- a/arch/arm64/include/asm/kvm_hyp.h
+++ b/arch/arm64/include/asm/kvm_hyp.h
@@ -122,5 +122,6 @@ extern u64 kvm_nvhe_sym(id_aa64isar2_el1_sys_val);
 extern u64 kvm_nvhe_sym(id_aa64mmfr0_el1_sys_val);
 extern u64 kvm_nvhe_sym(id_aa64mmfr1_el1_sys_val);
 extern u64 kvm_nvhe_sym(id_aa64mmfr2_el1_sys_val);
+extern unsigned int kvm_nvhe_sym(kvm_host_sve_max_vl);
 
 #endif /* __ARM64_KVM_HYP_H__ */
diff --git a/arch/arm64/kvm/fpsimd.c b/arch/arm64/kvm/fpsimd.c
index df050e4d3562d2ef36b7c27602a6feaa431dfa93..3fd86b71ee379e6e4d9b53e0634bea6f37d3be99 100644
--- a/arch/arm64/kvm/fpsimd.c
+++ b/arch/arm64/kvm/fpsimd.c
@@ -148,15 +148,16 @@ void kvm_arch_vcpu_put_fp(struct kvm_vcpu *vcpu)
 	local_irq_save(flags);
 
 	if (vcpu->arch.fp_state == FP_STATE_GUEST_OWNED) {
-		if (vcpu_has_sve(vcpu)) {
-			__vcpu_sys_reg(vcpu, ZCR_EL1) = read_sysreg_el1(SYS_ZCR);
-
-			/* Restore the VL that was saved when bound to the CPU */
-			if (!has_vhe())
-				sve_cond_update_zcr_vq(vcpu_sve_max_vq(vcpu) - 1,
-						       SYS_ZCR_EL1);
-		}
-
+		/*
+		 * Flush (save and invalidate) the fpsimd/sve state so that if
+		 * the host tries to use fpsimd/sve, it's not using stale data
+		 * from the guest.
+		 *
+		 * Flushing the state sets the TIF_FOREIGN_FPSTATE bit for the
+		 * context unconditionally, in both nVHE and VHE. This allows
+		 * the kernel to restore the fpsimd/sve state, including ZCR_EL1
+		 * when needed.
+		 */
 		fpsimd_save_and_flush_cpu_state();
 	}
 
diff --git a/arch/arm64/kvm/hyp/entry.S b/arch/arm64/kvm/hyp/entry.S
index 435346ea1504e158f7877499514118d39400f247..d8c94c45cb2f2f815d0f5e9e58f9fd4e6eb572f2 100644
--- a/arch/arm64/kvm/hyp/entry.S
+++ b/arch/arm64/kvm/hyp/entry.S
@@ -44,6 +44,11 @@ alternative_if ARM64_HAS_RAS_EXTN
 alternative_else_nop_endif
 	mrs	x1, isr_el1
 	cbz	x1,  1f
+
+	// Ensure that __guest_enter() always provides a context
+	// synchronization event so that callers don't need ISBs for anything
+	// that would usually be synchronized by the ERET.
+	isb
 	mov	x0, #ARM_EXCEPTION_IRQ
 	ret
 
diff --git a/arch/arm64/kvm/hyp/include/hyp/switch.h b/arch/arm64/kvm/hyp/include/hyp/switch.h
index 0db90cb47308453caafa214c3ee337ce990a76e2..275176e61d748811e8e6b55b9756409c4bf2d719 100644
--- a/arch/arm64/kvm/hyp/include/hyp/switch.h
+++ b/arch/arm64/kvm/hyp/include/hyp/switch.h
@@ -167,6 +167,61 @@ static inline void __hyp_sve_restore_guest(struct kvm_vcpu *vcpu)
 	write_sysreg_el1(__vcpu_sys_reg(vcpu, ZCR_EL1), SYS_ZCR);
 }
 
+static inline void fpsimd_lazy_switch_to_guest(struct kvm_vcpu *vcpu)
+{
+	u64 zcr_el1, zcr_el2;
+
+	if (!guest_owns_fp_regs(vcpu))
+		return;
+
+	if (vcpu_has_sve(vcpu)) {
+		zcr_el2 = vcpu_sve_max_vq(vcpu) - 1;
+
+		write_sysreg_el2(zcr_el2, SYS_ZCR);
+
+		zcr_el1 = __vcpu_sys_reg(vcpu, ZCR_EL1);
+		write_sysreg_el1(zcr_el1, SYS_ZCR);
+	}
+}
+
+static inline void fpsimd_lazy_switch_to_host(struct kvm_vcpu *vcpu)
+{
+	u64 zcr_el1, zcr_el2;
+
+	if (!guest_owns_fp_regs(vcpu))
+		return;
+
+	/*
+	 * When the guest owns the FP regs, we know that guest+hyp traps for
+	 * any FPSIMD/SVE/SME features exposed to the guest have been disabled
+	 * by either fpsimd_lazy_switch_to_guest() or kvm_hyp_handle_fpsimd()
+	 * prior to __guest_entry(). As __guest_entry() guarantees a context
+	 * synchronization event, we don't need an ISB here to avoid taking
+	 * traps for anything that was exposed to the guest.
+	 */
+	if (vcpu_has_sve(vcpu)) {
+		zcr_el1 = read_sysreg_el1(SYS_ZCR);
+		__vcpu_sys_reg(vcpu, ZCR_EL1) = zcr_el1;
+
+		/*
+		 * The guest's state is always saved using the guest's max VL.
+		 * Ensure that the host has the guest's max VL active such that
+		 * the host can save the guest's state lazily, but don't
+		 * artificially restrict the host to the guest's max VL.
+		 */
+		if (has_vhe()) {
+			zcr_el2 = vcpu_sve_max_vq(vcpu) - 1;
+			write_sysreg_el2(zcr_el2, SYS_ZCR);
+		} else {
+			zcr_el2 = sve_vq_from_vl(kvm_host_sve_max_vl) - 1;
+			write_sysreg_el2(zcr_el2, SYS_ZCR);
+
+			zcr_el1 = vcpu_sve_max_vq(vcpu) - 1;
+			write_sysreg_el1(zcr_el1, SYS_ZCR);
+		}
+	}
+}
+
 /*
  * We trap the first access to the FP/SIMD to save the host context and
  * restore the guest context lazily.
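
Taken together, the two helpers above implement a small policy table for
the host-return path. The following is an illustrative restatement only
(plain C, with local variables standing in for the sysreg writes and
hypothetical names throughout), not part of the patch:

| #include <stdbool.h>
|
| struct zcr_lens { unsigned int el1_len, el2_len; };
|
| static struct zcr_lens host_return_policy(bool vhe, unsigned int guest_max_vq,
| 					   unsigned int host_max_vq)
| {
| 	struct zcr_lens z = { 0, 0 };
|
| 	if (vhe) {
| 		/* E2H: the host's EL1 accesses hit ZCR_EL2, so keeping the
| 		 * guest's max VL live lets the host lazily save the guest
| 		 * state at the right VL. */
| 		z.el2_len = guest_max_vq - 1;
| 	} else {
| 		/* nVHE/hVHE: don't clamp the host below its own maximum,
| 		 * but leave ZCR_EL1 at the guest's max VL for a lazy save. */
| 		z.el2_len = host_max_vq - 1;
| 		z.el1_len = guest_max_vq - 1;
| 	}
| 	return z;
| }
|
| int main(void)
| {
| 	/* 256-bit guest (VQ 2) on a 512-bit host (VQ 4). */
| 	struct zcr_lens z = host_return_policy(false, 2, 4);
| 	return (z.el1_len == 1 && z.el2_len == 3) ? 0 : 1;
| }

The real helpers additionally save the guest's live ZCR_EL1 into the
vcpu before reprogramming, and perform the writes with
write_sysreg_el{1,2}(..., SYS_ZCR).
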
diff --git a/arch/arm64/kvm/hyp/nvhe/hyp-main.c b/arch/arm64/kvm/hyp/nvhe/hyp-main.c
index 3cea4b6ac23ec114c037348fee1363ad25edeff6..b183cc866404633cd17f6603a5d4626574e56021 100644
--- a/arch/arm64/kvm/hyp/nvhe/hyp-main.c
+++ b/arch/arm64/kvm/hyp/nvhe/hyp-main.c
@@ -5,6 +5,7 @@
  */
 
 #include <hyp/adjust_pc.h>
+#include <hyp/switch.h>
 
 #include <asm/pgtable-types.h>
 #include <asm/kvm_asm.h>
@@ -25,7 +26,9 @@ static void handle___kvm_vcpu_run(struct kvm_cpu_context *host_ctxt)
 {
 	DECLARE_REG(struct kvm_vcpu *, vcpu, host_ctxt, 1);
 
+	fpsimd_lazy_switch_to_guest(kern_hyp_va(vcpu));
 	cpu_reg(host_ctxt, 1) = __kvm_vcpu_run(kern_hyp_va(vcpu));
+	fpsimd_lazy_switch_to_host(kern_hyp_va(vcpu));
 }
 
 static void handle___kvm_adjust_pc(struct kvm_cpu_context *host_ctxt)
@@ -285,11 +288,6 @@ void handle_trap(struct kvm_cpu_context *host_ctxt)
 	case ESR_ELx_EC_SMC64:
 		handle_host_smc(host_ctxt);
 		break;
-	case ESR_ELx_EC_SVE:
-		sysreg_clear_set(cptr_el2, CPTR_EL2_TZ, 0);
-		isb();
-		sve_cond_update_zcr_vq(ZCR_ELx_LEN_MASK, SYS_ZCR_EL2);
-		break;
 	case ESR_ELx_EC_IABT_LOW:
 	case ESR_ELx_EC_DABT_LOW:
 		handle_host_mem_abort(host_ctxt);
diff --git a/arch/arm64/kvm/hyp/nvhe/pkvm.c b/arch/arm64/kvm/hyp/nvhe/pkvm.c
index 93586bf80ec9f2bab44625253fed71654e22c87c..6042cdd3d887709888d997dd6d4ff888fdf9f715 100644
--- a/arch/arm64/kvm/hyp/nvhe/pkvm.c
+++ b/arch/arm64/kvm/hyp/nvhe/pkvm.c
@@ -9,6 +9,8 @@
 #include <nvhe/fixed_config.h>
 #include <nvhe/trap_handler.h>
 
+unsigned int kvm_host_sve_max_vl;
+
 /*
  * Set trap register values based on features in ID_AA64PFR0.
  */
diff --git a/arch/arm64/kvm/hyp/nvhe/switch.c b/arch/arm64/kvm/hyp/nvhe/switch.c
index 58171926f9ba23844997ff02406518a312eb8bb7..47c7f3a675aec8c140aec24732fa46ebbadbf2af 100644
--- a/arch/arm64/kvm/hyp/nvhe/switch.c
+++ b/arch/arm64/kvm/hyp/nvhe/switch.c
@@ -40,6 +40,9 @@ static void __activate_cptr_traps(struct kvm_vcpu *vcpu)
 {
 	u64 val = CPTR_EL2_TAM;	/* Same bit irrespective of E2H */
 
+	if (!guest_owns_fp_regs(vcpu))
+		__activate_traps_fpsimd32(vcpu);
+
 	/* !hVHE case upstream */
 	if (1) {
 		val |= CPTR_EL2_TTA | CPTR_NVHE_EL2_RES1;
@@ -55,12 +58,24 @@ static void __activate_cptr_traps(struct kvm_vcpu *vcpu)
 
 		if (!guest_owns_fp_regs(vcpu))
 			val |= CPTR_EL2_TFP;
+
+		write_sysreg(val, cptr_el2);
 	}
+}
 
-	if (!guest_owns_fp_regs(vcpu))
-		__activate_traps_fpsimd32(vcpu);
+static void __deactivate_cptr_traps(struct kvm_vcpu *vcpu)
+{
+	/* !hVHE case upstream */
+	if (1) {
+		u64 val = CPTR_NVHE_EL2_RES1;
 
-	write_sysreg(val, cptr_el2);
+		if (!cpus_have_final_cap(ARM64_SVE))
+			val |= CPTR_EL2_TZ;
+		if (!cpus_have_final_cap(ARM64_SME))
+			val |= CPTR_EL2_TSM;
+
+		write_sysreg(val, cptr_el2);
+	}
 }
 
 static void __activate_traps(struct kvm_vcpu *vcpu)
@@ -89,7 +104,6 @@ static void __activate_traps(struct kvm_vcpu *vcpu)
 
 static void __deactivate_traps(struct kvm_vcpu *vcpu)
 {
 	extern char __kvm_hyp_host_vector[];
-	u64 cptr;
 
 	___deactivate_traps(vcpu);
@@ -114,13 +128,7 @@ static void __deactivate_traps(struct kvm_vcpu *vcpu)
 
 	write_sysreg(this_cpu_ptr(&kvm_init_params)->hcr_el2, hcr_el2);
 
-	cptr = CPTR_EL2_DEFAULT;
-	if (vcpu_has_sve(vcpu) && (vcpu->arch.fp_state == FP_STATE_GUEST_OWNED))
-		cptr |= CPTR_EL2_TZ;
-	if (cpus_have_final_cap(ARM64_SME))
-		cptr &= ~CPTR_EL2_TSM;
-
-	write_sysreg(cptr, cptr_el2);
+	__deactivate_cptr_traps(vcpu);
 
 	write_sysreg(__kvm_hyp_host_vector, vbar_el2);
 }
diff --git a/arch/arm64/kvm/hyp/vhe/switch.c b/arch/arm64/kvm/hyp/vhe/switch.c
index f24569ac26c22978fc991221a9f5435e64cf4db2..179152bb9e4286c4398c0439feede87e755bc26f 100644
--- a/arch/arm64/kvm/hyp/vhe/switch.c
+++ b/arch/arm64/kvm/hyp/vhe/switch.c
@@ -134,6 +134,8 @@ static int __kvm_vcpu_run_vhe(struct kvm_vcpu *vcpu)
 
 	sysreg_save_host_state_vhe(host_ctxt);
 
+	fpsimd_lazy_switch_to_guest(vcpu);
+
 	/*
 	 * ARM erratum 1165522 requires us to configure both stage 1 and
 	 * stage 2 translation for the guest context before we clear
@@ -164,6 +166,8 @@ static int __kvm_vcpu_run_vhe(struct kvm_vcpu *vcpu)
 
 	__deactivate_traps(vcpu);
 
+	fpsimd_lazy_switch_to_host(vcpu);
+
 	sysreg_restore_host_state_vhe(host_ctxt);
 
 	if (vcpu->arch.fp_state == FP_STATE_GUEST_OWNED)
diff --git a/arch/arm64/kvm/reset.c b/arch/arm64/kvm/reset.c
index f9d070473614e5d8d52c0ff9b90f48c7a9100791..54e00ee631a052fa60979f70ebab47f1faeacbde 100644
--- a/arch/arm64/kvm/reset.c
+++ b/arch/arm64/kvm/reset.c
@@ -42,11 +42,14 @@ static u32 kvm_ipa_limit;
 				 PSR_AA32_I_BIT | PSR_AA32_F_BIT)
 
 unsigned int kvm_sve_max_vl;
+unsigned int kvm_host_sve_max_vl;
 
 int kvm_arm_init_sve(void)
 {
 	if (system_supports_sve()) {
 		kvm_sve_max_vl = sve_max_virtualisable_vl();
+		kvm_host_sve_max_vl = sve_max_vl();
+		kvm_nvhe_sym(kvm_host_sve_max_vl) = kvm_host_sve_max_vl;
 
 		/*
 		 * The get_sve_reg()/set_sve_reg() ioctl interface will need
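
A note on the kvm_nvhe_sym() accessor used in the reset.c hunk: the nVHE
hyp object is linked separately from the rest of the kernel with its
symbols prefixed, so host-side init code reaches the hyp copy of
kvm_host_sve_max_vl via token pasting. A minimal standalone sketch of
the mechanism follows; the macro definition matches the real one in
arch/arm64/include/asm/kvm_asm.h, while everything else is illustrative:

| #include <stdio.h>
|
| /* Hyp symbols get a __kvm_nvhe_ prefix. */
| #define kvm_nvhe_sym(sym)	__kvm_nvhe_##sym
|
| /* Stand-ins for the two images' separate copies of the variable. */
| unsigned int kvm_host_sve_max_vl;			/* kernel copy */
| unsigned int kvm_nvhe_sym(kvm_host_sve_max_vl);	/* hyp copy */
|
| int main(void)
| {
| 	unsigned int max_vl = 64;	/* pretend sve_max_vl() returned 64 */
|
| 	/* Mirrors the reset.c hunk: discover once, publish to both images. */
| 	kvm_host_sve_max_vl = max_vl;
| 	kvm_nvhe_sym(kvm_host_sve_max_vl) = kvm_host_sve_max_vl;
|
| 	printf("host copy %u, hyp copy %u\n", kvm_host_sve_max_vl,
| 	       __kvm_nvhe_kvm_host_sve_max_vl);
| 	return 0;
| }

This is why the hunk assigns the discovered host maximum VL twice: once
to the kernel-image variable and once, through kvm_nvhe_sym(), to the
hyp-image variable declared in the kvm_hyp.h hunk earlier in the patch.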