| Message ID | 20230918065740.3670662-9-ryan.roberts@arm.com (mailing list archive) |
|---|---|
| State | New, archived |
| Series | KVM: arm64: Support FEAT_LPA2 at hyp s1 and vm s2 |
'Assembly' is overloaded in this context, I had to reread the shortlog
to make sense of it. Maybe:

  KVM: arm64: Prepare TCR_EL2.PS in cpu_prepare_hyp_mode()

On Mon, Sep 18, 2023 at 07:57:35AM +0100, Ryan Roberts wrote:

[...]

> 	tcr &= ~TCR_T0SZ_MASK;
> 	tcr |= TCR_T0SZ(hyp_va_bits);
> +	tcr |= kvm_get_parange(mmfr0) << TCR_EL2_PS_SHIFT;

nit: FIELD_PREP() is slightly more defensive than shifting an unmasked
value.
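For illustration, the suggested FIELD_PREP() form of that line might look
something like the sketch below. It assumes TCR_EL2_PS_MASK is defined as
the mask of the PS field at TCR_EL2_PS_SHIFT; FIELD_PREP() (from
<linux/bitfield.h>) masks the shifted value to the field, so stray high
bits in the parange value cannot leak into neighbouring TCR_EL2 bits:

	/*
	 * Sketch only: FIELD_PREP(mask, val) shifts val into position and
	 * masks the result to 'mask', so an out-of-range value cannot
	 * corrupt adjacent TCR_EL2 fields. Assumes TCR_EL2_PS_MASK is the
	 * PS field mask corresponding to TCR_EL2_PS_SHIFT.
	 */
	tcr &= ~TCR_T0SZ_MASK;
	tcr |= TCR_T0SZ(hyp_va_bits);
	tcr |= FIELD_PREP(TCR_EL2_PS_MASK, kvm_get_parange(mmfr0));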
On 27/09/2023 08:20, Oliver Upton wrote:
> 'Assembly' is overloaded in this context, I had to reread the shortlog
> to make sense of it. Maybe:
>
>   KVM: arm64: Prepare TCR_EL2.PS in cpu_prepare_hyp_mode()

Yep, will fix it in the next version.

> On Mon, Sep 18, 2023 at 07:57:35AM +0100, Ryan Roberts wrote:
>
> [...]
>
>> 	tcr &= ~TCR_T0SZ_MASK;
>> 	tcr |= TCR_T0SZ(hyp_va_bits);
>> +	tcr |= kvm_get_parange(mmfr0) << TCR_EL2_PS_SHIFT;
>
> nit: FIELD_PREP() is slightly more defensive than shifting an unmasked
> value.

Yep, good idea, will fix it in the next version.

Do let me know if you have any other comments across the series; if I
don't hear anything further, I'll aim to get a new version with your
proposed changes posted sometime next week.

Thanks,
Ryan
diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
index 73cc67c2a8a7..fa30522cc935 100644
--- a/arch/arm64/kvm/arm.c
+++ b/arch/arm64/kvm/arm.c
@@ -1726,6 +1726,7 @@ static void __init cpu_prepare_hyp_mode(int cpu, u32 hyp_va_bits)
 {
 	struct kvm_nvhe_init_params *params = per_cpu_ptr_nvhe_sym(kvm_init_params, cpu);
 	unsigned long tcr;
+	u64 mmfr0 = read_sanitised_ftr_reg(SYS_ID_AA64MMFR0_EL1);
 
 	/*
 	 * Calculate the raw per-cpu offset without a translation from the
@@ -1747,6 +1748,7 @@ static void __init cpu_prepare_hyp_mode(int cpu, u32 hyp_va_bits)
 	}
 	tcr &= ~TCR_T0SZ_MASK;
 	tcr |= TCR_T0SZ(hyp_va_bits);
+	tcr |= kvm_get_parange(mmfr0) << TCR_EL2_PS_SHIFT;
 	if (system_supports_lpa2())
 		tcr |= TCR_EL2_DS;
 	params->tcr_el2 = tcr;
diff --git a/arch/arm64/kvm/hyp/nvhe/hyp-init.S b/arch/arm64/kvm/hyp/nvhe/hyp-init.S
index 90fade1b032e..a0d7bc384404 100644
--- a/arch/arm64/kvm/hyp/nvhe/hyp-init.S
+++ b/arch/arm64/kvm/hyp/nvhe/hyp-init.S
@@ -121,11 +121,7 @@ alternative_if ARM64_HAS_CNP
 alternative_else_nop_endif
 	msr	ttbr0_el2, x2
 
-	/*
-	 * Set the PS bits in TCR_EL2.
-	 */
 	ldr	x0, [x0, #NVHE_INIT_TCR_EL2]
-	tcr_compute_pa_size x0, #TCR_EL2_PS_SHIFT, x1, x2
 	msr	tcr_el2, x0
 
 	isb
With the addition of LPA2 support in the hypervisor, the PA size
supported by the HW must be capped with a runtime decision, rather than
simply using a compile-time decision based on PA_BITS. For example, on
a system that advertises 52 bit PA but does not support FEAT_LPA2, a
4KB or 16KB kernel compiled with LPA2 support must still limit the PA
size to 48 bits.

Therefore, move the insertion of the PS field into TCR_EL2 out of
__kvm_hyp_init assembly code and instead do it in
cpu_prepare_hyp_mode(), where the rest of TCR_EL2 is assembled. This
allows us to determine PS with kvm_get_parange(), which has the
appropriate logic to enforce the above requirement (and the PS field of
VTCR_EL2 is already populated this way).

Signed-off-by: Ryan Roberts <ryan.roberts@arm.com>
---
 arch/arm64/kvm/arm.c               | 2 ++
 arch/arm64/kvm/hyp/nvhe/hyp-init.S | 4 ----
 2 files changed, 2 insertions(+), 4 deletions(-)
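For context, the runtime capping that kvm_get_parange() performs is
roughly the shape sketched below. This is a simplified illustration based
on the description above, not a verbatim copy of the kernel helper;
kvm_get_parange_max() is assumed to return the largest PARange encoding
usable with the configured page size and LPA2 support:

	static inline u32 kvm_get_parange(u64 mmfr0)
	{
		/*
		 * Start from the PA range the HW advertises in
		 * ID_AA64MMFR0_EL1 and clamp it to what this kernel
		 * configuration can use: e.g. a 4KB/16KB kernel on HW
		 * without FEAT_LPA2 must cap a 52-bit advertisement at
		 * 48 bits.
		 */
		u32 parange = cpuid_feature_extract_unsigned_field(mmfr0,
					ID_AA64MMFR0_EL1_PARANGE_SHIFT);

		if (parange > kvm_get_parange_max())
			parange = kvm_get_parange_max();

		return parange;
	}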