| Message ID | 20190925111941.88103-4-maz@kernel.org (mailing list archive) |
|---|---|
| State | New, archived |
| Series | arm64: KVM: Add workaround for errata 1319367 and 1319537 |
Hi Marc,

On 25/09/2019 12:19, Marc Zyngier wrote:
> When erratum 1319367 is being worked around, special care must
> be taken not to allow the page table walker to populate TLBs
> while we have the stage-2 translation enabled (which would otherwise
> result in a bizarre mix of the host S1 and the guest S2).
>
> We enforce this by setting TCR_EL1.EPD{0,1} before restoring the S2
> configuration, and clear the same bits after having disabled S2.

Some comment nits...

> diff --git a/arch/arm64/kvm/hyp/tlb.c b/arch/arm64/kvm/hyp/tlb.c
> index eb0efc5557f3..4ef0bf0d76a6 100644
> --- a/arch/arm64/kvm/hyp/tlb.c
> +++ b/arch/arm64/kvm/hyp/tlb.c
> @@ -63,6 +63,22 @@ static void __hyp_text __tlb_switch_to_guest_vhe(struct kvm *kvm,
>  static void __hyp_text __tlb_switch_to_guest_nvhe(struct kvm *kvm,
>  					      struct tlb_inv_context *cxt)
>  {
> +	if (cpus_have_const_cap(ARM64_WORKAROUND_1319367)) {
> +		u64 val;
> +
> +		/*
> +		 * For CPUs that are affected by ARM 1319367, we need to
> +		 * avoid a host Stage-1 walk while we have the guest's
> +		 * Stage-2 set in the VTTBR in order to invalidate TLBs.

Isn't HCR_EL2.VM==0 for all this? I think it's the VMID that matters here:
| ... have the guest's VMID set in VTTBR ...
?

> +		 * We're guaranteed that the S1 MMU is enabled, so we can
> +		 * simply set the EPD bits to avoid any further TLB fill.
> +		 */
> +		val = cxt->tcr = read_sysreg_el1(SYS_TCR);
> +		val |= TCR_EPD1_MASK | TCR_EPD0_MASK;
> +		write_sysreg_el1(val, SYS_TCR);
> +		isb();
> +	}
> +
>  	__load_guest_stage2(kvm);
>  	isb();
>  }
> @@ -100,6 +116,13 @@ static void __hyp_text __tlb_switch_to_host_nvhe(struct kvm *kvm,
> 					     struct tlb_inv_context *cxt)
> {
> 	write_sysreg(0, vttbr_el2);
> +
> +	if (cpus_have_const_cap(ARM64_WORKAROUND_1319367)) {
> +		/* Ensure stage-2 is actually disabled */

| Ensure the host's VMID has been written
?

> +		isb();
> +		/* Restore the host's TCR_EL1 */
> +		write_sysreg_el1(cxt->tcr, SYS_TCR);
> +	}
> }

Thanks,

James
diff --git a/arch/arm64/kvm/hyp/tlb.c b/arch/arm64/kvm/hyp/tlb.c
index eb0efc5557f3..4ef0bf0d76a6 100644
--- a/arch/arm64/kvm/hyp/tlb.c
+++ b/arch/arm64/kvm/hyp/tlb.c
@@ -63,6 +63,22 @@ static void __hyp_text __tlb_switch_to_guest_vhe(struct kvm *kvm,
 static void __hyp_text __tlb_switch_to_guest_nvhe(struct kvm *kvm,
 					     struct tlb_inv_context *cxt)
 {
+	if (cpus_have_const_cap(ARM64_WORKAROUND_1319367)) {
+		u64 val;
+
+		/*
+		 * For CPUs that are affected by ARM 1319367, we need to
+		 * avoid a host Stage-1 walk while we have the guest's
+		 * Stage-2 set in the VTTBR in order to invalidate TLBs.
+		 * We're guaranteed that the S1 MMU is enabled, so we can
+		 * simply set the EPD bits to avoid any further TLB fill.
+		 */
+		val = cxt->tcr = read_sysreg_el1(SYS_TCR);
+		val |= TCR_EPD1_MASK | TCR_EPD0_MASK;
+		write_sysreg_el1(val, SYS_TCR);
+		isb();
+	}
+
 	__load_guest_stage2(kvm);
 	isb();
 }
@@ -100,6 +116,13 @@ static void __hyp_text __tlb_switch_to_host_nvhe(struct kvm *kvm,
 					    struct tlb_inv_context *cxt)
 {
 	write_sysreg(0, vttbr_el2);
+
+	if (cpus_have_const_cap(ARM64_WORKAROUND_1319367)) {
+		/* Ensure stage-2 is actually disabled */
+		isb();
+		/* Restore the host's TCR_EL1 */
+		write_sysreg_el1(cxt->tcr, SYS_TCR);
+	}
 }
 
 static void __hyp_text __tlb_switch_to_host(struct kvm *kvm,
When erratum 1319367 is being worked around, special care must
be taken not to allow the page table walker to populate TLBs
while we have the stage-2 translation enabled (which would otherwise
result in a bizarre mix of the host S1 and the guest S2).

We enforce this by setting TCR_EL1.EPD{0,1} before restoring the S2
configuration, and clear the same bits after having disabled S2.

Signed-off-by: Marc Zyngier <maz@kernel.org>
---
 arch/arm64/kvm/hyp/tlb.c | 23 +++++++++++++++++++++++
 1 file changed, 23 insertions(+)