Message ID | 20200211174938.27809-32-maz@kernel.org (mailing list archive) |
---|---|
State | New, archived |
Series | KVM: arm64: ARMv8.3/8.4 Nested Virtualization support |
On Tue, Feb 11, 2020 at 05:48:35PM +0000, Marc Zyngier wrote:
> From: Christoffer Dall <christoffer.dall@linaro.org>
>
> So far we were flushing almost the entire universe whenever a VM would
> load/unload the SCTLR_EL1 and the two versions of that register had
> different MMU enabled settings. This turned out to be so slow that it
> prevented forward progress for a nested VM, because a scheduler timer
> tick interrupt would always be pending when we reached the nested VM.
>
> To avoid this problem, we consider the SCTLR_EL2 when evaluating if
> caches are on or off when entering virtual EL2 (because this is the
> value that we end up shadowing onto the hardware EL1 register).
>
> Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
> Signed-off-by: Jintack Lim <jintack.lim@linaro.org>
> Signed-off-by: Marc Zyngier <maz@kernel.org>
> ---
>  arch/arm64/include/asm/kvm_mmu.h | 6 +++++-
>  1 file changed, 5 insertions(+), 1 deletion(-)
>
> diff --git a/arch/arm64/include/asm/kvm_mmu.h b/arch/arm64/include/asm/kvm_mmu.h
> index ee47f7637f28..ec4de0613e7c 100644
> --- a/arch/arm64/include/asm/kvm_mmu.h
> +++ b/arch/arm64/include/asm/kvm_mmu.h
> @@ -88,6 +88,7 @@ alternative_cb_end
>  #include <asm/cacheflush.h>
>  #include <asm/mmu_context.h>
>  #include <asm/pgtable.h>
> +#include <asm/kvm_emulate.h>
>
>  void kvm_update_va_mask(struct alt_instr *alt,
>  			__le32 *origptr, __le32 *updptr, int nr_inst);
> @@ -305,7 +306,10 @@ struct kvm;
>
>  static inline bool vcpu_has_cache_enabled(struct kvm_vcpu *vcpu)
>  {
> -	return (vcpu_read_sys_reg(vcpu, SCTLR_EL1) & 0b101) == 0b101;
> +	if (vcpu_mode_el2(vcpu))
> +		return (__vcpu_sys_reg(vcpu, SCTLR_EL2) & 0b101) == 0b101;
> +	else
> +		return (vcpu_read_sys_reg(vcpu, SCTLR_EL1) & 0b101) == 0b101;
>  }

How about:

static bool vcpu_has_cache_enabled(struct kvm_vcpu *vcpu)
{
	unsigned long cm = SCTLR_ELx_C | SCTLR_ELx_M;
	unsigned long sctlr;

	if (vcpu_mode_el2(vcpu))
		sctlr = __vcpu_sys_reg(vcpu, SCTLR_EL2);
	else
		sctlr = vcpu_read_sys_reg(vcpu, SCTLR_EL1);

	return (sctlr & cm) == cm;
}

... to avoid duplication and make it clearer which fields we're accessing.

Thanks,
Mark.

>
>  static inline void __clean_dcache_guest_page(kvm_pfn_t pfn, unsigned long size)
> --
> 2.20.1
>
> _______________________________________________
> kvmarm mailing list
> kvmarm@lists.cs.columbia.edu
> https://lists.cs.columbia.edu/mailman/listinfo/kvmarm
diff --git a/arch/arm64/include/asm/kvm_mmu.h b/arch/arm64/include/asm/kvm_mmu.h
index ee47f7637f28..ec4de0613e7c 100644
--- a/arch/arm64/include/asm/kvm_mmu.h
+++ b/arch/arm64/include/asm/kvm_mmu.h
@@ -88,6 +88,7 @@ alternative_cb_end
 #include <asm/cacheflush.h>
 #include <asm/mmu_context.h>
 #include <asm/pgtable.h>
+#include <asm/kvm_emulate.h>
 
 void kvm_update_va_mask(struct alt_instr *alt,
 			__le32 *origptr, __le32 *updptr, int nr_inst);
@@ -305,7 +306,10 @@ struct kvm;
 
 static inline bool vcpu_has_cache_enabled(struct kvm_vcpu *vcpu)
 {
-	return (vcpu_read_sys_reg(vcpu, SCTLR_EL1) & 0b101) == 0b101;
+	if (vcpu_mode_el2(vcpu))
+		return (__vcpu_sys_reg(vcpu, SCTLR_EL2) & 0b101) == 0b101;
+	else
+		return (vcpu_read_sys_reg(vcpu, SCTLR_EL1) & 0b101) == 0b101;
 }
 
 static inline void __clean_dcache_guest_page(kvm_pfn_t pfn, unsigned long size)
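For readers wondering what the 0b101 literal encodes: in arch/arm64/include/asm/sysreg.h, SCTLR_ELx_M is bit 0 (MMU enable) and SCTLR_ELx_C is bit 2 (data/unified cache enable), so the mask Mark spells out as SCTLR_ELx_C | SCTLR_ELx_M is exactly 0b101. Below is a minimal standalone sketch (plain userspace C, not kernel code; BIT() and the SCTLR_ELx_* constants are re-declared locally purely for illustration) of the combined check:

#include <assert.h>
#include <stdbool.h>
#include <stdio.h>

/* Local copies of the arm64 definitions, for illustration only. */
#define BIT(n)		(1UL << (n))
#define SCTLR_ELx_M	BIT(0)	/* MMU enable */
#define SCTLR_ELx_C	BIT(2)	/* Data/unified cache enable */

/*
 * Caches only count as "on" when both the MMU and the data cache
 * are enabled, i.e. when both bits of the mask are set.
 */
static bool cache_enabled(unsigned long sctlr)
{
	unsigned long cm = SCTLR_ELx_C | SCTLR_ELx_M;

	return (sctlr & cm) == cm;
}

int main(void)
{
	assert((SCTLR_ELx_C | SCTLR_ELx_M) == 0x5);	/* same value as the 0b101 literal */

	printf("M+C set : %d\n", cache_enabled(0x5));	/* 1: caches on */
	printf("M only  : %d\n", cache_enabled(0x1));	/* 0: MMU on, D-cache off */
	printf("C only  : %d\n", cache_enabled(0x4));	/* 0: D-cache on, MMU off */
	return 0;
}

Either spelling behaves the same; the named constants just make the intent of the check obvious when reading vcpu_has_cache_enabled().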