Message ID: 20220321224358.1305530-7-bgardon@google.com (mailing list archive)
State: New, archived
Series: KVM: x86/MMU: Optimize disabling dirty logging
On Mon, Mar 21, 2022 at 03:43:55PM -0700, Ben Gardon wrote:
> Factor out the parts of vmx_get_mt_mask which do not depend on the vCPU
> argument. This also requires adding some error reporting to the helper
> function to say whether it was possible to generate the MT mask without
> a vCPU argument. This refactoring will allow the MT mask to be computed
> when noncoherent DMA is not enabled on a VM.

We could probably make vmx_get_mt_mask() entirely independent of the
kvm_vcpu, but it would take more work.

For MTRRs, the guest must update them on all CPUs at once (SDM 11.11.8),
so we could just cache vCPU 0's MTRRs at the VM level and use that here.
(From my experience, Intel CPUs implement MTRRs at the core level.
Properly emulating that would require a different EPT table for every
virtual core.)

For CR0.CD, I'm not exactly sure what the semantics are for MP systems,
but I can't imagine it's valid for software to configure CR0.CD
differently on different cores. I would have to scour the SDM closely to
confirm, but we could probably do something like cache max(CR0.CD for
all vCPUs) at the VM level and use that to indicate if caching is
disabled.

>
> No functional change intended.
>
> Signed-off-by: Ben Gardon <bgardon@google.com>
> ---
>  arch/x86/kvm/vmx/vmx.c | 24 +++++++++++++++++++-----
>  1 file changed, 19 insertions(+), 5 deletions(-)
>
> diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
> index e8963f5af618..69c654567475 100644
> --- a/arch/x86/kvm/vmx/vmx.c
> +++ b/arch/x86/kvm/vmx/vmx.c
> @@ -7149,9 +7149,26 @@ static int __init vmx_check_processor_compat(void)
>  	return 0;
>  }
>
> +static bool vmx_try_get_mt_mask(struct kvm *kvm, gfn_t gfn,
> +				bool is_mmio, u64 *mask)
> +{
> +	if (is_mmio) {
> +		*mask = MTRR_TYPE_UNCACHABLE << VMX_EPT_MT_EPTE_SHIFT;
> +		return true;
> +	}
> +
> +	if (!kvm_arch_has_noncoherent_dma(kvm)) {
> +		*mask = (MTRR_TYPE_WRBACK << VMX_EPT_MT_EPTE_SHIFT) | VMX_EPT_IPAT_BIT;
> +		return true;
> +	}
> +
> +	return false;
> +}
> +
>  static u64 vmx_get_mt_mask(struct kvm_vcpu *vcpu, gfn_t gfn, bool is_mmio)
>  {
>  	u8 cache;
> +	u64 mask;
>
>  	/* We wanted to honor guest CD/MTRR/PAT, but doing so could result in
>  	 * memory aliases with conflicting memory types and sometimes MCEs.
> @@ -7171,11 +7188,8 @@ static u64 vmx_get_mt_mask(struct kvm_vcpu *vcpu, gfn_t gfn, bool is_mmio)
>  	 * EPT memory type is used to emulate guest CD/MTRR.
>  	 */
>
> -	if (is_mmio)
> -		return MTRR_TYPE_UNCACHABLE << VMX_EPT_MT_EPTE_SHIFT;
> -
> -	if (!kvm_arch_has_noncoherent_dma(vcpu->kvm))
> -		return (MTRR_TYPE_WRBACK << VMX_EPT_MT_EPTE_SHIFT) | VMX_EPT_IPAT_BIT;
> +	if (vmx_try_get_mt_mask(vcpu->kvm, gfn, is_mmio, &mask))
> +		return mask;
>
>  	if (kvm_read_cr0(vcpu) & X86_CR0_CD) {
>  		if (kvm_check_has_quirk(vcpu->kvm, KVM_X86_QUIRK_CD_NW_CLEARED))
> --
> 2.35.1.894.gb6a874cedc-goog
>