Message ID | e2521b4c48c582260454764e84a057a2da99ac3c.1625186503.git.isaku.yamahata@intel.com
---|---
State | New, archived |
Series | KVM: X86: TDX support
On 03/07/21 00:04, isaku.yamahata@intel.com wrote:
> From: Sean Christopherson <sean.j.christopherson@intel.com>
>
> TODO: This is a tentative patch. Support large pages and delete this patch.
>
> Allow TDX to effectively disable large pages, as SEPT will initially
> support only 4k pages.
>
> Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
> Signed-off-by: Isaku Yamahata <isaku.yamahata@intel.com>
> ---
>  arch/x86/include/asm/kvm_host.h | 1 +
>  arch/x86/kvm/mmu/mmu.c          | 4 +++-
>  2 files changed, 4 insertions(+), 1 deletion(-)
>
> diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
> index 9631b985ebdc..a47e17892258 100644
> --- a/arch/x86/include/asm/kvm_host.h
> +++ b/arch/x86/include/asm/kvm_host.h
> @@ -989,6 +989,7 @@ struct kvm_arch {
>  	unsigned long n_requested_mmu_pages;
>  	unsigned long n_max_mmu_pages;
>  	unsigned int indirect_shadow_pages;
> +	int tdp_max_page_level;
>  	u8 mmu_valid_gen;
>  	struct hlist_head mmu_page_hash[KVM_NUM_MMU_PAGES];
>  	struct list_head active_mmu_pages;
> diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
> index 82db62753acb..4ee6d7803f18 100644
> --- a/arch/x86/kvm/mmu/mmu.c
> +++ b/arch/x86/kvm/mmu/mmu.c
> @@ -4084,7 +4084,7 @@ int kvm_tdp_page_fault(struct kvm_vcpu *vcpu, gpa_t gpa, u32 error_code,
>  	kvm_pfn_t pfn;
>  	int max_level;
>
> -	for (max_level = KVM_MAX_HUGEPAGE_LEVEL;
> +	for (max_level = vcpu->kvm->arch.tdp_max_page_level;
>  	     max_level > PG_LEVEL_4K;
>  	     max_level--) {
>  		int page_num = KVM_PAGES_PER_HPAGE(max_level);
> @@ -5802,6 +5802,8 @@ void kvm_mmu_init_vm(struct kvm *kvm)
>  	node->track_write = kvm_mmu_pte_write;
>  	node->track_flush_slot = kvm_mmu_invalidate_zap_pages_in_memslot;
>  	kvm_page_track_register_notifier(kvm, node);
> +
> +	kvm->arch.tdp_max_page_level = KVM_MAX_HUGEPAGE_LEVEL;
>  }
>
>  void kvm_mmu_uninit_vm(struct kvm *kvm)

Seems good enough for now.

Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>

Paolo
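For context on the hunk Paolo is reviewing: the fault path walks down from the per-VM maximum page level until it finds a level whose range passes the consistency check (in the real code, kvm_mtrr_check_gfn_range_consistency()), so setting tdp_max_page_level to PG_LEVEL_4K means the loop body is never entered and every fault maps at 4K. Below is a minimal standalone sketch of that walk, assuming only the standard KVM level encodings (PG_LEVEL_4K == 1, each level covering 9 more address bits); the alignment test stands in for the real per-range check and the whole thing is an illustration, not kernel code:

```c
#include <stdio.h>

#define PG_LEVEL_4K             1
#define KVM_MAX_HUGEPAGE_LEVEL  3   /* PG_LEVEL_1G */
#define KVM_PAGES_PER_HPAGE(l)  (1UL << (((l) - 1) * 9))

/* Walk down from the cap, as kvm_tdp_page_fault() does. */
static int pick_max_level(int tdp_max_page_level, unsigned long gfn)
{
        int max_level;

        for (max_level = tdp_max_page_level;
             max_level > PG_LEVEL_4K;
             max_level--) {
                unsigned long page_num = KVM_PAGES_PER_HPAGE(max_level);

                /*
                 * Stand-in for the real range-consistency check: accept
                 * this level if the gfn is aligned to the huge page.
                 */
                if (!(gfn & (page_num - 1)))
                        break;
        }
        return max_level;   /* falls through to PG_LEVEL_4K */
}

int main(void)
{
        unsigned long gfn = 0x200;  /* 2MiB-aligned, not 1GiB-aligned */

        /* Default VM: the walk can settle on a huge-page level. */
        printf("default cap -> level %d\n",
               pick_max_level(KVM_MAX_HUGEPAGE_LEVEL, gfn));

        /* TDX-style VM capped at 4K: the loop body never runs. */
        printf("4K cap      -> level %d\n",
               pick_max_level(PG_LEVEL_4K, gfn));
        return 0;
}
```

Run as written, this prints level 2 (2MiB) for the default cap and level 1 (4K) for the capped VM, which is exactly the "effectively disable large pages" behavior the commit message describes.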
On Tue, Jul 06, 2021, Sean Christopherson wrote:
> On 03/07/21 00:04, isaku.yamahata@intel.com wrote:
> > From: Sean Christopherson <sean.j.christopherson@intel.com>
> >
> > TODO: This is a tentative patch. Support large pages and delete this patch.
> >
> > Allow TDX to effectively disable large pages, as SEPT will initially
> > support only 4k pages.

...

> Seems good enough for now.

Looks like SNP needs a dynamic check, i.e. a kvm_x86_ops hook, to handle an
edge case in the RMP. That's probably the better route given that this is a
short-term hack (hopefully :-D).
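A per-fault hook along the lines Sean describes might look like the following kernel-style sketch. The op name, signature, and fallback are purely hypothetical (nothing like this exists in the posted series); the point is only that a backend such as SNP could consult the RMP for the faulting pfn at fault time instead of relying on a static per-VM cap:

```c
/*
 * Hypothetical kvm_x86_ops member and helper -- illustrative only, not
 * part of this series.  An SNP implementation of the op would check the
 * RMP entry covering @pfn and clamp the returned level accordingly.
 */
struct kvm_x86_ops {
        /* ... existing ops elided ... */
        int (*max_page_level)(struct kvm *kvm, gfn_t gfn, kvm_pfn_t pfn);
};

static int kvm_fault_max_level(struct kvm_vcpu *vcpu, gfn_t gfn,
                               kvm_pfn_t pfn)
{
        /* No backend hook: keep today's static maximum. */
        if (!kvm_x86_ops.max_page_level)
                return KVM_MAX_HUGEPAGE_LEVEL;

        return kvm_x86_ops.max_page_level(vcpu->kvm, gfn, pfn);
}
```

A dynamic hook like this would subsume the per-VM field: TDX could simply return PG_LEVEL_4K unconditionally until SEPT grows huge-page support, while SNP could vary the answer per fault.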
diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 9631b985ebdc..a47e17892258 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -989,6 +989,7 @@ struct kvm_arch {
 	unsigned long n_requested_mmu_pages;
 	unsigned long n_max_mmu_pages;
 	unsigned int indirect_shadow_pages;
+	int tdp_max_page_level;
 	u8 mmu_valid_gen;
 	struct hlist_head mmu_page_hash[KVM_NUM_MMU_PAGES];
 	struct list_head active_mmu_pages;
diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 82db62753acb..4ee6d7803f18 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -4084,7 +4084,7 @@ int kvm_tdp_page_fault(struct kvm_vcpu *vcpu, gpa_t gpa, u32 error_code,
 	kvm_pfn_t pfn;
 	int max_level;
 
-	for (max_level = KVM_MAX_HUGEPAGE_LEVEL;
+	for (max_level = vcpu->kvm->arch.tdp_max_page_level;
 	     max_level > PG_LEVEL_4K;
 	     max_level--) {
 		int page_num = KVM_PAGES_PER_HPAGE(max_level);
@@ -5802,6 +5802,8 @@ void kvm_mmu_init_vm(struct kvm *kvm)
 	node->track_write = kvm_mmu_pte_write;
 	node->track_flush_slot = kvm_mmu_invalidate_zap_pages_in_memslot;
 	kvm_page_track_register_notifier(kvm, node);
+
+	kvm->arch.tdp_max_page_level = KVM_MAX_HUGEPAGE_LEVEL;
 }
 
 void kvm_mmu_uninit_vm(struct kvm *kvm)
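The patch only adds the knob and defaults it to KVM_MAX_HUGEPAGE_LEVEL; the TDX side that lowers it is not in this diff. A sketch of how a TDX VM-init path could consume it, per the commit message's statement that SEPT initially supports only 4K pages (the function name is hypothetical, borrowed for illustration):

```c
/*
 * Hypothetical TDX consumer of the new per-VM knob -- not part of this
 * patch.  Lowering the cap at VM creation forces kvm_tdp_page_fault()
 * to map everything at 4K.
 */
static int tdx_vm_init(struct kvm *kvm)
{
        /* SEPT has no huge-page support yet; cap all mappings at 4K. */
        kvm->arch.tdp_max_page_level = PG_LEVEL_4K;

        return 0;
}
```

Because the cap lives in struct kvm_arch rather than in TDX-private state, the common fault path needs no TDX-specific branches, which is what makes this an acceptable stopgap until large-page support lands and the patch can be deleted.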