Message ID | 439c7be59c35a03bced88a44567431e721fab3da.1699368363.git.isaku.yamahata@intel.com (mailing list archive)
---|---
State | New, archived
Series | KVM TDX: TDP MMU: large page support
On 11/7/2023 11:00 PM, isaku.yamahata@intel.com wrote:
> From: Xiaoyao Li <xiaoyao.li@intel.com>
>
> Cannot map a private page as a large page if any smaller mapping exists.
>
> It has to wait for all the not-yet-mapped smaller pages to be mapped and
> then promote to the larger mapping.
>
> Signed-off-by: Xiaoyao Li <xiaoyao.li@intel.com>
> ---
>  arch/x86/kvm/mmu/tdp_mmu.c | 3 ++-
>  1 file changed, 2 insertions(+), 1 deletion(-)
>
> diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
> index 2c5257628881..d806574f7f2d 100644
> --- a/arch/x86/kvm/mmu/tdp_mmu.c
> +++ b/arch/x86/kvm/mmu/tdp_mmu.c
> @@ -1287,7 +1287,8 @@ int kvm_tdp_mmu_map(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault)
>  	tdp_mmu_for_each_pte(iter, mmu, is_private, raw_gfn, raw_gfn + 1) {
>  		int r;
>
> -		if (fault->nx_huge_page_workaround_enabled)
> +		if (fault->nx_huge_page_workaround_enabled ||
> +		    kvm_gfn_shared_mask(vcpu->kvm))

As I mentioned in https://lore.kernel.org/kvm/fef75d54-e319-5170-5f76-f5abc4856315@linux.intel.com/,
the change in this patch will not take effect.
If "fault->nx_huge_page_workaround_enabled" is false, the condition
"spte_to_child_sp(spte)->nx_huge_page_disallowed" will not be true.

IIUC, the function disallowed_hugepage_adjust() currently only handles the
nx_huge_page workaround, so it seems no special handling is needed for TDs.

> 			disallowed_hugepage_adjust(fault, iter.old_spte, iter.level);
>
> 		/*
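For reference, disallowed_hugepage_adjust() at the time looked roughly like the
sketch below (paraphrased, not a verbatim copy of the upstream function). Its
only effect is to demote fault->goal_level, and that path is gated on the child
shadow page's nx_huge_page_disallowed flag, which is only ever set when the NX
huge page workaround is enabled, so invoking it for a TD with the workaround
off is a no-op:

void disallowed_hugepage_adjust(struct kvm_page_fault *fault, u64 spte,
				int cur_level)
{
	if (cur_level > PG_LEVEL_4K &&
	    cur_level == fault->goal_level &&
	    is_shadow_present_pte(spte) &&
	    !is_large_pte(spte) &&
	    spte_to_child_sp(spte)->nx_huge_page_disallowed) {
		/*
		 * A small SPTE already exists for this pfn, but the fault
		 * wants a huge page: force the mapping one level down
		 * instead.
		 */
		u64 page_mask = KVM_PAGES_PER_HPAGE(cur_level) -
				KVM_PAGES_PER_HPAGE(cur_level - 1);

		fault->pfn |= fault->gfn & page_mask;
		fault->goal_level--;
	}
}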
On Thu, Nov 16, 2023 at 09:32:22AM +0800, Binbin Wu <binbin.wu@linux.intel.com> wrote:
>
>
> On 11/7/2023 11:00 PM, isaku.yamahata@intel.com wrote:
> > From: Xiaoyao Li <xiaoyao.li@intel.com>
> >
> > Cannot map a private page as a large page if any smaller mapping exists.
> >
> > It has to wait for all the not-yet-mapped smaller pages to be mapped and
> > then promote to the larger mapping.
> >
> > Signed-off-by: Xiaoyao Li <xiaoyao.li@intel.com>
> > ---
> >  arch/x86/kvm/mmu/tdp_mmu.c | 3 ++-
> >  1 file changed, 2 insertions(+), 1 deletion(-)
> >
> > diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
> > index 2c5257628881..d806574f7f2d 100644
> > --- a/arch/x86/kvm/mmu/tdp_mmu.c
> > +++ b/arch/x86/kvm/mmu/tdp_mmu.c
> > @@ -1287,7 +1287,8 @@ int kvm_tdp_mmu_map(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault)
> >  	tdp_mmu_for_each_pte(iter, mmu, is_private, raw_gfn, raw_gfn + 1) {
> >  		int r;
> >
> > -		if (fault->nx_huge_page_workaround_enabled)
> > +		if (fault->nx_huge_page_workaround_enabled ||
> > +		    kvm_gfn_shared_mask(vcpu->kvm))
>
> As I mentioned in https://lore.kernel.org/kvm/fef75d54-e319-5170-5f76-f5abc4856315@linux.intel.com/,
> the change in this patch will not take effect.
> If "fault->nx_huge_page_workaround_enabled" is false, the condition
> "spte_to_child_sp(spte)->nx_huge_page_disallowed" will not be true.
>
> IIUC, the function disallowed_hugepage_adjust() currently only handles the
> nx_huge_page workaround, so it seems no special handling is needed for TDs.
>
> > 			disallowed_hugepage_adjust(fault, iter.old_spte, iter.level);
> >
> > 		/*

You're correct. Now that guest memfd memory attributes take care of large page
mapping, this patch is unnecessary. Will drop this patch.
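For context, guest_memfd tracks private vs. shared as a per-gfn memory
attribute, and KVM refuses a huge mapping when the attributes are mixed across
the range the huge page would cover, which is why the extra adjustment in
kvm_tdp_mmu_map() becomes unnecessary. A rough conceptual sketch of that check
(the helper names here are illustrative assumptions, not the exact upstream
code):

/*
 * Conceptual sketch: a huge page is disallowed when the gfn range it would
 * cover has mixed private/shared memory attributes.  Helper names are
 * illustrative, not the exact upstream implementation.
 */
static bool hugepage_attrs_mixed(struct kvm *kvm, gfn_t gfn, int level,
				 unsigned long expected_attrs)
{
	gfn_t base = gfn & ~(KVM_PAGES_PER_HPAGE(level) - 1);
	gfn_t i;

	for (i = base; i < base + KVM_PAGES_PER_HPAGE(level); i++) {
		/* Assumed helper returning the attributes recorded for one gfn. */
		if (kvm_get_memory_attributes(kvm, i) != expected_attrs)
			return true;	/* mixed => map at a smaller level */
	}
	return false;
}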
diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
index 2c5257628881..d806574f7f2d 100644
--- a/arch/x86/kvm/mmu/tdp_mmu.c
+++ b/arch/x86/kvm/mmu/tdp_mmu.c
@@ -1287,7 +1287,8 @@ int kvm_tdp_mmu_map(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault)
 	tdp_mmu_for_each_pte(iter, mmu, is_private, raw_gfn, raw_gfn + 1) {
 		int r;
 
-		if (fault->nx_huge_page_workaround_enabled)
+		if (fault->nx_huge_page_workaround_enabled ||
+		    kvm_gfn_shared_mask(vcpu->kvm))
 			disallowed_hugepage_adjust(fault, iter.old_spte, iter.level);
 
 		/*