Message ID | 20220401233737.3021889-3-dmatlack@google.com (mailing list archive)
---|---
State | New, archived
Series | KVM: Split huge pages mapped by the TDP MMU on fault
On Fri, Apr 1, 2022 at 4:37 PM David Matlack <dmatlack@google.com> wrote:
>
> In preparation for splitting huge pages during fault, pass account_nx to
> tdp_mmu_split_huge_page(). Eager page splitting hard-codes account_nx to
> false because the splitting is being done for dirty-logging rather than
> vCPU execution faults.
>
> No functional change intended.
>
> Signed-off-by: David Matlack <dmatlack@google.com>

Looks good to me, but this will conflict with some other patches from
Mingwei and Sean, so someone will have to send out another version.

> ---
>  arch/x86/kvm/mmu/tdp_mmu.c | 7 ++++---
>  1 file changed, 4 insertions(+), 3 deletions(-)
>
> diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
> index d71d177ae6b8..9263765c8068 100644
> --- a/arch/x86/kvm/mmu/tdp_mmu.c
> +++ b/arch/x86/kvm/mmu/tdp_mmu.c
> @@ -1456,7 +1456,8 @@ static struct kvm_mmu_page *tdp_mmu_alloc_sp_for_split(struct kvm *kvm,
>  }
>
>  static int tdp_mmu_split_huge_page(struct kvm *kvm, struct tdp_iter *iter,
> -				   struct kvm_mmu_page *sp, bool shared)
> +				   struct kvm_mmu_page *sp, bool shared,
> +				   bool account_nx)
>  {
>  	const u64 huge_spte = iter->old_spte;
>  	const int level = iter->level;
> @@ -1479,7 +1480,7 @@ static int tdp_mmu_split_huge_page(struct kvm *kvm, struct tdp_iter *iter,
>  	 * correctness standpoint since the translation will be the same either
>  	 * way.
>  	 */
> -	ret = tdp_mmu_link_sp(kvm, iter, sp, false, shared);
> +	ret = tdp_mmu_link_sp(kvm, iter, sp, account_nx, shared);
>  	if (ret)
>  		goto out;
>
> @@ -1539,7 +1540,7 @@ static int tdp_mmu_split_huge_pages_root(struct kvm *kvm,
>  			continue;
>  		}
>
> -		if (tdp_mmu_split_huge_page(kvm, &iter, sp, shared))
> +		if (tdp_mmu_split_huge_page(kvm, &iter, sp, shared, false))
>  			goto retry;
>
>  		sp = NULL;
> --
> 2.35.1.1094.g7c7d902a7c-goog
>