Message ID | 20220203010051.2813563-14-dmatlack@google.com (mailing list archive)
---|---
State | New, archived
Series | Extend Eager Page Splitting to the shadow MMU
On Wed, Feb 2, 2022 at 5:02 PM David Matlack <dmatlack@google.com> wrote:
>
> Update the page stats in __rmap_add() rather than at the call site. This
> will avoid having to manually update page stats when splitting huge
> pages in a subsequent commit.
>
> No functional change intended.
>

Reviewed-by: Ben Gardon <bgardon@google.com>

> Signed-off-by: David Matlack <dmatlack@google.com>
> ---
>  arch/x86/kvm/mmu/mmu.c | 3 ++-
>  1 file changed, 2 insertions(+), 1 deletion(-)
>
> diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
> index c2f7f026d414..ae1564e67e49 100644
> --- a/arch/x86/kvm/mmu/mmu.c
> +++ b/arch/x86/kvm/mmu/mmu.c
> @@ -1621,6 +1621,8 @@ static void __rmap_add(struct kvm *kvm,
>  	rmap_head = gfn_to_rmap(gfn, sp->role.level, slot);
>  	rmap_count = pte_list_add(cache, spte, rmap_head);
>
> +	kvm_update_page_stats(kvm, sp->role.level, 1);
> +
>  	if (rmap_count > RMAP_RECYCLE_THRESHOLD) {
>  		kvm_unmap_rmapp(kvm, rmap_head, NULL, gfn, sp->role.level, __pte(0));
>  		kvm_flush_remote_tlbs_with_address(
> @@ -2831,7 +2833,6 @@ static int mmu_set_spte(struct kvm_vcpu *vcpu, struct kvm_memory_slot *slot,
>
>  	if (!was_rmapped) {
>  		WARN_ON_ONCE(ret == RET_PF_SPURIOUS);
> -		kvm_update_page_stats(vcpu->kvm, level, 1);
>  		rmap_add(vcpu, slot, sptep, gfn);
>  	}
>
> --
> 2.35.0.rc2.247.g8bbb082509-goog
>
On Wed, Feb 23, 2022 at 3:32 PM Ben Gardon <bgardon@google.com> wrote:
>
> On Wed, Feb 2, 2022 at 5:02 PM David Matlack <dmatlack@google.com> wrote:
> >
> > Update the page stats in __rmap_add() rather than at the call site. This
> > will avoid having to manually update page stats when splitting huge
> > pages in a subsequent commit.
> >
> > No functional change intended.
> >
>
> Reviewed-by: Ben Gardon <bgardon@google.com>
>
> > Signed-off-by: David Matlack <dmatlack@google.com>
> > ---
> >  arch/x86/kvm/mmu/mmu.c | 3 ++-
> >  1 file changed, 2 insertions(+), 1 deletion(-)
> >
> > diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
> > index c2f7f026d414..ae1564e67e49 100644
> > --- a/arch/x86/kvm/mmu/mmu.c
> > +++ b/arch/x86/kvm/mmu/mmu.c
> > @@ -1621,6 +1621,8 @@ static void __rmap_add(struct kvm *kvm,
> >  	rmap_head = gfn_to_rmap(gfn, sp->role.level, slot);
> >  	rmap_count = pte_list_add(cache, spte, rmap_head);
> >
> > +	kvm_update_page_stats(kvm, sp->role.level, 1);
> > +

Strictly speaking, this is a functional change since you're moving the
stat update after the rmap update, but there's no synchronization on
the stats anyway, so I don't think it matters if it's updated before
or after.

> >  	if (rmap_count > RMAP_RECYCLE_THRESHOLD) {
> >  		kvm_unmap_rmapp(kvm, rmap_head, NULL, gfn, sp->role.level, __pte(0));
> >  		kvm_flush_remote_tlbs_with_address(
> > @@ -2831,7 +2833,6 @@ static int mmu_set_spte(struct kvm_vcpu *vcpu, struct kvm_memory_slot *slot,
> >
> >  	if (!was_rmapped) {
> >  		WARN_ON_ONCE(ret == RET_PF_SPURIOUS);
> > -		kvm_update_page_stats(vcpu->kvm, level, 1);
> >  		rmap_add(vcpu, slot, sptep, gfn);
> >  	}
> >
> > --
> > 2.35.0.rc2.247.g8bbb082509-goog
> >
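For context on the "no synchronization on the stats anyway" remark: at the time of this series, kvm_update_page_stats() was a thin wrapper around a per-level atomic counter, roughly as below (reproduced from arch/x86/kvm/mmu.h of that era for reference; it is not part of this patch). Each call is an independent atomic add, so nothing orders the counter against the rmap contents, and moving the call across pte_list_add() only changes what a concurrent reader of the stats might transiently observe.

/* arch/x86/kvm/mmu.h (circa v5.17), shown for reference only. */
static inline void kvm_update_page_stats(struct kvm *kvm, int level, int count)
{
	/*
	 * Per-level tally of mapped pages. atomic64_add() makes each update
	 * safe by itself, but nothing synchronizes the counter with the
	 * rmap state, so reordering the update is harmless.
	 */
	atomic64_add(count, &kvm->stat.pages[level - 1]);
}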
diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index c2f7f026d414..ae1564e67e49 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -1621,6 +1621,8 @@ static void __rmap_add(struct kvm *kvm,
 	rmap_head = gfn_to_rmap(gfn, sp->role.level, slot);
 	rmap_count = pte_list_add(cache, spte, rmap_head);
 
+	kvm_update_page_stats(kvm, sp->role.level, 1);
+
 	if (rmap_count > RMAP_RECYCLE_THRESHOLD) {
 		kvm_unmap_rmapp(kvm, rmap_head, NULL, gfn, sp->role.level, __pte(0));
 		kvm_flush_remote_tlbs_with_address(
@@ -2831,7 +2833,6 @@ static int mmu_set_spte(struct kvm_vcpu *vcpu, struct kvm_memory_slot *slot,
 
 	if (!was_rmapped) {
 		WARN_ON_ONCE(ret == RET_PF_SPURIOUS);
-		kvm_update_page_stats(vcpu->kvm, level, 1);
 		rmap_add(vcpu, slot, sptep, gfn);
 	}
 
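The structural effect of the diff is that the stat bump now lives at the single point where an rmap entry is created. Below is a minimal sketch of the resulting call shape, assuming the __rmap_add()/rmap_add() factoring introduced earlier in this series; gfn tracking and rmap recycling are elided, so this is illustrative rather than verbatim kernel code.

/* Sketch: every path that links a new rmap entry now updates the stats. */
static void __rmap_add(struct kvm *kvm, struct kvm_mmu_memory_cache *cache,
		       const struct kvm_memory_slot *slot, u64 *spte, gfn_t gfn)
{
	struct kvm_mmu_page *sp = sptep_to_sp(spte);
	struct kvm_rmap_head *rmap_head = gfn_to_rmap(gfn, sp->role.level, slot);

	pte_list_add(cache, spte, rmap_head);

	/* Single choke point: accounting happens with the rmap insertion. */
	kvm_update_page_stats(kvm, sp->role.level, 1);
}

static void rmap_add(struct kvm_vcpu *vcpu, const struct kvm_memory_slot *slot,
		     u64 *spte, gfn_t gfn)
{
	/* The vCPU fault path just passes its pte_list_desc cache through. */
	__rmap_add(vcpu->kvm, &vcpu->arch.mmu_pte_list_desc_cache, slot, spte, gfn);
}

With this in place, mmu_set_spte() no longer needs its own kvm_update_page_stats() call, which is exactly what the second hunk removes.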
Update the page stats in __rmap_add() rather than at the call site. This
will avoid having to manually update page stats when splitting huge
pages in a subsequent commit.

No functional change intended.

Signed-off-by: David Matlack <dmatlack@google.com>
---
 arch/x86/kvm/mmu/mmu.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)
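The "subsequent commit" the message refers to is the eager huge page splitting work for the shadow MMU later in this series. The payoff is that a split path installing child SPTEs via __rmap_add() gets the per-level accounting for free. A purely hypothetical sketch of that shape follows; the function name, signature, and the 2M-to-4K assumption are illustrative, not the code that was ultimately merged.

/*
 * Hypothetical: split one 2M mapping into 512 4K mappings, where child i
 * maps gfn + i. Linking each child's rmap entry through __rmap_add() bumps
 * the 4K page count automatically; only dropping the huge SPTE (elided)
 * still adjusts the 2M count, via the common zap path.
 */
static void split_huge_page_sketch(struct kvm *kvm,
				   struct kvm_mmu_memory_cache *cache,
				   const struct kvm_memory_slot *slot,
				   struct kvm_mmu_page *sp, gfn_t gfn)
{
	int i;

	for (i = 0; i < 512; i++) {
		/* Child SPTE installation elided; only the rmap hookup shown. */
		__rmap_add(kvm, cache, slot, &sp->spt[i], gfn + i);
	}
}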