[v4,09/18] KVM: x86/mmu: Shrink mmu_shadowed_info_cache via MMU shrinker

Message ID 20230306224127.1689967-10-vipinsh@google.com (mailing list archive)
State New, archived
Series NUMA aware page table allocation

Commit Message

Vipin Sharma March 6, 2023, 10:41 p.m. UTC
Shrink the shadowed page info cache (mmu_shadowed_info_cache) via the
MMU shrinker based on kvm_total_unused_cached_pages.

Tested by running dirty_log_perf_test while dropping caches
via "echo 2 > /proc/sys/vm/drop_caches" at a 1 second interval. The global
count of unused cached pages always returns to 0. The shrinker also gets
invoked to remove pages from the cache.

The above test was run with three configurations:
- EPT=N
- EPT=Y, TDP_MMU=N
- EPT=Y, TDP_MMU=Y

Signed-off-by: Vipin Sharma <vipinsh@google.com>
---
 arch/x86/kvm/mmu/mmu.c | 2 ++
 1 file changed, 2 insertions(+)
Patch

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index b7ca31b5699c..a4bf2e433030 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -6725,6 +6725,8 @@  static unsigned long mmu_shrink_scan(struct shrinker *shrink,
 		kvm_for_each_vcpu(i, vcpu, kvm) {
 			freed += mmu_memory_cache_try_empty(&vcpu->arch.mmu_shadow_page_cache,
 							    &vcpu->arch.mmu_shadow_page_cache_lock);
+			freed += mmu_memory_cache_try_empty(&vcpu->arch.mmu_shadowed_info_cache,
+							    &vcpu->arch.mmu_shadow_page_cache_lock);
 			if (freed >= sc->nr_to_scan)
 				goto out;
 		}