KVM: x86: Zap the oldest MMU pages, not the newest

Message ID 20210113205030.3481307-1-seanjc@google.com (mailing list archive)
State New, archived
Series KVM: x86: Zap the oldest MMU pages, not the newest

Commit Message

Sean Christopherson Jan. 13, 2021, 8:50 p.m. UTC
Walk the list of MMU pages in reverse in kvm_mmu_zap_oldest_mmu_pages().
The list is FIFO, meaning new pages are inserted at the head and thus
the oldest pages are at the tail.  Using a "forward" iterator causes KVM
to zap MMU pages that were just added, which obliterates guest
performance once the max number of shadow MMU pages is reached.

Fixes: 6b82ef2c9cf1 ("KVM: x86/mmu: Batch zap MMU pages when recycling oldest pages")
Reported-by: Zdenek Kaspar <zkaspar82@gmail.com>
Cc: stable@vger.kernel.org
Signed-off-by: Sean Christopherson <seanjc@google.com>
---
 arch/x86/kvm/mmu/mmu.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
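
A quick illustration of the list semantics the commit message relies on: a
minimal userspace sketch (not kernel code; struct mmu_page and the ids are
hypothetical stand-ins for struct kvm_mmu_page) showing that list_add()
inserts at the head, so a forward walk visits the newest entries first while
a reverse walk visits the oldest first.

#include <stdio.h>
#include <stddef.h>

/* Trimmed-down version of the kernel's circular doubly-linked list. */
struct list_head {
	struct list_head *next, *prev;
};

#define LIST_HEAD_INIT(name) { &(name), &(name) }

/* Insert @entry right after @head, i.e. at the front of the list. */
static void list_add(struct list_head *entry, struct list_head *head)
{
	entry->next = head->next;
	entry->prev = head;
	head->next->prev = entry;
	head->next = entry;
}

#define container_of(ptr, type, member) \
	((type *)((char *)(ptr) - offsetof(type, member)))

/* Hypothetical stand-in for struct kvm_mmu_page. */
struct mmu_page {
	int id;
	struct list_head link;
};

int main(void)
{
	struct list_head active = LIST_HEAD_INIT(active);
	struct mmu_page pages[4] = {{0}, {1}, {2}, {3}};
	struct list_head *pos;

	/* Add pages 0..3 in order; each list_add() puts the page at the head. */
	for (int i = 0; i < 4; i++)
		list_add(&pages[i].link, &active);

	/* Forward walk prints 3 2 1 0, i.e. newest first: the bug being fixed. */
	printf("forward: ");
	for (pos = active.next; pos != &active; pos = pos->next)
		printf("%d ", container_of(pos, struct mmu_page, link)->id);

	/* Reverse walk prints 0 1 2 3, i.e. oldest first: the fixed behavior. */
	printf("\nreverse: ");
	for (pos = active.prev; pos != &active; pos = pos->prev)
		printf("%d ", container_of(pos, struct mmu_page, link)->id);
	printf("\n");

	return 0;
}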

Comments

Paolo Bonzini Jan. 18, 2021, 5:34 p.m. UTC | #1
On 13/01/21 21:50, Sean Christopherson wrote:
> Walk the list of MMU pages in reverse in kvm_mmu_zap_oldest_mmu_pages().
> The list is FIFO, meaning new pages are inserted at the head and thus
> the oldest pages are at the tail.  Using a "forward" iterator causes KVM
> to zap MMU pages that were just added, which obliterates guest
> performance once the max number of shadow MMU pages is reached.
> 
> Fixes: 6b82ef2c9cf1 ("KVM: x86/mmu: Batch zap MMU pages when recycling oldest pages")
> Reported-by: Zdenek Kaspar <zkaspar82@gmail.com>
> Cc: stable@vger.kernel.org
> Signed-off-by: Sean Christopherson <seanjc@google.com>
> ---
>   arch/x86/kvm/mmu/mmu.c | 2 +-
>   1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
> index 6d16481aa29d..ed861245ecf0 100644
> --- a/arch/x86/kvm/mmu/mmu.c
> +++ b/arch/x86/kvm/mmu/mmu.c
> @@ -2417,7 +2417,7 @@ static unsigned long kvm_mmu_zap_oldest_mmu_pages(struct kvm *kvm,
>   		return 0;
>   
>   restart:
> -	list_for_each_entry_safe(sp, tmp, &kvm->arch.active_mmu_pages, link) {
> +	list_for_each_entry_safe_reverse(sp, tmp, &kvm->arch.active_mmu_pages, link) {
>   		/*
>   		 * Don't zap active root pages, the page itself can't be freed
>   		 * and zapping it will just force vCPUs to realloc and reload.
> 

Queued for 5.11, thanks.

Paolo

Patch

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 6d16481aa29d..ed861245ecf0 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -2417,7 +2417,7 @@ static unsigned long kvm_mmu_zap_oldest_mmu_pages(struct kvm *kvm,
 		return 0;
 
 restart:
-	list_for_each_entry_safe(sp, tmp, &kvm->arch.active_mmu_pages, link) {
+	list_for_each_entry_safe_reverse(sp, tmp, &kvm->arch.active_mmu_pages, link) {
 		/*
 		 * Don't zap active root pages, the page itself can't be freed
 		 * and zapping it will just force vCPUs to realloc and reload.
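
For reference, the two iterators differ only in starting point and traversal
direction; their shape is roughly the following (paraphrased from
include/linux/list.h, exact form varies by kernel version):

#define list_for_each_entry_safe(pos, n, head, member)			\
	for (pos = list_first_entry(head, typeof(*pos), member),	\
	     n = list_next_entry(pos, member);				\
	     &pos->member != (head);					\
	     pos = n, n = list_next_entry(n, member))

#define list_for_each_entry_safe_reverse(pos, n, head, member)		\
	for (pos = list_last_entry(head, typeof(*pos), member),	\
	     n = list_prev_entry(pos, member);				\
	     &pos->member != (head);					\
	     pos = n, n = list_prev_entry(n, member))

Both _safe variants cache the next (or previous) entry in @n before the loop
body runs, which is what allows kvm_mmu_zap_oldest_mmu_pages() to free @sp
while iterating.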