[v2,0/4] KVM: x86/mmu: Zapping and recycling cleanups

Message ID 20200623193542.7554-1-sean.j.christopherson@intel.com (mailing list archive)

Message

Sean Christopherson June 23, 2020, 7:35 p.m. UTC
Semi-random, but related, changes that deal with the handling of active
root shadow pages during zapping and the zapping of arbitrary/old pages.

Patch 1 changes the low level handling to keep zapped active roots off the
active page list.  KVM already relies on the vCPU to explicitly free the
root, putting invalid root pages back on the list is just a quirk of the
implementation.
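
To make the list handling concrete, here is a toy C model of that rule; the
names (struct toy_sp, toy_zap, etc.) are illustrative, not actual KVM
identifiers.  A zapped page is always unlinked from the active list; if it is
still in use as a root it is only marked invalid and left off the list, never
re-added:

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdlib.h>

/* Toy shadow page: a singly linked active list stands in for the real
 * active-pages list, root_count for vCPUs still using this page as a root. */
struct toy_sp {
	struct toy_sp *next;	/* link on the active list */
	int root_count;		/* vCPUs still using this page as a root */
	bool invalid;
};

static struct toy_sp *active_list;

static void list_add(struct toy_sp *sp)
{
	sp->next = active_list;
	active_list = sp;
}

static void list_del(struct toy_sp *sp)
{
	struct toy_sp **pp = &active_list;

	while (*pp && *pp != sp)
		pp = &(*pp)->next;
	if (*pp)
		*pp = sp->next;
}

/* Zap: always unlink from the active list.  If the page is still a live
 * root, mark it invalid and leave it to the vCPU to free on root teardown;
 * otherwise free it immediately. */
static void toy_zap(struct toy_sp *sp)
{
	list_del(sp);
	if (sp->root_count)
		sp->invalid = true;
	else
		free(sp);
}

static bool on_active_list(struct toy_sp *sp)
{
	for (struct toy_sp *p = active_list; p; p = p->next)
		if (p == sp)
			return true;
	return false;
}
```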

Patch 2 reworks the MMU page recycling to batch zap pages instead of
zapping them one at a time.  This provides better handling for active root
pages and also avoids multiple remote TLB flushes.
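
The TLB-flush savings are easy to sketch in isolation.  In this toy model
(the counter and function names are illustrative, not KVM code), a flush per
zapped page in the recycling loop becomes a single flush for the whole batch:

```c
/* Counter standing in for remote TLB flush IPIs. */
static int tlb_flushes;

/* One-at-a-time recycling: each zap ends with a remote TLB flush. */
static void recycle_singly(int npages)
{
	for (int i = 0; i < npages; i++) {
		/* ... zap page i ... */
		tlb_flushes++;	/* flush after every page */
	}
}

/* Batched recycling: queue the whole batch, then flush once. */
static void recycle_batched(int npages)
{
	for (int i = 0; i < npages; i++) {
		/* ... move page i to an invalid/zapped list ... */
	}
	tlb_flushes++;		/* single flush for the entire batch */
}
```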

Patch 3 applies the batch zapping to the .shrink_scan() path.  This is a
significant change in behavior, and is the scariest of the changes, but
unless I'm missing something it provides the intended functionality that
has been lacking since shrinker support was first added.

Patch 4 changes the page fault handlers to return an error to userspace
instead of restarting the guest if there are no MMU pages available.  This
depends on patch 2, as the old recycling flow could theoretically bail
prematurely if it encountered an active root.
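
A minimal sketch of that control-flow change, using hypothetical names rather
than the real KVM handlers: the error from the make-pages-available step is
propagated to the caller (i.e. out to userspace) instead of being swallowed by
a guest restart:

```c
#include <errno.h>
#include <stdbool.h>

/* Toy stand-in for whether MMU pages can be made available. */
static bool pages_available;

static int make_pages_available(void)
{
	return pages_available ? 0 : -ENOSPC;
}

/* Toy fault handler: on failure, return the error so the caller exits to
 * userspace, rather than retrying the faulting instruction forever. */
static int handle_fault(void)
{
	int r = make_pages_available();

	if (r)
		return r;	/* propagate: exit to userspace */
	return 0;		/* fault handled, resume the guest */
}
```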

v2:
  - Add a comment for the list shenanigans in patch 1. [Paolo]
  - Add patches 2-4.
  - Rebased to kvm/queue, commit a037ff353ba6 ("Merge branch ...")

Sean Christopherson (4):
  KVM: x86/mmu: Don't put invalid SPs back on the list of active pages
  KVM: x86/mmu: Batch zap MMU pages when recycling oldest pages
  KVM: x86/mmu: Batch zap MMU pages when shrinking the slab
  KVM: x86/mmu: Exit to userspace on make_mmu_pages_available() error

 arch/x86/kvm/mmu/mmu.c         | 94 +++++++++++++++++++++-------------
 arch/x86/kvm/mmu/paging_tmpl.h |  3 +-
 2 files changed, 61 insertions(+), 36 deletions(-)

Comments

Paolo Bonzini July 3, 2020, 5:18 p.m. UTC | #1
On 23/06/20 21:35, Sean Christopherson wrote:
> Semi-random, but related, changes that deal with the handling of active
> root shadow pages during zapping and the zapping of arbitrary/old pages.
> 
> [...]

Queued, thanks.

Paolo