[RFC,4/6] KVM: arm64: Optimize TLBIs in the dirty logging path

Message ID 20230109215347.3119271-5-rananta@google.com (mailing list archive)
State New, archived
Series KVM: arm64: Add support for FEAT_TLBIRANGE

Commit Message

Raghavendra Rao Ananta Jan. 9, 2023, 9:53 p.m. UTC
Currently, the dirty-logging paths, including
kvm_arch_flush_remote_tlbs_memslot() and kvm_mmu_wp_memory_region(),
invalidate the entire VM's TLB entries using kvm_flush_remote_tlbs().
Since these functions already know the range of IPAs affected, a
VM-wide invalidation is highly inefficient on systems that support
FEAT_TLBIRANGE. Hence, use kvm_flush_remote_tlbs_range() to flush
only the affected entries instead.

Signed-off-by: Raghavendra Rao Ananta <rananta@google.com>
---
 arch/arm64/kvm/arm.c | 7 ++++++-
 arch/arm64/kvm/mmu.c | 2 +-
 2 files changed, 7 insertions(+), 2 deletions(-)
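
For illustration, a worked example of the IPA range the new
kvm_arch_flush_remote_tlbs_memslot() computes. The memslot values below
are made up; only the arithmetic is the point:

	/* Hypothetical memslot: base_gfn = 0x80000, npages = 512 (4 KiB pages) */
	phys_addr_t start = 0x80000UL << PAGE_SHIFT;	     /* IPA 0x80000000 */
	phys_addr_t end = (0x80000UL + 512) << PAGE_SHIFT;   /* IPA 0x80200000 */

	/*
	 * Only the 2 MiB of stage-2 TLB entries covering this memslot are
	 * invalidated, rather than every entry tagged with the VMID.
	 */
	kvm_flush_remote_tlbs_range(kvm, start, end);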

Comments

Oliver Upton Jan. 24, 2023, 10:54 p.m. UTC | #1
Hi Raghavendra,

I find the commit title rather ambiguous. May I suggest:

  KVM: arm64: Use range-based TLBIs for write protection

On Mon, Jan 09, 2023 at 09:53:45PM +0000, Raghavendra Rao Ananta wrote:
> Currently, the dirty-logging paths, including
> kvm_arch_flush_remote_tlbs_memslot() and kvm_mmu_wp_memory_region(),
> invalidate the entire VM's TLB entries using kvm_flush_remote_tlbs().
> Since these functions already know the range of IPAs affected, a
> VM-wide invalidation is highly inefficient on systems that support
> FEAT_TLBIRANGE. Hence, use kvm_flush_remote_tlbs_range() to flush
> only the affected entries instead.

This commit message gives a rather mechanical description of the commit.
Instead of describing the change, could you describe _why_ this is an
improvement over the VM-wide invalidation?

--
Thanks,
Oliver

> Signed-off-by: Raghavendra Rao Ananta <rananta@google.com>
> ---
>  arch/arm64/kvm/arm.c | 7 ++++++-
>  arch/arm64/kvm/mmu.c | 2 +-
>  2 files changed, 7 insertions(+), 2 deletions(-)
> 
> diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
> index 00da570ed72bd..179520888c697 100644
> --- a/arch/arm64/kvm/arm.c
> +++ b/arch/arm64/kvm/arm.c
> @@ -1433,7 +1433,12 @@ void kvm_arch_sync_dirty_log(struct kvm *kvm, struct kvm_memory_slot *memslot)
>  void kvm_arch_flush_remote_tlbs_memslot(struct kvm *kvm,
>  					const struct kvm_memory_slot *memslot)
>  {
> -	kvm_flush_remote_tlbs(kvm);
> +	phys_addr_t start, end;
> +
> +	start = memslot->base_gfn << PAGE_SHIFT;
> +	end = (memslot->base_gfn + memslot->npages) << PAGE_SHIFT;
> +
> +	kvm_flush_remote_tlbs_range(kvm, start, end);
>  }
>  
>  static int kvm_vm_ioctl_set_device_addr(struct kvm *kvm,
> diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
> index 70f76bc909c5d..e34b81f5922ce 100644
> --- a/arch/arm64/kvm/mmu.c
> +++ b/arch/arm64/kvm/mmu.c
> @@ -976,7 +976,7 @@ static void kvm_mmu_wp_memory_region(struct kvm *kvm, int slot)
>  	write_lock(&kvm->mmu_lock);
>  	stage2_wp_range(&kvm->arch.mmu, start, end);
>  	write_unlock(&kvm->mmu_lock);
> -	kvm_flush_remote_tlbs(kvm);
> +	kvm_flush_remote_tlbs_range(kvm, start, end);
>  }
>  
>  /**
> -- 
> 2.39.0.314.g84b9a713c41-goog
> 
>
Raghavendra Rao Ananta Jan. 25, 2023, 9:52 p.m. UTC | #2
Hi Oliver,

On Tue, Jan 24, 2023 at 2:54 PM Oliver Upton <oliver.upton@linux.dev> wrote:
>
> Hi Raghavendra,
>
> I find the commit title rather ambiguous. May I suggest:
>
>   KVM: arm64: Use range-based TLBIs for write protection
>
> On Mon, Jan 09, 2023 at 09:53:45PM +0000, Raghavendra Rao Ananta wrote:
> > Currently, the dirty-logging paths, including
> > kvm_arch_flush_remote_tlbs_memslot() and kvm_mmu_wp_memory_region(),
> > invalidate the entire VM's TLB entries using kvm_flush_remote_tlbs().
> > Since these functions already know the range of IPAs affected, a
> > VM-wide invalidation is highly inefficient on systems that support
> > FEAT_TLBIRANGE. Hence, use kvm_flush_remote_tlbs_range() to flush
> > only the affected entries instead.
>
> This commit message gives a rather mechanical description of the commit.
> Instead of describing the change, could you describe _why_ this is an
> improvement over the VM-wide invalidation?
>
Of course. I assumed the optimization would be obvious, but sure,
it'll be better to describe it.
FYI, thanks to David's common code for range-based TLBI, this patch
has shrunk to just one line, and it now affects only the flush after
the write-protect.
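Concretely, the remaining change would look something like the sketch
below (helper names assume David's proposed common code and may differ
in the final series):

	/* In kvm_mmu_wp_memory_region(), after stage2_wp_range(): */
	-	kvm_flush_remote_tlbs(kvm);
	+	kvm_flush_remote_tlbs_memslot(kvm, memslot);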

Thanks,
Raghavendra
> --
> Thanks,
> Oliver
>
> > Signed-off-by: Raghavendra Rao Ananta <rananta@google.com>
> > ---
> >  arch/arm64/kvm/arm.c | 7 ++++++-
> >  arch/arm64/kvm/mmu.c | 2 +-
> >  2 files changed, 7 insertions(+), 2 deletions(-)
> >
> > diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
> > index 00da570ed72bd..179520888c697 100644
> > --- a/arch/arm64/kvm/arm.c
> > +++ b/arch/arm64/kvm/arm.c
> > @@ -1433,7 +1433,12 @@ void kvm_arch_sync_dirty_log(struct kvm *kvm, struct kvm_memory_slot *memslot)
> >  void kvm_arch_flush_remote_tlbs_memslot(struct kvm *kvm,
> >                                       const struct kvm_memory_slot *memslot)
> >  {
> > -     kvm_flush_remote_tlbs(kvm);
> > +     phys_addr_t start, end;
> > +
> > +     start = memslot->base_gfn << PAGE_SHIFT;
> > +     end = (memslot->base_gfn + memslot->npages) << PAGE_SHIFT;
> > +
> > +     kvm_flush_remote_tlbs_range(kvm, start, end);
> >  }
> >
> >  static int kvm_vm_ioctl_set_device_addr(struct kvm *kvm,
> > diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
> > index 70f76bc909c5d..e34b81f5922ce 100644
> > --- a/arch/arm64/kvm/mmu.c
> > +++ b/arch/arm64/kvm/mmu.c
> > @@ -976,7 +976,7 @@ static void kvm_mmu_wp_memory_region(struct kvm *kvm, int slot)
> >       write_lock(&kvm->mmu_lock);
> >       stage2_wp_range(&kvm->arch.mmu, start, end);
> >       write_unlock(&kvm->mmu_lock);
> > -     kvm_flush_remote_tlbs(kvm);
> > +     kvm_flush_remote_tlbs_range(kvm, start, end);
> >  }
> >
> >  /**
> > --
> > 2.39.0.314.g84b9a713c41-goog
> >
> >

Patch

diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
index 00da570ed72bd..179520888c697 100644
--- a/arch/arm64/kvm/arm.c
+++ b/arch/arm64/kvm/arm.c
@@ -1433,7 +1433,12 @@  void kvm_arch_sync_dirty_log(struct kvm *kvm, struct kvm_memory_slot *memslot)
 void kvm_arch_flush_remote_tlbs_memslot(struct kvm *kvm,
 					const struct kvm_memory_slot *memslot)
 {
-	kvm_flush_remote_tlbs(kvm);
+	phys_addr_t start, end;
+
+	start = memslot->base_gfn << PAGE_SHIFT;
+	end = (memslot->base_gfn + memslot->npages) << PAGE_SHIFT;
+
+	kvm_flush_remote_tlbs_range(kvm, start, end);
 }
 
 static int kvm_vm_ioctl_set_device_addr(struct kvm *kvm,
diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
index 70f76bc909c5d..e34b81f5922ce 100644
--- a/arch/arm64/kvm/mmu.c
+++ b/arch/arm64/kvm/mmu.c
@@ -976,7 +976,7 @@  static void kvm_mmu_wp_memory_region(struct kvm *kvm, int slot)
 	write_lock(&kvm->mmu_lock);
 	stage2_wp_range(&kvm->arch.mmu, start, end);
 	write_unlock(&kvm->mmu_lock);
-	kvm_flush_remote_tlbs(kvm);
+	kvm_flush_remote_tlbs_range(kvm, start, end);
 }
 
 /**
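
For context, a minimal sketch of how a range-flush helper such as
kvm_flush_remote_tlbs_range() can remain safe on hardware without
FEAT_TLBIRANGE (the internal helper names here are assumptions, not
necessarily the final code):

	void kvm_flush_remote_tlbs_range(struct kvm *kvm,
					 phys_addr_t start, phys_addr_t end)
	{
		if (system_supports_tlb_range())
			/* Range-based invalidation covering [start, end) only. */
			kvm_tlb_flush_vmid_range(&kvm->arch.mmu, start,
						 end - start);
		else
			/* No FEAT_TLBIRANGE: fall back to a full VMID flush. */
			kvm_flush_remote_tlbs(kvm);
	}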