KVM: Align the function name of kvm_swap_active_memslots()

Message ID 20230220084500.938739-1-jun.miao@intel.com (mailing list archive)
State New, archived
Series KVM: Align the function name of kvm_swap_active_memslots()

Commit Message

Jun Miao Feb. 20, 2023, 8:45 a.m. UTC
install_new_memslots() was replaced by kvm_swap_active_memslots(), but the
comments that mention it were never updated. To avoid confusion, fix the
stale references.

Fixes: a54d806688fe ("KVM: Keep memslots in tree-based structures instead of array-based ones")
Signed-off-by: Jun Miao <jun.miao@intel.com>
---
 Documentation/virt/kvm/locking.rst | 2 +-
 include/linux/kvm_host.h           | 4 ++--
 virt/kvm/kvm_main.c                | 4 ++--
 3 files changed, 5 insertions(+), 5 deletions(-)

Comments

Sean Christopherson Feb. 22, 2023, 6:39 p.m. UTC | #1
Nit, "Align" is a confusing because it's often used to refer to indentation.  Maybe?

   KVM: Fix comments that refer to the non-existent install_new_memslots()

On Mon, Feb 20, 2023, Jun Miao wrote:
> install_new_memslots() was replaced by kvm_swap_active_memslots(), but the
> comments that mention it were never updated. To avoid confusion, fix the
> stale references.
> 
> Fixes: a54d806688fe ("KVM: Keep memslots in tree-based structures instead of array-based ones")
> Signed-off-by: Jun Miao <jun.miao@intel.com>
> ---
>  Documentation/virt/kvm/locking.rst | 2 +-
>  include/linux/kvm_host.h           | 4 ++--
>  virt/kvm/kvm_main.c                | 4 ++--
>  3 files changed, 5 insertions(+), 5 deletions(-)
> 
> diff --git a/Documentation/virt/kvm/locking.rst b/Documentation/virt/kvm/locking.rst
> index 14c4e9fa501d..ac0e549a3ae7 100644
> --- a/Documentation/virt/kvm/locking.rst
> +++ b/Documentation/virt/kvm/locking.rst
> @@ -21,7 +21,7 @@ The acquisition orders for mutexes are as follows:
>  - kvm->mn_active_invalidate_count ensures that pairs of
>    invalidate_range_start() and invalidate_range_end() callbacks
>    use the same memslots array.  kvm->slots_lock and kvm->slots_arch_lock
> -  are taken on the waiting side in install_new_memslots, so MMU notifiers
> +  are taken on the waiting side in kvm_swap_active_memslots, so MMU notifiers

Can you send a v2 and opportunistically add () to the blurbs that don't have it?
I.e. so these are all "kvm_swap_active_memslots()"?

Thanks!
Jun Miao Feb. 23, 2023, 5:07 a.m. UTC | #2
> 
> Nit, "Align" is confusing because it's often used to refer to indentation.  Maybe?
> 
>    KVM: Fix comments that refer to the non-existent install_new_memslots()
> 
> On Mon, Feb 20, 2023, Jun Miao wrote:
> > install_new_memslots() was replaced by kvm_swap_active_memslots(), but the
> > comments that mention it were never updated. To avoid confusion, fix the
> > stale references.
> >
> > Fixes: a54d806688fe ("KVM: Keep memslots in tree-based structures instead of array-based ones")
> > Signed-off-by: Jun Miao <jun.miao@intel.com>
> > ---
> >  Documentation/virt/kvm/locking.rst | 2 +-
> >  include/linux/kvm_host.h           | 4 ++--
> >  virt/kvm/kvm_main.c                | 4 ++--
> >  3 files changed, 5 insertions(+), 5 deletions(-)
> >
> > diff --git a/Documentation/virt/kvm/locking.rst b/Documentation/virt/kvm/locking.rst
> > index 14c4e9fa501d..ac0e549a3ae7 100644
> > --- a/Documentation/virt/kvm/locking.rst
> > +++ b/Documentation/virt/kvm/locking.rst
> > @@ -21,7 +21,7 @@ The acquisition orders for mutexes are as follows:
> >  - kvm->mn_active_invalidate_count ensures that pairs of
> >    invalidate_range_start() and invalidate_range_end() callbacks
> >    use the same memslots array.  kvm->slots_lock and kvm->slots_arch_lock
> > -  are taken on the waiting side in install_new_memslots, so MMU notifiers
> > -  are taken on the waiting side in install_new_memslots, so MMU notifiers
> > +  are taken on the waiting side in kvm_swap_active_memslots, so MMU notifiers
> 
> Can you send a v2 and opportunistically add () to the blurbs that don't have it?
> I.e. so these are all "kvm_swap_active_memslots()"?
> 
I will send a v2 with the missing "()". Thank you for the helpful advice.

--Jun
> Thanks!

Patch

diff --git a/Documentation/virt/kvm/locking.rst b/Documentation/virt/kvm/locking.rst
index 14c4e9fa501d..ac0e549a3ae7 100644
--- a/Documentation/virt/kvm/locking.rst
+++ b/Documentation/virt/kvm/locking.rst
@@ -21,7 +21,7 @@  The acquisition orders for mutexes are as follows:
 - kvm->mn_active_invalidate_count ensures that pairs of
   invalidate_range_start() and invalidate_range_end() callbacks
   use the same memslots array.  kvm->slots_lock and kvm->slots_arch_lock
-  are taken on the waiting side in install_new_memslots, so MMU notifiers
+  are taken on the waiting side in kvm_swap_active_memslots, so MMU notifiers
   must not take either kvm->slots_lock or kvm->slots_arch_lock.
 
 For SRCU:
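
The locking.rst hunk above encodes a real ordering rule: the memslot swap path
holds kvm->slots_lock (and slots_arch_lock) while it waits for
kvm->mn_active_invalidate_count to drop to zero, so an MMU notifier that tried
to take either lock could deadlock against it. As a rough illustration only,
here is a standalone userspace model of that rule using pthreads; the names
mirror the KVM fields, but none of this is kernel code or KVM's actual
implementation.

#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t slots_lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t count_lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t count_zero = PTHREAD_COND_INITIALIZER;
static int mn_active_invalidate_count;

/* MMU-notifier side: only bumps the count, never takes slots_lock. */
static void *notifier(void *arg)
{
	pthread_mutex_lock(&count_lock);
	mn_active_invalidate_count++;		/* invalidate_range_start() */
	pthread_mutex_unlock(&count_lock);

	/* ... invalidation work against the current memslots ... */

	pthread_mutex_lock(&count_lock);
	if (--mn_active_invalidate_count == 0)	/* invalidate_range_end() */
		pthread_cond_signal(&count_zero);
	pthread_mutex_unlock(&count_lock);
	return NULL;
}

/*
 * Swap side: holds slots_lock for the whole wait.  If notifier() also
 * needed slots_lock, neither side could ever make progress.
 */
static void swap_active_memslots(void)
{
	pthread_mutex_lock(&slots_lock);
	pthread_mutex_lock(&count_lock);
	while (mn_active_invalidate_count)
		pthread_cond_wait(&count_zero, &count_lock);
	pthread_mutex_unlock(&count_lock);
	/* ... publish the new memslots array ... */
	pthread_mutex_unlock(&slots_lock);
}

int main(void)
{
	pthread_t t;

	pthread_create(&t, NULL, notifier, NULL);
	swap_active_memslots();
	pthread_join(t, NULL);
	printf("swap finished with %d notifiers in flight\n",
	       mn_active_invalidate_count);
	return 0;
}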
diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index 8ada23756b0e..7f8242dd2745 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -58,7 +58,7 @@ 
 
 /*
  * Bit 63 of the memslot generation number is an "update in-progress flag",
- * e.g. is temporarily set for the duration of install_new_memslots().
+ * e.g. is temporarily set for the duration of kvm_swap_active_memslots().
  * This flag effectively creates a unique generation number that is used to
  * mark cached memslot data, e.g. MMIO accesses, as potentially being stale,
  * i.e. may (or may not) have come from the previous memslots generation.
@@ -713,7 +713,7 @@  struct kvm {
 	 * use by the VM. To be used under the slots_lock (above) or in a
 	 * kvm->srcu critical section where acquiring the slots_lock would
 	 * lead to deadlock with the synchronize_srcu in
-	 * install_new_memslots.
+	 * kvm_swap_active_memslots.
 	 */
 	struct mutex slots_arch_lock;
 	struct mm_struct *mm; /* userspace tied to this vm */
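
The first kvm_host.h comment above concerns bit 63 of the memslot generation.
Below is a tiny, self-contained sketch of how such an "update in-progress" bit
can be used to invalidate cached lookups; the constant, struct, and helper
names are invented for illustration and differ from KVM's real definitions.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define GEN_IN_PROGRESS	(1ULL << 63)	/* bit 63: memslots update in flight */

static uint64_t slots_generation;	/* current memslots generation */

/* A cached lookup (e.g. an MMIO cache entry) tagged with its generation. */
struct cached_entry {
	uint64_t gen;
	uint64_t data;
};

static void begin_memslots_update(void)
{
	slots_generation |= GEN_IN_PROGRESS;
}

static void end_memslots_update(void)
{
	/* Clear the flag and bump the generation so older caches miss. */
	slots_generation = (slots_generation & ~GEN_IN_PROGRESS) + 1;
}

static bool cache_is_valid(const struct cached_entry *e)
{
	/*
	 * An entry filled while the flag was set may reflect either the old
	 * or the new memslots, so it must never be treated as valid.
	 */
	if (e->gen & GEN_IN_PROGRESS)
		return false;
	return e->gen == slots_generation;
}

int main(void)
{
	struct cached_entry e = { .gen = slots_generation, .data = 0x42 };

	printf("before swap: valid=%d\n", cache_is_valid(&e));
	begin_memslots_update();
	end_memslots_update();
	printf("after swap:  valid=%d\n", cache_is_valid(&e));
	return 0;
}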
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index d255964ec331..7a4ff9fc5978 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -1298,7 +1298,7 @@  static void kvm_destroy_vm(struct kvm *kvm)
 	 * At this point, pending calls to invalidate_range_start()
 	 * have completed but no more MMU notifiers will run, so
 	 * mn_active_invalidate_count may remain unbalanced.
-	 * No threads can be waiting in install_new_memslots as the
+	 * No threads can be waiting in kvm_swap_active_memslots as the
 	 * last reference on KVM has been dropped, but freeing
 	 * memslots would deadlock without this manual intervention.
 	 */
@@ -1748,7 +1748,7 @@  static void kvm_invalidate_memslot(struct kvm *kvm,
 	/*
 	 * Copy the arch-specific field of the newly-installed slot back to the
 	 * old slot as the arch data could have changed between releasing
-	 * slots_arch_lock in install_new_memslots() and re-acquiring the lock
+	 * slots_arch_lock in kvm_swap_active_memslots() and re-acquiring the lock
 	 * above.  Writers are required to retrieve memslots *after* acquiring
 	 * slots_arch_lock, thus the active slot's data is guaranteed to be fresh.
 	 */
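
The final hunk's comment describes copying arch-specific data from the
installed (invalid) slot back to the local working copy after slots_arch_lock
is re-acquired, because arch code may have updated the installed copy while
the lock was dropped inside kvm_swap_active_memslots(). A minimal standalone
sketch of that copy-back pattern, with invented struct and field names rather
than KVM's:

#include <stdio.h>

struct arch_data { int flush_pending; };

struct memslot {
	unsigned long base_gfn;
	struct arch_data arch;
};

int main(void)
{
	struct memslot old = { .base_gfn = 0x1000, .arch = { .flush_pending = 0 } };
	struct memslot invalid = old;	/* temporary copy installed in place of the old slot */

	/*
	 * The lock protecting arch data is dropped during the swap, so arch
	 * code may update the *installed* copy while the local one goes stale.
	 */
	invalid.arch.flush_pending = 1;

	/* After re-acquiring the lock, pull the fresh arch data back. */
	old.arch = invalid.arch;

	printf("old.arch.flush_pending = %d\n", old.arch.flush_pending);
	return 0;
}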