Message ID | 20230223052851.1054799-1-jun.miao@intel.com
---|---
State | New, archived
Series | [v2] KVM: Fix comments that refer to the non-existent install_new_memslots()
On Thu, Feb 23, 2023, Jun Miao wrote:
> The function of install_new_memslots() was replaced by kvm_swap_active_memslots().
> In order to avoid confusion, fix the comments that refer the non-existent name of
> install_new_memslots which always be ignored.
>
> Fixes: a54d806688fe ("KVM: Keep memslots in tree-based structures instead of array-based ones")
> Signed-off-by: Jun Miao <jun.miao@intel.com>

Two nits, but I'll fix them up when applying.  Thanks!

> ---
>  Documentation/virt/kvm/locking.rst |  2 +-
>  include/linux/kvm_host.h           |  4 ++--
>  virt/kvm/kvm_main.c                | 10 +++++-----
>  3 files changed, 8 insertions(+), 8 deletions(-)
>
> diff --git a/Documentation/virt/kvm/locking.rst b/Documentation/virt/kvm/locking.rst
> index 14c4e9fa501d..6e03ad853c27 100644
> --- a/Documentation/virt/kvm/locking.rst
> +++ b/Documentation/virt/kvm/locking.rst
> @@ -21,7 +21,7 @@ The acquisition orders for mutexes are as follows:
>  - kvm->mn_active_invalidate_count ensures that pairs of
>    invalidate_range_start() and invalidate_range_end() callbacks
>    use the same memslots array.  kvm->slots_lock and kvm->slots_arch_lock
> -  are taken on the waiting side in install_new_memslots, so MMU notifiers
> +  are taken on the waiting side in kvm_swap_active_memslots(), so MMU notifiers

This isn't accurate, kvm_swap_active_memslots() isn't what actually acquires
those locks.  I don't see any reason to be super precise; the key takeaway is
that modifying memslots takes the locks _and_ is the waiter.  I'll tweak this to:

    are taken on the waiting side when modifying memslots, so MMU notifiers

> @@ -1810,11 +1810,11 @@ static int kvm_set_memslot(struct kvm *kvm,
>  	int r;
>
>  	/*
> -	 * Released in kvm_swap_active_memslots.
> +	 * Released in kvm_swap_active_memslots().
>  	 *
>  	 * Must be held from before the current memslots are copied until
>  	 * after the new memslots are installed with rcu_assign_pointer,
> -	 * then released before the synchronize srcu in kvm_swap_active_memslots.
> +	 * then released before the synchronize srcu in kvm_swap_active_memslots().

This block can be massaged slightly to avoid running past 80 chars:

	 * Must be held from before the current memslots are copied until after
	 * the new memslots are installed with rcu_assign_pointer, then
	 * released before the synchronize srcu in kvm_swap_active_memslots().

>  	 *
>  	 * When modifying memslots outside of the slots_lock, must be held
>  	 * before reading the pointer to the current memslots until after all
> --
> 2.32.0
>
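The swap sequence discussed above (copy the memslots, publish the new set with rcu_assign_pointer(), then wait in synchronize_srcu() before dropping the lock) can be sketched as a simplified, single-threaded C model. All names here are illustrative, not the kernel's; the real logic lives in virt/kvm/kvm_main.c and runs under kvm->slots_lock and kvm->slots_arch_lock.

```c
#include <stdint.h>

/*
 * Simplified model of kvm_swap_active_memslots(): KVM keeps an active and
 * an inactive memslots set and flips a pointer between them.  This struct
 * and the macro below are stand-ins, not the kernel's definitions.
 */
struct memslots {
	uint64_t generation;
};

/* Bit 63 of the memslot generation is the "update in-progress" flag. */
#define GEN_UPDATE_IN_PROGRESS	(1ULL << 63)

static struct memslots *active;	/* stand-in for the RCU-protected pointer */

static void swap_active_memslots(struct memslots *new)
{
	uint64_t gen = active->generation;

	/*
	 * Publish the new set with the in-progress flag raised, so readers
	 * treat data cached in this window (e.g. MMIO caches) as stale.
	 * The real code publishes with rcu_assign_pointer().
	 */
	new->generation = gen | GEN_UPDATE_IN_PROGRESS;
	active = new;

	/*
	 * synchronize_srcu() would wait for in-flight readers here.  Once
	 * they finish, drop the flag and bump the generation so it can never
	 * match a value cached while the update was in flight.
	 */
	active->generation = (gen & ~GEN_UPDATE_IN_PROGRESS) + 1;
}
```

In the real code the locks named in the comment above are held across exactly this window, which is why the comment in kvm_set_memslot() cares about where they are released relative to the synchronize_srcu() call.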
On Thu, 23 Feb 2023 13:28:51 +0800, Jun Miao wrote:
> The function of install_new_memslots() was replaced by kvm_swap_active_memslots().
> In order to avoid confusion, fix the comments that refer the non-existent name of
> install_new_memslots which always be ignored.

Applied to kvm-x86 generic, thanks!

[1/1] KVM: Fix comments that refer to the non-existent install_new_memslots()
      https://github.com/kvm-x86/linux/commit/b0d237087c67

--
https://github.com/kvm-x86/linux/tree/next
https://github.com/kvm-x86/linux/tree/fixes
diff --git a/Documentation/virt/kvm/locking.rst b/Documentation/virt/kvm/locking.rst
index 14c4e9fa501d..6e03ad853c27 100644
--- a/Documentation/virt/kvm/locking.rst
+++ b/Documentation/virt/kvm/locking.rst
@@ -21,7 +21,7 @@ The acquisition orders for mutexes are as follows:
 - kvm->mn_active_invalidate_count ensures that pairs of
   invalidate_range_start() and invalidate_range_end() callbacks
   use the same memslots array.  kvm->slots_lock and kvm->slots_arch_lock
-  are taken on the waiting side in install_new_memslots, so MMU notifiers
+  are taken on the waiting side in kvm_swap_active_memslots(), so MMU notifiers
   must not take either kvm->slots_lock or kvm->slots_arch_lock.

 For SRCU:
diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index 8ada23756b0e..98605bd25060 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -58,7 +58,7 @@
 /*
  * Bit 63 of the memslot generation number is an "update in-progress flag",
- * e.g. is temporarily set for the duration of install_new_memslots().
+ * e.g. is temporarily set for the duration of kvm_swap_active_memslots().
  * This flag effectively creates a unique generation number that is used to
  * mark cached memslot data, e.g. MMIO accesses, as potentially being stale,
  * i.e. may (or may not) have come from the previous memslots generation.
@@ -713,7 +713,7 @@ struct kvm {
	 * use by the VM. To be used under the slots_lock (above) or in a
	 * kvm->srcu critical section where acquiring the slots_lock would
	 * lead to deadlock with the synchronize_srcu in
-	 * install_new_memslots.
+	 * kvm_swap_active_memslots().
	 */
	struct mutex slots_arch_lock;
	struct mm_struct *mm; /* userspace tied to this vm */
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index d255964ec331..9f9077a898dc 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -1298,7 +1298,7 @@ static void kvm_destroy_vm(struct kvm *kvm)
	 * At this point, pending calls to invalidate_range_start()
	 * have completed but no more MMU notifiers will run, so
	 * mn_active_invalidate_count may remain unbalanced.
-	 * No threads can be waiting in install_new_memslots as the
+	 * No threads can be waiting in kvm_swap_active_memslots() as the
	 * last reference on KVM has been dropped, but freeing
	 * memslots would deadlock without this manual intervention.
	 */
@@ -1742,13 +1742,13 @@ static void kvm_invalidate_memslot(struct kvm *kvm,
	kvm_arch_flush_shadow_memslot(kvm, old);
	kvm_arch_guest_memory_reclaimed(kvm);

-	/* Was released by kvm_swap_active_memslots, reacquire. */
+	/* Was released by kvm_swap_active_memslots(), reacquire. */
	mutex_lock(&kvm->slots_arch_lock);

	/*
	 * Copy the arch-specific field of the newly-installed slot back to the
	 * old slot as the arch data could have changed between releasing
-	 * slots_arch_lock in install_new_memslots() and re-acquiring the lock
+	 * slots_arch_lock in kvm_swap_active_memslots() and re-acquiring the lock
	 * above. Writers are required to retrieve memslots *after* acquiring
	 * slots_arch_lock, thus the active slot's data is guaranteed to be fresh.
	 */
@@ -1810,11 +1810,11 @@ static int kvm_set_memslot(struct kvm *kvm,
	int r;

	/*
-	 * Released in kvm_swap_active_memslots.
+	 * Released in kvm_swap_active_memslots().
	 *
	 * Must be held from before the current memslots are copied until
	 * after the new memslots are installed with rcu_assign_pointer,
-	 * then released before the synchronize srcu in kvm_swap_active_memslots.
+	 * then released before the synchronize srcu in kvm_swap_active_memslots().
	 *
	 * When modifying memslots outside of the slots_lock, must be held
	 * before reading the pointer to the current memslots until after all
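The kvm_host.h hunk above describes how bit 63 of the generation number marks cached memslot data (e.g. MMIO caches) as potentially stale. The freshness check that flag enables can be sketched as follows; the function and macro names are made up for illustration and do not match KVM's actual identifiers.

```c
#include <stdint.h>

/* Bit 63 of the memslot generation is the "update in-progress" flag
 * (illustrative macro name; see the include/linux/kvm_host.h comment). */
#define GEN_UPDATE_IN_PROGRESS	(1ULL << 63)

/*
 * A value cached under generation 'cached_gen' may be reused only if no
 * memslot update was in flight when it was cached and the generation has
 * not moved since.  This mirrors the intent described in the comment:
 * snapshots taken mid-update "may (or may not) have come from the
 * previous memslots generation" and so must be discarded.
 */
static int cached_value_is_fresh(uint64_t cached_gen, uint64_t current_gen)
{
	/* A snapshot taken while the flag was set is always stale. */
	if (cached_gen & GEN_UPDATE_IN_PROGRESS)
		return 0;

	/* Otherwise it is fresh only if the generation has not advanced. */
	return cached_gen == current_gen;
}
```

Because the generation is bumped after every swap, a value cached during an update can never compare equal to any later generation, even though the flag itself is cleared once the update completes.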
The function install_new_memslots() was replaced by
kvm_swap_active_memslots(). To avoid confusion, fix the comments that
still refer to the non-existent install_new_memslots().

Fixes: a54d806688fe ("KVM: Keep memslots in tree-based structures instead of array-based ones")
Signed-off-by: Jun Miao <jun.miao@intel.com>
---
 Documentation/virt/kvm/locking.rst |  2 +-
 include/linux/kvm_host.h           |  4 ++--
 virt/kvm/kvm_main.c                | 10 +++++-----
 3 files changed, 8 insertions(+), 8 deletions(-)