@@ -184,6 +184,15 @@ static bool make_all_cpus_request(struct kvm *kvm, unsigned int req)
return called;
}

+/*
+ * tlbs_dirty is used only for optimizing x86's shadow paging code with mmu
+ * notifiers in mind, see the note on sync_page(). Since it is always protected
+ * by mmu_lock there, the trick using smp_mb() and cmpxchg() is not necessary
+ * as long as kvm_flush_remote_tlbs() is called before releasing mmu_lock.
+ *
+ * Currently, all callers of kvm_flush_remote_tlbs() satisfy this assumption,
+ * but the code is kept as is in case someone changes the rule in the future.
+ */
void kvm_flush_remote_tlbs(struct kvm *kvm)
{
long dirty_count = kvm->tlbs_dirty;
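
For reference, the function continues with the smp_mb()/cmpxchg() scheme the new
comment refers to. The following is only a rough sketch of that surrounding code
(it is not part of this patch and may differ slightly from the actual tree):

	smp_mb();
	if (make_all_cpus_request(kvm, KVM_REQ_TLB_FLUSH))
		++kvm->stat.remote_tlb_flush;
	/* Reset tlbs_dirty only if nobody dirtied it again during the flush. */
	cmpxchg(&kvm->tlbs_dirty, dirty_count, 0);
}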
When this was introduced, kvm_flush_remote_tlbs() could be called without
holding mmu_lock.  It is now acknowledged that the function must be called
before releasing mmu_lock, and all callers have already been changed to do so.

This patch adds a comment explaining this.

Signed-off-by: Takuya Yoshikawa <yoshikawa_takuya_b1@lab.ntt.co.jp>
---
 virt/kvm/kvm_main.c | 9 +++++++++
 1 file changed, 9 insertions(+)
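
Schematically, the rule described above means every caller updates sptes and
flushes before dropping the lock.  A sketch of that pattern, with a hypothetical
caller name used purely for illustration (not code from this patch):

/* Hypothetical caller, for illustration only. */
static void example_zap_and_flush(struct kvm *kvm)
{
	spin_lock(&kvm->mmu_lock);

	/* ... zap or unsync sptes here; this may bump kvm->tlbs_dirty ... */

	/*
	 * Flush remote TLBs while still holding mmu_lock, as required by
	 * the rule described in the commit message.
	 */
	kvm_flush_remote_tlbs(kvm);

	spin_unlock(&kvm->mmu_lock);
}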