
KVM: Explain tlbs_dirty trick in kvm_flush_remote_tlbs()

Message ID 20140218172347.b38d66b3.yoshikawa_takuya_b1@lab.ntt.co.jp (mailing list archive)
State New, archived

Commit Message

Takuya Yoshikawa Feb. 18, 2014, 8:23 a.m. UTC
When the tlbs_dirty optimization was introduced, kvm_flush_remote_tlbs()
could be called without holding mmu_lock.  It is now acknowledged that the
function must be called before releasing mmu_lock, and all callers have
already been changed to do so.

This patch adds a comment explaining this.

Signed-off-by: Takuya Yoshikawa <yoshikawa_takuya_b1@lab.ntt.co.jp>
---
 virt/kvm/kvm_main.c |    9 +++++++++
 1 file changed, 9 insertions(+)
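
For context, the locking rule the commit message describes means that
kvm_flush_remote_tlbs() runs while mmu_lock is still held, so sync_page()
cannot bump tlbs_dirty concurrently.  A simplified sketch of such a caller,
loosely modeled on the mmu-notifier path in virt/kvm/kvm_main.c of this era
(not part of the patch; the function name here is only illustrative):

    static void example_invalidate_range(struct kvm *kvm,
                                         unsigned long start,
                                         unsigned long end)
    {
            int need_tlb_flush;

            spin_lock(&kvm->mmu_lock);
            /* Zap affected sptes; nonzero return means a flush is needed. */
            need_tlb_flush = kvm_unmap_hva_range(kvm, start, end);
            need_tlb_flush |= kvm->tlbs_dirty;
            if (need_tlb_flush)
                    kvm_flush_remote_tlbs(kvm); /* before dropping mmu_lock */
            spin_unlock(&kvm->mmu_lock);
    }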

Patch

diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index a9e999a..53521ea 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -184,6 +184,15 @@  static bool make_all_cpus_request(struct kvm *kvm, unsigned int req)
 	return called;
 }
 
+/*
+ * tlbs_dirty is used only to optimize x86's shadow paging code together with
+ * mmu notifiers; see the note on sync_page().  It is always updated under
+ * mmu_lock there, so as long as kvm_flush_remote_tlbs() is called before
+ * mmu_lock is released, the trick using smp_mb() and cmpxchg() is unnecessary.
+ *
+ * Currently, all callers of kvm_flush_remote_tlbs() satisfy this assumption,
+ * but the code is kept as is in case someone changes the rule in the future.
+ */
 void kvm_flush_remote_tlbs(struct kvm *kvm)
 {
 	long dirty_count = kvm->tlbs_dirty;
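
For reference, the remainder of the function (beyond the context lines shown
in the hunk) reads roughly as follows in kernels of this vintage; the
smp_mb()/cmpxchg() pair is the "trick" the new comment refers to.  It samples
tlbs_dirty before requesting the flush and clears it afterwards only if no new
dirty entries appeared in between:

    void kvm_flush_remote_tlbs(struct kvm *kvm)
    {
            long dirty_count = kvm->tlbs_dirty;

            /* Ensure the tlbs_dirty read above completes before the flush requests. */
            smp_mb();
            if (make_all_cpus_request(kvm, KVM_REQ_TLB_FLUSH))
                    ++kvm->stat.remote_tlb_flush;
            /*
             * Reset tlbs_dirty only if it still holds the value sampled
             * above; a concurrent increment from sync_page() (possible only
             * when the caller does not hold mmu_lock) is preserved for a
             * later flush instead of being lost.
             */
            cmpxchg(&kvm->tlbs_dirty, dirty_count, 0);
    }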