
[A] KVM: Simplify kvm->tlbs_dirty handling

Message ID 20140218172247.ca61f503.yoshikawa_takuya_b1@lab.ntt.co.jp (mailing list archive)
State New, archived

Commit Message

Takuya Yoshikawa Feb. 18, 2014, 8:22 a.m. UTC
When this was introduced, kvm_flush_remote_tlbs() could be called
without holding mmu_lock.  It is now acknowledged that the function
must be called before releasing mmu_lock, and all callers have already
been changed to do so.

There is no need to use smp_mb() and cmpxchg() any more.

Signed-off-by: Takuya Yoshikawa <yoshikawa_takuya_b1@lab.ntt.co.jp>
---
 arch/x86/kvm/paging_tmpl.h |    7 ++++---
 include/linux/kvm_host.h   |    2 +-
 virt/kvm/kvm_main.c        |   11 +++++++----
 3 files changed, 12 insertions(+), 8 deletions(-)
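
To make the locking assumption concrete, here is a minimal sketch of the
caller pattern the commit message relies on (the function name and the zap
step are illustrative, not taken from the patch): sptes are modified and the
flush is issued while mmu_lock is still held, so clearing tlbs_dirty in
kvm_flush_remote_tlbs() can no longer race with sync_page() setting it.

/*
 * Illustrative caller pattern only: because both this path and sync_page()
 * run under mmu_lock, kvm_flush_remote_tlbs() never clears tlbs_dirty
 * concurrently with sync_page() marking it dirty.
 */
static void example_zap_and_flush(struct kvm *kvm)
{
	spin_lock(&kvm->mmu_lock);

	/* ... modify sptes; paths like sync_page() may set kvm->tlbs_dirty ... */

	kvm_flush_remote_tlbs(kvm);	/* clears kvm->tlbs_dirty */
	spin_unlock(&kvm->mmu_lock);
}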

Comments

Xiao Guangrong Feb. 18, 2014, 9:43 a.m. UTC | #1
On 02/18/2014 04:22 PM, Takuya Yoshikawa wrote:
> When this was introduced, kvm_flush_remote_tlbs() could be called
> without holding mmu_lock.  It is now acknowledged that the function
> must be called before releasing mmu_lock, and all callers have already
> been changed to do so.
> 

I have already posted a patch to move the TLB flush out of mmu-lock
when doing dirty tracking:
KVM: MMU: flush tlb out of mmu lock when write-protect the sptes

Actually, some patches (patch 1 - patch 4) in that patchset can be
picked up separately, and I should apologize that the patchset has
not been updated for a long time, since I was busy with other development.

And moving the TLB flush out of mmu-lock should continue in the
future, so I think the B patch is more acceptable.


Takuya Yoshikawa Feb. 19, 2014, 12:40 a.m. UTC | #2
(2014/02/18 18:43), Xiao Guangrong wrote:
> On 02/18/2014 04:22 PM, Takuya Yoshikawa wrote:
>> When this was introduced, kvm_flush_remote_tlbs() could be called
>> without holding mmu_lock.  It is now acknowledged that the function
>> must be called before releasing mmu_lock, and all callers have already
>> been changed to do so.
>>
>
> I have already posted a patch to move the TLB flush out of mmu-lock
> when doing dirty tracking:
> KVM: MMU: flush tlb out of mmu lock when write-protect the sptes

Yes, I had your patch in mind, so I made patch B.

>
> Actually, some patches (patch 1 - patch 4) in that patchset can be
> picked up separately, and I should apologize that the patchset has
> not been updated for a long time, since I was busy with other development.

Looking forward to seeing that work again!

>
> And moving the TLB flush out of mmu-lock should continue in the
> future, so I think the B patch is more acceptable.
>

Maybe.

Some time ago, someone working on non-x86 code asked us why
smp_mb() and cmpxchg() were used there.  So a brief comment
pointing readers to the note on sync_page() would be helpful.

	Takuya
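
For reference, the pre-patch body of kvm_flush_remote_tlbs() (the lines
removed in the diff below), with comments added to sketch why smp_mb() and
cmpxchg() mattered as long as the flush could run without mmu_lock:

void kvm_flush_remote_tlbs(struct kvm *kvm)
{
	long dirty_count = kvm->tlbs_dirty;

	/* Make sure tlbs_dirty is sampled before the flush request is sent. */
	smp_mb();
	if (make_all_cpus_request(kvm, KVM_REQ_TLB_FLUSH))
		++kvm->stat.remote_tlb_flush;
	/*
	 * Clear tlbs_dirty only if it still equals the sampled value; if
	 * sync_page() bumped it while the flush was in flight, keep the new
	 * count so that a later flush covers it.
	 */
	cmpxchg(&kvm->tlbs_dirty, dirty_count, 0);
}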


Patch

diff --git a/arch/x86/kvm/paging_tmpl.h b/arch/x86/kvm/paging_tmpl.h
index cba218a..b1e6c1b 100644
--- a/arch/x86/kvm/paging_tmpl.h
+++ b/arch/x86/kvm/paging_tmpl.h
@@ -913,7 +913,8 @@  static gpa_t FNAME(gva_to_gpa_nested)(struct kvm_vcpu *vcpu, gva_t vaddr,
  *   and kvm_mmu_notifier_invalidate_range_start detect the mapping page isn't
  *   used by guest then tlbs are not flushed, so guest is allowed to access the
  *   freed pages.
- *   And we increase kvm->tlbs_dirty to delay tlbs flush in this case.
+ *   We set tlbs_dirty to let the notifier know this change and delay the flush
+ *   until such a case actually happens.
  */
 static int FNAME(sync_page)(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp)
 {
@@ -942,7 +943,7 @@  static int FNAME(sync_page)(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp)
 			return -EINVAL;
 
 		if (FNAME(prefetch_invalid_gpte)(vcpu, sp, &sp->spt[i], gpte)) {
-			vcpu->kvm->tlbs_dirty++;
+			vcpu->kvm->tlbs_dirty = true;
 			continue;
 		}
 
@@ -957,7 +958,7 @@  static int FNAME(sync_page)(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp)
 
 		if (gfn != sp->gfns[i]) {
 			drop_spte(vcpu->kvm, &sp->spt[i]);
-			vcpu->kvm->tlbs_dirty++;
+			vcpu->kvm->tlbs_dirty = true;
 			continue;
 		}
 
diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index f5937b8..ed1cc89 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -401,7 +401,7 @@  struct kvm {
 	unsigned long mmu_notifier_seq;
 	long mmu_notifier_count;
 #endif
-	long tlbs_dirty;
+	bool tlbs_dirty;
 	struct list_head devices;
 };
 
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index a9e999a..51744da 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -186,12 +186,15 @@  static bool make_all_cpus_request(struct kvm *kvm, unsigned int req)
 
 void kvm_flush_remote_tlbs(struct kvm *kvm)
 {
-	long dirty_count = kvm->tlbs_dirty;
-
-	smp_mb();
 	if (make_all_cpus_request(kvm, KVM_REQ_TLB_FLUSH))
 		++kvm->stat.remote_tlb_flush;
-	cmpxchg(&kvm->tlbs_dirty, dirty_count, 0);
+	/*
+	 * tlbs_dirty is used only for optimizing x86's shadow paging code with
+	 * mmu notifiers in mind, see the note on sync_page().  Since it is
+	 * always protected with mmu_lock there, should kvm_flush_remote_tlbs()
+	 * be called before releasing mmu_lock, this is safe.
+	 */
+	kvm->tlbs_dirty = false;
 }
 EXPORT_SYMBOL_GPL(kvm_flush_remote_tlbs);
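
For context, the mmu notifier consumer of tlbs_dirty looks roughly like this
(a sketch modeled on kvm_mmu_notifier_invalidate_range_start() of this
period, with SRCU and other bookkeeping omitted).  Both the check and the
flush run under mmu_lock, which is why a plain bool and a plain store are
enough once every flush caller holds the lock:

static void kvm_mmu_notifier_invalidate_range_start(struct mmu_notifier *mn,
						    struct mm_struct *mm,
						    unsigned long start,
						    unsigned long end)
{
	struct kvm *kvm = mmu_notifier_to_kvm(mn);
	int need_tlb_flush;

	spin_lock(&kvm->mmu_lock);
	kvm->mmu_notifier_count++;
	need_tlb_flush = kvm_unmap_hva_range(kvm, start, end);
	need_tlb_flush |= kvm->tlbs_dirty;
	/* The TLBs must be flushed before the unmapped pages can be freed. */
	if (need_tlb_flush)
		kvm_flush_remote_tlbs(kvm);	/* also clears tlbs_dirty */
	spin_unlock(&kvm->mmu_lock);
}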