
[v2,1/7] KVM: MMU: optimize pte write path if don't have protected sp

Message ID 4DCFEF3B.5060806@cn.fujitsu.com (mailing list archive)
State New, archived

Commit Message

Xiao Guangrong May 15, 2011, 3:20 p.m. UTC
Simply return from the kvm_mmu_pte_write path if no shadow page is
write-protected; then we can avoid walking all shadow pages and taking
the mmu-lock.

Signed-off-by: Xiao Guangrong <xiaoguangrong@cn.fujitsu.com>
---
 arch/x86/include/asm/kvm_host.h |    1 +
 arch/x86/kvm/mmu.c              |    9 +++++++++
 2 files changed, 10 insertions(+), 0 deletions(-)
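
The shape of the change is a guarded fast path: account_shadowed() and
unaccount_shadowed() keep a count of write-protected (indirect) shadow pages
under mmu_lock, and kvm_mmu_pte_write() reads that counter without the lock,
returning immediately when it is zero. Below is a minimal, self-contained
userspace sketch of the same pattern; the names (demo_write, tracked_pages,
demo_lock) are hypothetical, and C11 atomics stand in for the kernel's
ACCESS_ONCE(), so this only illustrates the idea and is not KVM code.

#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

static pthread_mutex_t demo_lock = PTHREAD_MUTEX_INITIALIZER;
static atomic_uint tracked_pages = 0;   /* analogue of indirect_shadow_pages */

/* Slow-path bookkeeping, called with demo_lock held (like account_shadowed()). */
static void demo_account(void)
{
        atomic_fetch_add_explicit(&tracked_pages, 1, memory_order_relaxed);
}

/* Called with demo_lock held (like unaccount_shadowed()). */
static void demo_unaccount(void)
{
        atomic_fetch_sub_explicit(&tracked_pages, 1, memory_order_relaxed);
}

/* Hot path, analogue of kvm_mmu_pte_write(). */
static void demo_write(unsigned long gpa)
{
        /*
         * Lockless read (the userspace stand-in for ACCESS_ONCE()):
         * if nothing is tracked, skip the lock and the walk entirely.
         */
        if (!atomic_load_explicit(&tracked_pages, memory_order_relaxed))
                return;

        pthread_mutex_lock(&demo_lock);
        printf("slow path: walking tracked pages for gpa %#lx\n", gpa);
        pthread_mutex_unlock(&demo_lock);
}

int main(void)
{
        demo_write(0x1000);             /* counter is zero: fast path, no lock taken */

        pthread_mutex_lock(&demo_lock);
        demo_account();                 /* a page becomes write-protected */
        pthread_mutex_unlock(&demo_lock);

        demo_write(0x1000);             /* counter is non-zero: slow path */

        pthread_mutex_lock(&demo_lock);
        demo_unaccount();
        pthread_mutex_unlock(&demo_lock);
        return 0;
}

The counter is only ever modified with the lock held; the fast path performs a
single plain read of it, and a reader that races with an update at worst falls
through to the locked slow path.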

Comments

Avi Kivity May 16, 2011, 11:25 a.m. UTC | #1
On 05/15/2011 06:20 PM, Xiao Guangrong wrote:
> Simply return from kvm_mmu_pte_write path if no shadow page is
> write-protected, then we can avoid to walk all shadow pages and hold
> mmu-lock

Patchset looks like a very good cleanup (plus the nice optimization in 
patch 1).
Marcelo Tosatti May 18, 2011, 1:12 p.m. UTC | #2
On Mon, May 16, 2011 at 02:25:02PM +0300, Avi Kivity wrote:
> On 05/15/2011 06:20 PM, Xiao Guangrong wrote:
> >Simply return from kvm_mmu_pte_write path if no shadow page is
> >write-protected, then we can avoid to walk all shadow pages and hold
> >mmu-lock
> 
> Patchset looks like a very good cleanup (plus the nice optimization
> in patch 1).

What case is patch 1 optimizing for?

Avi Kivity May 18, 2011, 1:20 p.m. UTC | #3
On 05/18/2011 04:12 PM, Marcelo Tosatti wrote:
> On Mon, May 16, 2011 at 02:25:02PM +0300, Avi Kivity wrote:
> >  On 05/15/2011 06:20 PM, Xiao Guangrong wrote:
> >  >Simply return from kvm_mmu_pte_write path if no shadow page is
> >  >write-protected, then we can avoid to walk all shadow pages and hold
> >  >mmu-lock
> >
> >  Patchset looks like a very good cleanup (plus the nice optimization
> >  in patch 1).
>
> What case is patch 1 optimizing for?
>

Say, kvmclock updates.
Marcelo Tosatti May 20, 2011, 3:49 p.m. UTC | #4
On Sun, May 15, 2011 at 11:20:27PM +0800, Xiao Guangrong wrote:
> Simply return from kvm_mmu_pte_write path if no shadow page is
> write-protected, then we can avoid to walk all shadow pages and hold
> mmu-lock
> 
> Signed-off-by: Xiao Guangrong <xiaoguangrong@cn.fujitsu.com>

Applied, thanks.


Patch

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index d2ac8e2..152601a 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -441,6 +441,7 @@  struct kvm_arch {
 	unsigned int n_used_mmu_pages;
 	unsigned int n_requested_mmu_pages;
 	unsigned int n_max_mmu_pages;
+	unsigned int indirect_shadow_pages;
 	atomic_t invlpg_counter;
 	struct hlist_head mmu_page_hash[KVM_NUM_MMU_PAGES];
 	/*
diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index 2841805..ad520d4 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -498,6 +498,7 @@  static void account_shadowed(struct kvm *kvm, gfn_t gfn)
 		linfo = lpage_info_slot(gfn, slot, i);
 		linfo->write_count += 1;
 	}
+	kvm->arch.indirect_shadow_pages++;
 }
 
 static void unaccount_shadowed(struct kvm *kvm, gfn_t gfn)
@@ -513,6 +514,7 @@  static void unaccount_shadowed(struct kvm *kvm, gfn_t gfn)
 		linfo->write_count -= 1;
 		WARN_ON(linfo->write_count < 0);
 	}
+	kvm->arch.indirect_shadow_pages--;
 }
 
 static int has_wrprotected_page(struct kvm *kvm,
@@ -3233,6 +3235,13 @@  void kvm_mmu_pte_write(struct kvm_vcpu *vcpu, gpa_t gpa,
 	int level, npte, invlpg_counter, r, flooded = 0;
 	bool remote_flush, local_flush, zap_page;
 
+	/*
+	 * If we don't have indirect shadow pages, it means no page is
+	 * write-protected, so we can exit simply.
+	 */
+	if (!ACCESS_ONCE(vcpu->kvm->arch.indirect_shadow_pages))
+		return;
+
 	zap_page = remote_flush = local_flush = false;
 	offset = offset_in_page(gpa);