| Message ID | 4DCFEF3B.5060806@cn.fujitsu.com (mailing list archive) |
|---|---|
| State | New, archived |
On 05/15/2011 06:20 PM, Xiao Guangrong wrote:
> Simply return from the kvm_mmu_pte_write path if no shadow page is
> write-protected; then we can avoid walking all shadow pages and
> holding mmu_lock.

Patchset looks like a very good cleanup (plus the nice optimization in
patch 1).
On Mon, May 16, 2011 at 02:25:02PM +0300, Avi Kivity wrote:
> On 05/15/2011 06:20 PM, Xiao Guangrong wrote:
> > Simply return from the kvm_mmu_pte_write path if no shadow page is
> > write-protected; then we can avoid walking all shadow pages and
> > holding mmu_lock.
>
> Patchset looks like a very good cleanup (plus the nice optimization
> in patch 1).

What case is patch 1 optimizing for?
On 05/18/2011 04:12 PM, Marcelo Tosatti wrote:
> On Mon, May 16, 2011 at 02:25:02PM +0300, Avi Kivity wrote:
> > On 05/15/2011 06:20 PM, Xiao Guangrong wrote:
> > > Simply return from the kvm_mmu_pte_write path if no shadow page is
> > > write-protected; then we can avoid walking all shadow pages and
> > > holding mmu_lock.
> >
> > Patchset looks like a very good cleanup (plus the nice optimization
> > in patch 1).
>
> What case is patch 1 optimizing for?

Say, kvmclock updates.
On Sun, May 15, 2011 at 11:20:27PM +0800, Xiao Guangrong wrote:
> Simply return from the kvm_mmu_pte_write path if no shadow page is
> write-protected; then we can avoid walking all shadow pages and
> holding mmu_lock.
>
> Signed-off-by: Xiao Guangrong <xiaoguangrong@cn.fujitsu.com>

Applied, thanks.
diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index d2ac8e2..152601a 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -441,6 +441,7 @@ struct kvm_arch {
 	unsigned int n_used_mmu_pages;
 	unsigned int n_requested_mmu_pages;
 	unsigned int n_max_mmu_pages;
+	unsigned int indirect_shadow_pages;
 	atomic_t invlpg_counter;
 	struct hlist_head mmu_page_hash[KVM_NUM_MMU_PAGES];
 	/*
diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index 2841805..ad520d4 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -498,6 +498,7 @@ static void account_shadowed(struct kvm *kvm, gfn_t gfn)
 		linfo = lpage_info_slot(gfn, slot, i);
 		linfo->write_count += 1;
 	}
+	kvm->arch.indirect_shadow_pages++;
 }
 
 static void unaccount_shadowed(struct kvm *kvm, gfn_t gfn)
@@ -513,6 +514,7 @@ static void unaccount_shadowed(struct kvm *kvm, gfn_t gfn)
 		linfo->write_count -= 1;
 		WARN_ON(linfo->write_count < 0);
 	}
+	kvm->arch.indirect_shadow_pages--;
 }
 
 static int has_wrprotected_page(struct kvm *kvm,
@@ -3233,6 +3235,13 @@ void kvm_mmu_pte_write(struct kvm_vcpu *vcpu, gpa_t gpa,
 	int level, npte, invlpg_counter, r, flooded = 0;
 	bool remote_flush, local_flush, zap_page;
 
+	/*
+	 * If we don't have indirect shadow pages, it means no page is
+	 * write-protected, so we can exit simply.
+	 */
+	if (!ACCESS_ONCE(vcpu->kvm->arch.indirect_shadow_pages))
+		return;
+
 	zap_page = remote_flush = local_flush = false;
 	offset = offset_in_page(gpa);
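The fast exit hinges on one pattern: indirect_shadow_pages is only ever
modified while mmu_lock is held, while kvm_mmu_pte_write reads it
locklessly via ACCESS_ONCE() and returns early when it is zero. Below is
a minimal user-space sketch of that pattern, assuming pthreads; the
function names mirror the patch but everything here is a stand-in
illustration, not KVM code:

#include <pthread.h>

static pthread_mutex_t mmu_lock = PTHREAD_MUTEX_INITIALIZER;
static unsigned int indirect_shadow_pages;

/* Writers: only touch the counter while holding mmu_lock. */
static void account_shadowed(void)
{
	pthread_mutex_lock(&mmu_lock);
	indirect_shadow_pages++;
	pthread_mutex_unlock(&mmu_lock);
}

static void unaccount_shadowed(void)
{
	pthread_mutex_lock(&mmu_lock);
	indirect_shadow_pages--;
	pthread_mutex_unlock(&mmu_lock);
}

/* Hot path: a lockless read lets the common case skip the lock. */
static void pte_write(void)
{
	/* Volatile cast mimics the kernel's ACCESS_ONCE(): one real load. */
	if (!*(volatile unsigned int *)&indirect_shadow_pages)
		return;	/* no write-protected pages, nothing to do */

	pthread_mutex_lock(&mmu_lock);
	/* ... walk shadow pages and update the written sptes ... */
	pthread_mutex_unlock(&mmu_lock);
}

int main(void)
{
	pte_write();		/* counter is zero: returns immediately */
	account_shadowed();
	pte_write();		/* counter nonzero: takes mmu_lock */
	unaccount_shadowed();
	return 0;
}

ACCESS_ONCE() forces a single non-cached read on the lock-free path,
which the volatile cast imitates; a reader that races with an update and
sees a stale nonzero value merely falls through to the locked slow path.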
Simply return from the kvm_mmu_pte_write path if no shadow page is
write-protected; then we can avoid walking all shadow pages and
holding mmu_lock.

Signed-off-by: Xiao Guangrong <xiaoguangrong@cn.fujitsu.com>
---
 arch/x86/include/asm/kvm_host.h |    1 +
 arch/x86/kvm/mmu.c              |    9 +++++++++
 2 files changed, 10 insertions(+), 0 deletions(-)