
[1/7] KVM: MMU: optimize pte write path if don't have protected sp

Message ID 4DCEF5B1.3050706@cn.fujitsu.com
State New, archived

Commit Message

Xiao Guangrong May 14, 2011, 9:35 p.m. UTC
Simply return from the kvm_mmu_pte_write path if no shadow page is
write-protected, so we can avoid walking all the shadow pages and
taking mmu_lock.

Signed-off-by: Xiao Guangrong <xiaoguangrong@cn.fujitsu.com>
---
 arch/x86/include/asm/kvm_host.h |    1 +
 arch/x86/kvm/mmu.c              |    9 +++++++++
 2 files changed, 10 insertions(+), 0 deletions(-)
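
In outline (a condensed sketch of the full patch shown at the bottom of this page, with the names taken from the patch and the surrounding code elided): account_shadowed()/unaccount_shadowed() already run under mmu_lock whenever a gfn gains or loses a write-protected shadow page, so they maintain the new counter, and kvm_mmu_pte_write() tests it lock-free before doing any work:

    /* roughly: the number of indirect (write-protected) shadow pages;
     * updated only under mmu_lock */
    static void account_shadowed(struct kvm *kvm, gfn_t gfn)
    {
            /* ... existing write_count bookkeeping ... */
            atomic_inc(&kvm->arch.indirect_shadow_pages);
    }

    void kvm_mmu_pte_write(struct kvm_vcpu *vcpu, gpa_t gpa, ...)
    {
            /* fast path: nothing is write-protected, so this guest PTE
             * write cannot affect any shadow page table */
            if (!atomic_read(&vcpu->kvm->arch.indirect_shadow_pages))
                    return;
            /* ... slow path: take mmu_lock and walk the shadow pages ... */
    }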

Comments

Avi Kivity May 15, 2011, 8:20 a.m. UTC | #1
On 05/15/2011 12:35 AM, Xiao Guangrong wrote:
> Simply return from the kvm_mmu_pte_write path if no shadow page is
> write-protected, so we can avoid walking all the shadow pages and
> taking mmu_lock.
> diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
> index 2841805..971e2d2 100644
> --- a/arch/x86/kvm/mmu.c
> +++ b/arch/x86/kvm/mmu.c
> @@ -498,6 +498,7 @@ static void account_shadowed(struct kvm *kvm, gfn_t gfn)
>   		linfo = lpage_info_slot(gfn, slot, i);
>   		linfo->write_count += 1;
>   	}
> +	atomic_inc(&kvm->arch.indirect_shadow_pages);
>   }
>
>   static void unaccount_shadowed(struct kvm *kvm, gfn_t gfn)
> @@ -513,6 +514,7 @@ static void unaccount_shadowed(struct kvm *kvm, gfn_t gfn)
>   		linfo->write_count -= 1;
>   		WARN_ON(linfo->write_count < 0);
>   	}
> +	atomic_dec(&kvm->arch.indirect_shadow_pages);
>   }

These atomic ops are always called from within the spinlock, so we don't 
need an atomic_t here.

Sorry, I should have noticed this on the first version.
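
What Avi is suggesting here (a hypothetical variant for illustration, not code posted in this thread) is to make the counter a plain integer and rely on mmu_lock to serialize the updates, roughly:

    /* in struct kvm_arch, instead of an atomic_t: */
    unsigned int indirect_shadow_pages;

    static void account_shadowed(struct kvm *kvm, gfn_t gfn)
    {
            /* ... */
            /* mmu_lock is held here, so a plain increment is sufficient */
            kvm->arch.indirect_shadow_pages++;
    }

The open question, raised in the next message, is the read of the counter in kvm_mmu_pte_write(), which happens without mmu_lock held.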
Xiao Guangrong May 15, 2011, 8:33 a.m. UTC | #2
On 05/15/2011 04:20 PM, Avi Kivity wrote:
> On 05/15/2011 12:35 AM, Xiao Guangrong wrote:
>> Simply return from the kvm_mmu_pte_write path if no shadow page is
>> write-protected, so we can avoid walking all the shadow pages and
>> taking mmu_lock.
>> diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
>> index 2841805..971e2d2 100644
>> --- a/arch/x86/kvm/mmu.c
>> +++ b/arch/x86/kvm/mmu.c
>> @@ -498,6 +498,7 @@ static void account_shadowed(struct kvm *kvm, gfn_t gfn)
>>           linfo = lpage_info_slot(gfn, slot, i);
>>           linfo->write_count += 1;
>>       }
>> +    atomic_inc(&kvm->arch.indirect_shadow_pages);
>>   }
>>
>>   static void unaccount_shadowed(struct kvm *kvm, gfn_t gfn)
>> @@ -513,6 +514,7 @@ static void unaccount_shadowed(struct kvm *kvm, gfn_t gfn)
>>           linfo->write_count -= 1;
>>           WARN_ON(linfo->write_count < 0);
>>       }
>> +    atomic_dec(&kvm->arch.indirect_shadow_pages);
>>   }
> 
> These atomic ops are always called from within the spinlock, so we don't need an atomic_t here.
> 
> Sorry, I should have noticed this on the first version.

We read indirect_shadow_pages atomically on the pte write path; that read is allowed outside of mmu_lock.
Avi Kivity May 15, 2011, 8:38 a.m. UTC | #3
On 05/15/2011 11:33 AM, Xiao Guangrong wrote:
> >
> >  These atomic ops are always called from within the spinlock, so we don't need an atomic_t here.
> >
> >  Sorry, I should have noticed this on the first version.
>
> We read indirect_shadow_pages atomically on the pte write path; that read is allowed outside of mmu_lock.

Reading is fine:

   #define atomic_read(v)    (*(volatile int *)&(v)->counter)
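
That is, atomic_read() is just a volatile load, so the unlocked check in the write path costs nothing extra over a plain read. If the field were a plain int as suggested above, the same lock-free check could be written with ACCESS_ONCE() (the annotation available in kernels of this vintage; READ_ONCE() is the later spelling). A purely hypothetical sketch, not part of the patch:

    /* lock-free fast-path check against a plain (non-atomic_t) counter */
    if (!ACCESS_ONCE(vcpu->kvm->arch.indirect_shadow_pages))
            return;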

Patch

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index d2ac8e2..d2e5fb8 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -442,6 +442,7 @@ struct kvm_arch {
 	unsigned int n_requested_mmu_pages;
 	unsigned int n_max_mmu_pages;
 	atomic_t invlpg_counter;
+	atomic_t indirect_shadow_pages;
 	struct hlist_head mmu_page_hash[KVM_NUM_MMU_PAGES];
 	/*
 	 * Hash table of struct kvm_mmu_page.
diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index 2841805..971e2d2 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -498,6 +498,7 @@ static void account_shadowed(struct kvm *kvm, gfn_t gfn)
 		linfo = lpage_info_slot(gfn, slot, i);
 		linfo->write_count += 1;
 	}
+	atomic_inc(&kvm->arch.indirect_shadow_pages);
 }
 
 static void unaccount_shadowed(struct kvm *kvm, gfn_t gfn)
@@ -513,6 +514,7 @@ static void unaccount_shadowed(struct kvm *kvm, gfn_t gfn)
 		linfo->write_count -= 1;
 		WARN_ON(linfo->write_count < 0);
 	}
+	atomic_dec(&kvm->arch.indirect_shadow_pages);
 }
 
 static int has_wrprotected_page(struct kvm *kvm,
@@ -3233,6 +3235,13 @@ void kvm_mmu_pte_write(struct kvm_vcpu *vcpu, gpa_t gpa,
 	int level, npte, invlpg_counter, r, flooded = 0;
 	bool remote_flush, local_flush, zap_page;
 
+	/*
+	 * If we don't have indirect shadow pages, it means no page is
+	 * write-protected, so we can exit simply.
+	 */
+	if (!atomic_read(&vcpu->kvm->arch.indirect_shadow_pages))
+		return;
+
 	zap_page = remote_flush = local_flush = false;
 	offset = offset_in_page(gpa);