
[v2] kvm: set page dirty only if page has been writable

Message ID 1459370289-15994-1-git-send-email-yuzhao@google.com (mailing list archive)
State New, archived

Commit Message

Yu Zhao March 30, 2016, 8:38 p.m. UTC
In the absence of a shadow dirty mask, there is no need to set the page
dirty if the page has never been writable. This is a tiny optimization,
but good to have for people who care about dirty page tracking.

Signed-off-by: Yu Zhao <yuzhao@google.com>
---
 arch/x86/kvm/mmu.c | 12 ++++++++++--
 1 file changed, 10 insertions(+), 2 deletions(-)
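
The rationale can be illustrated with a minimal standalone sketch (not kernel code): when the hardware provides no shadow dirty bit, a page can only have been dirtied through an spte if that spte was ever writable, so the writable bit acts as a conservative proxy. The helper spte_was_dirty() and the main() driver below are hypothetical and only mirror the condition the patch adds to mmu_spte_clear_track_bits(); the bit definitions are assumed to match the kernel headers.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define PT_WRITABLE_MASK	(1ULL << 1)	/* writable bit of an x86 PTE */

/* 0 on hardware without a shadow dirty bit; nonzero otherwise. */
static uint64_t shadow_dirty_mask;

/* Hypothetical helper mirroring the check added to mmu_spte_clear_track_bits(). */
static bool spte_was_dirty(uint64_t old_spte)
{
	/*
	 * With a hardware dirty bit, trust it.  Without one, the page can
	 * only have been dirtied through this spte if the spte was ever
	 * writable, so the writable bit is a conservative stand-in.
	 */
	return old_spte & (shadow_dirty_mask ? shadow_dirty_mask
					     : PT_WRITABLE_MASK);
}

int main(void)
{
	shadow_dirty_mask = 0;			/* no hardware dirty bit */
	printf("%d\n", spte_was_dirty(0x5));	/* read-only spte -> 0   */
	printf("%d\n", spte_was_dirty(0x7));	/* writable spte  -> 1   */
	return 0;
}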

Comments

Paolo Bonzini March 30, 2016, 9:08 p.m. UTC | #1
On 30/03/2016 22:38, Yu Zhao wrote:
> In the absence of a shadow dirty mask, there is no need to set the page
> dirty if the page has never been writable. This is a tiny optimization,
> but good to have for people who care about dirty page tracking.
> 
> Signed-off-by: Yu Zhao <yuzhao@google.com>
> ---
>  arch/x86/kvm/mmu.c | 12 ++++++++++--
>  1 file changed, 10 insertions(+), 2 deletions(-)
> 
> diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
> index 70e95d0..1ff4dbb 100644
> --- a/arch/x86/kvm/mmu.c
> +++ b/arch/x86/kvm/mmu.c
> @@ -557,8 +557,15 @@ static bool mmu_spte_update(u64 *sptep, u64 new_spte)
>  	      !is_writable_pte(new_spte))
>  		ret = true;
>  
> -	if (!shadow_accessed_mask)
> +	if (!shadow_accessed_mask) {
> +		/*
> +		 * We don't set page dirty when dropping non-writable spte.
> +		 * So do it now if the new spte is becoming non-writable.
> +		 */
> +		if (ret)
> +			kvm_set_pfn_dirty(spte_to_pfn(old_spte));
>  		return ret;
> +	}
>  
>  	/*
>  	 * Flush TLB when accessed/dirty bits are changed in the page tables,
> @@ -605,7 +612,8 @@ static int mmu_spte_clear_track_bits(u64 *sptep)
>  
>  	if (!shadow_accessed_mask || old_spte & shadow_accessed_mask)
>  		kvm_set_pfn_accessed(pfn);
> -	if (!shadow_dirty_mask || (old_spte & shadow_dirty_mask))
> +	if (old_spte & (shadow_dirty_mask ? shadow_dirty_mask :
> +					    PT_WRITABLE_MASK))
>  		kvm_set_pfn_dirty(pfn);
>  	return 1;
>  }
> 

Looks good, thanks!

Paolo

Patch

diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index 70e95d0..1ff4dbb 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -557,8 +557,15 @@ static bool mmu_spte_update(u64 *sptep, u64 new_spte)
 	      !is_writable_pte(new_spte))
 		ret = true;
 
-	if (!shadow_accessed_mask)
+	if (!shadow_accessed_mask) {
+		/*
+		 * We don't set page dirty when dropping non-writable spte.
+		 * So do it now if the new spte is becoming non-writable.
+		 */
+		if (ret)
+			kvm_set_pfn_dirty(spte_to_pfn(old_spte));
 		return ret;
+	}
 
 	/*
 	 * Flush TLB when accessed/dirty bits are changed in the page tables,
@@ -605,7 +612,8 @@ static int mmu_spte_clear_track_bits(u64 *sptep)
 
 	if (!shadow_accessed_mask || old_spte & shadow_accessed_mask)
 		kvm_set_pfn_accessed(pfn);
-	if (!shadow_dirty_mask || (old_spte & shadow_dirty_mask))
+	if (old_spte & (shadow_dirty_mask ? shadow_dirty_mask :
+					    PT_WRITABLE_MASK))
 		kvm_set_pfn_dirty(pfn);
 	return 1;
 }