KVM: x86: MMU: Initialize force_pt_level before calling mapping_level()

Message ID 20151019151329.cf4b7cbbfe1fdb15bb756d93@lab.ntt.co.jp (mailing list archive)
State New, archived

Commit Message

Takuya Yoshikawa Oct. 19, 2015, 6:13 a.m. UTC
Commit fd1369021878 ("KVM: x86: MMU: Move mapping_level_dirty_bitmap()
call in mapping_level()") forgot to initialize force_pt_level to false
in FNAME(page_fault)() before calling mapping_level(), as
nonpaging_map() does.  This can sometimes result in page table level
mapping being forced unnecessarily.

Fix this, and move the first *force_pt_level check in mapping_level()
before the kvm_vcpu_gfn_to_memslot() call to make it clearer that
the variable must be initialized before mapping_level() gets called.

This change also avoids calling kvm_vcpu_gfn_to_memslot() when the
!check_hugepage_cache_consistency() check in tdp_page_fault() forces
page table level mapping.

Signed-off-by: Takuya Yoshikawa <yoshikawa_takuya_b1@lab.ntt.co.jp>
---
 arch/x86/kvm/mmu.c         | 7 ++++---
 arch/x86/kvm/paging_tmpl.h | 2 +-
 2 files changed, 5 insertions(+), 4 deletions(-)

Comments

Paolo Bonzini Oct. 19, 2015, 9:37 a.m. UTC | #1
On 19/10/2015 08:13, Takuya Yoshikawa wrote:
> Commit fd1369021878 ("KVM: x86: MMU: Move mapping_level_dirty_bitmap()
> call in mapping_level()") forgot to initialize force_pt_level to false
> in FNAME(page_fault)() before calling mapping_level(), as
> nonpaging_map() does.  This can sometimes result in page table level
> mapping being forced unnecessarily.
> 
> Fix this, and move the first *force_pt_level check in mapping_level()
> before the kvm_vcpu_gfn_to_memslot() call to make it clearer that
> the variable must be initialized before mapping_level() gets called.
> 
> This change also avoids calling kvm_vcpu_gfn_to_memslot() when the
> !check_hugepage_cache_consistency() check in tdp_page_fault() forces
> page table level mapping.
> 
> Signed-off-by: Takuya Yoshikawa <yoshikawa_takuya_b1@lab.ntt.co.jp>
> ---
>  arch/x86/kvm/mmu.c         | 7 ++++---
>  arch/x86/kvm/paging_tmpl.h | 2 +-
>  2 files changed, 5 insertions(+), 4 deletions(-)
> 
> diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
> index dd2a7c6..7d85bca 100644
> --- a/arch/x86/kvm/mmu.c
> +++ b/arch/x86/kvm/mmu.c
> @@ -886,10 +886,11 @@ static int mapping_level(struct kvm_vcpu *vcpu, gfn_t large_gfn,
>  	int host_level, level, max_level;
>  	struct kvm_memory_slot *slot;
>  
> -	slot = kvm_vcpu_gfn_to_memslot(vcpu, large_gfn);
> +	if (unlikely(*force_pt_level))
> +		return PT_PAGE_TABLE_LEVEL;
>  
> -	if (likely(!*force_pt_level))
> -		*force_pt_level = !memslot_valid_for_gpte(slot, true);
> +	slot = kvm_vcpu_gfn_to_memslot(vcpu, large_gfn);
> +	*force_pt_level = !memslot_valid_for_gpte(slot, true);
>  	if (unlikely(*force_pt_level))
>  		return PT_PAGE_TABLE_LEVEL;
>  
> diff --git a/arch/x86/kvm/paging_tmpl.h b/arch/x86/kvm/paging_tmpl.h
> index bf39d0f..b41faa9 100644
> --- a/arch/x86/kvm/paging_tmpl.h
> +++ b/arch/x86/kvm/paging_tmpl.h
> @@ -698,7 +698,7 @@ static int FNAME(page_fault)(struct kvm_vcpu *vcpu, gva_t addr, u32 error_code,
>  	int r;
>  	pfn_t pfn;
>  	int level = PT_PAGE_TABLE_LEVEL;
> -	bool force_pt_level;
> +	bool force_pt_level = false;
>  	unsigned long mmu_seq;
>  	bool map_writable, is_self_change_mapping;
>  
> 

Looks good, thanks.

Paolo
Patch

diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index dd2a7c6..7d85bca 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -886,10 +886,11 @@ static int mapping_level(struct kvm_vcpu *vcpu, gfn_t large_gfn,
 	int host_level, level, max_level;
 	struct kvm_memory_slot *slot;
 
-	slot = kvm_vcpu_gfn_to_memslot(vcpu, large_gfn);
+	if (unlikely(*force_pt_level))
+		return PT_PAGE_TABLE_LEVEL;
 
-	if (likely(!*force_pt_level))
-		*force_pt_level = !memslot_valid_for_gpte(slot, true);
+	slot = kvm_vcpu_gfn_to_memslot(vcpu, large_gfn);
+	*force_pt_level = !memslot_valid_for_gpte(slot, true);
 	if (unlikely(*force_pt_level))
 		return PT_PAGE_TABLE_LEVEL;
 
diff --git a/arch/x86/kvm/paging_tmpl.h b/arch/x86/kvm/paging_tmpl.h
index bf39d0f..b41faa9 100644
--- a/arch/x86/kvm/paging_tmpl.h
+++ b/arch/x86/kvm/paging_tmpl.h
@@ -698,7 +698,7 @@ static int FNAME(page_fault)(struct kvm_vcpu *vcpu, gva_t addr, u32 error_code,
 	int r;
 	pfn_t pfn;
 	int level = PT_PAGE_TABLE_LEVEL;
-	bool force_pt_level;
+	bool force_pt_level = false;
 	unsigned long mmu_seq;
 	bool map_writable, is_self_change_mapping;