Message ID | 20130313013642.GA20722@amt.cnet (mailing list archive)
State      | New, archived
Headers    | show
On 03/13/2013 09:36 AM, Marcelo Tosatti wrote:
>
> As noticed by Ulrich Obergfell <uobergfe@redhat.com>, the mmu
> counters are for beancounting purposes only - so n_used_mmu_pages and
> n_max_mmu_pages could be relaxed (example: before f0f5933a1626c8df7b),
> resulting in n_used_mmu_pages > n_max_mmu_pages.

Interesting.

> Make code robust against n_used_mmu_pages > n_max_mmu_pages.

Reviewed-by: Xiao Guangrong <xiaoguangrong@linux.vnet.ibm.com>

--
To unsubscribe from this list: send the line "unsubscribe kvm" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
On Tue, Mar 12, 2013 at 10:36:43PM -0300, Marcelo Tosatti wrote:
>
> As noticed by Ulrich Obergfell <uobergfe@redhat.com>, the mmu
> counters are for beancounting purposes only - so n_used_mmu_pages and
> n_max_mmu_pages could be relaxed (example: before f0f5933a1626c8df7b),
> resulting in n_used_mmu_pages > n_max_mmu_pages.
>
> Make code robust against n_used_mmu_pages > n_max_mmu_pages.
>
> Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>

Applied, thanks.

> diff --git a/arch/x86/kvm/mmu.h b/arch/x86/kvm/mmu.h
> index 6987108..3b1ad00 100644
> --- a/arch/x86/kvm/mmu.h
> +++ b/arch/x86/kvm/mmu.h
> @@ -57,8 +57,11 @@ int kvm_init_shadow_mmu(struct kvm_vcpu *vcpu, struct kvm_mmu *context);
>
>  static inline unsigned int kvm_mmu_available_pages(struct kvm *kvm)
>  {
> -	return kvm->arch.n_max_mmu_pages -
> -		kvm->arch.n_used_mmu_pages;
> +	if (kvm->arch.n_max_mmu_pages > kvm->arch.n_used_mmu_pages)
> +		return kvm->arch.n_max_mmu_pages -
> +			kvm->arch.n_used_mmu_pages;
> +
> +	return 0;
>  }
>
>  static inline void kvm_mmu_free_some_pages(struct kvm_vcpu *vcpu)

--
Gleb.
diff --git a/arch/x86/kvm/mmu.h b/arch/x86/kvm/mmu.h
index 6987108..3b1ad00 100644
--- a/arch/x86/kvm/mmu.h
+++ b/arch/x86/kvm/mmu.h
@@ -57,8 +57,11 @@ int kvm_init_shadow_mmu(struct kvm_vcpu *vcpu, struct kvm_mmu *context);
 
 static inline unsigned int kvm_mmu_available_pages(struct kvm *kvm)
 {
-	return kvm->arch.n_max_mmu_pages -
-		kvm->arch.n_used_mmu_pages;
+	if (kvm->arch.n_max_mmu_pages > kvm->arch.n_used_mmu_pages)
+		return kvm->arch.n_max_mmu_pages -
+			kvm->arch.n_used_mmu_pages;
+
+	return 0;
 }
 
 static inline void kvm_mmu_free_some_pages(struct kvm_vcpu *vcpu)
As noticed by Ulrich Obergfell <uobergfe@redhat.com>, the mmu
counters are for beancounting purposes only - so n_used_mmu_pages and
n_max_mmu_pages could be relaxed (example: before f0f5933a1626c8df7b),
resulting in n_used_mmu_pages > n_max_mmu_pages.

Make code robust against n_used_mmu_pages > n_max_mmu_pages.

Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>