From patchwork Wed Oct 27 09:04:58 2010
X-Patchwork-Submitter: Xiao Guangrong
X-Patchwork-Id: 285062
Message-ID: <4CC7EB3A.6070002@cn.fujitsu.com>
Date: Wed, 27 Oct 2010 17:04:58 +0800
From: Xiao Guangrong
To: Avi Kivity
CC: Marcelo Tosatti, LKML, KVM
Subject: [PATCH 4/8] KVM: avoid unnecessary wait for an async pf
References: <4CC7EA7D.5020901@cn.fujitsu.com>
In-Reply-To: <4CC7EA7D.5020901@cn.fujitsu.com>

diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 16f42ff..0b2c420 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -5321,8 +5321,6 @@ static int __vcpu_run(struct kvm_vcpu *vcpu)
 			vcpu->run->exit_reason = KVM_EXIT_INTR;
 			++vcpu->stat.request_irq_exits;
 		}
-
-		kvm_check_async_pf_completion(vcpu);
 
 		if (signal_pending(current)) {
 			r = -EINTR;
@@ -6108,7 +6106,6 @@ int kvm_arch_vcpu_runnable(struct kvm_vcpu *vcpu)
 {
 	return (vcpu->arch.mp_state == KVM_MP_STATE_RUNNABLE &&
 		!vcpu->arch.apf.halted)
-		|| !list_empty_careful(&vcpu->async_pf.done)
 		|| vcpu->arch.mp_state == KVM_MP_STATE_SIPI_RECEIVED
 		|| vcpu->arch.nmi_pending ||
 		(kvm_arch_interrupt_allowed(vcpu) &&
diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index ee4314e..0c1b7c5 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -90,7 +90,6 @@ struct kvm_async_pf {
 };
 
 void kvm_clear_async_pf_completion_queue(struct kvm_vcpu *vcpu);
-void kvm_check_async_pf_completion(struct kvm_vcpu *vcpu);
 int kvm_setup_async_pf(struct kvm_vcpu *vcpu, gva_t gva, gfn_t gfn,
 		       struct kvm_arch_async_pf *arch);
 int kvm_async_pf_wakeup_all(struct kvm_vcpu *vcpu);
diff --git a/virt/kvm/async_pf.c b/virt/kvm/async_pf.c
index 60df9e0..e213ca4 100644
--- a/virt/kvm/async_pf.c
+++ b/virt/kvm/async_pf.c
@@ -120,13 +120,13 @@ void kvm_clear_async_pf_completion_queue(struct kvm_vcpu *vcpu)
 	vcpu->async_pf.queued = 0;
 }
 
-void kvm_check_async_pf_completion(struct kvm_vcpu *vcpu)
+bool kvm_check_async_pf_completion(struct kvm_vcpu *vcpu)
 {
 	struct kvm_async_pf *work;
 
 	if (list_empty_careful(&vcpu->async_pf.done) ||
 	    !kvm_arch_can_inject_async_page_present(vcpu))
-		return;
+		return false;
 
 	spin_lock(&vcpu->async_pf.lock);
 	work = list_first_entry(&vcpu->async_pf.done, typeof(*work), link);
@@ -142,6 +142,8 @@ void kvm_check_async_pf_completion(struct kvm_vcpu *vcpu)
 	if (work->page)
 		put_page(work->page);
 	kmem_cache_free(async_pf_cache, work);
+
+	return true;
 }
 
 int kvm_setup_async_pf(struct kvm_vcpu *vcpu, gva_t gva, gfn_t gfn,
diff --git a/virt/kvm/async_pf.h b/virt/kvm/async_pf.h
index e7ef644..e21533c 100644
--- a/virt/kvm/async_pf.h
+++ b/virt/kvm/async_pf.h
@@ -27,10 +27,16 @@
 int kvm_async_pf_init(void);
 void kvm_async_pf_deinit(void);
 void kvm_async_pf_vcpu_init(struct kvm_vcpu *vcpu);
+bool kvm_check_async_pf_completion(struct kvm_vcpu *vcpu);
+
 #else
 #define kvm_async_pf_init() (0)
 #define kvm_async_pf_deinit() do{}while(0)
 #define kvm_async_pf_vcpu_init(C) do{}while(0)
+static inline bool kvm_check_async_pf_completion(struct kvm_vcpu *vcpu)
+{
+	return false;
+}
 #endif
 
 #endif
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 88d869e..d9aed28 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -1347,7 +1347,8 @@ void kvm_vcpu_block(struct kvm_vcpu *vcpu)
 	for (;;) {
 		prepare_to_wait(&vcpu->wq, &wait, TASK_INTERRUPTIBLE);
 
-		if (kvm_arch_vcpu_runnable(vcpu)) {
+		if (kvm_arch_vcpu_runnable(vcpu) ||
+		    kvm_check_async_pf_completion(vcpu)) {
 			kvm_make_request(KVM_REQ_UNHALT, vcpu);
 			break;
 		}
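
[Editor's note] For readers following the control-flow change rather than the individual hunks: kvm_check_async_pf_completion() now reports whether it actually delivered a completed async pf, __vcpu_run() no longer polls it, kvm_arch_vcpu_runnable() no longer treats a non-empty done list as runnable, and kvm_vcpu_block() instead consumes completions as part of its wake-up test. The stand-alone sketch below models that wake-up condition; the names (vcpu_model, should_unhalt, ...) are made up for illustration and this is not KVM code.

/*
 * Illustrative model of the wake-up condition after this patch.
 * Hypothetical names; only the shape of the logic is taken from
 * the hunks above.
 */
#include <stdbool.h>
#include <stdio.h>

struct vcpu_model {
	bool runnable;          /* stands in for kvm_arch_vcpu_runnable() */
	int  completions_done;  /* stands in for the async_pf.done list */
	bool can_inject;        /* stands in for kvm_arch_can_inject_async_page_present() */
};

/* Mirrors the new bool-returning kvm_check_async_pf_completion():
 * consume one finished async pf and report whether it was delivered. */
static bool check_async_pf_completion(struct vcpu_model *v)
{
	if (v->completions_done == 0 || !v->can_inject)
		return false;

	v->completions_done--;  /* "inject" the page-ready event */
	return true;
}

/* Mirrors the wake-up test in kvm_vcpu_block() after the patch: leave
 * the halt loop either because the vcpu is runnable for some other
 * reason, or because an async pf completion was actually consumed. */
static bool should_unhalt(struct vcpu_model *v)
{
	return v->runnable || check_async_pf_completion(v);
}

int main(void)
{
	struct vcpu_model v = {
		.runnable = false,
		.completions_done = 1,
		.can_inject = false,
	};

	/* A completion is pending but cannot be injected yet: stay halted. */
	printf("unhalt: %d\n", should_unhalt(&v));  /* 0 */

	/* Once injection is possible, the pending completion wakes the vcpu. */
	v.can_inject = true;
	printf("unhalt: %d\n", should_unhalt(&v));  /* 1 */

	/* The queue is now drained, so the vcpu is not reported runnable
	 * merely because completions were queued at some earlier point. */
	printf("unhalt: %d\n", should_unhalt(&v));  /* 0 */
	return 0;
}

Read this way, the hunks fit together: a halted vcpu only leaves kvm_vcpu_block() when a completion can really be injected, and a drained done list no longer makes the vcpu look runnable, which is how the "unnecessary wait" in the subject line is avoided.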