From patchwork Sun Nov 1 11:56:30 2009
X-Patchwork-Submitter: Gleb Natapov
X-Patchwork-Id: 56849
From: Gleb Natapov
To: kvm@vger.kernel.org
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: [PATCH 11/11] Send async PF when guest is not in userspace too.
Date: Sun, 1 Nov 2009 13:56:30 +0200
Message-Id: <1257076590-29559-12-git-send-email-gleb@redhat.com>
In-Reply-To: <1257076590-29559-1-git-send-email-gleb@redhat.com>
References: <1257076590-29559-1-git-send-email-gleb@redhat.com>

diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index 3d33994..21ec65a 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -2358,7 +2358,7 @@ static bool kvm_asyc_pf_is_done(struct kvm_vcpu *vcpu)
 
 	spin_lock(&vcpu->arch.mmu_async_pf_lock);
 	list_for_each_entry_safe(p, node, &vcpu->arch.mmu_async_pf_done, link) {
-		if (p->guest_task != vcpu->arch.pv_shm->current_task)
+		if (p->token != vcpu->arch.pv_shm->param)
 			continue;
 		list_del(&p->link);
 		found = true;
@@ -2370,7 +2370,7 @@ static bool kvm_asyc_pf_is_done(struct kvm_vcpu *vcpu)
 			       p->error_code);
 		put_page(p->page);
 		async_pf_work_free(p);
-		trace_kvm_mmu_async_pf_wait(vcpu->arch.pv_shm->current_task, 0);
+		trace_kvm_mmu_async_pf_wait(vcpu->arch.pv_shm->param, 0);
 	}
 	return found;
 }
@@ -2378,7 +2378,7 @@ static bool kvm_asyc_pf_is_done(struct kvm_vcpu *vcpu)
 int kvm_pv_wait_for_async_pf(struct kvm_vcpu *vcpu)
 {
 	++vcpu->stat.pf_async_wait;
-	trace_kvm_mmu_async_pf_wait(vcpu->arch.pv_shm->current_task, 1);
+	trace_kvm_mmu_async_pf_wait(vcpu->arch.pv_shm->param, 1);
 	wait_event(vcpu->wq, kvm_asyc_pf_is_done(vcpu));
 
 	return 0;
@@ -2386,17 +2386,13 @@ int kvm_pv_wait_for_async_pf(struct kvm_vcpu *vcpu)
 
 static bool can_do_async_pf(struct kvm_vcpu *vcpu)
 {
-	struct kvm_segment kvm_seg;
-
 	if (!vcpu->arch.pv_shm ||
 	    !(vcpu->arch.pv_shm->features & KVM_PV_SHM_FEATURES_ASYNC_PF) ||
-	    kvm_event_needs_reinjection(vcpu))
+	    kvm_event_needs_reinjection(vcpu) ||
+	    !kvm_x86_ops->interrupt_allowed(vcpu))
 		return false;
 
-	kvm_get_segment(vcpu, &kvm_seg, VCPU_SREG_CS);
-
-	/* is userspace code? TODO check VM86 mode */
-	return !!(kvm_seg.selector & 3);
+	return true;
 }
 
 static int setup_async_pf(struct kvm_vcpu *vcpu, gpa_t cr3, gva_t gva,
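
Note on the can_do_async_pf() change above: the removed code read the CS selector and treated non-zero low bits (the CPL, outside VM86 mode) as "guest is running userspace code", while the new code only asks whether the guest can currently take an interrupt. Below is a minimal standalone sketch contrasting the two gating rules; cs_selector, guest_interrupts_enabled and event_needs_reinjection are hypothetical stand-ins for the kvm_get_segment(), kvm_x86_ops->interrupt_allowed() and kvm_event_needs_reinjection() calls in the diff, not part of the patch itself.

/*
 * Sketch only (not part of the patch): contrasts the old and new
 * rules for deciding whether an async page fault may be sent to
 * the guest.
 */
#include <stdbool.h>
#include <stdint.h>

/*
 * Old rule: only while the guest runs userspace code.  The low two
 * bits of the CS selector give the privilege level, except in VM86
 * mode, which the removed TODO noted was not handled.
 */
static bool can_do_async_pf_old(uint16_t cs_selector,
				bool event_needs_reinjection)
{
	if (event_needs_reinjection)
		return false;
	return (cs_selector & 3) != 0;
}

/*
 * New rule: any time the guest can take an interrupt, so faults hit
 * while the guest is in kernel mode can be handled asynchronously too.
 */
static bool can_do_async_pf_new(bool guest_interrupts_enabled,
				bool event_needs_reinjection)
{
	return !event_needs_reinjection && guest_interrupts_enabled;
}

The remaining hunks, going by the identifiers alone, switch completion matching from the guest task pointer (guest_task/current_task) to a per-request token compared against the param field of the shared memory area.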