From patchwork Thu May 26 13:39:11 2016
X-Patchwork-Submitter: "Wu, Feng" <feng.wu@intel.com>
X-Patchwork-Id: 9137013
From: Feng Wu <feng.wu@intel.com>
To: xen-devel@lists.xen.org
Date: Thu, 26 May 2016 21:39:11 +0800
Message-Id: <1464269954-8056-2-git-send-email-feng.wu@intel.com>
X-Mailer: git-send-email 2.1.0
In-Reply-To: <1464269954-8056-1-git-send-email-feng.wu@intel.com>
References: <1464269954-8056-1-git-send-email-feng.wu@intel.com>
Cc: kevin.tian@intel.com, keir@xen.org, george.dunlap@eu.citrix.com,
 andrew.cooper3@citrix.com, dario.faggioli@citrix.com, jbeulich@suse.com,
 Feng Wu <feng.wu@intel.com>
Subject: [Xen-devel] [PATCH v2 1/4] VMX: Properly handle pi when all the assigned devices are removed

This patch handles the corner case when the last assigned device is
removed from the domain.  In that case we should carefully handle the
PI descriptor and the per-CPU blocking list, to make sure:

- all the PI descriptors are in the right state the next time a device
  is assigned to the domain again.  This is achieved by always keeping
  all the PI hooks in place, so the PI descriptor is updated during
  scheduling and hence stays up to date.
- no vCPU of the domain remains on the per-CPU blocking list.

Signed-off-by: Feng Wu <feng.wu@intel.com>
---
A simplified, stand-alone sketch of the flag-plus-lock pattern used
below is appended after the diff for illustration.

 xen/arch/x86/hvm/vmx/vmx.c         | 75 +++++++++++++++++++++++++++++++-------
 xen/include/asm-x86/hvm/vmx/vmcs.h |  3 ++
 2 files changed, 65 insertions(+), 13 deletions(-)

diff --git a/xen/arch/x86/hvm/vmx/vmx.c b/xen/arch/x86/hvm/vmx/vmx.c
index bc4410f..65f5288 100644
--- a/xen/arch/x86/hvm/vmx/vmx.c
+++ b/xen/arch/x86/hvm/vmx/vmx.c
@@ -113,7 +113,19 @@ static void vmx_vcpu_block(struct vcpu *v)
         &per_cpu(vmx_pi_blocking, v->processor).lock;
     struct pi_desc *pi_desc = &v->arch.hvm_vmx.pi_desc;
 
-    spin_lock_irqsave(pi_blocking_list_lock, flags);
+    spin_lock_irqsave(&v->arch.hvm_vmx.pi_hotplug_lock, flags);
+    if ( unlikely(v->arch.hvm_vmx.pi_blocking_cleaned_up) )
+    {
+        /*
+         * The vcpu is to be destroyed and it has already been removed
+         * from the per-CPU list if it is blocking, we shouldn't add
+         * new vCPU to the list.
+         */
+        spin_unlock_irqrestore(&v->arch.hvm_vmx.pi_hotplug_lock, flags);
+        return;
+    }
+
+    spin_lock(pi_blocking_list_lock);
 
     old_lock = cmpxchg(&v->arch.hvm_vmx.pi_blocking.lock, NULL,
                        pi_blocking_list_lock);
@@ -126,7 +138,9 @@ static void vmx_vcpu_block(struct vcpu *v)
     list_add_tail(&v->arch.hvm_vmx.pi_blocking.list,
                   &per_cpu(vmx_pi_blocking, v->processor).list);
 
-    spin_unlock_irqrestore(pi_blocking_list_lock, flags);
+    spin_unlock(pi_blocking_list_lock);
+
+    spin_unlock_irqrestore(&v->arch.hvm_vmx.pi_hotplug_lock, flags);
 
     ASSERT(!pi_test_sn(pi_desc));
 
@@ -199,32 +213,65 @@ static void vmx_pi_do_resume(struct vcpu *v)
     spin_unlock_irqrestore(pi_blocking_list_lock, flags);
 }
 
+static void vmx_pi_blocking_cleanup(struct vcpu *v)
+{
+    unsigned long flags;
+    spinlock_t *pi_blocking_list_lock;
+
+    if ( !iommu_intpost )
+        return;
+
+    spin_lock_irqsave(&v->arch.hvm_vmx.pi_hotplug_lock, flags);
+    v->arch.hvm_vmx.pi_blocking_cleaned_up = 1;
+
+    pi_blocking_list_lock = v->arch.hvm_vmx.pi_blocking.lock;
+    if (pi_blocking_list_lock == NULL)
+    {
+        spin_unlock_irqrestore(&v->arch.hvm_vmx.pi_hotplug_lock, flags);
+        return;
+    }
+
+    spin_lock(pi_blocking_list_lock);
+    if ( v->arch.hvm_vmx.pi_blocking.lock != NULL )
+    {
+        ASSERT(v->arch.hvm_vmx.pi_blocking.lock == pi_blocking_list_lock);
+        list_del(&v->arch.hvm_vmx.pi_blocking.list);
+        v->arch.hvm_vmx.pi_blocking.lock = NULL;
+    }
+    spin_unlock(pi_blocking_list_lock);
+    spin_unlock_irqrestore(&v->arch.hvm_vmx.pi_hotplug_lock, flags);
+}
+
 /* This function is called when pcidevs_lock is held */
 void vmx_pi_hooks_assign(struct domain *d)
 {
+    struct vcpu *v;
+
     if ( !iommu_intpost || !has_hvm_container_domain(d) )
         return;
 
-    ASSERT(!d->arch.hvm_domain.vmx.vcpu_block);
+    for_each_vcpu ( d, v )
+        v->arch.hvm_vmx.pi_blocking_cleaned_up = 0;
 
-    d->arch.hvm_domain.vmx.vcpu_block = vmx_vcpu_block;
-    d->arch.hvm_domain.vmx.pi_switch_from = vmx_pi_switch_from;
-    d->arch.hvm_domain.vmx.pi_switch_to = vmx_pi_switch_to;
-    d->arch.hvm_domain.vmx.pi_do_resume = vmx_pi_do_resume;
+    if ( !d->arch.hvm_domain.vmx.vcpu_block )
+    {
+        d->arch.hvm_domain.vmx.vcpu_block = vmx_vcpu_block;
+        d->arch.hvm_domain.vmx.pi_switch_from = vmx_pi_switch_from;
+        d->arch.hvm_domain.vmx.pi_switch_to = vmx_pi_switch_to;
+        d->arch.hvm_domain.vmx.pi_do_resume = vmx_pi_do_resume;
+    }
 }
 
 /* This function is called when pcidevs_lock is held */
 void vmx_pi_hooks_deassign(struct domain *d)
 {
+    struct vcpu *v;
+
     if ( !iommu_intpost || !has_hvm_container_domain(d) )
         return;
 
-    ASSERT(d->arch.hvm_domain.vmx.vcpu_block);
-
-    d->arch.hvm_domain.vmx.vcpu_block = NULL;
-    d->arch.hvm_domain.vmx.pi_switch_from = NULL;
-    d->arch.hvm_domain.vmx.pi_switch_to = NULL;
-    d->arch.hvm_domain.vmx.pi_do_resume = NULL;
+    for_each_vcpu ( d, v )
+        vmx_pi_blocking_cleanup(v);
 }
 
 static int vmx_domain_initialise(struct domain *d)
@@ -256,6 +303,8 @@ static int vmx_vcpu_initialise(struct vcpu *v)
 
     INIT_LIST_HEAD(&v->arch.hvm_vmx.pi_blocking.list);
 
+    spin_lock_init(&v->arch.hvm_vmx.pi_hotplug_lock);
+
     v->arch.schedule_tail    = vmx_do_resume;
     v->arch.ctxt_switch_from = vmx_ctxt_switch_from;
     v->arch.ctxt_switch_to   = vmx_ctxt_switch_to;
diff --git a/xen/include/asm-x86/hvm/vmx/vmcs.h b/xen/include/asm-x86/hvm/vmx/vmcs.h
index b54f52f..3834f49 100644
--- a/xen/include/asm-x86/hvm/vmx/vmcs.h
+++ b/xen/include/asm-x86/hvm/vmx/vmcs.h
@@ -231,6 +231,9 @@ struct arch_vmx_struct {
      * pCPU and wakeup the related vCPU.
      */
     struct pi_blocking_vcpu pi_blocking;
+
+    spinlock_t           pi_hotplug_lock;
+    bool_t               pi_blocking_cleaned_up;
 };
 
 int vmx_create_vmcs(struct vcpu *v);
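
As an illustration of the synchronization the commit message describes
(a "cleaned up" flag checked under a per-vCPU lock), here is a minimal
stand-alone sketch.  It is not part of the patch: it uses POSIX threads
instead of Xen spinlocks, and the names (struct entry, blocker, cleanup)
are made up for the example.

/*
 * Sketch of the pi_hotplug_lock + pi_blocking_cleaned_up pattern,
 * reduced to plain C with pthreads.  Build with: cc -pthread sketch.c
 */
#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>

struct entry {
    pthread_mutex_t hotplug_lock;   /* stands in for pi_hotplug_lock */
    bool cleaned_up;                /* stands in for pi_blocking_cleaned_up */
    bool on_list;                   /* stands in for per-CPU list membership */
};

/* Blocking path: only join the list if cleanup has not run yet. */
static void blocker(struct entry *e)
{
    pthread_mutex_lock(&e->hotplug_lock);
    if ( !e->cleaned_up )
        e->on_list = true;          /* cleanup cannot race with us here */
    pthread_mutex_unlock(&e->hotplug_lock);
}

/* Deassign path: mark the entry and drop it from the list atomically. */
static void cleanup(struct entry *e)
{
    pthread_mutex_lock(&e->hotplug_lock);
    e->cleaned_up = true;           /* later blocker() calls become no-ops */
    e->on_list = false;             /* remove any existing membership */
    pthread_mutex_unlock(&e->hotplug_lock);
}

int main(void)
{
    struct entry e = { .hotplug_lock = PTHREAD_MUTEX_INITIALIZER };

    blocker(&e);                    /* entry joins the list */
    cleanup(&e);                    /* last device removed: entry leaves */
    blocker(&e);                    /* refused: cleaned_up is already set */

    printf("on_list=%d cleaned_up=%d\n", e.on_list, e.cleaned_up);
    return 0;
}

Either ordering of blocker() and cleanup() leaves the entry off the
list once cleanup() has run, which is the property the patch needs for
the per-CPU blocking list.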