From patchwork Mon Nov 7 08:07:59 2016
X-Patchwork-Submitter: "Wu, Feng" <feng.wu@intel.com>
X-Patchwork-Id: 9414483
From: Feng Wu <feng.wu@intel.com>
To: xen-devel@lists.xen.org
Cc: kevin.tian@intel.com, Feng Wu <feng.wu@intel.com>,
    george.dunlap@eu.citrix.com, andrew.cooper3@citrix.com,
    dario.faggioli@citrix.com, jbeulich@suse.com
Date: Mon, 7 Nov 2016 16:07:59 +0800
Message-Id: <1478506083-14560-3-git-send-email-feng.wu@intel.com>
X-Mailer: git-send-email 2.1.0
In-Reply-To: <1478506083-14560-1-git-send-email-feng.wu@intel.com>
References: <1478506083-14560-1-git-send-email-feng.wu@intel.com>
Subject: [Xen-devel] [PATCH v7 2/6] VMX: Properly handle pi when all the assigned devices are
 removed

This patch handles some corner cases that arise when the last assigned
device is removed from the domain. In that case we must handle the PI
descriptors and the per-cpu blocking lists carefully, to make sure that:
- all PI descriptors are in the right state the next time a device is
  assigned to the domain;
- no vCPU of the domain remains on a per-cpu blocking list.

Here we call vmx_pi_unblock_vcpu() to remove each vCPU from its blocking
list if it is on one. However, this can race with vmx_vcpu_block(), in
which case we might incorrectly add a vCPU to a blocking list while the
last device is being detached from the domain. Since this situation can
only occur when detaching the last device from the domain, which is not
a frequent operation, we use domain_pause() around the teardown; this is
considered a clean and maintainable solution.

Signed-off-by: Feng Wu <feng.wu@intel.com>
Reviewed-by: Jan Beulich <jbeulich@suse.com>
---
v7:
- Prevent the domain from pausing itself.
v6:
- Comments changes
- Rename vmx_pi_list_remove() to vmx_pi_unblock_vcpu()

v5:
- Remove a no-op wrapper

v4:
- Rename some functions:
  vmx_pi_remove_vcpu_from_blocking_list() -> vmx_pi_list_remove()
  vmx_pi_blocking_cleanup() -> vmx_pi_list_cleanup()
- Remove the check in vmx_pi_list_cleanup()
- Comments adjustment

 xen/arch/x86/hvm/vmx/vmx.c    | 28 ++++++++++++++++++++++++----
 xen/drivers/passthrough/pci.c | 14 ++++++++++++++
 2 files changed, 38 insertions(+), 4 deletions(-)

diff --git a/xen/arch/x86/hvm/vmx/vmx.c b/xen/arch/x86/hvm/vmx/vmx.c
index 10546af..7e7bc8b 100644
--- a/xen/arch/x86/hvm/vmx/vmx.c
+++ b/xen/arch/x86/hvm/vmx/vmx.c
@@ -158,14 +158,12 @@ static void vmx_pi_switch_to(struct vcpu *v)
     pi_clear_sn(pi_desc);
 }
 
-static void vmx_pi_do_resume(struct vcpu *v)
+static void vmx_pi_unblock_vcpu(struct vcpu *v)
 {
     unsigned long flags;
     spinlock_t *pi_blocking_list_lock;
     struct pi_desc *pi_desc = &v->arch.hvm_vmx.pi_desc;
 
-    ASSERT(!test_bit(_VPF_blocked, &v->pause_flags));
-
     /*
      * Set 'NV' field back to posted_intr_vector, so the
      * Posted-Interrupts can be delivered to the vCPU when
@@ -173,12 +171,12 @@ static void vmx_pi_do_resume(struct vcpu *v)
      */
     write_atomic(&pi_desc->nv, posted_intr_vector);
 
-    /* The vCPU is not on any blocking list. */
     pi_blocking_list_lock = v->arch.hvm_vmx.pi_blocking.lock;
 
     /* Prevent the compiler from eliminating the local variable.*/
     smp_rmb();
 
+    /* The vCPU is not on any blocking list. */
     if ( pi_blocking_list_lock == NULL )
         return;
 
@@ -198,6 +196,13 @@ static void vmx_pi_do_resume(struct vcpu *v)
     spin_unlock_irqrestore(pi_blocking_list_lock, flags);
 }
 
+static void vmx_pi_do_resume(struct vcpu *v)
+{
+    ASSERT(!test_bit(_VPF_blocked, &v->pause_flags));
+
+    vmx_pi_unblock_vcpu(v);
+}
+
 /* This function is called when pcidevs_lock is held */
 void vmx_pi_hooks_assign(struct domain *d)
 {
@@ -215,11 +220,21 @@ void vmx_pi_hooks_assign(struct domain *d)
 /* This function is called when pcidevs_lock is held */
 void vmx_pi_hooks_deassign(struct domain *d)
 {
+    struct vcpu *v;
+
     if ( !iommu_intpost || !has_hvm_container_domain(d) )
         return;
 
     ASSERT(d->arch.hvm_domain.vmx.vcpu_block);
 
+    /*
+     * Pausing the domain can make sure the vCPU is not
+     * running and hence not calling the hooks simultaneously
+     * when deassigning the PI hooks and removing the vCPU
+     * from the blocking list.
+     */
+    domain_pause(d);
+
     d->arch.hvm_domain.vmx.vcpu_block = NULL;
     d->arch.hvm_domain.vmx.pi_switch_from = NULL;
     d->arch.hvm_domain.vmx.pi_do_resume = NULL;
@@ -229,6 +244,11 @@ void vmx_pi_hooks_deassign(struct domain *d)
      * is in the process of getting assigned and "from" hook is NULL. However,
      * it is not straightforward to find a clear solution, so just leave it here.
      */
+
+    for_each_vcpu ( d, v )
+        vmx_pi_unblock_vcpu(v);
+
+    domain_unpause(d);
 }
 
 static int vmx_domain_initialise(struct domain *d)
diff --git a/xen/drivers/passthrough/pci.c b/xen/drivers/passthrough/pci.c
index 8bce213..e71732f 100644
--- a/xen/drivers/passthrough/pci.c
+++ b/xen/drivers/passthrough/pci.c
@@ -1602,6 +1602,13 @@ int iommu_do_pci_domctl(
         break;
 
     case XEN_DOMCTL_assign_device:
+        /* no domain_pause() */
+        if ( d == current->domain )
+        {
+            ret = -EINVAL;
+            break;
+        }
+
        ret = -ENODEV;
        if ( domctl->u.assign_device.dev != XEN_DOMCTL_DEV_PCI )
            break;
@@ -1642,6 +1649,13 @@ int iommu_do_pci_domctl(
         break;
 
     case XEN_DOMCTL_deassign_device:
+        /* no domain_pause() */
+        if ( d == current->domain )
+        {
+            ret = -EINVAL;
+            break;
+        }
+
        ret = -ENODEV;
        if ( domctl->u.assign_device.dev != XEN_DOMCTL_DEV_PCI )
            break;