From patchwork Mon Nov 7 08:08:03 2016
X-Patchwork-Submitter: "Wu, Feng" <feng.wu@intel.com>
X-Patchwork-Id: 9414479
From: Feng Wu <feng.wu@intel.com>
To: xen-devel@lists.xen.org
Date: Mon, 7 Nov 2016 16:08:03 +0800
Message-Id: <1478506083-14560-7-git-send-email-feng.wu@intel.com>
In-Reply-To: <1478506083-14560-1-git-send-email-feng.wu@intel.com>
References: <1478506083-14560-1-git-send-email-feng.wu@intel.com>
Cc: kevin.tian@intel.com, Feng Wu <feng.wu@intel.com>,
 george.dunlap@eu.citrix.com, andrew.cooper3@citrix.com,
 dario.faggioli@citrix.com, jbeulich@suse.com
Subject: [Xen-devel] [PATCH v7 6/6] VMX: Fixup PI descriptor when cpu is offline

When a CPU goes offline, all the vCPUs on its per-CPU blocking list need
to be moved to another online CPU; this patch handles that.

Signed-off-by: Feng Wu <feng.wu@intel.com>
Reviewed-by: Jan Beulich <jbeulich@suse.com>
---
v7:
- Pass unsigned int to vmx_pi_desc_fixup()

v6:
- Carefully suppress 'SN' to avoid missing a notification event while
  moving the vCPU to the new list

v5:
- Add some comments to explain why the ABBA deadlock scenario cannot
  occur here

v4:
- Remove the pointless check, since we are in machine-stop context and
  no other CPUs go down in parallel

 xen/arch/x86/hvm/vmx/vmcs.c       |  1 +
 xen/arch/x86/hvm/vmx/vmx.c        | 70 +++++++++++++++++++++++++++++++++++++++
 xen/include/asm-x86/hvm/vmx/vmx.h |  1 +
 3 files changed, 72 insertions(+)
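As an aside, here is a minimal standalone sketch of the NDST encoding the
vmx.c hunk below performs: in xAPIC mode the destination APIC ID lives in
bits 15:8 of the field (PI_xAPIC_NDST_MASK), while x2APIC mode takes the
full 32-bit ID. The MASK_INSR() here is a simplified stand-in assumed to
behave like Xen's macro for this contiguous mask, and the APIC ID value is
purely hypothetical.

    #include <stdint.h>
    #include <stdio.h>

    #define PI_xAPIC_NDST_MASK 0xff00U

    /* Stand-in for Xen's MASK_INSR(): scale the value up to the mask's
     * lowest set bit, then confine it to the mask. */
    #define MASK_INSR(v, m) (((v) * ((m) & -(m))) & (m))

    int main(void)
    {
        uint32_t dest = 5;        /* hypothetical cpu_physical_id(new_cpu) */
        int x2apic_enabled = 0;   /* xAPIC mode for this example */

        uint32_t ndst = x2apic_enabled
                        ? dest
                        : MASK_INSR(dest, PI_xAPIC_NDST_MASK);

        printf("pi_desc.ndst = %#x\n", ndst);   /* prints 0x500 */
        return 0;
    }

In x2APIC mode the ID is written as-is, since the notification destination
field is wide enough to hold the full 32-bit x2APIC ID.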
diff --git a/xen/arch/x86/hvm/vmx/vmcs.c b/xen/arch/x86/hvm/vmx/vmcs.c
index e8e3616..1846e25 100644
--- a/xen/arch/x86/hvm/vmx/vmcs.c
+++ b/xen/arch/x86/hvm/vmx/vmcs.c
@@ -578,6 +578,7 @@ void vmx_cpu_dead(unsigned int cpu)
     vmx_free_vmcs(per_cpu(vmxon_region, cpu));
     per_cpu(vmxon_region, cpu) = 0;
     nvmx_cpu_dead(cpu);
+    vmx_pi_desc_fixup(cpu);
 }
 
 int vmx_cpu_up(void)
diff --git a/xen/arch/x86/hvm/vmx/vmx.c b/xen/arch/x86/hvm/vmx/vmx.c
index d70acec..2f5b2e7 100644
--- a/xen/arch/x86/hvm/vmx/vmx.c
+++ b/xen/arch/x86/hvm/vmx/vmx.c
@@ -203,6 +203,76 @@ static void vmx_pi_do_resume(struct vcpu *v)
     vmx_pi_unblock_vcpu(v);
 }
 
+void vmx_pi_desc_fixup(unsigned int cpu)
+{
+    unsigned int new_cpu, dest;
+    unsigned long flags;
+    struct arch_vmx_struct *vmx, *tmp;
+    spinlock_t *new_lock, *old_lock = &per_cpu(vmx_pi_blocking, cpu).lock;
+    struct list_head *blocked_vcpus = &per_cpu(vmx_pi_blocking, cpu).list;
+
+    if ( !iommu_intpost )
+        return;
+
+    /*
+     * We are in the context of CPU_DEAD or CPU_UP_CANCELED notification,
+     * and it is impossible for a second CPU to go down in parallel. So
+     * we can safely acquire the old cpu's lock and then acquire the
+     * new_cpu's lock after that.
+     */
+    spin_lock_irqsave(old_lock, flags);
+
+    list_for_each_entry_safe(vmx, tmp, blocked_vcpus, pi_blocking.list)
+    {
+        /*
+         * Suppress notification or we may miss an interrupt when the
+         * target cpu is dying.
+         */
+        pi_set_sn(&vmx->pi_desc);
+
+        /*
+         * Check whether a notification is pending before doing the
+         * movement; if that is the case, wake the vCPU up directly
+         * instead of moving it to the new cpu's list.
+         */
+        if ( pi_test_on(&vmx->pi_desc) )
+        {
+            list_del(&vmx->pi_blocking.list);
+            vmx->pi_blocking.lock = NULL;
+            vcpu_unblock(container_of(vmx, struct vcpu, arch.hvm_vmx));
+        }
+        else
+        {
+            /*
+             * We need to find an online cpu as the NDST of the PI
+             * descriptor; it doesn't matter whether it is within the
+             * cpupool of the domain or not. As long as it is online,
+             * the vCPU will be woken up once the notification event
+             * arrives.
+             */
+            new_cpu = cpumask_any(&cpu_online_map);
+            new_lock = &per_cpu(vmx_pi_blocking, new_cpu).lock;
+
+            spin_lock(new_lock);
+
+            ASSERT(vmx->pi_blocking.lock == old_lock);
+
+            dest = cpu_physical_id(new_cpu);
+            write_atomic(&vmx->pi_desc.ndst,
+                         x2apic_enabled ? dest
+                                        : MASK_INSR(dest, PI_xAPIC_NDST_MASK));
+
+            list_move(&vmx->pi_blocking.list,
+                      &per_cpu(vmx_pi_blocking, new_cpu).list);
+            vmx->pi_blocking.lock = new_lock;
+
+            spin_unlock(new_lock);
+        }
+
+        pi_clear_sn(&vmx->pi_desc);
+    }
+
+    spin_unlock_irqrestore(old_lock, flags);
+}
+
 /* This function is called when pcidevs_lock is held */
 void vmx_pi_hooks_assign(struct domain *d)
 {
diff --git a/xen/include/asm-x86/hvm/vmx/vmx.h b/xen/include/asm-x86/hvm/vmx/vmx.h
index 2f0435c..5c8fe5d 100644
--- a/xen/include/asm-x86/hvm/vmx/vmx.h
+++ b/xen/include/asm-x86/hvm/vmx/vmx.h
@@ -569,6 +569,7 @@ void free_p2m_hap_data(struct p2m_domain *p2m);
 void p2m_init_hap_data(struct p2m_domain *p2m);
 
 void vmx_pi_per_cpu_init(unsigned int cpu);
+void vmx_pi_desc_fixup(unsigned int cpu);
 
 void vmx_pi_hooks_assign(struct domain *d);
 void vmx_pi_hooks_deassign(struct domain *d);
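For context (not part of this patch): vmx_pi_desc_fixup() is reached from
VMX's CPU notifier via vmx_cpu_dead(), roughly as sketched below. This is a
paraphrase of the notifier in xen/arch/x86/hvm/vmx/vmcs.c, not verbatim
code, but it shows why only one CPU can be going down while the blocking
list is walked: the CPU_DEAD / CPU_UP_CANCELED notifications are issued
from the serialized CPU hotplug path.

    /* Paraphrased sketch of the VMX CPU notifier (assumed shape). */
    static int cpu_callback(struct notifier_block *nfb,
                            unsigned long action, void *hcpu)
    {
        unsigned int cpu = (unsigned long)hcpu;

        switch ( action )
        {
        case CPU_UP_CANCELED:
        case CPU_DEAD:
            /* After this patch, vmx_cpu_dead() also runs
             * vmx_pi_desc_fixup(). */
            vmx_cpu_dead(cpu);
            break;
        default:
            break;
        }

        return NOTIFY_DONE;
    }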