From patchwork Mon Apr 29 11:23:20 2019
From: "Jan Beulich"
To: "xen-devel"
Cc: Andrew Cooper, Wei Liu, Roger Pau Monne
Date: Mon, 29 Apr 2019 05:23:20 -0600
Message-Id: <5CC6DEA80200007800229E9D@prv1-mh.provo.novell.com>
In-Reply-To: <5CC6DD090200007800229E80@prv1-mh.provo.novell.com>
Subject: [Xen-devel] [PATCH 2/9] x86/IRQ: deal with move cleanup count state in fixup_irqs()

The cleanup IPI may get sent immediately before a CPU gets removed from
the online map. In such a case the IPI would get handled on the CPU
being offlined no earlier than in the interrupts-disabled window after
fixup_irqs()' main loop. This is too late, however, because a possible
affinity change may require a new vector assignment, which will fail
while the IRQ's move cleanup count is still non-zero.

To fix this
- record the set of CPUs the cleanup IPI actually gets sent to alongside
  setting their count,
- adjust the count in fixup_irqs(), accounting for all CPUs that the
  cleanup IPI was sent to but that are no longer online (see the sketch
  below),
- bail early from the cleanup IPI handler when the CPU is no longer
  online, to prevent double accounting.
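To illustrate the second point, here is a minimal standalone model of the
accounting (not the Xen code itself: a plain bitmask stands in for
cpumask_t, and all names are invented for illustration only):

/* Standalone model of the cleanup-count accounting described above.
 * A uint32_t bitmask stands in for cpumask_t; names are not Xen's. */
#include <stdint.h>
#include <stdio.h>

/* Count set bits - stands in for cpumask_weight(). */
static unsigned int weight(uint32_t mask)
{
    unsigned int n = 0;

    for ( ; mask; mask &= mask - 1 )
        ++n;

    return n;
}

int main(void)
{
    uint32_t online_map  = 0x0000000f; /* CPUs 0-3 currently online */
    uint32_t ipi_targets = 0x0000000c; /* cleanup IPI was sent to CPUs 2,3 */
    unsigned int cleanup_count = weight(ipi_targets); /* == 2 */

    /* CPU 3 goes offline before it can handle the IPI ... */
    online_map &= ~(1u << 3);

    /* ... so fixup_irqs() must drop the acknowledgements that can no
     * longer arrive: subtract the IPI targets which are now offline. */
    cleanup_count -= weight(ipi_targets & ~online_map);

    printf("remaining cleanup count: %u\n", cleanup_count); /* prints 1 */

    if ( !cleanup_count )
        puts("old vector could be released here");

    return 0;
}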
Signed-off-by: Jan Beulich
Reviewed-by: Roger Pau Monné
---
TBD: The proper recording of the IPI destinations actually makes the
     move_cleanup_count field redundant. Do we want to drop it, at the
     price of a few more CPU-mask operations?

--- a/xen/arch/x86/irq.c
+++ b/xen/arch/x86/irq.c
@@ -658,6 +658,9 @@ void irq_move_cleanup_interrupt(struct c
     ack_APIC_irq();
 
     me = smp_processor_id();
+    if ( !cpu_online(me) )
+        return;
+
     for ( vector = FIRST_DYNAMIC_VECTOR;
           vector <= LAST_HIPRIORITY_VECTOR; vector++)
     {
@@ -717,11 +720,14 @@ unlock:
 
 static void send_cleanup_vector(struct irq_desc *desc)
 {
-    cpumask_t cleanup_mask;
+    cpumask_and(desc->arch.old_cpu_mask, desc->arch.old_cpu_mask,
+                &cpu_online_map);
+    desc->arch.move_cleanup_count = cpumask_weight(desc->arch.old_cpu_mask);
 
-    cpumask_and(&cleanup_mask, desc->arch.old_cpu_mask, &cpu_online_map);
-    desc->arch.move_cleanup_count = cpumask_weight(&cleanup_mask);
-    send_IPI_mask(&cleanup_mask, IRQ_MOVE_CLEANUP_VECTOR);
+    if ( desc->arch.move_cleanup_count )
+        send_IPI_mask(desc->arch.old_cpu_mask, IRQ_MOVE_CLEANUP_VECTOR);
+    else
+        release_old_vec(desc);
 
     desc->arch.move_in_progress = 0;
 }
@@ -2394,6 +2400,16 @@ void fixup_irqs(const cpumask_t *mask, b
              vector <= LAST_HIPRIORITY_VECTOR )
             cpumask_and(desc->arch.cpu_mask, desc->arch.cpu_mask, mask);
 
+        if ( desc->arch.move_cleanup_count )
+        {
+            /* The cleanup IPI may have got sent while we were still online. */
+            cpumask_andnot(&affinity, desc->arch.old_cpu_mask,
+                           &cpu_online_map);
+            desc->arch.move_cleanup_count -= cpumask_weight(&affinity);
+            if ( !desc->arch.move_cleanup_count )
+                release_old_vec(desc);
+        }
+
         cpumask_copy(&affinity, desc->affinity);
         if ( !desc->action || cpumask_subset(&affinity, mask) )
         {
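For reference, the new send_cleanup_vector() behaviour boils down to the
following send-or-release decision. This is again a simplified standalone
model with invented names, not the Xen implementation:

/* Model of send_cleanup_vector()'s new logic: restrict the old
 * destination set to online CPUs first, and only send the IPI if
 * anyone is left to acknowledge it. Names are invented. */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

struct model_desc {
    uint32_t old_cpu_mask;      /* previous IRQ destinations (bitmask) */
    unsigned int cleanup_count; /* acknowledgements still outstanding */
    bool move_in_progress;
};

static unsigned int weight(uint32_t mask)
{
    unsigned int n = 0;

    for ( ; mask; mask &= mask - 1 )
        ++n;

    return n;
}

static void model_send_cleanup(struct model_desc *desc, uint32_t online_map)
{
    /* Only online CPUs can ever acknowledge the cleanup IPI. */
    desc->old_cpu_mask &= online_map;
    desc->cleanup_count = weight(desc->old_cpu_mask);

    if ( desc->cleanup_count )
        printf("send IPI to mask %#x\n", (unsigned int)desc->old_cpu_mask);
    else
        puts("no recipients left - release the old vector right away");

    desc->move_in_progress = false;
}

int main(void)
{
    struct model_desc desc = { .old_cpu_mask = 0x0c, .move_in_progress = true };

    /* CPUs 2 and 3 are already offline, so the release path is taken. */
    model_send_cleanup(&desc, 0x03);

    return 0;
}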