From patchwork Wed May 8 13:09:08 2019
Message-Id: <5CD2D4F4020000780022CD3A@prv1-mh.provo.novell.com>
Date: Wed, 08 May 2019 07:09:08 -0600
From: "Jan Beulich"
To: "xen-devel"
References: <5CC6DD090200007800229E80@prv1-mh.provo.novell.com>
 <5CD2D2C8020000780022CCF2@prv1-mh.provo.novell.com>
In-Reply-To: <5CD2D2C8020000780022CCF2@prv1-mh.provo.novell.com>
Subject: [Xen-devel] [PATCH v2 05/12] x86/IRQ: desc->affinity should strictly represent the requested value
Cc: Andrew Cooper, Wei Liu, Roger Pau Monne

desc->arch.cpu_mask reflects the actual set of target CPUs. Don't ever
fiddle with desc->affinity itself, except to store caller-requested
values. Note that assign_irq_vector() now takes a NULL incoming CPU mask
to mean "all CPUs", rather than just "all currently online CPUs". This
way no further affinity adjustment is needed after onlining further
CPUs.

This renders both set_native_irq_info() uses (which weren't using proper
locking anyway) redundant - drop the function altogether.
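For illustration only (not part of the patch): below is a minimal
standalone C model of the new bookkeeping, where the caller-requested
mask is stored verbatim in ->affinity (NULL meaning "all CPUs", online
or not) while the actual target set is kept separately, as
->arch.cpu_mask is in Xen. All names in the sketch (irq_desc_model,
assign_irq_vector_model, cpumask_model_t) are made up for the example.

/* Standalone model only - not Xen code. Build with: gcc -Wall model.c */
#include <stdint.h>
#include <stdio.h>

typedef uint8_t cpumask_model_t;   /* one bit per CPU, 8 CPUs in this model */

struct irq_desc_model {
    cpumask_model_t affinity;      /* caller-requested affinity only */
    cpumask_model_t cpu_mask;      /* CPUs the vector actually targets */
};

/*
 * Mirrors the patched assign_irq_vector() behaviour: a NULL mask now means
 * "all CPUs" rather than "all currently online CPUs", so the stored request
 * needs no adjustment when further CPUs are onlined later.
 */
static void assign_irq_vector_model(struct irq_desc_model *desc,
                                    const cpumask_model_t *mask,
                                    cpumask_model_t online)
{
    /* Stand-in for vector allocation: target only online CPUs. */
    desc->cpu_mask = (cpumask_model_t)((mask ? *mask : ~0u) & online);

    if ( mask )
        desc->affinity = *mask;                  /* store request verbatim */
    else
        desc->affinity = (cpumask_model_t)~0u;   /* "all CPUs" */
}

int main(void)
{
    struct irq_desc_model desc = { 0, 0 };
    cpumask_model_t online = 0x0f;               /* CPUs 0-3 online */

    assign_irq_vector_model(&desc, NULL, online);
    /* Prints affinity=0xff (full request kept), cpu_mask=0xf (actual set). */
    printf("affinity=%#x cpu_mask=%#x\n",
           (unsigned int)desc.affinity, (unsigned int)desc.cpu_mask);

    return 0;
}

With the pre-patch behaviour, affinity would instead have been copied
from cpu_mask (0xf here) and would have needed refreshing after
onlining CPUs 4-7.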
Signed-off-by: Jan Beulich
Reviewed-by: Roger Pau Monné

--- a/xen/arch/x86/io_apic.c
+++ b/xen/arch/x86/io_apic.c
@@ -1042,7 +1042,6 @@ static void __init setup_IO_APIC_irqs(vo
             SET_DEST(entry, logical, cpu_mask_to_apicid(TARGET_CPUS));
             spin_lock_irqsave(&ioapic_lock, flags);
             __ioapic_write_entry(apic, pin, 0, entry);
-            set_native_irq_info(irq, TARGET_CPUS);
             spin_unlock_irqrestore(&ioapic_lock, flags);
         }
     }
@@ -2251,7 +2250,6 @@ int io_apic_set_pci_routing (int ioapic,
 
     spin_lock_irqsave(&ioapic_lock, flags);
     __ioapic_write_entry(ioapic, pin, 0, entry);
-    set_native_irq_info(irq, TARGET_CPUS);
     spin_unlock(&ioapic_lock);
 
     spin_lock(&desc->lock);
--- a/xen/arch/x86/irq.c
+++ b/xen/arch/x86/irq.c
@@ -582,11 +582,16 @@ int assign_irq_vector(int irq, const cpu
 
     spin_lock_irqsave(&vector_lock, flags);
     ret = __assign_irq_vector(irq, desc, mask ?: TARGET_CPUS);
-    if (!ret) {
+    if ( !ret )
+    {
         ret = desc->arch.vector;
-        cpumask_copy(desc->affinity, desc->arch.cpu_mask);
+        if ( mask )
+            cpumask_copy(desc->affinity, mask);
+        else
+            cpumask_setall(desc->affinity);
     }
     spin_unlock_irqrestore(&vector_lock, flags);
+
     return ret;
 }
 
@@ -2328,9 +2333,10 @@ static void dump_irqs(unsigned char key)
 
         spin_lock_irqsave(&desc->lock, flags);
 
-        printk("   IRQ:%4d aff:%*pb vec:%02x %-15s status=%03x ",
-               irq, nr_cpu_ids, cpumask_bits(desc->affinity), desc->arch.vector,
-               desc->handler->typename, desc->status);
+        printk("   IRQ:%4d aff:%*pb/%*pb vec:%02x %-15s status=%03x ",
+               irq, nr_cpu_ids, cpumask_bits(desc->affinity),
+               nr_cpu_ids, cpumask_bits(desc->arch.cpu_mask),
+               desc->arch.vector, desc->handler->typename, desc->status);
 
         if ( ssid )
             printk("Z=%-25s ", ssid);
@@ -2418,8 +2424,7 @@ void fixup_irqs(const cpumask_t *mask, b
             release_old_vec(desc);
         }
-        cpumask_copy(&affinity, desc->affinity);
-        if ( !desc->action || cpumask_subset(&affinity, mask) )
+        if ( !desc->action || cpumask_subset(desc->affinity, mask) )
         {
             spin_unlock(&desc->lock);
             continue;
         }
@@ -2452,12 +2457,13 @@ void fixup_irqs(const cpumask_t *mask, b
             desc->arch.move_in_progress = 0;
         }
 
-        cpumask_and(&affinity, &affinity, mask);
-        if ( cpumask_empty(&affinity) )
+        if ( !cpumask_intersects(mask, desc->affinity) )
         {
             break_affinity = true;
-            cpumask_copy(&affinity, mask);
+            cpumask_setall(&affinity);
         }
+        else
+            cpumask_copy(&affinity, desc->affinity);
 
         if ( desc->handler->disable )
             desc->handler->disable(desc);
--- a/xen/include/xen/irq.h
+++ b/xen/include/xen/irq.h
@@ -162,11 +162,6 @@ extern irq_desc_t *domain_spin_lock_irq_
 extern irq_desc_t *pirq_spin_lock_irq_desc(
     const struct pirq *, unsigned long *pflags);
 
-static inline void set_native_irq_info(unsigned int irq, const cpumask_t *mask)
-{
-    cpumask_copy(irq_to_desc(irq)->affinity, mask);
-}
-
 unsigned int set_desc_affinity(struct irq_desc *, const cpumask_t *);
 
 #ifndef arch_hwdom_irqs