From patchwork Thu Feb 14 20:48:02 2019
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Thomas Gleixner
X-Patchwork-Id: 10813843
Message-Id: <20190214211759.981965829@linutronix.de>
User-Agent: quilt/0.65
Date: Thu, 14 Feb 2019 21:48:02 +0100
From: Thomas Gleixner
To: LKML
Cc: Ming Lei, Christoph Hellwig, Bjorn Helgaas, Jens Axboe,
    linux-block@vger.kernel.org, Sagi Grimberg,
    linux-nvme@lists.infradead.org, linux-pci@vger.kernel.org,
    Keith Busch, Marc Zyngier, Sumit
    Saxena, Kashyap Desai, Shivasharan Srikanteshwara
Subject: [patch V5 7/8] genirq/affinity: Set is_managed in the spreading function
References: <20190214204755.819014197@linutronix.de>
X-Mailing-List: linux-block@vger.kernel.org

Some drivers need an extra set of interrupts which are not marked
managed, but should get initial interrupt spreading. To achieve this it
is simpler to set the is_managed bit of the affinity descriptor in the
spreading function instead of having yet another loop and tons of
conditionals.

No functional change.

Signed-off-by: Thomas Gleixner
---
 kernel/irq/affinity.c |   18 ++++++++----------
 1 file changed, 8 insertions(+), 10 deletions(-)

--- a/kernel/irq/affinity.c
+++ b/kernel/irq/affinity.c
@@ -98,6 +98,7 @@ static int __irq_build_affinity_masks(co
 				       unsigned int startvec,
 				       unsigned int numvecs,
 				       unsigned int firstvec,
+				       bool managed,
 				       cpumask_var_t *node_to_cpumask,
 				       const struct cpumask *cpu_mask,
 				       struct cpumask *nmsk,
@@ -154,6 +155,7 @@ static int __irq_build_affinity_masks(co
 		}
 		irq_spread_init_one(&masks[curvec].mask, nmsk,
 					cpus_per_vec);
+		masks[curvec].is_managed = managed;
 	}
 
 	done += v;
@@ -173,7 +175,7 @@ static int __irq_build_affinity_masks(co
  */
 static int irq_build_affinity_masks(const struct irq_affinity *affd,
 				    unsigned int startvec, unsigned int numvecs,
-				    unsigned int firstvec,
+				    unsigned int firstvec, bool managed,
 				    struct irq_affinity_desc *masks)
 {
 	unsigned int curvec = startvec, nr_present, nr_others;
@@ -197,8 +199,8 @@ static int irq_build_affinity_masks(cons
 	build_node_to_cpumask(node_to_cpumask);
 
 	/* Spread on present CPUs starting from affd->pre_vectors */
-	nr_present = __irq_build_affinity_masks(affd, curvec, numvecs,
-						firstvec, node_to_cpumask,
+	nr_present = __irq_build_affinity_masks(affd, curvec, numvecs, firstvec,
+						managed, node_to_cpumask,
 						cpu_present_mask, nmsk, masks);
 
 	/*
@@ -212,8 +214,8 @@ static int irq_build_affinity_masks(cons
 	else
 		curvec = firstvec + nr_present;
 	cpumask_andnot(npresmsk, cpu_possible_mask, cpu_present_mask);
-	nr_others = __irq_build_affinity_masks(affd, curvec, numvecs,
-						firstvec, node_to_cpumask,
+	nr_others = __irq_build_affinity_masks(affd, curvec, numvecs, firstvec,
+						managed, node_to_cpumask,
 						npresmsk, nmsk, masks);
 	put_online_cpus();
 
@@ -290,7 +292,7 @@ irq_create_affinity_masks(unsigned int n
 		int ret;
 
 		ret = irq_build_affinity_masks(affd, curvec, this_vecs,
-					       curvec, masks);
+					       curvec, true, masks);
 		if (ret) {
 			kfree(masks);
 			return NULL;
@@ -307,10 +309,6 @@ irq_create_affinity_masks(unsigned int n
 	for (; curvec < nvecs; curvec++)
 		cpumask_copy(&masks[curvec].mask, irq_default_affinity);
 
-	/* Mark the managed interrupts */
-	for (i = affd->pre_vectors; i < nvecs - affd->post_vectors; i++)
-		masks[i].is_managed = 1;
-
 	return masks;
 }