From patchwork Sat May 1 02:18:32 2021
X-Patchwork-Submitter: Jesse Brandeburg
X-Patchwork-Id: 12234595
X-Patchwork-Delegate: bhelgaas@google.com
From: Jesse Brandeburg
To: Thomas Gleixner
Cc: Ingo Molnar, linux-kernel@vger.kernel.org, intel-wired-lan@lists.osuosl.org,
    jbrandeb@kernel.org, frederic@kernel.org, juri.lelli@redhat.com,
    Marcelo Tosatti, abelits@marvell.com, Robin Murphy,
    linux-api@vger.kernel.org, bhelgaas@google.com, linux-pci@vger.kernel.org,
    rostedt@goodmis.org, peterz@infradead.org, davem@davemloft.net,
    akpm@linux-foundation.org, sfr@canb.auug.org.au, stephen@networkplumber.org,
    rppt@linux.vnet.ibm.com, jinyuqi@huawei.com, zhangshaokun@hisilicon.com,
    netdev@vger.kernel.org, chris.friesen@windriver.com, Jesse Brandeburg,
    Nitesh Lal
Subject: [PATCH tip:irq/core v1] genirq: remove auto-set of the mask when setting the hint
Date: Fri, 30 Apr 2021 19:18:32 -0700
Message-Id: <20210501021832.743094-1-jesse.brandeburg@intel.com>

Nitesh pointed out that the original work I did in 2014, automatically
setting the interrupt affinity when a hint mask is requested, is no
longer necessary. The kernel has moved on and no longer has the original
problem, but the original patch introduced a subtle bug when booting a
system with reserved or excluded CPUs: drivers calling this function
with a mask that included a CPU which was unavailable (at the time, or
later) would generally fail to update the hint.

I'm sure there are a million ways to solve this, but the simplest is to
remove the little bit of code that forces the affinity; Nitesh has shown
that this fixes the bug and doesn't seem to introduce immediate side
effects.

While I'm here, introduce a kernel-doc for the hint function.
Ref: https://lore.kernel.org/lkml/CAFki+L=_dd+JgAR12_eBPX0kZO2_6=1dGdgkwHE=u=K6chMeLQ@mail.gmail.com/
Cc: netdev@vger.kernel.org
Fixes: 4fe7ffb7e17c ("genirq: Fix null pointer reference in irq_set_affinity_hint()")
Fixes: e2e64a932556 ("genirq: Set initial affinity in irq_set_affinity_hint()")
Reported-by: Nitesh Lal
Signed-off-by: Jesse Brandeburg
---
!!! NOTE: Compile tested only, would appreciate feedback
---
 kernel/irq/manage.c | 13 ++++++++++---
 1 file changed, 10 insertions(+), 3 deletions(-)

base-commit: 765822e1569a37aab5e69736c52d4ad4a289eba6

diff --git a/kernel/irq/manage.c b/kernel/irq/manage.c
index e976c4927b25..a31df64662d5 100644
--- a/kernel/irq/manage.c
+++ b/kernel/irq/manage.c
@@ -456,6 +456,16 @@ int __irq_set_affinity(unsigned int irq, const struct cpumask *mask, bool force)
 	return ret;
 }
 
+/**
+ * irq_set_affinity_hint - set the hint for an irq
+ * @irq:	Interrupt for which to set the hint
+ * @m:		Mask to indicate which CPUs to suggest for the interrupt, use
+ *		NULL here to indicate to clear the value.
+ *
+ * Use this function to recommend which CPU should handle the
+ * interrupt to any userspace that uses /proc/irq/nn/smp_affinity_hint
+ * in order to align interrupts. Pass NULL as the mask to clear the hint.
+ */
 int irq_set_affinity_hint(unsigned int irq, const struct cpumask *m)
 {
 	unsigned long flags;
@@ -465,9 +475,6 @@ int irq_set_affinity_hint(unsigned int irq, const struct cpumask *m)
 		return -EINVAL;
 	desc->affinity_hint = m;
 	irq_put_desc_unlock(desc, flags);
-	/* set the initial affinity to prevent every interrupt being on CPU0 */
-	if (m)
-		__irq_set_affinity(irq, m, false);
 	return 0;
 }
 EXPORT_SYMBOL_GPL(irq_set_affinity_hint);