From patchwork Sat Apr 16 01:35:47 2016
X-Patchwork-Submitter: Christoph Hellwig
X-Patchwork-Id: 8861031
From: Christoph Hellwig
To: tglx@linutronix.de, linux-block@vger.kernel.org, linux-pci@vger.kernel.org
Cc: linux-nvme@lists.infradead.org, linux-kernel@vger.kernel.org
Subject: [PATCH 3/8] genirq: add a helper to spread an affinity mask for
 MSI/MSI-X vectors
Date: Fri, 15 Apr 2016 18:35:47 -0700
Message-Id: <1460770552-31260-4-git-send-email-hch@lst.de>
X-Mailer: git-send-email 2.1.4
In-Reply-To: <1460770552-31260-1-git-send-email-hch@lst.de>
References: <1460770552-31260-1-git-send-email-hch@lst.de>
Signed-off-by: Christoph Hellwig
---
 include/linux/interrupt.h | 10 +++++++++
 kernel/irq/Makefile       |  1 +
 kernel/irq/affinity.c     | 54 +++++++++++++++++++++++++++++++++++++++++++++++
 3 files changed, 65 insertions(+)
 create mode 100644 kernel/irq/affinity.c

diff --git a/include/linux/interrupt.h b/include/linux/interrupt.h
index 9fcabeb..67bc1e1f 100644
--- a/include/linux/interrupt.h
+++ b/include/linux/interrupt.h
@@ -278,6 +278,9 @@ extern int irq_set_affinity_hint(unsigned int irq, const struct cpumask *m);
 extern int
 irq_set_affinity_notifier(unsigned int irq, struct irq_affinity_notify *notify);
 
+int irq_create_affinity_mask(struct cpumask **affinity_mask,
+		unsigned int nr_vecs);
+
 #else /* CONFIG_SMP */
 
 static inline int irq_set_affinity(unsigned int irq, const struct cpumask *m)
@@ -308,6 +311,13 @@ irq_set_affinity_notifier(unsigned int irq, struct irq_affinity_notify *notify)
 {
 	return 0;
 }
+
+static inline int irq_create_affinity_mask(struct cpumask **affinity_mask,
+		unsigned int nr_vecs)
+{
+	*affinity_mask = NULL;
+	return 0;
+}
 #endif /* CONFIG_SMP */
 
 /*
diff --git a/kernel/irq/Makefile b/kernel/irq/Makefile
index 2ee42e9..1d3ee31 100644
--- a/kernel/irq/Makefile
+++ b/kernel/irq/Makefile
@@ -9,3 +9,4 @@ obj-$(CONFIG_GENERIC_IRQ_MIGRATION) += cpuhotplug.o
 obj-$(CONFIG_PM_SLEEP) += pm.o
 obj-$(CONFIG_GENERIC_MSI_IRQ) += msi.o
 obj-$(CONFIG_GENERIC_IRQ_IPI) += ipi.o
+obj-$(CONFIG_SMP) += affinity.o
diff --git a/kernel/irq/affinity.c b/kernel/irq/affinity.c
new file mode 100644
index 0000000..ecb8915
--- /dev/null
+++ b/kernel/irq/affinity.c
@@ -0,0 +1,54 @@
+
+#include <linux/interrupt.h>
+#include <linux/kernel.h>
+#include <linux/slab.h>
+
+static int get_first_sibling(unsigned int cpu)
+{
+	unsigned int ret;
+
+	ret = cpumask_first(topology_sibling_cpumask(cpu));
+	if (ret < nr_cpu_ids)
+		return ret;
+	return cpu;
+}
+
+/*
+ * Take a map of online CPUs and the number of available interrupt vectors
+ * and generate an output cpumask suitable for spreading MSI/MSI-X vectors
+ * so that they are distributed as evenly as possible around the CPUs.  If
+ * more vectors than CPUs are available we'll map one to each CPU,
+ * otherwise we map one to the first sibling of each core.
+ *
+ * If there are more vectors than CPUs we will still only have one bit
+ * set per CPU, but interrupt code will keep on assigning the vectors from
+ * the start of the bitmap until we run out of vectors.
+ */
+int irq_create_affinity_mask(struct cpumask **affinity_mask,
+		unsigned int nr_vecs)
+{
+	if (nr_vecs == 1) {
+		*affinity_mask = NULL;
+		return 0;
+	}
+
+	*affinity_mask = kzalloc(cpumask_size(), GFP_KERNEL);
+	if (!*affinity_mask)
+		return -ENOMEM;
+
+	if (nr_vecs >= num_online_cpus()) {
+		cpumask_copy(*affinity_mask, cpu_online_mask);
+	} else {
+		unsigned int cpu;
+
+		for_each_online_cpu(cpu) {
+			if (cpu == get_first_sibling(cpu))
+				cpumask_set_cpu(cpu, *affinity_mask);
+
+			if (--nr_vecs == 0)
+				break;
+		}
+	}
+
+	return 0;
+}