From patchwork Fri Jul 9 08:09:56 2021
X-Patchwork-Submitter: Ming Lei
X-Patchwork-Id: 12366947
From: Ming Lei
To: Jens Axboe, Christoph Hellwig, "Martin K. Petersen", linux-block@vger.kernel.org, linux-nvme@lists.infradead.org, linux-scsi@vger.kernel.org
Cc: Sagi Grimberg, Daniel Wagner, Wen Xiong, John Garry, Hannes Reinecke, Keith Busch, Damien Le Moal, Ming Lei
Subject: [PATCH V3 01/10] blk-mq: rename blk-mq-cpumap.c as blk-mq-map.c
Date: Fri, 9 Jul 2021 16:09:56 +0800
Message-Id: <20210709081005.421340-2-ming.lei@redhat.com>
In-Reply-To: <20210709081005.421340-1-ming.lei@redhat.com>
References: <20210709081005.421340-1-ming.lei@redhat.com>
X-Mailing-List: linux-scsi@vger.kernel.org

First, the "cpumap" name is no longer descriptive: all of the map helpers (pci, rdma, virtio) map CPU(s) to hardware queues, not just the plain CPU-based one. Second, this prepares for moving the physical-device-specific mapping code out into its own subsystems; all map-related functions/helpers will then live in this renamed source file.
Signed-off-by: Ming Lei
---
 block/Makefile                          | 2 +-
 block/{blk-mq-cpumap.c => blk-mq-map.c} | 0
 2 files changed, 1 insertion(+), 1 deletion(-)
 rename block/{blk-mq-cpumap.c => blk-mq-map.c} (100%)

diff --git a/block/Makefile b/block/Makefile
index bfbe4e13ca1e..0f31c7e8a475 100644
--- a/block/Makefile
+++ b/block/Makefile
@@ -7,7 +7,7 @@ obj-$(CONFIG_BLOCK) := bio.o elevator.o blk-core.o blk-sysfs.o \
 			blk-flush.o blk-settings.o blk-ioc.o blk-map.o \
 			blk-exec.o blk-merge.o blk-timeout.o \
 			blk-lib.o blk-mq.o blk-mq-tag.o blk-stat.o \
-			blk-mq-sysfs.o blk-mq-cpumap.o blk-mq-sched.o ioctl.o \
+			blk-mq-sysfs.o blk-mq-map.o blk-mq-sched.o ioctl.o \
 			genhd.o ioprio.o badblocks.o partitions/ blk-rq-qos.o \
 			disk-events.o

diff --git a/block/blk-mq-cpumap.c b/block/blk-mq-map.c
similarity index 100%
rename from block/blk-mq-cpumap.c
rename to block/blk-mq-map.c

From patchwork Fri Jul 9 08:09:57 2021
X-Patchwork-Submitter: Ming Lei
X-Patchwork-Id: 12366949
From: Ming Lei
Subject: [PATCH V3 02/10] blk-mq: Introduce blk_mq_dev_map_queues
Date: Fri, 9 Jul 2021 16:09:57 +0800
Message-Id: <20210709081005.421340-3-ming.lei@redhat.com>
In-Reply-To: <20210709081005.421340-1-ming.lei@redhat.com>
References: <20210709081005.421340-1-ming.lei@redhat.com>

Introduce blk_mq_dev_map_queues so that all of the map_queues implementations (pci, virtio, rdma, ...) can be moved out of the block layer.

Signed-off-by: Ming Lei
---
 block/blk-mq-map.c     | 53 ++++++++++++++++++++++++++++++++++++++++++
 include/linux/blk-mq.h |  5 ++++
 2 files changed, 58 insertions(+)

diff --git a/block/blk-mq-map.c b/block/blk-mq-map.c
index 3db84d3197f1..e3ba2ef1e9e2 100644
--- a/block/blk-mq-map.c
+++ b/block/blk-mq-map.c
@@ -94,3 +94,56 @@ int blk_mq_hw_queue_to_node(struct blk_mq_queue_map *qmap, unsigned int index)
 
 	return NUMA_NO_NODE;
 }
+
+/**
+ * blk_mq_dev_map_queues - provide generic queue mapping
+ * @qmap: CPU to hardware queue map.
+ * @dev_off: Offset to use for the device
+ * @get_queue_affinity: Callback to retrieve queue affinity
+ * @dev_data: Device data passed to get_queue_affinity()
+ * @fallback: If true, fallback to default blk-mq mapping in case of
+ *	any failure
+ *
+ * Generic function to setup each queue mapping in @qmap. It will query
+ * each queue's affinity via @get_queue_affinity and build a queue mapping
+ * that maps a queue to the CPUs in the queue affinity.
+ *
+ * Driver has to set correct @dev_data, so that the driver callback
+ * of @get_queue_affinity can work correctly.
+ */
+int blk_mq_dev_map_queues(struct blk_mq_queue_map *qmap, void *dev_data,
+		int dev_off, get_queue_affinty_fn *get_queue_affinity,
+		bool fallback)
+{
+	const struct cpumask *mask;
+	unsigned int queue, cpu;
+
+	/*
+	 * fallback to default mapping if driver doesn't provide
+	 * get_queue_affinity callback
+	 */
+	if (!get_queue_affinity) {
+		fallback = true;
+		goto fallback;
+	}
+
+	for (queue = 0; queue < qmap->nr_queues; queue++) {
+		mask = get_queue_affinity(dev_data, dev_off, queue);
+		if (!mask)
+			goto fallback;
+
+		for_each_cpu(cpu, mask)
+			qmap->mq_map[cpu] = qmap->queue_offset + queue;
+	}
+
+	return 0;
+
+fallback:
+	if (!fallback) {
+		WARN_ON_ONCE(qmap->nr_queues > 1);
+		blk_mq_clear_mq_map(qmap);
+		return 0;
+	}
+	return blk_mq_map_queues(qmap);
+}
+EXPORT_SYMBOL_GPL(blk_mq_dev_map_queues);

diff --git a/include/linux/blk-mq.h b/include/linux/blk-mq.h
index fd2de2b422ed..b6090d691594 100644
--- a/include/linux/blk-mq.h
+++ b/include/linux/blk-mq.h
@@ -553,7 +553,12 @@ void blk_mq_freeze_queue_wait(struct request_queue *q);
 int blk_mq_freeze_queue_wait_timeout(struct request_queue *q,
 				     unsigned long timeout);
 
+typedef const struct cpumask * (get_queue_affinty_fn)(void *dev_data,
+		int dev_off, int queue_idx);
 int blk_mq_map_queues(struct blk_mq_queue_map *qmap);
+int blk_mq_dev_map_queues(struct blk_mq_queue_map *qmap, void *dev_data,
+		int dev_off, get_queue_affinty_fn *get_queue_affinity,
+		bool fallback);
 void blk_mq_update_nr_hw_queues(struct blk_mq_tag_set *set, int nr_hw_queues);
 
 void blk_mq_quiesce_queue_nowait(struct request_queue *q);

From patchwork Fri Jul 9 08:09:58 2021
X-Patchwork-Submitter: Ming Lei
X-Patchwork-Id: 12366951
From: Ming Lei
Subject: [PATCH V3 03/10] blk-mq: pass use managed irq info to blk_mq_dev_map_queues
Date: Fri, 9 Jul 2021 16:09:58 +0800
Message-Id: <20210709081005.421340-4-ming.lei@redhat.com>
In-Reply-To: <20210709081005.421340-1-ming.lei@redhat.com>
References: <20210709081005.421340-1-ming.lei@redhat.com>

Managed irqs are special: the genirq core shuts one down when all CPUs in its affinity mask go offline, so blk-mq has to drain requests and prevent new allocations on a hw queue before its managed irq is shut down. The current implementation drains every hctx when the last CPU in hctx->cpumask is about to go offline. However, hw queues that do not use managed irqs must not be drained. One such user is nvme fc/rdma/tcp: these controllers need to submit connection requests successfully even when every CPU in hctx->cpumask is offline, and we have many kernel panic reports from blk_mq_alloc_request_hctx(). Once we know whether a qmap uses managed irqs, we can skip draining requests on hctx'es that don't use them, and we can allow request allocation on an hctx whose CPUs are all offline. That both fixes the kernel panic in blk_mq_alloc_request_hctx() and meets the nvme fc/rdma/tcp requirement.
Signed-off-by: Ming Lei
---
 block/blk-mq-map.c     | 6 +++++-
 include/linux/blk-mq.h | 5 +++--
 2 files changed, 8 insertions(+), 3 deletions(-)

diff --git a/block/blk-mq-map.c b/block/blk-mq-map.c
index e3ba2ef1e9e2..6b453f8d7965 100644
--- a/block/blk-mq-map.c
+++ b/block/blk-mq-map.c
@@ -103,6 +103,8 @@ int blk_mq_hw_queue_to_node(struct blk_mq_queue_map *qmap, unsigned int index)
  * @dev_data: Device data passed to get_queue_affinity()
  * @fallback: If true, fallback to default blk-mq mapping in case of
  *	any failure
+ * @managed_irq: If driver is likely to use managed irq, pass @managed_irq
+ *	as true.
  *
  * Generic function to setup each queue mapping in @qmap. It will query
  * each queue's affinity via @get_queue_affinity and build a queue mapping
@@ -113,7 +115,7 @@ int blk_mq_hw_queue_to_node(struct blk_mq_queue_map *qmap, unsigned int index)
  */
 int blk_mq_dev_map_queues(struct blk_mq_queue_map *qmap, void *dev_data,
 		int dev_off, get_queue_affinty_fn *get_queue_affinity,
-		bool fallback)
+		bool fallback, bool managed_irq)
 {
 	const struct cpumask *mask;
 	unsigned int queue, cpu;
@@ -136,6 +138,8 @@ int blk_mq_dev_map_queues(struct blk_mq_queue_map *qmap, void *dev_data,
 			qmap->mq_map[cpu] = qmap->queue_offset + queue;
 	}
 
+	qmap->use_managed_irq = managed_irq;
+
 	return 0;
 
 fallback:

diff --git a/include/linux/blk-mq.h b/include/linux/blk-mq.h
index b6090d691594..a2cd85ac0354 100644
--- a/include/linux/blk-mq.h
+++ b/include/linux/blk-mq.h
@@ -192,7 +192,8 @@ struct blk_mq_hw_ctx {
 struct blk_mq_queue_map {
 	unsigned int *mq_map;
 	unsigned int nr_queues;
-	unsigned int queue_offset;
+	unsigned int queue_offset:31;
+	unsigned int use_managed_irq:1;
 };
 
 /**
@@ -558,7 +559,7 @@ typedef const struct cpumask * (get_queue_affinty_fn)(void *dev_data,
 int blk_mq_map_queues(struct blk_mq_queue_map *qmap);
 int blk_mq_dev_map_queues(struct blk_mq_queue_map *qmap, void *dev_data,
 		int dev_off, get_queue_affinty_fn *get_queue_affinity,
-		bool fallback);
+		bool fallback, bool managed_irq);
 void blk_mq_update_nr_hw_queues(struct blk_mq_tag_set *set, int nr_hw_queues);
 
 void blk_mq_quiesce_queue_nowait(struct request_queue *q);

From patchwork Fri Jul 9 08:09:59 2021
X-Patchwork-Submitter: Ming Lei
X-Patchwork-Id: 12366953
From: Ming Lei
Subject: [PATCH V3 04/10] scsi: replace blk_mq_pci_map_queues with blk_mq_dev_map_queues
Date: Fri, 9 Jul 2021 16:09:59 +0800
Message-Id: <20210709081005.421340-5-ming.lei@redhat.com>
In-Reply-To: <20210709081005.421340-1-ming.lei@redhat.com>
References: <20210709081005.421340-1-ming.lei@redhat.com>

Replace blk_mq_pci_map_queues with blk_mq_dev_map_queues, which is more generic from the blk-mq viewpoint, so that all queue mapping can be unified via blk_mq_dev_map_queues(). Meanwhile the 'use managed irq' information can be passed to blk-mq through blk_mq_dev_map_queues(). This information doesn't have to be 100% accurate; what matters is that true is passed whenever the HBA really uses managed irqs.
Signed-off-by: Ming Lei
Reported-by: kernel test robot
---
 drivers/scsi/hisi_sas/hisi_sas_v2_hw.c    | 21 ++++++++++-----------
 drivers/scsi/hisi_sas/hisi_sas_v3_hw.c    |  5 +++--
 drivers/scsi/megaraid/megaraid_sas_base.c |  4 +++-
 drivers/scsi/mpi3mr/mpi3mr_os.c           |  9 +++++----
 drivers/scsi/mpt3sas/mpt3sas_scsih.c      |  6 ++++--
 drivers/scsi/qla2xxx/qla_os.c             |  4 +++-
 drivers/scsi/scsi_priv.h                  |  9 +++++++++
 drivers/scsi/smartpqi/smartpqi_init.c     |  7 +++++--
 8 files changed, 42 insertions(+), 23 deletions(-)

diff --git a/drivers/scsi/hisi_sas/hisi_sas_v2_hw.c b/drivers/scsi/hisi_sas/hisi_sas_v2_hw.c
index 49d2723ef34c..4d3a698e2e4c 100644
--- a/drivers/scsi/hisi_sas/hisi_sas_v2_hw.c
+++ b/drivers/scsi/hisi_sas/hisi_sas_v2_hw.c
@@ -3547,6 +3547,14 @@ static struct device_attribute *host_attrs_v2_hw[] = {
 	NULL
 };
 
+static inline const struct cpumask *hisi_hba_get_queue_affinity(
+		void *dev_data, int offset, int idx)
+{
+	struct hisi_hba *hba = dev_data;
+
+	return irq_get_affinity_mask(hba->irq_map[offset + idx]);
+}
+
 static int map_queues_v2_hw(struct Scsi_Host *shost)
 {
 	struct hisi_hba *hisi_hba = shost_priv(shost);
@@ -3554,17 +3562,8 @@ static int map_queues_v2_hw(struct Scsi_Host *shost)
 	const struct cpumask *mask;
 	unsigned int queue, cpu;
 
-	for (queue = 0; queue < qmap->nr_queues; queue++) {
-		mask = irq_get_affinity_mask(hisi_hba->irq_map[96 + queue]);
-		if (!mask)
-			continue;
-
-		for_each_cpu(cpu, mask)
-			qmap->mq_map[cpu] = qmap->queue_offset + queue;
-	}
-
-	return 0;
-
+	return blk_mq_dev_map_queues(qmap, hisi_hba, 96,
+			hisi_hba_get_queue_affinity, false, true);
 }
 
 static struct scsi_host_template sht_v2_hw = {

diff --git a/drivers/scsi/hisi_sas/hisi_sas_v3_hw.c b/drivers/scsi/hisi_sas/hisi_sas_v3_hw.c
index 5c3b1dfcb37c..f4370c43ba05 100644
--- a/drivers/scsi/hisi_sas/hisi_sas_v3_hw.c
+++ b/drivers/scsi/hisi_sas/hisi_sas_v3_hw.c
@@ -3132,8 +3132,9 @@ static int hisi_sas_map_queues(struct Scsi_Host *shost)
 	struct hisi_hba *hisi_hba = shost_priv(shost);
 	struct blk_mq_queue_map *qmap = &shost->tag_set.map[HCTX_TYPE_DEFAULT];
 
-	return blk_mq_pci_map_queues(qmap, hisi_hba->pci_dev,
-				     BASE_VECTORS_V3_HW);
+	return blk_mq_dev_map_queues(qmap, hisi_hba->pci_dev,
+			BASE_VECTORS_V3_HW,
+			scsi_pci_get_queue_affinity, false, true);
 }
 
 static struct scsi_host_template sht_v3_hw = {

diff --git a/drivers/scsi/megaraid/megaraid_sas_base.c b/drivers/scsi/megaraid/megaraid_sas_base.c
index ec10b2497310..1bb3d522e305 100644
--- a/drivers/scsi/megaraid/megaraid_sas_base.c
+++ b/drivers/scsi/megaraid/megaraid_sas_base.c
@@ -47,6 +47,7 @@
 #include
 #include "megaraid_sas_fusion.h"
 #include "megaraid_sas.h"
+#include "../scsi_priv.h"
 
 /*
  * Number of sectors per IO command
@@ -3185,7 +3186,8 @@ static int megasas_map_queues(struct Scsi_Host *shost)
 	map = &shost->tag_set.map[HCTX_TYPE_DEFAULT];
 	map->nr_queues = instance->msix_vectors - offset;
 	map->queue_offset = 0;
-	blk_mq_pci_map_queues(map, instance->pdev, offset);
+	blk_mq_dev_map_queues(map, instance->pdev, offset,
+			scsi_pci_get_queue_affinity, false, true);
 	qoff += map->nr_queues;
 	offset += map->nr_queues;

diff --git a/drivers/scsi/mpi3mr/mpi3mr_os.c b/drivers/scsi/mpi3mr/mpi3mr_os.c
index 40676155e62d..7eed125ec66b 100644
--- a/drivers/scsi/mpi3mr/mpi3mr_os.c
+++ b/drivers/scsi/mpi3mr/mpi3mr_os.c
@@ -2787,17 +2787,18 @@ static int mpi3mr_bios_param(struct scsi_device *sdev,
  * mpi3mr_map_queues - Map queues callback handler
  * @shost: SCSI host reference
  *
- * Call the blk_mq_pci_map_queues with from which operational
+ * Call the blk_mq_dev_map_queues with from which operational
  * queue the mapping has to be done
  *
- * Return: return of blk_mq_pci_map_queues
+ * Return: return of blk_mq_dev_map_queues
  */
 static int mpi3mr_map_queues(struct Scsi_Host *shost)
 {
 	struct mpi3mr_ioc *mrioc = shost_priv(shost);
 
-	return blk_mq_pci_map_queues(&shost->tag_set.map[HCTX_TYPE_DEFAULT],
-	    mrioc->pdev, mrioc->op_reply_q_offset);
+	return blk_mq_dev_map_queues(&shost->tag_set.map[HCTX_TYPE_DEFAULT],
+	    mrioc->pdev, mrioc->op_reply_q_offset,
+	    scsi_pci_get_queue_affinity, false, true);
 }
 
 /**

diff --git a/drivers/scsi/mpt3sas/mpt3sas_scsih.c b/drivers/scsi/mpt3sas/mpt3sas_scsih.c
index 866d118f7931..dded3cfa1115 100644
--- a/drivers/scsi/mpt3sas/mpt3sas_scsih.c
+++ b/drivers/scsi/mpt3sas/mpt3sas_scsih.c
@@ -57,6 +57,7 @@
 #include
 #include
+#include "../scsi_priv.h"
 #include "mpt3sas_base.h"
 
 #define RAID_CHANNEL 1
@@ -11784,8 +11785,9 @@ static int scsih_map_queues(struct Scsi_Host *shost)
 	if (ioc->shost->nr_hw_queues == 1)
 		return 0;
 
-	return blk_mq_pci_map_queues(&shost->tag_set.map[HCTX_TYPE_DEFAULT],
-	    ioc->pdev, ioc->high_iops_queues);
+	return blk_mq_dev_map_queues(&shost->tag_set.map[HCTX_TYPE_DEFAULT],
+	    ioc->pdev, ioc->high_iops_queues, scsi_pci_get_queue_affinity,
+	    false, true);
 }
 
 /* shost template for SAS 2.0 HBA devices */

diff --git a/drivers/scsi/qla2xxx/qla_os.c b/drivers/scsi/qla2xxx/qla_os.c
index 4eab564ea6a0..dc8c27052382 100644
--- a/drivers/scsi/qla2xxx/qla_os.c
+++ b/drivers/scsi/qla2xxx/qla_os.c
@@ -21,6 +21,7 @@
 #include
 
 #include "qla_target.h"
+#include "../scsi_priv.h"
 
 /*
  * Driver version
@@ -7696,7 +7697,8 @@ static int qla2xxx_map_queues(struct Scsi_Host *shost)
 	if (USER_CTRL_IRQ(vha->hw) || !vha->hw->mqiobase)
 		rc = blk_mq_map_queues(qmap);
 	else
-		rc = blk_mq_pci_map_queues(qmap, vha->hw->pdev, vha->irq_offset);
+		rc = blk_mq_dev_map_queues(qmap, vha->hw->pdev, vha->irq_offset,
+				scsi_pci_get_queue_affinity, false, true);
 
 	return rc;
 }

diff --git a/drivers/scsi/scsi_priv.h b/drivers/scsi/scsi_priv.h
index 75d6f23e4fff..cc1bd9ce6e2c 100644
--- a/drivers/scsi/scsi_priv.h
+++ b/drivers/scsi/scsi_priv.h
@@ -6,6 +6,7 @@
 #include
 #include
 #include
+#include
 
 struct request_queue;
 struct request;
@@ -190,4 +191,12 @@ extern int scsi_device_max_queue_depth(struct scsi_device *sdev);
 
 #define SCSI_DEVICE_BLOCK_MAX_TIMEOUT	600	/* units in seconds */
 
+static inline const struct cpumask *scsi_pci_get_queue_affinity(
+		void *dev_data, int offset, int queue)
+{
+	struct pci_dev *pdev = dev_data;
+
+	return pci_irq_get_affinity(pdev, offset + queue);
+}
+
 #endif /* _SCSI_PRIV_H */

diff --git a/drivers/scsi/smartpqi/smartpqi_init.c b/drivers/scsi/smartpqi/smartpqi_init.c
index dcc0b9618a64..fd66260061c1 100644
--- a/drivers/scsi/smartpqi/smartpqi_init.c
+++ b/drivers/scsi/smartpqi/smartpqi_init.c
@@ -26,6 +26,7 @@
 #include
 #include
 #include
+#include "../scsi_priv.h"
 #include "smartpqi.h"
 #include "smartpqi_sis.h"
@@ -6104,8 +6105,10 @@ static int pqi_map_queues(struct Scsi_Host *shost)
 {
 	struct pqi_ctrl_info *ctrl_info = shost_to_hba(shost);
 
-	return blk_mq_pci_map_queues(&shost->tag_set.map[HCTX_TYPE_DEFAULT],
-					ctrl_info->pci_dev, 0);
+	return blk_mq_dev_map_queues(&shost->tag_set.map[HCTX_TYPE_DEFAULT],
+					ctrl_info->pci_dev, 0,
+					scsi_pci_get_queue_affinity, false,
+					true);
 }
 
 static int pqi_slave_configure(struct scsi_device *sdev)

From patchwork Fri Jul 9 08:10:00 2021
X-Patchwork-Submitter: Ming Lei
X-Patchwork-Id: 12366955
From: Ming Lei
Subject: [PATCH V3 05/10] nvme: replace blk_mq_pci_map_queues with blk_mq_dev_map_queues
Date: Fri, 9 Jul 2021 16:10:00 +0800
Message-Id: <20210709081005.421340-6-ming.lei@redhat.com>
In-Reply-To: <20210709081005.421340-1-ming.lei@redhat.com>
References: <20210709081005.421340-1-ming.lei@redhat.com>

Replace blk_mq_pci_map_queues with blk_mq_dev_map_queues, which is more generic from the blk-mq viewpoint, so that all queue mapping can be unified via blk_mq_dev_map_queues(). Meanwhile the 'use managed irq' information can be passed to blk-mq through blk_mq_dev_map_queues(). This information doesn't have to be 100% accurate; what matters is that true is passed whenever the controller really uses managed irqs.
Signed-off-by: Ming Lei
---
 drivers/nvme/host/pci.c | 12 +++++++++++-
 1 file changed, 11 insertions(+), 1 deletion(-)

diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c
index d3c5086673bc..d16ba661560d 100644
--- a/drivers/nvme/host/pci.c
+++ b/drivers/nvme/host/pci.c
@@ -433,6 +433,14 @@ static int nvme_init_request(struct blk_mq_tag_set *set, struct request *req,
 	return 0;
 }
 
+static const struct cpumask *nvme_pci_get_queue_affinity(
+		void *dev_data, int offset, int queue)
+{
+	struct pci_dev *pdev = dev_data;
+
+	return pci_irq_get_affinity(pdev, offset + queue);
+}
+
 static int queue_irq_offset(struct nvme_dev *dev)
 {
 	/* if we have more than 1 vec, admin queue offsets us by 1 */
@@ -463,7 +471,9 @@ static int nvme_pci_map_queues(struct blk_mq_tag_set *set)
 		 */
 		map->queue_offset = qoff;
 		if (i != HCTX_TYPE_POLL && offset)
-			blk_mq_pci_map_queues(map, to_pci_dev(dev->dev), offset);
+			blk_mq_dev_map_queues(map, to_pci_dev(dev->dev), offset,
+					nvme_pci_get_queue_affinity, false,
+					true);
 		else
 			blk_mq_map_queues(map);
 		qoff += map->nr_queues;

From patchwork Fri Jul 9 08:10:01 2021
X-Patchwork-Submitter: Ming Lei
X-Patchwork-Id: 12366957
From: Ming Lei To: Jens Axboe , Christoph Hellwig , "Martin K . 
Petersen" , linux-block@vger.kernel.org, linux-nvme@lists.infradead.org, linux-scsi@vger.kernel.org Cc: Sagi Grimberg , Daniel Wagner , Wen Xiong , John Garry , Hannes Reinecke , Keith Busch , Damien Le Moal , Ming Lei Subject: [PATCH V3 06/10] virtio: add APIs for retrieving vq affinity Date: Fri, 9 Jul 2021 16:10:01 +0800 Message-Id: <20210709081005.421340-7-ming.lei@redhat.com> In-Reply-To: <20210709081005.421340-1-ming.lei@redhat.com> References: <20210709081005.421340-1-ming.lei@redhat.com> MIME-Version: 1.0 virtio-blk/virtio-scsi need this API for retrieving a vq's affinity. Signed-off-by: Ming Lei --- drivers/virtio/virtio.c | 10 ++++++++++ include/linux/virtio.h | 2 ++ 2 files changed, 12 insertions(+) diff --git a/drivers/virtio/virtio.c b/drivers/virtio/virtio.c index 4b15c00c0a0a..ab593a8350d4 100644 --- a/drivers/virtio/virtio.c +++ b/drivers/virtio/virtio.c @@ -448,6 +448,16 @@ int virtio_device_restore(struct virtio_device *dev) EXPORT_SYMBOL_GPL(virtio_device_restore); #endif +const struct cpumask *virtio_get_vq_affinity(struct virtio_device *dev, + int index) +{ + if (!dev->config->get_vq_affinity) + return NULL; + + return dev->config->get_vq_affinity(dev, index); +} +EXPORT_SYMBOL_GPL(virtio_get_vq_affinity); + static int virtio_init(void) { if (bus_register(&virtio_bus) != 0) diff --git a/include/linux/virtio.h b/include/linux/virtio.h index b1894e0323fa..99fbba9981cc 100644 --- a/include/linux/virtio.h +++ b/include/linux/virtio.h @@ -139,6 +139,8 @@ int virtio_device_restore(struct virtio_device *dev); #endif size_t virtio_max_dma_size(struct virtio_device *vdev); +const struct cpumask *virtio_get_vq_affinity(struct virtio_device *dev, + int index); #define virtio_device_for_each_vq(vdev, vq) \ list_for_each_entry(vq, &vdev->vqs, list) From patchwork Fri Jul 9 08:10:02 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 
Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ming Lei X-Patchwork-Id: 12366959 From: Ming Lei To: Jens Axboe , Christoph Hellwig , "Martin K . Petersen" , linux-block@vger.kernel.org, linux-nvme@lists.infradead.org, linux-scsi@vger.kernel.org Cc: Sagi Grimberg , Daniel Wagner , Wen Xiong , John Garry , Hannes Reinecke , Keith Busch , Damien Le Moal , Ming Lei Subject: [PATCH V3 07/10] virtio: blk/scsi: replace blk_mq_virtio_map_queues with blk_mq_dev_map_queues Date: Fri, 9 Jul 2021 16:10:02 +0800 Message-Id: <20210709081005.421340-8-ming.lei@redhat.com> In-Reply-To: <20210709081005.421340-1-ming.lei@redhat.com> References: <20210709081005.421340-1-ming.lei@redhat.com> MIME-Version: 1.0 Replace blk_mq_virtio_map_queues() with blk_mq_dev_map_queues(), which is more generic from the blk-mq viewpoint, so that all queue mapping implementations can be unified. Meanwhile the 'use_managed_irq' info can be passed to blk-mq through blk_mq_dev_map_queues(); this info need not be 100% accurate, it is only required that true be passed when the HBA really uses managed IRQs. 
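The per-transport difference is carried entirely by the offset argument: virtio-blk passes 0, while virtio-scsi passes 2 because its first two virtqueues are the control and event queues rather than request queues. A minimal sketch of that indirection follows; the model_* names and the bitmask representation of affinity masks are illustrative assumptions, not virtio API.

```c
#include <assert.h>

/* Model of the get_queue_affinity callback used by virtio-scsi in this
 * series: hw queue N resolves to virtqueue (offset + N), skipping the
 * non-request vqs at the front of the vq array. */
static unsigned model_vq_affinity(const unsigned *vq_masks, int offset,
				  int queue)
{
	return vq_masks[offset + queue];
}

static unsigned model_scsi_example(void)
{
	/* vq 0: control, vq 1: event (no affinity), vq 2..3: request vqs */
	const unsigned vq_masks[4] = { 0u, 0u, 0x3u, 0xcu };

	/* hw queue 0 with a virtio-scsi style offset of 2 -> vq 2 */
	return model_vq_affinity(vq_masks, 2, 0);
}
```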
Signed-off-by: Ming Lei --- drivers/block/virtio_blk.c | 12 ++++++++++-- drivers/scsi/virtio_scsi.c | 11 ++++++++++- 2 files changed, 20 insertions(+), 3 deletions(-) diff --git a/drivers/block/virtio_blk.c b/drivers/block/virtio_blk.c index e4bd3b1fc3c2..9188b5bcbe78 100644 --- a/drivers/block/virtio_blk.c +++ b/drivers/block/virtio_blk.c @@ -677,12 +677,20 @@ static int virtblk_init_request(struct blk_mq_tag_set *set, struct request *rq, return 0; } +static const struct cpumask *virtblk_get_vq_affinity(void *dev_data, + int offset, int queue) +{ + struct virtio_device *vdev = dev_data; + + return virtio_get_vq_affinity(vdev, offset + queue); +} + static int virtblk_map_queues(struct blk_mq_tag_set *set) { struct virtio_blk *vblk = set->driver_data; - return blk_mq_virtio_map_queues(&set->map[HCTX_TYPE_DEFAULT], - vblk->vdev, 0); + return blk_mq_dev_map_queues(&set->map[HCTX_TYPE_DEFAULT], vblk->vdev, + 0, virtblk_get_vq_affinity, true, true); } static const struct blk_mq_ops virtio_mq_ops = { diff --git a/drivers/scsi/virtio_scsi.c b/drivers/scsi/virtio_scsi.c index fd69a03d6137..c4b97a0926df 100644 --- a/drivers/scsi/virtio_scsi.c +++ b/drivers/scsi/virtio_scsi.c @@ -712,12 +712,21 @@ static int virtscsi_abort(struct scsi_cmnd *sc) return virtscsi_tmf(vscsi, cmd); } +static const struct cpumask *virtscsi_get_vq_affinity(void *dev_data, + int offset, int queue) +{ + struct virtio_device *vdev = dev_data; + + return virtio_get_vq_affinity(vdev, offset + queue); +} + static int virtscsi_map_queues(struct Scsi_Host *shost) { struct virtio_scsi *vscsi = shost_priv(shost); struct blk_mq_queue_map *qmap = &shost->tag_set.map[HCTX_TYPE_DEFAULT]; - return blk_mq_virtio_map_queues(qmap, vscsi->vdev, 2); + return blk_mq_dev_map_queues(qmap, vscsi->vdev, 2, + virtscsi_get_vq_affinity, true, true); } static void virtscsi_commit_rqs(struct Scsi_Host *shost, u16 hwq) From patchwork Fri Jul 9 08:10:03 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 
Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ming Lei X-Patchwork-Id: 12366961 From: Ming Lei To: Jens Axboe , Christoph Hellwig , "Martin K . Petersen" , linux-block@vger.kernel.org, linux-nvme@lists.infradead.org, linux-scsi@vger.kernel.org Cc: Sagi Grimberg , Daniel Wagner , Wen Xiong , John Garry , Hannes Reinecke , Keith Busch , Damien Le Moal , Ming Lei Subject: [PATCH V3 08/10] nvme: rdma: replace blk_mq_rdma_map_queues with blk_mq_dev_map_queues Date: Fri, 9 Jul 2021 16:10:03 +0800 Message-Id: <20210709081005.421340-9-ming.lei@redhat.com> In-Reply-To: <20210709081005.421340-1-ming.lei@redhat.com> References: <20210709081005.421340-1-ming.lei@redhat.com> MIME-Version: 1.0 Replace blk_mq_rdma_map_queues() with blk_mq_dev_map_queues(), which is more generic from the blk-mq viewpoint, so that all queue mapping implementations can be unified. Meanwhile the 'use_managed_irq' info can be passed to blk-mq through blk_mq_dev_map_queues(); this info need not be 100% accurate, it is only required that true be passed when the HBA really uses managed IRQs. 
Signed-off-by: Ming Lei --- drivers/nvme/host/rdma.c | 18 ++++++++++++++---- 1 file changed, 14 insertions(+), 4 deletions(-) diff --git a/drivers/nvme/host/rdma.c b/drivers/nvme/host/rdma.c index a9e70cefd7ed..dc47df03a39a 100644 --- a/drivers/nvme/host/rdma.c +++ b/drivers/nvme/host/rdma.c @@ -2169,6 +2169,14 @@ static void nvme_rdma_complete_rq(struct request *rq) nvme_complete_rq(rq); } +static const struct cpumask *nvme_rdma_get_queue_affinity( + void *dev_data, int offset, int queue) +{ + struct ib_device *dev = dev_data; + + return ib_get_vector_affinity(dev, offset + queue); +} + static int nvme_rdma_map_queues(struct blk_mq_tag_set *set) { struct nvme_rdma_ctrl *ctrl = set->driver_data; @@ -2192,10 +2200,12 @@ static int nvme_rdma_map_queues(struct blk_mq_tag_set *set) ctrl->io_queues[HCTX_TYPE_DEFAULT]; set->map[HCTX_TYPE_READ].queue_offset = 0; } - blk_mq_rdma_map_queues(&set->map[HCTX_TYPE_DEFAULT], - ctrl->device->dev, 0); - blk_mq_rdma_map_queues(&set->map[HCTX_TYPE_READ], - ctrl->device->dev, 0); + blk_mq_dev_map_queues(&set->map[HCTX_TYPE_DEFAULT], + ctrl->device->dev, 0, nvme_rdma_get_queue_affinity, + true, false); + blk_mq_dev_map_queues(&set->map[HCTX_TYPE_READ], + ctrl->device->dev, 0, nvme_rdma_get_queue_affinity, + true, false); if (opts->nr_poll_queues && ctrl->io_queues[HCTX_TYPE_POLL]) { /* map dedicated poll queues only if we have queues left */ From patchwork Fri Jul 9 08:10:04 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ming Lei X-Patchwork-Id: 12366963 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.4 required=3.0 tests=BAYES_00,DKIMWL_WL_HIGH, DKIM_SIGNED,DKIM_VALID,DKIM_VALID_AU,HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_CR_TRAILER,INCLUDES_PATCH,MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS autolearn=unavailable autolearn_force=no version=3.4.0 Received: 
From: Ming Lei To: Jens Axboe , Christoph Hellwig , "Martin K . 
Petersen" , linux-block@vger.kernel.org, linux-nvme@lists.infradead.org, linux-scsi@vger.kernel.org Cc: Sagi Grimberg , Daniel Wagner , Wen Xiong , John Garry , Hannes Reinecke , Keith Busch , Damien Le Moal , Ming Lei Subject: [PATCH V3 09/10] blk-mq: remove map queue helpers for pci, rdma and virtio Date: Fri, 9 Jul 2021 16:10:04 +0800 Message-Id: <20210709081005.421340-10-ming.lei@redhat.com> In-Reply-To: <20210709081005.421340-1-ming.lei@redhat.com> References: <20210709081005.421340-1-ming.lei@redhat.com> MIME-Version: 1.0 Now that we have switched to blk_mq_dev_map_queues(), remove these helpers and their source files. Signed-off-by: Ming Lei --- block/Makefile | 3 --- block/blk-mq-pci.c | 48 ------------------------------------------- block/blk-mq-rdma.c | 44 --------------------------------------- block/blk-mq-virtio.c | 46 ----------------------------------------- 4 files changed, 141 deletions(-) delete mode 100644 block/blk-mq-pci.c delete mode 100644 block/blk-mq-rdma.c delete mode 100644 block/blk-mq-virtio.c diff --git a/block/Makefile b/block/Makefile index 0f31c7e8a475..9437518a16ae 100644 --- a/block/Makefile +++ b/block/Makefile @@ -31,9 +31,6 @@ obj-$(CONFIG_IOSCHED_BFQ) += bfq.o obj-$(CONFIG_BLK_CMDLINE_PARSER) += cmdline-parser.o obj-$(CONFIG_BLK_DEV_INTEGRITY) += bio-integrity.o blk-integrity.o obj-$(CONFIG_BLK_DEV_INTEGRITY_T10) += t10-pi.o -obj-$(CONFIG_BLK_MQ_PCI) += blk-mq-pci.o -obj-$(CONFIG_BLK_MQ_VIRTIO) += blk-mq-virtio.o -obj-$(CONFIG_BLK_MQ_RDMA) += blk-mq-rdma.o obj-$(CONFIG_BLK_DEV_ZONED) += blk-zoned.o obj-$(CONFIG_BLK_WBT) += blk-wbt.o obj-$(CONFIG_BLK_DEBUG_FS) += blk-mq-debugfs.o diff --git a/block/blk-mq-pci.c b/block/blk-mq-pci.c deleted file mode 100644 index b595a94c4d16..000000000000 --- a/block/blk-mq-pci.c +++ /dev/null @@ -1,48 +0,0 @@ -// SPDX-License-Identifier: GPL-2.0 -/* - * Copyright (c) 2016 Christoph Hellwig. 
- */ -#include -#include -#include -#include -#include -#include - -#include "blk-mq.h" - -/** - * blk_mq_pci_map_queues - provide a default queue mapping for PCI device - * @qmap: CPU to hardware queue map. - * @pdev: PCI device associated with @set. - * @offset: Offset to use for the pci irq vector - * - * This function assumes the PCI device @pdev has at least as many available - * interrupt vectors as @set has queues. It will then query the vector - * corresponding to each queue for it's affinity mask and built queue mapping - * that maps a queue to the CPUs that have irq affinity for the corresponding - * vector. - */ -int blk_mq_pci_map_queues(struct blk_mq_queue_map *qmap, struct pci_dev *pdev, - int offset) -{ - const struct cpumask *mask; - unsigned int queue, cpu; - - for (queue = 0; queue < qmap->nr_queues; queue++) { - mask = pci_irq_get_affinity(pdev, queue + offset); - if (!mask) - goto fallback; - - for_each_cpu(cpu, mask) - qmap->mq_map[cpu] = qmap->queue_offset + queue; - } - - return 0; - -fallback: - WARN_ON_ONCE(qmap->nr_queues > 1); - blk_mq_clear_mq_map(qmap); - return 0; -} -EXPORT_SYMBOL_GPL(blk_mq_pci_map_queues); diff --git a/block/blk-mq-rdma.c b/block/blk-mq-rdma.c deleted file mode 100644 index 14f968e58b8f..000000000000 --- a/block/blk-mq-rdma.c +++ /dev/null @@ -1,44 +0,0 @@ -// SPDX-License-Identifier: GPL-2.0 -/* - * Copyright (c) 2017 Sagi Grimberg. - */ -#include -#include -#include - -/** - * blk_mq_rdma_map_queues - provide a default queue mapping for rdma device - * @map: CPU to hardware queue map. - * @dev: rdma device to provide a mapping for. - * @first_vec: first interrupt vectors to use for queues (usually 0) - * - * This function assumes the rdma device @dev has at least as many available - * interrupt vetors as @set has queues. It will then query it's affinity mask - * and built queue mapping that maps a queue to the CPUs that have irq affinity - * for the corresponding vector. 
- * - * In case either the driver passed a @dev with less vectors than - * @set->nr_hw_queues, or @dev does not provide an affinity mask for a - * vector, we fallback to the naive mapping. - */ -int blk_mq_rdma_map_queues(struct blk_mq_queue_map *map, - struct ib_device *dev, int first_vec) -{ - const struct cpumask *mask; - unsigned int queue, cpu; - - for (queue = 0; queue < map->nr_queues; queue++) { - mask = ib_get_vector_affinity(dev, first_vec + queue); - if (!mask) - goto fallback; - - for_each_cpu(cpu, mask) - map->mq_map[cpu] = map->queue_offset + queue; - } - - return 0; - -fallback: - return blk_mq_map_queues(map); -} -EXPORT_SYMBOL_GPL(blk_mq_rdma_map_queues); diff --git a/block/blk-mq-virtio.c b/block/blk-mq-virtio.c deleted file mode 100644 index 7b8a42c35102..000000000000 --- a/block/blk-mq-virtio.c +++ /dev/null @@ -1,46 +0,0 @@ -// SPDX-License-Identifier: GPL-2.0 -/* - * Copyright (c) 2016 Christoph Hellwig. - */ -#include -#include -#include -#include -#include -#include "blk-mq.h" - -/** - * blk_mq_virtio_map_queues - provide a default queue mapping for virtio device - * @qmap: CPU to hardware queue map. - * @vdev: virtio device to provide a mapping for. - * @first_vec: first interrupt vectors to use for queues (usually 0) - * - * This function assumes the virtio device @vdev has at least as many available - * interrupt vectors as @set has queues. It will then query the vector - * corresponding to each queue for it's affinity mask and built queue mapping - * that maps a queue to the CPUs that have irq affinity for the corresponding - * vector. 
- */ -int blk_mq_virtio_map_queues(struct blk_mq_queue_map *qmap, - struct virtio_device *vdev, int first_vec) -{ - const struct cpumask *mask; - unsigned int queue, cpu; - - if (!vdev->config->get_vq_affinity) - goto fallback; - - for (queue = 0; queue < qmap->nr_queues; queue++) { - mask = vdev->config->get_vq_affinity(vdev, first_vec + queue); - if (!mask) - goto fallback; - - for_each_cpu(cpu, mask) - qmap->mq_map[cpu] = qmap->queue_offset + queue; - } - - return 0; -fallback: - return blk_mq_map_queues(qmap); -} -EXPORT_SYMBOL_GPL(blk_mq_virtio_map_queues); From patchwork Fri Jul 9 08:10:05 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ming Lei X-Patchwork-Id: 12366965 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.4 required=3.0 tests=BAYES_00,DKIMWL_WL_HIGH, DKIM_SIGNED,DKIM_VALID,DKIM_VALID_AU,HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_CR_TRAILER,INCLUDES_PATCH,MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 1D08CC07E99 for ; Fri, 9 Jul 2021 08:11:42 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 04CF0613C8 for ; Fri, 9 Jul 2021 08:11:42 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231490AbhGIIOY (ORCPT ); Fri, 9 Jul 2021 04:14:24 -0400 Received: from us-smtp-delivery-124.mimecast.com ([216.205.24.124]:24227 "EHLO us-smtp-delivery-124.mimecast.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231405AbhGIIOY (ORCPT ); Fri, 9 Jul 2021 04:14:24 -0400 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1625818300; 
From: Ming Lei To: Jens Axboe , Christoph Hellwig , "Martin K . 
Petersen" , linux-block@vger.kernel.org, linux-nvme@lists.infradead.org, linux-scsi@vger.kernel.org Cc: Sagi Grimberg , Daniel Wagner , Wen Xiong , John Garry , Hannes Reinecke , Keith Busch , Damien Le Moal , Ming Lei Subject: [PATCH V3 10/10] blk-mq: don't deactivate hctx if managed irq isn't used Date: Fri, 9 Jul 2021 16:10:05 +0800 Message-Id: <20210709081005.421340-11-ming.lei@redhat.com> In-Reply-To: <20210709081005.421340-1-ming.lei@redhat.com> References: <20210709081005.421340-1-ming.lei@redhat.com> MIME-Version: 1.0 blk-mq deactivates an hctx when the last CPU in hctx->cpumask becomes offline, by draining all requests originating from this hctx and moving new allocations to other active hctxs. This avoids in-flight IO when managed IRQs are used, because a managed IRQ is shut down once the last CPU in its affinity mask goes offline. However, many drivers (nvme fc, rdma, tcp, loop, ...) don't use managed IRQs, so they need not deactivate an hctx when its last CPU becomes offline. Also, some of them are the only users of blk_mq_alloc_request_hctx(), which is used for connecting an IO queue, and they require that the connect request be submitted successfully via one specified hctx even though all CPUs in that hctx->cpumask have become offline. Address this requirement for nvme fc/rdma/loop by allowing a request to be allocated from an hctx whose CPUs are all offline, since these drivers don't use managed IRQs. Finally, don't deactivate an hctx when it doesn't use a managed IRQ. 
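The relaxed CPU selection can be sketched as a userspace model: prefer the first online CPU in the hctx mask, and fall back to the first CPU in the mask even if it is offline, which is safe precisely because a non-managed IRQ is not shut down on hotplug. The model_* names and the bitmask representation of cpumasks are illustrative assumptions.

```c
#include <assert.h>

/* Model of blk_mq_first_mapped_cpu(): pick the first CPU that is both
 * in the hctx mask and online; if none, fall back to the first CPU in
 * the hctx mask (only valid when the hctx doesn't use a managed irq). */
static int model_first_mapped_cpu(unsigned hctx_mask, unsigned online_mask)
{
	unsigned candidates = hctx_mask & online_mask;
	unsigned pick = candidates ? candidates : hctx_mask;
	int cpu = 0;

	/* find the lowest set bit, as cpumask_first() does */
	while (pick && !(pick & 1u)) {
		pick >>= 1;
		cpu++;
	}
	return cpu;
}
```

With hctx CPUs {2,3} and online CPUs {0,1,3} this picks CPU 3; once CPU 3 also goes offline, it still returns CPU 2, so the connect request can be submitted via the chosen hctx.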
Signed-off-by: Ming Lei --- block/blk-mq.c | 27 +++++++++++++++++---------- block/blk-mq.h | 5 +++++ 2 files changed, 22 insertions(+), 10 deletions(-) diff --git a/block/blk-mq.c b/block/blk-mq.c index 2e9fd0ec63d7..d00546d3b757 100644 --- a/block/blk-mq.c +++ b/block/blk-mq.c @@ -427,6 +427,15 @@ struct request *blk_mq_alloc_request(struct request_queue *q, unsigned int op, } EXPORT_SYMBOL(blk_mq_alloc_request); +static inline int blk_mq_first_mapped_cpu(struct blk_mq_hw_ctx *hctx) +{ + int cpu = cpumask_first_and(hctx->cpumask, cpu_online_mask); + + if (cpu >= nr_cpu_ids) + cpu = cpumask_first(hctx->cpumask); + return cpu; +} + struct request *blk_mq_alloc_request_hctx(struct request_queue *q, unsigned int op, blk_mq_req_flags_t flags, unsigned int hctx_idx) { @@ -468,7 +477,10 @@ struct request *blk_mq_alloc_request_hctx(struct request_queue *q, data.hctx = q->queue_hw_ctx[hctx_idx]; if (!blk_mq_hw_queue_mapped(data.hctx)) goto out_queue_exit; - cpu = cpumask_first_and(data.hctx->cpumask, cpu_online_mask); + + WARN_ON_ONCE(blk_mq_hctx_use_managed_irq(data.hctx)); + + cpu = blk_mq_first_mapped_cpu(data.hctx); data.ctx = __blk_mq_get_ctx(q, cpu); if (!q->elevator) @@ -1501,15 +1513,6 @@ static void __blk_mq_run_hw_queue(struct blk_mq_hw_ctx *hctx) hctx_unlock(hctx, srcu_idx); } -static inline int blk_mq_first_mapped_cpu(struct blk_mq_hw_ctx *hctx) -{ - int cpu = cpumask_first_and(hctx->cpumask, cpu_online_mask); - - if (cpu >= nr_cpu_ids) - cpu = cpumask_first(hctx->cpumask); - return cpu; -} - /* * It'd be great if the workqueue API had a way to pass * in a mask and had some smarts for more clever placement. 
@@ -2556,6 +2559,10 @@ static int blk_mq_hctx_notify_offline(unsigned int cpu, struct hlist_node *node) struct blk_mq_hw_ctx *hctx = hlist_entry_safe(node, struct blk_mq_hw_ctx, cpuhp_online); + /* hctx needn't be deactivated when managed irq isn't used */ + if (!blk_mq_hctx_use_managed_irq(hctx)) + return 0; + if (!cpumask_test_cpu(cpu, hctx->cpumask) || !blk_mq_last_cpu_in_hctx(cpu, hctx)) return 0; diff --git a/block/blk-mq.h b/block/blk-mq.h index d08779f77a26..bee755ed0903 100644 --- a/block/blk-mq.h +++ b/block/blk-mq.h @@ -119,6 +119,11 @@ static inline struct blk_mq_hw_ctx *blk_mq_map_queue(struct request_queue *q, return ctx->hctxs[type]; } +static inline bool blk_mq_hctx_use_managed_irq(struct blk_mq_hw_ctx *hctx) +{ + return hctx->queue->tag_set->map[hctx->type].use_managed_irq; +} + /* * sysfs helpers */