From patchwork Thu Jul 13 21:12:17 2017
X-Patchwork-Submitter: Mike Snitzer
X-Patchwork-Id: 9839563
From: Mike Snitzer
To: hch@lst.de
Cc: dm-devel@redhat.com, linux-block@vger.kernel.org, linux-scsi@vger.kernel.org
Subject: [for-4.14 RFC PATCH 2/2] dm rq: eliminate historic blk-mq and .request_fn queue stacking restrictions
Date: Thu, 13 Jul 2017 17:12:17 -0400
Message-Id: <20170713211217.52361-3-snitzer@redhat.com>
X-Mailer: git-send-email 2.10.1
In-Reply-To: <20170713211217.52361-1-snitzer@redhat.com>
References: <20170713211217.52361-1-snitzer@redhat.com>

Currently if dm_mod.use_blk_mq=Y (or a DM-multipath table is loaded with
queue_mode=mq) and all underlying devices are not blk-mq, DM core will
fail with the error:

  "table load rejected: all devices are not blk-mq request-stackable"

This all-blk-mq-or-nothing approach is too cut-throat because it
prevents access to data stored on what could have been a previously
working multipath setup (e.g. if a user decides to try
dm_mod.use_blk_mq=Y or queue_mode=mq only to find their underlying
devices aren't blk-mq).

This restriction, and others like not being able to stack a top-level
blk-mq request_queue on top of old .request_fn device(s), can be
removed thanks to commit eb8db831be ("dm: always defer request
allocation to the owner of the request_queue").  Now that request-based
DM always relies on the target (multipath) to call blk_get_request() to
create a clone request, it is possible to support all 4 permutations of
stacking old .request_fn and blk-mq request_queues.

Depends-on: eb8db831be ("dm: always defer request allocation to the owner of the request_queue")
Reported-by: Ewan Milne
Signed-off-by: Mike Snitzer
---
A minimal sketch of the target-side clone allocation this relies on
follows the diff.

 drivers/md/dm-rq.c    |  5 -----
 drivers/md/dm-table.c | 31 +++----------------------------
 drivers/md/dm.h       |  1 -
 3 files changed, 3 insertions(+), 34 deletions(-)

diff --git a/drivers/md/dm-rq.c b/drivers/md/dm-rq.c
index 95bb44c..d64677b 100644
--- a/drivers/md/dm-rq.c
+++ b/drivers/md/dm-rq.c
@@ -782,11 +782,6 @@ int dm_mq_init_request_queue(struct mapped_device *md, struct dm_table *t)
 	struct dm_target *immutable_tgt;
 	int err;
 
-	if (!dm_table_all_blk_mq_devices(t)) {
-		DMERR("request-based dm-mq may only be stacked on blk-mq device(s)");
-		return -EINVAL;
-	}
-
 	md->tag_set = kzalloc_node(sizeof(struct blk_mq_tag_set), GFP_KERNEL, md->numa_node_id);
 	if (!md->tag_set)
 		return -ENOMEM;
diff --git a/drivers/md/dm-table.c b/drivers/md/dm-table.c
index a39bcd9..e630768 100644
--- a/drivers/md/dm-table.c
+++ b/drivers/md/dm-table.c
@@ -46,7 +46,6 @@ struct dm_table {
 
 	bool integrity_supported:1;
 	bool singleton:1;
-	bool all_blk_mq:1;
 	unsigned integrity_added:1;
 
 	/*
@@ -910,7 +909,6 @@ static int dm_table_determine_type(struct dm_table *t)
 {
 	unsigned i;
 	unsigned bio_based = 0, request_based = 0, hybrid = 0;
-	unsigned sq_count = 0, mq_count = 0;
 	struct dm_target *tgt;
 	struct dm_dev_internal *dd;
 	struct list_head *devices = dm_table_get_devices(t);
@@ -985,11 +983,9 @@ static int dm_table_determine_type(struct dm_table *t)
 		int srcu_idx;
 		struct dm_table *live_table = dm_get_live_table(t->md, &srcu_idx);
 
-		/* inherit live table's type and all_blk_mq */
-		if (live_table) {
+		/* inherit live table's type */
+		if (live_table)
 			t->type = live_table->type;
-			t->all_blk_mq = live_table->all_blk_mq;
-		}
 		dm_put_live_table(t->md, srcu_idx);
 		return 0;
 	}
@@ -999,25 +995,9 @@ static int dm_table_determine_type(struct dm_table *t)
 		struct request_queue *q = bdev_get_queue(dd->dm_dev->bdev);
 
 		if (!blk_queue_stackable(q)) {
-			DMERR("table load rejected: including"
-			      " non-request-stackable devices");
+			DMERR("table load rejected: includes non-request-stackable devices");
 			return -EINVAL;
 		}
-
-		if (q->mq_ops)
-			mq_count++;
-		else
-			sq_count++;
-	}
-	if (sq_count && mq_count) {
-		DMERR("table load rejected: not all devices are blk-mq request-stackable");
-		return -EINVAL;
-	}
-	t->all_blk_mq = mq_count > 0;
-
-	if (t->type == DM_TYPE_MQ_REQUEST_BASED && !t->all_blk_mq) {
-		DMERR("table load rejected: all devices are not blk-mq request-stackable");
-		return -EINVAL;
 	}
 
 	return 0;
@@ -1067,11 +1047,6 @@ bool dm_table_request_based(struct dm_table *t)
 	return __table_type_request_based(dm_table_get_type(t));
 }
 
-bool dm_table_all_blk_mq_devices(struct dm_table *t)
-{
-	return t->all_blk_mq;
-}
-
 static int dm_table_alloc_md_mempools(struct dm_table *t, struct mapped_device *md)
 {
 	enum dm_queue_mode type = dm_table_get_type(t);
diff --git a/drivers/md/dm.h b/drivers/md/dm.h
index 38c84c0..c484c4d 100644
--- a/drivers/md/dm.h
+++ b/drivers/md/dm.h
@@ -70,7 +70,6 @@ struct dm_target *dm_table_get_immutable_target(struct dm_table *t);
 struct dm_target *dm_table_get_wildcard_target(struct dm_table *t);
 bool dm_table_bio_based(struct dm_table *t);
 bool dm_table_request_based(struct dm_table *t);
-bool dm_table_all_blk_mq_devices(struct dm_table *t);
 void dm_table_free_md_mempools(struct dm_table *t);
 struct dm_md_mempools *dm_table_get_md_mempools(struct dm_table *t);
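For reference, here is a minimal sketch of the clone allocation the
commit message refers to.  It is not part of the patch: it assumes the
shape of the multipath target's request-clone path after commit
eb8db831be, and the function name (example_alloc_clone), the flag
choices and the error handling are illustrative only.

#include <linux/blkdev.h>
#include <linux/err.h>

/*
 * Illustrative sketch: allocate the clone from the underlying device's
 * request_queue.  blk_get_request() hands back a request whether @q is
 * blk-mq or an old .request_fn queue, which is why request-based DM no
 * longer needs to care which queue types sit underneath it.
 */
static struct request *example_alloc_clone(struct request *rq,
					    struct request_queue *q)
{
	struct request *clone;

	clone = blk_get_request(q, rq->cmd_flags | REQ_NOMERGE, GFP_ATOMIC);
	if (IS_ERR(clone))
		return NULL;	/* caller can requeue and retry later */

	/* the clone carries no bios of its own; DM maps the original's */
	clone->bio = clone->biotail = NULL;
	return clone;
}

Because the clone is taken from the underlying queue's own allocator,
the top-level DM queue type and the bottom-level device queue types are
decoupled, which is what allows all 4 stacking permutations described
above.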