From patchwork Sun Jun 18 15:21:35 2017
X-Patchwork-Submitter: Sagi Grimberg
X-Patchwork-Id: 9794849
From: Sagi Grimberg <sagi@grimberg.me>
To: linux-nvme@lists.infradead.org
Cc: Christoph Hellwig, Keith Busch, linux-block@vger.kernel.org
Subject: [PATCH rfc 01/30] nvme: Add admin connect request queue
Date: Sun, 18 Jun 2017 18:21:35 +0300
Message-Id: <1497799324-19598-2-git-send-email-sagi@grimberg.me>
X-Mailer: git-send-email 2.7.4
In-Reply-To: <1497799324-19598-1-git-send-email-sagi@grimberg.me>
References: <1497799324-19598-1-git-send-email-sagi@grimberg.me>
X-Mailing-List: linux-block@vger.kernel.org

If we reconnect while admin IO is still inflight, we need to make sure
that the connect request is issued before any admin command. This can
only be achieved by using a separate request queue for admin connects.
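The pattern applied in each transport below (fc, rdma, loop) is a second
request queue allocated from the same admin blk-mq tag set: admin_q keeps
serving regular admin commands, while the new admin_connect_q carries only
the fabrics connect, so a connect issued during reconnect cannot queue up
behind stale inflight admin IO. A minimal sketch of that pattern against
the 4.12-era block API; the identifiers my_tag_set, q_admin and q_connect
are illustrative only, not taken from this patch:

	/*
	 * Sketch only: two request queues over one shared tag set. Both
	 * draw tags from the same pool, but each has its own dispatch
	 * state, so requests on one queue are not ordered behind
	 * requests stuck on the other.
	 */
	struct blk_mq_tag_set my_tag_set;	/* assume blk_mq_alloc_tag_set() succeeded */
	struct request_queue *q_admin, *q_connect;

	q_admin = blk_mq_init_queue(&my_tag_set);
	if (IS_ERR(q_admin))
		return PTR_ERR(q_admin);

	q_connect = blk_mq_init_queue(&my_tag_set);
	if (IS_ERR(q_connect)) {
		blk_cleanup_queue(q_admin);
		return PTR_ERR(q_connect);
	}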
Signed-off-by: Sagi Grimberg <sagi@grimberg.me>
---
 drivers/nvme/host/fabrics.c |  2 +-
 drivers/nvme/host/fc.c      | 11 ++++++++++-
 drivers/nvme/host/nvme.h    |  1 +
 drivers/nvme/host/rdma.c    | 19 ++++++++++++++-----
 drivers/nvme/target/loop.c  | 17 +++++++++++++----
 5 files changed, 39 insertions(+), 11 deletions(-)

diff --git a/drivers/nvme/host/fabrics.c b/drivers/nvme/host/fabrics.c
index 64db2c46c5ea..bd99bbb1faa3 100644
--- a/drivers/nvme/host/fabrics.c
+++ b/drivers/nvme/host/fabrics.c
@@ -412,7 +412,7 @@ int nvmf_connect_admin_queue(struct nvme_ctrl *ctrl)
 	strncpy(data->subsysnqn, ctrl->opts->subsysnqn, NVMF_NQN_SIZE);
 	strncpy(data->hostnqn, ctrl->opts->host->nqn, NVMF_NQN_SIZE);
 
-	ret = __nvme_submit_sync_cmd(ctrl->admin_q, &cmd, &res,
+	ret = __nvme_submit_sync_cmd(ctrl->admin_connect_q, &cmd, &res,
 			data, sizeof(*data), 0, NVME_QID_ANY, 1,
 			BLK_MQ_REQ_RESERVED | BLK_MQ_REQ_NOWAIT);
 	if (ret) {
diff --git a/drivers/nvme/host/fc.c b/drivers/nvme/host/fc.c
index 5d5ecefd8dbe..25ee49037edb 100644
--- a/drivers/nvme/host/fc.c
+++ b/drivers/nvme/host/fc.c
@@ -1703,6 +1703,7 @@ nvme_fc_ctrl_free(struct kref *ref)
 	list_del(&ctrl->ctrl_list);
 	spin_unlock_irqrestore(&ctrl->rport->lock, flags);
 
+	blk_cleanup_queue(ctrl->ctrl.admin_connect_q);
 	blk_cleanup_queue(ctrl->ctrl.admin_q);
 	blk_mq_free_tag_set(&ctrl->admin_tag_set);
 
@@ -2745,6 +2746,12 @@ nvme_fc_init_ctrl(struct device *dev, struct nvmf_ctrl_options *opts,
 		goto out_free_admin_tag_set;
 	}
 
+	ctrl->ctrl.admin_connect_q = blk_mq_init_queue(&ctrl->admin_tag_set);
+	if (IS_ERR(ctrl->ctrl.admin_connect_q)) {
+		ret = PTR_ERR(ctrl->ctrl.admin_connect_q);
+		goto out_cleanup_admin_q;
+	}
+
 	/*
 	 * Would have been nice to init io queues tag set as well.
 	 * However, we require interaction from the controller
@@ -2754,7 +2761,7 @@ nvme_fc_init_ctrl(struct device *dev, struct nvmf_ctrl_options *opts,
 
 	ret = nvme_init_ctrl(&ctrl->ctrl, dev, &nvme_fc_ctrl_ops, 0);
 	if (ret)
-		goto out_cleanup_admin_q;
+		goto out_cleanup_admin_connect_q;
 
 	/* at this point, teardown path changes to ref counting on nvme ctrl */
 
@@ -2791,6 +2798,8 @@ nvme_fc_init_ctrl(struct device *dev, struct nvmf_ctrl_options *opts,
 
 	return &ctrl->ctrl;
 
+out_cleanup_admin_connect_q:
+	blk_cleanup_queue(ctrl->ctrl.admin_connect_q);
 out_cleanup_admin_q:
 	blk_cleanup_queue(ctrl->ctrl.admin_q);
 out_free_admin_tag_set:
diff --git a/drivers/nvme/host/nvme.h b/drivers/nvme/host/nvme.h
index f27c58b860f4..67147b49d992 100644
--- a/drivers/nvme/host/nvme.h
+++ b/drivers/nvme/host/nvme.h
@@ -121,6 +121,7 @@ struct nvme_ctrl {
 	const struct nvme_ctrl_ops *ops;
 	struct request_queue *admin_q;
 	struct request_queue *connect_q;
+	struct request_queue *admin_connect_q;
 	struct device *dev;
 	struct kref kref;
 	int instance;
diff --git a/drivers/nvme/host/rdma.c b/drivers/nvme/host/rdma.c
index 7533138d2244..cb7f81d9098f 100644
--- a/drivers/nvme/host/rdma.c
+++ b/drivers/nvme/host/rdma.c
@@ -661,6 +661,7 @@ static void nvme_rdma_destroy_admin_queue(struct nvme_rdma_ctrl *ctrl)
 	nvme_rdma_free_qe(ctrl->queues[0].device->dev, &ctrl->async_event_sqe,
 			sizeof(struct nvme_command), DMA_TO_DEVICE);
 	nvme_rdma_stop_and_free_queue(&ctrl->queues[0]);
+	blk_cleanup_queue(ctrl->ctrl.admin_connect_q);
 	blk_cleanup_queue(ctrl->ctrl.admin_q);
 	blk_mq_free_tag_set(&ctrl->admin_tag_set);
 	nvme_rdma_dev_put(ctrl->device);
@@ -1583,9 +1584,15 @@ static int nvme_rdma_configure_admin_queue(struct nvme_rdma_ctrl *ctrl)
 		goto out_free_tagset;
 	}
 
+	ctrl->ctrl.admin_connect_q = blk_mq_init_queue(&ctrl->admin_tag_set);
+	if (IS_ERR(ctrl->ctrl.admin_connect_q)) {
+		error = PTR_ERR(ctrl->ctrl.admin_connect_q);
+		goto out_cleanup_queue;
+	}
+
 	error = nvmf_connect_admin_queue(&ctrl->ctrl);
 	if (error)
-		goto out_cleanup_queue;
+		goto out_cleanup_connect_queue;
 
 	set_bit(NVME_RDMA_Q_LIVE, &ctrl->queues[0].flags);
 
@@ -1593,7 +1600,7 @@ static int nvme_rdma_configure_admin_queue(struct nvme_rdma_ctrl *ctrl)
 	if (error) {
 		dev_err(ctrl->ctrl.device,
 			"prop_get NVME_REG_CAP failed\n");
-		goto out_cleanup_queue;
+		goto out_cleanup_connect_queue;
 	}
 
 	ctrl->ctrl.sqsize =
@@ -1601,25 +1608,27 @@ static int nvme_rdma_configure_admin_queue(struct nvme_rdma_ctrl *ctrl)
 
 	error = nvme_enable_ctrl(&ctrl->ctrl, ctrl->cap);
 	if (error)
-		goto out_cleanup_queue;
+		goto out_cleanup_connect_queue;
 
 	ctrl->ctrl.max_hw_sectors =
 		(ctrl->max_fr_pages - 1) << (PAGE_SHIFT - 9);
 
 	error = nvme_init_identify(&ctrl->ctrl);
 	if (error)
-		goto out_cleanup_queue;
+		goto out_cleanup_connect_queue;
 
 	error = nvme_rdma_alloc_qe(ctrl->queues[0].device->dev,
 			&ctrl->async_event_sqe, sizeof(struct nvme_command),
 			DMA_TO_DEVICE);
 	if (error)
-		goto out_cleanup_queue;
+		goto out_cleanup_connect_queue;
 
 	nvme_start_keep_alive(&ctrl->ctrl);
 
 	return 0;
 
+out_cleanup_connect_queue:
+	blk_cleanup_queue(ctrl->ctrl.admin_connect_q);
 out_cleanup_queue:
 	blk_cleanup_queue(ctrl->ctrl.admin_q);
 out_free_tagset:
diff --git a/drivers/nvme/target/loop.c b/drivers/nvme/target/loop.c
index 86c09e2a1490..edd9ee04de02 100644
--- a/drivers/nvme/target/loop.c
+++ b/drivers/nvme/target/loop.c
@@ -278,6 +278,7 @@ static const struct blk_mq_ops nvme_loop_admin_mq_ops = {
 static void nvme_loop_destroy_admin_queue(struct nvme_loop_ctrl *ctrl)
 {
 	nvmet_sq_destroy(&ctrl->queues[0].nvme_sq);
+	blk_cleanup_queue(ctrl->ctrl.admin_connect_q);
 	blk_cleanup_queue(ctrl->ctrl.admin_q);
 	blk_mq_free_tag_set(&ctrl->admin_tag_set);
 }
@@ -384,15 +385,21 @@ static int nvme_loop_configure_admin_queue(struct nvme_loop_ctrl *ctrl)
 		goto out_free_tagset;
 	}
 
+	ctrl->ctrl.admin_connect_q = blk_mq_init_queue(&ctrl->admin_tag_set);
+	if (IS_ERR(ctrl->ctrl.admin_connect_q)) {
+		error = PTR_ERR(ctrl->ctrl.admin_connect_q);
+		goto out_cleanup_queue;
+	}
+
 	error = nvmf_connect_admin_queue(&ctrl->ctrl);
 	if (error)
-		goto out_cleanup_queue;
+		goto out_cleanup_connect_queue;
 
 	error = nvmf_reg_read64(&ctrl->ctrl, NVME_REG_CAP, &ctrl->cap);
 	if (error) {
 		dev_err(ctrl->ctrl.device,
 			"prop_get NVME_REG_CAP failed\n");
-		goto out_cleanup_queue;
+		goto out_cleanup_connect_queue;
 	}
 
 	ctrl->ctrl.sqsize =
@@ -400,19 +407,21 @@ static int nvme_loop_configure_admin_queue(struct nvme_loop_ctrl *ctrl)
 
 	error = nvme_enable_ctrl(&ctrl->ctrl, ctrl->cap);
 	if (error)
-		goto out_cleanup_queue;
+		goto out_cleanup_connect_queue;
 
 	ctrl->ctrl.max_hw_sectors =
 		(NVME_LOOP_MAX_SEGMENTS - 1) << (PAGE_SHIFT - 9);
 
 	error = nvme_init_identify(&ctrl->ctrl);
 	if (error)
-		goto out_cleanup_queue;
+		goto out_cleanup_connect_queue;
 
 	nvme_start_keep_alive(&ctrl->ctrl);
 
 	return 0;
 
+out_cleanup_connect_queue:
+	blk_cleanup_queue(ctrl->ctrl.admin_connect_q);
 out_cleanup_queue:
 	blk_cleanup_queue(ctrl->ctrl.admin_q);
 out_free_tagset:
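Two details recur across all three transports above: the connect itself is
submitted with BLK_MQ_REQ_RESERVED | BLK_MQ_REQ_NOWAIT (see the fabrics.c
hunk), so it takes a reserved tag and fails fast rather than sleeping for
one; and on teardown, every request queue created from a tag set must be
cleaned up before the tag set itself is freed. A hedged sketch of that
unwind, reusing the illustrative identifiers from the sketch above:

	/* Unwind in reverse order of setup: queues first, then their tag set. */
	blk_cleanup_queue(q_connect);
	blk_cleanup_queue(q_admin);
	blk_mq_free_tag_set(&my_tag_set);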