From patchwork Thu Jun 10 21:44:34 2021
X-Patchwork-Submitter: Keith Busch
X-Patchwork-Id: 12314097
From: Keith Busch
To: linux-nvme@lists.infradead.org, sagi@grimberg.me, hch@lst.de, linux-block@vger.kernel.org
Cc: axboe@kernel.dk, Yuanyuan Zhong, Casey Chen, Ming Lei, Keith Busch
Subject: [PATCHv4 1/4] block: support polling through blk_execute_rq
Date: Thu, 10 Jun 2021 14:44:34 -0700
Message-Id: <20210610214437.641245-2-kbusch@kernel.org>
In-Reply-To: <20210610214437.641245-1-kbusch@kernel.org>
References: <20210610214437.641245-1-kbusch@kernel.org>

Poll for completions if the request's hctx is a polling type.
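In other words, the synchronous wait ends up doing roughly the following. This is an illustrative sketch only, not part of the patch: demo_wait_for_rq() is a made-up name, and the real logic lives in blk_execute_rq() in the diff below. A request on a polled hardware context never raises a completion interrupt, so the submitting thread has to reap completions itself via blk_poll().

/* Sketch of the wait strategy this patch adds (hypothetical helper). */
static void demo_wait_for_rq(struct request *rq, struct completion *wait)
{
	if (rq->mq_hctx && rq->mq_hctx->type == HCTX_TYPE_POLL) {
		/* Polled hctx: spin, reaping completions until ours is done. */
		do {
			blk_poll(rq->q, request_to_qc_t(rq->mq_hctx, rq), true);
			cond_resched();
		} while (!completion_done(wait));
	} else {
		/* Interrupt-driven hctx: sleep until ->end_io completes us. */
		wait_for_completion_io(wait);
	}
}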
Reviewed-by: Christoph Hellwig
Reviewed-by: Ming Lei
Signed-off-by: Keith Busch
---
 block/blk-exec.c | 18 +++++++++++++++++-
 1 file changed, 17 insertions(+), 1 deletion(-)

diff --git a/block/blk-exec.c b/block/blk-exec.c
index beae70a0e5e5..38f88552aa31 100644
--- a/block/blk-exec.c
+++ b/block/blk-exec.c
@@ -63,6 +63,19 @@ void blk_execute_rq_nowait(struct gendisk *bd_disk, struct request *rq,
 }
 EXPORT_SYMBOL_GPL(blk_execute_rq_nowait);
 
+static bool blk_rq_is_poll(struct request *rq)
+{
+	return rq->mq_hctx && rq->mq_hctx->type == HCTX_TYPE_POLL;
+}
+
+static void blk_rq_poll_completion(struct request *rq, struct completion *wait)
+{
+	do {
+		blk_poll(rq->q, request_to_qc_t(rq->mq_hctx, rq), true);
+		cond_resched();
+	} while (!completion_done(wait));
+}
+
 /**
  * blk_execute_rq - insert a request into queue for execution
  * @bd_disk:	matching gendisk
@@ -83,7 +96,10 @@ void blk_execute_rq(struct gendisk *bd_disk, struct request *rq, int at_head)
 
 	/* Prevent hang_check timer from firing at us during very long I/O */
 	hang_check = sysctl_hung_task_timeout_secs;
-	if (hang_check)
+
+	if (blk_rq_is_poll(rq))
+		blk_rq_poll_completion(rq, &wait);
+	else if (hang_check)
 		while (!wait_for_completion_io_timeout(&wait, hang_check * (HZ/2)));
 	else
 		wait_for_completion_io(&wait);

From patchwork Thu Jun 10 21:44:35 2021
X-Patchwork-Submitter: Keith Busch
X-Patchwork-Id: 12314099
From: Keith Busch
To: linux-nvme@lists.infradead.org, sagi@grimberg.me, hch@lst.de, linux-block@vger.kernel.org
Cc: axboe@kernel.dk, Yuanyuan Zhong, Casey Chen, Ming Lei, Keith Busch
Subject: [PATCHv4 2/4] nvme: use blk_execute_rq() for passthrough commands
Date: Thu, 10 Jun 2021 14:44:35 -0700
Message-Id: <20210610214437.641245-3-kbusch@kernel.org>
In-Reply-To: <20210610214437.641245-1-kbusch@kernel.org>
References: <20210610214437.641245-1-kbusch@kernel.org>

The generic blk_execute_rq() knows how to handle polled completions. Use
that instead of implementing an NVMe-specific handler.

Signed-off-by: Keith Busch
Reviewed-by: Chaitanya Kulkarni
Reviewed-by: Christoph Hellwig
---
v2->v3: Retain the REQ_HIPRI flag for nvme polled passthrough requests

 drivers/nvme/host/core.c    | 40 +++++++------------------------------
 drivers/nvme/host/fabrics.c | 13 ++++++------
 drivers/nvme/host/fabrics.h |  2 +-
 drivers/nvme/host/fc.c      |  2 +-
 drivers/nvme/host/nvme.h    |  2 +-
 drivers/nvme/host/rdma.c    |  3 +--
 drivers/nvme/host/tcp.c     |  2 +-
 drivers/nvme/target/loop.c  |  2 +-
 8 files changed, 19 insertions(+), 47 deletions(-)

diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
index 784679307c03..2b9deaffa4e1 100644
--- a/drivers/nvme/host/core.c
+++ b/drivers/nvme/host/core.c
@@ -631,6 +631,8 @@ static inline void nvme_init_request(struct request *req,
 		cmd->common.flags &= ~NVME_CMD_SGL_ALL;
 
 	req->cmd_flags |= REQ_FAILFAST_DRIVER;
+	if (req->mq_hctx->type == HCTX_TYPE_POLL)
+		req->cmd_flags |= REQ_HIPRI;
 	nvme_clear_nvme_request(req);
 	memcpy(nvme_req(req)->cmd, cmd, sizeof(*cmd));
 }
@@ -1032,31 +1034,6 @@ blk_status_t nvme_setup_cmd(struct nvme_ns *ns, struct request *req)
 }
 EXPORT_SYMBOL_GPL(nvme_setup_cmd);
 
-static void nvme_end_sync_rq(struct request *rq, blk_status_t error)
-{
-	struct completion *waiting = rq->end_io_data;
-
-	rq->end_io_data = NULL;
-	complete(waiting);
-}
-
-static void nvme_execute_rq_polled(struct request_queue *q,
-		struct gendisk *bd_disk, struct request *rq, int at_head)
-{
-	DECLARE_COMPLETION_ONSTACK(wait);
-
-	WARN_ON_ONCE(!test_bit(QUEUE_FLAG_POLL, &q->queue_flags));
-
-	rq->cmd_flags |= REQ_HIPRI;
-	rq->end_io_data = &wait;
-	blk_execute_rq_nowait(bd_disk, rq, at_head, nvme_end_sync_rq);
-
-	while (!completion_done(&wait)) {
-		blk_poll(q, request_to_qc_t(rq->mq_hctx, rq), true);
-		cond_resched();
-	}
-}
-
 /*
  * Returns 0 on success.
  * If the result is negative, it's a Linux error code;
  * if the result is positive, it's an NVM Express status code
@@ -1064,7 +1041,7 @@ static void nvme_execute_rq_polled(struct request_queue *q,
 int __nvme_submit_sync_cmd(struct request_queue *q, struct nvme_command *cmd,
 		union nvme_result *result, void *buffer, unsigned bufflen,
 		unsigned timeout, int qid, int at_head,
-		blk_mq_req_flags_t flags, bool poll)
+		blk_mq_req_flags_t flags)
 {
 	struct request *req;
 	int ret;
@@ -1085,10 +1062,7 @@ int __nvme_submit_sync_cmd(struct request_queue *q, struct nvme_command *cmd,
 		goto out;
 	}
 
-	if (poll)
-		nvme_execute_rq_polled(req->q, NULL, req, at_head);
-	else
-		blk_execute_rq(NULL, req, at_head);
+	blk_execute_rq(NULL, req, at_head);
 	if (result)
 		*result = nvme_req(req)->result;
 	if (nvme_req(req)->flags & NVME_REQ_CANCELLED)
@@ -1105,7 +1079,7 @@ int nvme_submit_sync_cmd(struct request_queue *q, struct nvme_command *cmd,
 		void *buffer, unsigned bufflen)
 {
 	return __nvme_submit_sync_cmd(q, cmd, NULL, buffer, bufflen, 0,
-			NVME_QID_ANY, 0, 0, false);
+			NVME_QID_ANY, 0, 0);
 }
 EXPORT_SYMBOL_GPL(nvme_submit_sync_cmd);
 
@@ -1469,7 +1443,7 @@ static int nvme_features(struct nvme_ctrl *dev, u8 op, unsigned int fid,
 	c.features.dword11 = cpu_to_le32(dword11);
 
 	ret = __nvme_submit_sync_cmd(dev->admin_q, &c, &res,
-			buffer, buflen, 0, NVME_QID_ANY, 0, 0, false);
+			buffer, buflen, 0, NVME_QID_ANY, 0, 0);
 	if (ret >= 0 && result)
 		*result = le32_to_cpu(res.u32);
 	return ret;
@@ -2054,7 +2028,7 @@ int nvme_sec_submit(void *data, u16 spsp, u8 secp, void *buffer, size_t len,
 	cmd.common.cdw11 = cpu_to_le32(len);
 
 	return __nvme_submit_sync_cmd(ctrl->admin_q, &cmd, NULL, buffer, len, 0,
-			NVME_QID_ANY, 1, 0, false);
+			NVME_QID_ANY, 1, 0);
 }
 EXPORT_SYMBOL_GPL(nvme_sec_submit);
 #endif /* CONFIG_BLK_SED_OPAL */
diff --git a/drivers/nvme/host/fabrics.c b/drivers/nvme/host/fabrics.c
index 1239a63e3ac2..7f189f7da617 100644
--- a/drivers/nvme/host/fabrics.c
+++ b/drivers/nvme/host/fabrics.c
@@ -154,7 +154,7 @@ int nvmf_reg_read32(struct nvme_ctrl *ctrl, u32 off, u32 *val)
 	cmd.prop_get.offset = cpu_to_le32(off);
 
 	ret = __nvme_submit_sync_cmd(ctrl->fabrics_q, &cmd, &res, NULL, 0, 0,
-			NVME_QID_ANY, 0, 0, false);
+			NVME_QID_ANY, 0, 0);
 
 	if (ret >= 0)
 		*val = le64_to_cpu(res.u64);
@@ -201,7 +201,7 @@ int nvmf_reg_read64(struct nvme_ctrl *ctrl, u32 off, u64 *val)
 	cmd.prop_get.offset = cpu_to_le32(off);
 
 	ret = __nvme_submit_sync_cmd(ctrl->fabrics_q, &cmd, &res, NULL, 0, 0,
-			NVME_QID_ANY, 0, 0, false);
+			NVME_QID_ANY, 0, 0);
 
 	if (ret >= 0)
 		*val = le64_to_cpu(res.u64);
@@ -247,7 +247,7 @@ int nvmf_reg_write32(struct nvme_ctrl *ctrl, u32 off, u32 val)
 	cmd.prop_set.value = cpu_to_le64(val);
 
 	ret = __nvme_submit_sync_cmd(ctrl->fabrics_q, &cmd, NULL, NULL, 0, 0,
-			NVME_QID_ANY, 0, 0, false);
+			NVME_QID_ANY, 0, 0);
 	if (unlikely(ret))
 		dev_err(ctrl->device,
 			"Property Set error: %d, offset %#x\n",
@@ -394,7 +394,7 @@ int nvmf_connect_admin_queue(struct nvme_ctrl *ctrl)
 
 	ret = __nvme_submit_sync_cmd(ctrl->fabrics_q, &cmd, &res,
 			data, sizeof(*data), 0, NVME_QID_ANY, 1,
-			BLK_MQ_REQ_RESERVED | BLK_MQ_REQ_NOWAIT, false);
+			BLK_MQ_REQ_RESERVED | BLK_MQ_REQ_NOWAIT);
 	if (ret) {
 		nvmf_log_connect_error(ctrl, ret, le32_to_cpu(res.u32),
 				       &cmd, data);
@@ -418,7 +418,6 @@ EXPORT_SYMBOL_GPL(nvmf_connect_admin_queue);
  * @qid:	NVMe I/O queue number for the new I/O connection between
  *		host and target (note qid == 0 is illegal as this is
  *		the Admin queue, per NVMe standard).
- * @poll:	Whether or not to poll for the completion of the connect cmd.
 *
 * This function issues a fabrics-protocol connection
 * of a NVMe I/O queue (via NVMe Fabrics "Connect" command)
 *
@@ -430,7 +429,7 @@ EXPORT_SYMBOL_GPL(nvmf_connect_admin_queue);
 *	> 0: NVMe error status code
 *	< 0: Linux errno error code
 */
-int nvmf_connect_io_queue(struct nvme_ctrl *ctrl, u16 qid, bool poll)
+int nvmf_connect_io_queue(struct nvme_ctrl *ctrl, u16 qid)
 {
 	struct nvme_command cmd;
 	struct nvmf_connect_data *data;
@@ -457,7 +456,7 @@ int nvmf_connect_io_queue(struct nvme_ctrl *ctrl, u16 qid, bool poll)
 
 	ret = __nvme_submit_sync_cmd(ctrl->connect_q, &cmd, &res,
 			data, sizeof(*data), 0, qid, 1,
-			BLK_MQ_REQ_RESERVED | BLK_MQ_REQ_NOWAIT, poll);
+			BLK_MQ_REQ_RESERVED | BLK_MQ_REQ_NOWAIT);
 	if (ret) {
 		nvmf_log_connect_error(ctrl, ret, le32_to_cpu(res.u32),
 				       &cmd, data);
diff --git a/drivers/nvme/host/fabrics.h b/drivers/nvme/host/fabrics.h
index c31dad69a773..a146cb903869 100644
--- a/drivers/nvme/host/fabrics.h
+++ b/drivers/nvme/host/fabrics.h
@@ -182,7 +182,7 @@ int nvmf_reg_read32(struct nvme_ctrl *ctrl, u32 off, u32 *val);
 int nvmf_reg_read64(struct nvme_ctrl *ctrl, u32 off, u64 *val);
 int nvmf_reg_write32(struct nvme_ctrl *ctrl, u32 off, u32 val);
 int nvmf_connect_admin_queue(struct nvme_ctrl *ctrl);
-int nvmf_connect_io_queue(struct nvme_ctrl *ctrl, u16 qid, bool poll);
+int nvmf_connect_io_queue(struct nvme_ctrl *ctrl, u16 qid);
 int nvmf_register_transport(struct nvmf_transport_ops *ops);
 void nvmf_unregister_transport(struct nvmf_transport_ops *ops);
 void nvmf_free_options(struct nvmf_ctrl_options *opts);
diff --git a/drivers/nvme/host/fc.c b/drivers/nvme/host/fc.c
index 256e87721a01..e3d7b636b961 100644
--- a/drivers/nvme/host/fc.c
+++ b/drivers/nvme/host/fc.c
@@ -2346,7 +2346,7 @@ nvme_fc_connect_io_queues(struct nvme_fc_ctrl *ctrl, u16 qsize)
 					(qsize / 5));
 		if (ret)
 			break;
-		ret = nvmf_connect_io_queue(&ctrl->ctrl, i, false);
+		ret = nvmf_connect_io_queue(&ctrl->ctrl, i);
 		if (ret)
 			break;
 
diff --git a/drivers/nvme/host/nvme.h b/drivers/nvme/host/nvme.h
index 1f397ecba16c..499681bc0286 100644
--- a/drivers/nvme/host/nvme.h
+++ b/drivers/nvme/host/nvme.h
@@ -658,7 +658,7 @@ int nvme_submit_sync_cmd(struct request_queue *q, struct nvme_command *cmd,
 int __nvme_submit_sync_cmd(struct request_queue *q, struct nvme_command *cmd,
 		union nvme_result *result, void *buffer, unsigned bufflen,
 		unsigned timeout, int qid, int at_head,
-		blk_mq_req_flags_t flags, bool poll);
+		blk_mq_req_flags_t flags);
 int nvme_set_features(struct nvme_ctrl *dev, unsigned int fid,
 		      unsigned int dword11, void *buffer, size_t buflen,
 		      u32 *result);
diff --git a/drivers/nvme/host/rdma.c b/drivers/nvme/host/rdma.c
index 74bf2c7f2b80..855a33e0d1b1 100644
--- a/drivers/nvme/host/rdma.c
+++ b/drivers/nvme/host/rdma.c
@@ -680,11 +680,10 @@ static void nvme_rdma_stop_io_queues(struct nvme_rdma_ctrl *ctrl)
 static int nvme_rdma_start_queue(struct nvme_rdma_ctrl *ctrl, int idx)
 {
 	struct nvme_rdma_queue *queue = &ctrl->queues[idx];
-	bool poll = nvme_rdma_poll_queue(queue);
 	int ret;
 
 	if (idx)
-		ret = nvmf_connect_io_queue(&ctrl->ctrl, idx, poll);
+		ret = nvmf_connect_io_queue(&ctrl->ctrl, idx);
 	else
 		ret = nvmf_connect_admin_queue(&ctrl->ctrl);
 
diff --git a/drivers/nvme/host/tcp.c b/drivers/nvme/host/tcp.c
index 5fc6c568c626..01056722a5e4 100644
--- a/drivers/nvme/host/tcp.c
+++ b/drivers/nvme/host/tcp.c
@@ -1574,7 +1574,7 @@ static int nvme_tcp_start_queue(struct nvme_ctrl *nctrl, int idx)
 	int ret;
 
 	if (idx)
-		ret = nvmf_connect_io_queue(nctrl, idx, false);
+		ret = nvmf_connect_io_queue(nctrl, idx);
 	else
 		ret = nvmf_connect_admin_queue(nctrl);
 
diff --git a/drivers/nvme/target/loop.c b/drivers/nvme/target/loop.c
index cb30cb942e1d..2a7bf1019952 100644
--- a/drivers/nvme/target/loop.c
+++ b/drivers/nvme/target/loop.c
@@ -335,7 +335,7 @@ static int nvme_loop_connect_io_queues(struct nvme_loop_ctrl *ctrl)
 	int i, ret;
 
 	for (i = 1; i < ctrl->ctrl.queue_count; i++) {
-		ret = nvmf_connect_io_queue(&ctrl->ctrl, i, false);
+		ret = nvmf_connect_io_queue(&ctrl->ctrl, i);
 		if (ret)
 			return ret;
 		set_bit(NVME_LOOP_Q_LIVE, &ctrl->queues[i].flags);

From patchwork Thu Jun 10 21:44:36 2021
X-Patchwork-Submitter: Keith Busch
X-Patchwork-Id: 12314101
From: Keith Busch
To: linux-nvme@lists.infradead.org, sagi@grimberg.me, hch@lst.de, linux-block@vger.kernel.org
Cc: axboe@kernel.dk, Yuanyuan Zhong, Casey Chen, Ming Lei, Keith Busch
Subject: [PATCHv4 3/4] block: return errors from blk_execute_rq()
Date: Thu, 10 Jun 2021 14:44:36 -0700
Message-Id: <20210610214437.641245-4-kbusch@kernel.org>
In-Reply-To: <20210610214437.641245-1-kbusch@kernel.org>
References: <20210610214437.641245-1-kbusch@kernel.org>

The synchronous blk_execute_rq() has not provided a way for its callers
to know whether the request actually succeeded. Return the blk_status_t
result of the request.
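For callers, the change looks roughly like the sketch below. This is an illustrative example, not code from the patch: demo_issue_rq() is a made-up helper, while blk_status_to_errno() is the existing helper for mapping a blk_status_t to a negative errno.

/* Hypothetical caller showing how the new return value can be consumed. */
static int demo_issue_rq(struct gendisk *disk, struct request *rq)
{
	blk_status_t status;

	/* Third argument is at_head; 0 queues the request at the tail. */
	status = blk_execute_rq(disk, rq, 0);
	if (status != BLK_STS_OK)
		return blk_status_to_errno(status);	/* e.g. -EIO */
	return 0;
}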
Reviewed-by: Christoph Hellwig
Reviewed-by: Ming Lei
Signed-off-by: Keith Busch
Reviewed-by: Chaitanya Kulkarni
---
 block/blk-exec.c       | 7 +++++--
 include/linux/blkdev.h | 4 +++-
 2 files changed, 8 insertions(+), 3 deletions(-)

diff --git a/block/blk-exec.c b/block/blk-exec.c
index 38f88552aa31..d6cd501c0d34 100644
--- a/block/blk-exec.c
+++ b/block/blk-exec.c
@@ -21,7 +21,7 @@ static void blk_end_sync_rq(struct request *rq, blk_status_t error)
 {
 	struct completion *waiting = rq->end_io_data;
 
-	rq->end_io_data = NULL;
+	rq->end_io_data = (void *)(uintptr_t)error;
 
 	/*
 	 * complete last, if this is a stack request the process (and thus
@@ -85,8 +85,9 @@ static void blk_rq_poll_completion(struct request *rq, struct completion *wait)
 * Description:
 *    Insert a fully prepared request at the back of the I/O scheduler queue
 *    for execution and wait for completion.
+ * Return: The blk_status_t result provided to blk_mq_end_request().
 */
-void blk_execute_rq(struct gendisk *bd_disk, struct request *rq, int at_head)
+blk_status_t blk_execute_rq(struct gendisk *bd_disk, struct request *rq, int at_head)
 {
 	DECLARE_COMPLETION_ONSTACK(wait);
 	unsigned long hang_check;
@@ -103,5 +104,7 @@ void blk_execute_rq(struct gendisk *bd_disk, struct request *rq, int at_head)
 		while (!wait_for_completion_io_timeout(&wait, hang_check * (HZ/2)));
 	else
 		wait_for_completion_io(&wait);
+
+	return (blk_status_t)(uintptr_t)rq->end_io_data;
 }
 EXPORT_SYMBOL(blk_execute_rq);
diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
index d66d0da72529..1261e1fe88bd 100644
--- a/include/linux/blkdev.h
+++ b/include/linux/blkdev.h
@@ -936,10 +936,12 @@ extern int blk_rq_map_kern(struct request_queue *, struct request *, void *, uns
 extern int blk_rq_map_user_iov(struct request_queue *, struct request *,
 			       struct rq_map_data *, const struct iov_iter *,
 			       gfp_t);
-extern void blk_execute_rq(struct gendisk *, struct request *, int);
 extern void blk_execute_rq_nowait(struct gendisk *, struct request *, int,
 				  rq_end_io_fn *);
 
+blk_status_t blk_execute_rq(struct gendisk *bd_disk, struct request *rq,
+		int at_head);
+
 /* Helper to convert REQ_OP_XXX to its string format XXX */
 extern const char *blk_op_str(unsigned int op);

From patchwork Thu Jun 10 21:44:37 2021
X-Patchwork-Submitter: Keith Busch
X-Patchwork-Id: 12314103
From: Keith Busch
To: linux-nvme@lists.infradead.org, sagi@grimberg.me, hch@lst.de, linux-block@vger.kernel.org
Cc: axboe@kernel.dk, Yuanyuan Zhong, Casey Chen, Ming Lei, Keith Busch
Subject: [PATCHv4 4/4] nvme: use return value from blk_execute_rq()
Date: Thu, 10 Jun 2021 14:44:37 -0700
Message-Id: <20210610214437.641245-5-kbusch@kernel.org>
In-Reply-To: <20210610214437.641245-1-kbusch@kernel.org>
References: <20210610214437.641245-1-kbusch@kernel.org>

We don't have an NVMe status to report if the driver's .queue_rq()
returns an error without dispatching the requested NVMe command. Check
the return value from blk_execute_rq() for all passthrough commands so
the caller knows whether its command was successful. If the command
comes from the target passthrough interface and fails to dispatch,
synthesize the response back to the host as an internal target error.

Signed-off-by: Keith Busch
Reviewed-by: Chaitanya Kulkarni
Reviewed-by: Christoph Hellwig
---
v2->v3: Combined request dispatch and status decoding into a single function

 drivers/nvme/host/core.c       | 35 ++++++++++++++++++++++++++--------
 drivers/nvme/host/ioctl.c      |  6 +-----
 drivers/nvme/host/nvme.h       |  2 +-
 drivers/nvme/target/passthru.c |  8 ++++----
 4 files changed, 33 insertions(+), 18 deletions(-)

diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
index 2b9deaffa4e1..f4a690808010 100644
--- a/drivers/nvme/host/core.c
+++ b/drivers/nvme/host/core.c
@@ -609,6 +609,7 @@ EXPORT_SYMBOL_NS_GPL(nvme_put_ns, NVME_TARGET_PASSTHRU);
 
 static inline void nvme_clear_nvme_request(struct request *req)
 {
+	nvme_req(req)->status = 0;
 	nvme_req(req)->retries = 0;
 	nvme_req(req)->flags = 0;
 	req->rq_flags |= RQF_DONTPREP;
@@ -1034,6 +1035,25 @@ blk_status_t nvme_setup_cmd(struct nvme_ns *ns, struct request *req)
 }
 EXPORT_SYMBOL_GPL(nvme_setup_cmd);
 
+/*
+ * Return values:
+ * 0:  success
+ * > 0: nvme controller's cqe status response
+ * < 0: kernel error in lieu of controller response
+ */
+static int nvme_execute_rq(struct gendisk *disk, struct request *rq,
+		bool at_head)
+{
+	blk_status_t status;
+
+	status = blk_execute_rq(disk, rq, at_head);
+	if (nvme_req(rq)->flags & NVME_REQ_CANCELLED)
+		return -EINTR;
+	if (nvme_req(rq)->status)
+		return nvme_req(rq)->status;
+	return blk_status_to_errno(status);
+}
+
 /*
  * Returns 0 on success.
  * If the result is negative, it's a Linux error code;
  * if the result is positive, it's an NVM Express status code
@@ -1062,13 +1082,9 @@ int __nvme_submit_sync_cmd(struct request_queue *q, struct nvme_command *cmd,
 		goto out;
 	}
 
-	blk_execute_rq(NULL, req, at_head);
-	if (result)
+	ret = nvme_execute_rq(NULL, req, at_head);
+	if (result && ret >= 0)
 		*result = nvme_req(req)->result;
-	if (nvme_req(req)->flags & NVME_REQ_CANCELLED)
-		ret = -EINTR;
-	else
-		ret = nvme_req(req)->status;
  out:
 	blk_mq_free_request(req);
 	return ret;
@@ -1156,18 +1172,21 @@ static void nvme_passthru_end(struct nvme_ctrl *ctrl, u32 effects)
 	}
 }
 
-void nvme_execute_passthru_rq(struct request *rq)
+int nvme_execute_passthru_rq(struct request *rq)
 {
 	struct nvme_command *cmd = nvme_req(rq)->cmd;
 	struct nvme_ctrl *ctrl = nvme_req(rq)->ctrl;
 	struct nvme_ns *ns = rq->q->queuedata;
 	struct gendisk *disk = ns ? ns->disk : NULL;
 	u32 effects;
+	int ret;
 
 	effects = nvme_passthru_start(ctrl, ns, cmd->common.opcode);
-	blk_execute_rq(disk, rq, 0);
+	ret = nvme_execute_rq(disk, rq, false);
 	if (effects) /* nothing to be done for zero cmd effects */
 		nvme_passthru_end(ctrl, effects);
+
+	return ret;
 }
 EXPORT_SYMBOL_NS_GPL(nvme_execute_passthru_rq, NVME_TARGET_PASSTHRU);
diff --git a/drivers/nvme/host/ioctl.c b/drivers/nvme/host/ioctl.c
index 2e7780ea0354..85cfad3dd692 100644
--- a/drivers/nvme/host/ioctl.c
+++ b/drivers/nvme/host/ioctl.c
@@ -93,11 +93,7 @@ static int nvme_submit_user_cmd(struct request_queue *q,
 		}
 	}
 
-	nvme_execute_passthru_rq(req);
-	if (nvme_req(req)->flags & NVME_REQ_CANCELLED)
-		ret = -EINTR;
-	else
-		ret = nvme_req(req)->status;
+	ret = nvme_execute_passthru_rq(req);
 	if (result)
 		*result = le64_to_cpu(nvme_req(req)->result.u64);
 	if (meta && !ret && !write) {
diff --git a/drivers/nvme/host/nvme.h b/drivers/nvme/host/nvme.h
index 499681bc0286..7557f3404615 100644
--- a/drivers/nvme/host/nvme.h
+++ b/drivers/nvme/host/nvme.h
@@ -871,7 +871,7 @@ static inline void nvme_hwmon_exit(struct nvme_ctrl *ctrl)
 
 u32 nvme_command_effects(struct nvme_ctrl *ctrl, struct nvme_ns *ns,
 			 u8 opcode);
-void nvme_execute_passthru_rq(struct request *rq);
+int nvme_execute_passthru_rq(struct request *rq);
 struct nvme_ctrl *nvme_ctrl_from_file(struct file *file);
 struct nvme_ns *nvme_find_get_ns(struct nvme_ctrl *ctrl, unsigned nsid);
 void nvme_put_ns(struct nvme_ns *ns);
diff --git a/drivers/nvme/target/passthru.c b/drivers/nvme/target/passthru.c
index 39b1473f7204..ba28ce42cb55 100644
--- a/drivers/nvme/target/passthru.c
+++ b/drivers/nvme/target/passthru.c
@@ -153,11 +153,10 @@ static void nvmet_passthru_execute_cmd_work(struct work_struct *w)
 {
 	struct nvmet_req *req = container_of(w, struct nvmet_req, p.work);
 	struct request *rq = req->p.rq;
-	u16 status;
+	int status;
 
-	nvme_execute_passthru_rq(rq);
+	status = nvme_execute_passthru_rq(rq);
 
-	status = nvme_req(rq)->status;
 	if (status == NVME_SC_SUCCESS &&
 	    req->cmd->common.opcode == nvme_admin_identify) {
 		switch (req->cmd->identify.cns) {
@@ -168,7 +167,8 @@ static void nvmet_passthru_execute_cmd_work(struct work_struct *w)
 			nvmet_passthru_override_id_ns(req);
 			break;
 		}
-	}
+	} else if (status < 0)
+		status = NVME_SC_INTERNAL;
 
 	req->cqe->result = nvme_req(rq)->result;
 	nvmet_req_complete(req, status);
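The series thus settles on one tri-state convention for passthrough submission: 0 for success, a positive NVMe CQE status when the controller rejected the command, and a negative errno when the command never reached the controller. A caller decoding that value might look roughly like the sketch below; this is illustrative only, demo_passthru_status() is a made-up helper, and mapping negative errors to NVME_SC_INTERNAL mirrors what the target passthru change above does.

/* Hypothetical helper translating the tri-state return into a CQE status. */
static u16 demo_passthru_status(int ret)
{
	if (ret == 0)
		return NVME_SC_SUCCESS;		/* completed successfully */
	if (ret < 0)
		return NVME_SC_INTERNAL;	/* kernel errno, no controller response */
	return (u16)ret;			/* controller's CQE status, passed through */
}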