From patchwork Tue Nov 10 02:24:00 2020
X-Patchwork-Submitter: Chaitanya Kulkarni
X-Patchwork-Id: 11892939
From: Chaitanya Kulkarni
To: linux-nvme@lists.infradead.org, linux-block@vger.kernel.org
Cc: axboe@kernel.dk, kbusch@kernel.org, sagi@grimberg.me, hch@lst.de,
 Chaitanya Kulkarni, Logan Gunthorpe
Subject: [PATCH V4 1/6] nvme-core: add req init helpers
Date: Mon, 9 Nov 2020 18:24:00 -0800
Message-Id: <20201110022405.6707-2-chaitanya.kulkarni@wdc.com>
In-Reply-To:
 <20201110022405.6707-1-chaitanya.kulkarni@wdc.com>
References: <20201110022405.6707-1-chaitanya.kulkarni@wdc.com>
List-ID: X-Mailing-List: linux-block@vger.kernel.org

This is a preparation patch that adds helpers used by the next patch,
which splits nvme_alloc_request() into nvme_alloc_request_qid_any() and
nvme_alloc_request_qid(). The new functions share the code that
initializes the allocated request from the NVMe command: deriving the
REQ_OP_XXX value from the command and setting the default timeout after
request allocation.

Signed-off-by: Chaitanya Kulkarni
Reviewed-by: Logan Gunthorpe
---
 drivers/nvme/host/core.c | 33 ++++++++++++++++++++++++---------
 1 file changed, 24 insertions(+), 9 deletions(-)

diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
index 98bea150e5dc..315ea958d1b7 100644
--- a/drivers/nvme/host/core.c
+++ b/drivers/nvme/host/core.c
@@ -518,10 +518,31 @@ static inline void nvme_clear_nvme_request(struct request *req)
 	}
 }
 
+static inline void nvme_init_req_from_cmd(struct request *req,
+		struct nvme_command *cmd)
+{
+	req->cmd_flags |= REQ_FAILFAST_DRIVER;
+	nvme_clear_nvme_request(req);
+	nvme_req(req)->cmd = cmd;
+}
+
+static inline unsigned int nvme_req_op(struct nvme_command *cmd)
+{
+	return nvme_is_write(cmd) ? REQ_OP_DRV_OUT : REQ_OP_DRV_IN;
+}
+
+static inline void nvme_init_req_default_timeout(struct request *req)
+{
+	if (req->q->queuedata)
+		req->timeout = NVME_IO_TIMEOUT;
+	else /* no queuedata implies admin queue */
+		req->timeout = NVME_ADMIN_TIMEOUT;
+}
+
 struct request *nvme_alloc_request(struct request_queue *q,
 		struct nvme_command *cmd, blk_mq_req_flags_t flags, int qid)
 {
-	unsigned op = nvme_is_write(cmd) ?
-		REQ_OP_DRV_OUT : REQ_OP_DRV_IN;
+	unsigned op = nvme_req_op(cmd);
 	struct request *req;
 
 	if (qid == NVME_QID_ANY) {
@@ -533,14 +554,8 @@ struct request *nvme_alloc_request(struct request_queue *q,
 	if (IS_ERR(req))
 		return req;
 
-	if (req->q->queuedata)
-		req->timeout = NVME_IO_TIMEOUT;
-	else /* no queuedata implies admin queue */
-		req->timeout = NVME_ADMIN_TIMEOUT;
-
-	req->cmd_flags |= REQ_FAILFAST_DRIVER;
-	nvme_clear_nvme_request(req);
-	nvme_req(req)->cmd = cmd;
+	nvme_init_req_timeout(req);
+	nvme_init_req_from_cmd(req, cmd);
 
 	return req;
 }
From patchwork Tue Nov 10 02:24:01 2020
X-Patchwork-Submitter: Chaitanya Kulkarni
X-Patchwork-Id: 11892941
From: Chaitanya Kulkarni
To: linux-nvme@lists.infradead.org, linux-block@vger.kernel.org
Cc: axboe@kernel.dk, kbusch@kernel.org, sagi@grimberg.me, hch@lst.de,
 Chaitanya Kulkarni, Logan Gunthorpe
Subject: [PATCH V4 2/6] nvme-core: split nvme_alloc_request()
Date: Mon, 9 Nov 2020 18:24:01 -0800
Message-Id: <20201110022405.6707-3-chaitanya.kulkarni@wdc.com>
In-Reply-To: <20201110022405.6707-1-chaitanya.kulkarni@wdc.com>
References: <20201110022405.6707-1-chaitanya.kulkarni@wdc.com>
List-ID: X-Mailing-List: linux-block@vger.kernel.org

Right now nvme_alloc_request() allocates a request from the block layer
based on the value of qid: when qid is NVME_QID_ANY it uses
blk_mq_alloc_request(), otherwise blk_mq_alloc_request_hctx(). The
function is called from several contexts; the only place that passes a
non-NVME_QID_ANY value is the fabrics connect command path:

nvme_submit_sync_cmd()		NVME_QID_ANY
nvme_features()			NVME_QID_ANY
nvme_sec_submit()		NVME_QID_ANY
nvmf_reg_read32()		NVME_QID_ANY
nvmf_reg_read64()		NVME_QID_ANY
nvmf_reg_write32()		NVME_QID_ANY
nvmf_connect_admin_queue()	NVME_QID_ANY
nvme_submit_user_cmd()		NVME_QID_ANY	nvme_alloc_request()
nvme_keep_alive()		NVME_QID_ANY	nvme_alloc_request()
nvme_timeout()			NVME_QID_ANY	nvme_alloc_request()
nvme_delete_queue()		NVME_QID_ANY	nvme_alloc_request()
nvmet_passthru_execute_cmd()	NVME_QID_ANY	nvme_alloc_request()
nvmf_connect_io_queue()		QID		__nvme_submit_sync_cmd()
						nvme_alloc_request()

With passthru, nvme_alloc_request() now falls into the I/O fast path,
where blk_mq_alloc_request_hctx() is never called, so the qid check adds
an unnecessary branch to the fast path. Split nvme_alloc_request() into
nvme_alloc_request() and nvme_alloc_request_qid().
Replace each call of nvme_alloc_request() that passes NVME_QID_ANY with
a call to the new qid-less nvme_alloc_request(). For the one caller that
passes a real qid, make __nvme_submit_sync_cmd() choose between
nvme_alloc_request() and nvme_alloc_request_qid() based on the qid value
it was given.

Signed-off-by: Chaitanya Kulkarni
Reviewed-by: Logan Gunthorpe
Signed-off-by: Christoph Hellwig
---
 drivers/nvme/host/core.c       | 56 +++++++++++++++++++++-------------
 drivers/nvme/host/lightnvm.c   |  5 ++-
 drivers/nvme/host/nvme.h       |  2 ++
 drivers/nvme/host/pci.c        |  4 +--
 drivers/nvme/target/passthru.c |  2 +-
 5 files changed, 41 insertions(+), 28 deletions(-)

diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
index 315ea958d1b7..99aa0863502f 100644
--- a/drivers/nvme/host/core.c
+++ b/drivers/nvme/host/core.c
@@ -526,12 +526,7 @@ static inline void nvme_init_req_from_cmd(struct request *req,
 	nvme_req(req)->cmd = cmd;
 }
 
-static inline unsigned int nvme_req_op(struct nvme_command *cmd)
-{
-	return nvme_is_write(cmd) ? REQ_OP_DRV_OUT : REQ_OP_DRV_IN;
-}
-
-static inline void nvme_init_req_default_timeout(struct request *req)
+static inline void nvme_init_req_timeout(struct request *req)
 {
 	if (req->q->queuedata)
 		req->timeout = NVME_IO_TIMEOUT;
@@ -539,28 +534,42 @@ static inline void nvme_init_req_default_timeout(struct request *req)
 		req->timeout = NVME_ADMIN_TIMEOUT;
 }
 
+static inline unsigned int nvme_req_op(struct nvme_command *cmd)
+{
+	return nvme_is_write(cmd) ? REQ_OP_DRV_OUT : REQ_OP_DRV_IN;
+}
+
 struct request *nvme_alloc_request(struct request_queue *q,
-		struct nvme_command *cmd, blk_mq_req_flags_t flags, int qid)
+		struct nvme_command *cmd, blk_mq_req_flags_t flags)
 {
-	unsigned op = nvme_req_op(cmd);
 	struct request *req;
 
-	if (qid == NVME_QID_ANY) {
-		req = blk_mq_alloc_request(q, op, flags);
-	} else {
-		req = blk_mq_alloc_request_hctx(q, op, flags,
-				qid ? qid - 1 : 0);
+	req = blk_mq_alloc_request(q, nvme_req_op(cmd), flags);
+	if (!IS_ERR(req)) {
+		nvme_init_req_timeout(req);
+		nvme_init_req_from_cmd(req, cmd);
 	}
-	if (IS_ERR(req))
-		return req;
-
-	nvme_init_req_timeout(req);
-	nvme_init_req_from_cmd(req, cmd);
 
 	return req;
 }
 EXPORT_SYMBOL_GPL(nvme_alloc_request);
 
+struct request *nvme_alloc_request_qid(struct request_queue *q,
+		struct nvme_command *cmd, blk_mq_req_flags_t flags, int qid)
+{
+	struct request *req;
+
+	req = blk_mq_alloc_request_hctx(q, nvme_req_op(cmd), flags,
+			qid ? qid - 1 : 0);
+	if (!IS_ERR(req)) {
+		nvme_init_req_timeout(req);
+		nvme_init_req_from_cmd(req, cmd);
+	}
+
+	return req;
+}
+EXPORT_SYMBOL_GPL(nvme_alloc_request_qid);
+
 static int nvme_toggle_streams(struct nvme_ctrl *ctrl, bool enable)
 {
 	struct nvme_command c;
@@ -917,7 +926,10 @@ int __nvme_submit_sync_cmd(struct request_queue *q, struct nvme_command *cmd,
 	struct request *req;
 	int ret;
 
-	req = nvme_alloc_request(q, cmd, flags, qid);
+	if (qid == NVME_QID_ANY)
+		req = nvme_alloc_request(q, cmd, flags);
+	else
+		req = nvme_alloc_request_qid(q, cmd, flags, qid);
 	if (IS_ERR(req))
 		return PTR_ERR(req);
 
@@ -1088,7 +1100,7 @@ static int nvme_submit_user_cmd(struct request_queue *q,
 	void *meta = NULL;
 	int ret;
 
-	req = nvme_alloc_request(q, cmd, 0, NVME_QID_ANY);
+	req = nvme_alloc_request(q, cmd, 0);
 	if (IS_ERR(req))
 		return PTR_ERR(req);
 
@@ -1163,8 +1175,8 @@ static int nvme_keep_alive(struct nvme_ctrl *ctrl)
 {
 	struct request *rq;
 
-	rq = nvme_alloc_request(ctrl->admin_q, &ctrl->ka_cmd, BLK_MQ_REQ_RESERVED,
-		NVME_QID_ANY);
+	rq = nvme_alloc_request(ctrl->admin_q, &ctrl->ka_cmd,
+			BLK_MQ_REQ_RESERVED);
 	if (IS_ERR(rq))
 		return PTR_ERR(rq);
 
diff --git a/drivers/nvme/host/lightnvm.c b/drivers/nvme/host/lightnvm.c
index 88a7c8eac455..470cef3abec3 100644
--- a/drivers/nvme/host/lightnvm.c
+++ b/drivers/nvme/host/lightnvm.c
@@ -653,7 +653,7 @@ static struct request *nvme_nvm_alloc_request(struct request_queue *q,
 
 	nvme_nvm_rqtocmd(rqd, ns, cmd);
 
-	rq = nvme_alloc_request(q, (struct nvme_command *)cmd, 0, NVME_QID_ANY);
+	rq = nvme_alloc_request(q, (struct nvme_command *)cmd, 0);
 	if (IS_ERR(rq))
 		return rq;
 
@@ -767,8 +767,7 @@ static int nvme_nvm_submit_user_cmd(struct request_queue *q,
 	DECLARE_COMPLETION_ONSTACK(wait);
 	int ret = 0;
 
-	rq = nvme_alloc_request(q, (struct nvme_command *)vcmd, 0,
-			NVME_QID_ANY);
+	rq = nvme_alloc_request(q, (struct nvme_command *)vcmd, 0);
 	if (IS_ERR(rq)) {
 		ret = -ENOMEM;
 		goto err_cmd;
diff --git a/drivers/nvme/host/nvme.h b/drivers/nvme/host/nvme.h
index 53783358d62b..d2ccd3473b20 100644
--- a/drivers/nvme/host/nvme.h
+++ b/drivers/nvme/host/nvme.h
@@ -610,6 +610,8 @@ void nvme_start_freeze(struct nvme_ctrl *ctrl);
 #define NVME_QID_ANY -1
 struct request *nvme_alloc_request(struct request_queue *q,
+		struct nvme_command *cmd, blk_mq_req_flags_t flags);
+struct request *nvme_alloc_request_qid(struct request_queue *q,
 		struct nvme_command *cmd, blk_mq_req_flags_t flags, int qid);
 void nvme_cleanup_cmd(struct request *req);
 blk_status_t nvme_setup_cmd(struct nvme_ns *ns, struct request *req,
diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c
index 6123040ff872..5e6365dd0c8e 100644
--- a/drivers/nvme/host/pci.c
+++ b/drivers/nvme/host/pci.c
@@ -1304,7 +1304,7 @@ static enum blk_eh_timer_return nvme_timeout(struct request *req, bool reserved)
 		 req->tag, nvmeq->qid);
 
 	abort_req = nvme_alloc_request(dev->ctrl.admin_q, &cmd,
-			BLK_MQ_REQ_NOWAIT, NVME_QID_ANY);
+			BLK_MQ_REQ_NOWAIT);
 	if (IS_ERR(abort_req)) {
 		atomic_inc(&dev->ctrl.abort_limit);
 		return BLK_EH_RESET_TIMER;
@@ -2218,7 +2218,7 @@ static int nvme_delete_queue(struct nvme_queue *nvmeq, u8 opcode)
 	cmd.delete_queue.opcode = opcode;
 	cmd.delete_queue.qid = cpu_to_le16(nvmeq->qid);
 
-	req = nvme_alloc_request(q, &cmd, BLK_MQ_REQ_NOWAIT, NVME_QID_ANY);
+	req = nvme_alloc_request(q, &cmd, BLK_MQ_REQ_NOWAIT);
 	if (IS_ERR(req))
 		return PTR_ERR(req);
 
diff --git a/drivers/nvme/target/passthru.c b/drivers/nvme/target/passthru.c
index a062398305a7..be8ae59dcb71 100644
--- a/drivers/nvme/target/passthru.c
+++ b/drivers/nvme/target/passthru.c
@@ -248,7 +248,7 @@ static void nvmet_passthru_execute_cmd(struct nvmet_req *req)
 		timeout = req->sq->ctrl->subsys->admin_timeout;
 	}
 
-	rq = nvme_alloc_request(q, req->cmd, 0, NVME_QID_ANY);
+	rq = nvme_alloc_request(q, req->cmd, 0);
 	if (IS_ERR(rq)) {
 		status = NVME_SC_INTERNAL;
 		goto out_put_ns;
From patchwork Tue Nov 10 02:24:02 2020
X-Patchwork-Submitter: Chaitanya Kulkarni
X-Patchwork-Id: 11892935
From: Chaitanya Kulkarni
To: linux-nvme@lists.infradead.org, linux-block@vger.kernel.org
Cc: axboe@kernel.dk, kbusch@kernel.org, sagi@grimberg.me, hch@lst.de,
 Chaitanya Kulkarni, Logan Gunthorpe
Subject: [PATCH V4 3/6] nvmet: remove op_flags for passthru commands
Date: Mon, 9 Nov 2020 18:24:02 -0800
Message-Id: <20201110022405.6707-4-chaitanya.kulkarni@wdc.com>
In-Reply-To: <20201110022405.6707-1-chaitanya.kulkarni@wdc.com>
References: <20201110022405.6707-1-chaitanya.kulkarni@wdc.com>
List-ID: X-Mailing-List: linux-block@vger.kernel.org

For passthru commands, setting op_flags has no meaning. Remove the code
that sets the op flags in nvmet_passthru_map_sg().

Signed-off-by: Chaitanya Kulkarni
Reviewed-by: Logan Gunthorpe
---
 drivers/nvme/target/passthru.c | 8 +-------
 1 file changed, 1 insertion(+), 7 deletions(-)

diff --git a/drivers/nvme/target/passthru.c b/drivers/nvme/target/passthru.c
index be8ae59dcb71..1c84dadfb38f 100644
--- a/drivers/nvme/target/passthru.c
+++ b/drivers/nvme/target/passthru.c
@@ -188,21 +188,15 @@ static void nvmet_passthru_req_done(struct request *rq,
 static int nvmet_passthru_map_sg(struct nvmet_req *req, struct request *rq)
 {
 	struct scatterlist *sg;
-	int op_flags = 0;
 	struct bio *bio;
 	int i, ret;
 
 	if (req->sg_cnt > BIO_MAX_PAGES)
 		return -EINVAL;
 
-	if (req->cmd->common.opcode == nvme_cmd_flush)
-		op_flags = REQ_FUA;
-	else if (nvme_is_write(req->cmd))
-		op_flags = REQ_SYNC | REQ_IDLE;
-
 	bio = bio_alloc(GFP_KERNEL, req->sg_cnt);
 	bio->bi_end_io = bio_put;
-	bio->bi_opf = req_op(rq) | op_flags;
+	bio->bi_opf = req_op(rq);
 
 	for_each_sg(req->sg, sg, req->sg_cnt, i) {
 		if (bio_add_pc_page(rq->q, bio, sg_page(sg), sg->length,

From patchwork Tue Nov 10 02:24:03 2020
X-Patchwork-Submitter: Chaitanya Kulkarni
X-Patchwork-Id: 11892947
From: Chaitanya Kulkarni
To: linux-nvme@lists.infradead.org, linux-block@vger.kernel.org
Cc: axboe@kernel.dk, kbusch@kernel.org, sagi@grimberg.me, hch@lst.de,
 Chaitanya Kulkarni, Logan Gunthorpe
Subject: [PATCH V4 4/6] block: move blk_rq_bio_prep() to linux/blk-mq.h
Date: Mon, 9 Nov 2020 18:24:03 -0800
Message-Id: <20201110022405.6707-5-chaitanya.kulkarni@wdc.com>
In-Reply-To: <20201110022405.6707-1-chaitanya.kulkarni@wdc.com>
References: <20201110022405.6707-1-chaitanya.kulkarni@wdc.com>
List-ID: X-Mailing-List: linux-block@vger.kernel.org

This is a preparation patch that exposes minimal block layer
request-bio-append functionality to the NVMeOF passthru driver, which
sits in the fast path and does not need the extra work done by
blk_rq_append_bio().

Signed-off-by: Chaitanya Kulkarni
Reviewed-by: Logan Gunthorpe
---
 block/blk.h            | 12 ------------
 include/linux/blk-mq.h | 12 ++++++++++++
 2 files changed, 12 insertions(+), 12 deletions(-)

diff --git a/block/blk.h b/block/blk.h
index dfab98465db9..e05507a8d1e3 100644
--- a/block/blk.h
+++ b/block/blk.h
@@ -91,18 +91,6 @@ static inline bool bvec_gap_to_prev(struct request_queue *q,
 	return __bvec_gap_to_prev(q, bprv, offset);
 }
 
-static inline void blk_rq_bio_prep(struct request *rq, struct bio *bio,
-		unsigned int nr_segs)
-{
-	rq->nr_phys_segments = nr_segs;
-	rq->__data_len = bio->bi_iter.bi_size;
-	rq->bio = rq->biotail = bio;
-	rq->ioprio = bio_prio(bio);
-
-	if (bio->bi_disk)
-		rq->rq_disk = bio->bi_disk;
-}
-
 #ifdef CONFIG_BLK_DEV_INTEGRITY
 void blk_flush_integrity(void);
 bool __bio_integrity_endio(struct bio *);
diff --git a/include/linux/blk-mq.h b/include/linux/blk-mq.h
index b23eeca4d677..d1d277073761 100644
--- a/include/linux/blk-mq.h
+++ b/include/linux/blk-mq.h
@@ -591,6 +591,18 @@ static inline void blk_mq_cleanup_rq(struct request *rq)
 		rq->q->mq_ops->cleanup_rq(rq);
 }
 
+static inline void blk_rq_bio_prep(struct request *rq, struct bio *bio,
+		unsigned int nr_segs)
+{
+	rq->nr_phys_segments = nr_segs;
+	rq->__data_len = bio->bi_iter.bi_size;
+	rq->bio = rq->biotail = bio;
+	rq->ioprio = bio_prio(bio);
+
+	if (bio->bi_disk)
+		rq->rq_disk = bio->bi_disk;
+}
+
 blk_qc_t blk_mq_submit_bio(struct bio *bio);
 
 #endif
From patchwork Tue Nov 10 02:24:04 2020
X-Patchwork-Submitter: Chaitanya Kulkarni
X-Patchwork-Id: 11892937
From: Chaitanya Kulkarni
To: linux-nvme@lists.infradead.org, linux-block@vger.kernel.org
Cc: axboe@kernel.dk, kbusch@kernel.org, sagi@grimberg.me, hch@lst.de,
 Chaitanya Kulkarni, Logan Gunthorpe
Subject: [PATCH V4 5/6] nvmet: use minimized version of blk_rq_append_bio
Date: Mon, 9 Nov 2020 18:24:04 -0800
Message-Id: <20201110022405.6707-6-chaitanya.kulkarni@wdc.com>
In-Reply-To: <20201110022405.6707-1-chaitanya.kulkarni@wdc.com>
References: <20201110022405.6707-1-chaitanya.kulkarni@wdc.com>
List-ID: X-Mailing-List: linux-block@vger.kernel.org

The function blk_rq_append_bio() is a generic API written for all types
of drivers (including those with bounce buffers) and for contexts where
the request may already have a bio (rq->bio != NULL).
It mainly does three things: it calculates the number of segments, applies the bounce-buffer handling, and either calls blk_rq_bio_prep() when rq->bio == NULL or handles the low-level merge case. The NVMe PCIe and fabrics transports do not currently use the queue bounce mechanism, so checking for it on every passthru request is extra work in the fast path. When running I/Os with different block sizes on the passthru controller, I found that we can reuse req->sg_cnt instead of iterating over the bvecs to find nr_segs in blk_rq_append_bio(); that calculation in blk_rq_append_bio() duplicates work, given that the value is already available in req->sg_cnt. Since the NVMe passthru driver is request based and allocates a fresh request each time, rq->bio is always NULL when blk_rq_append_bio() is called, so we don't really need the second condition in blk_rq_append_bio() or the resulting error handling in its caller. For the NVMeOF passthru driver, recalculating the segments, the bounce check, and the ll_back_merge code are therefore not needed, and we can get away with a minimal version of blk_rq_append_bio() that removes the error check from the fast path along with an extra variable in nvmet_passthru_map_sg(). This patch updates nvmet_passthru_map_sg() so that it only appends the bio to the request in the context of the NVMeOF passthru driver.
The following are the perf numbers:

With the current implementation (blk_rq_append_bio()):
----------------------------------------------------
+    5.80%     0.02%  kworker/0:2-mm_  [nvmet]  [k] nvmet_passthru_execute_cmd
+    5.44%     0.01%  kworker/0:2-mm_  [nvmet]  [k] nvmet_passthru_execute_cmd
+    4.88%     0.00%  kworker/0:2-mm_  [nvmet]  [k] nvmet_passthru_execute_cmd
+    5.44%     0.01%  kworker/0:2-mm_  [nvmet]  [k] nvmet_passthru_execute_cmd
+    4.86%     0.01%  kworker/0:2-mm_  [nvmet]  [k] nvmet_passthru_execute_cmd
+    5.17%     0.00%  kworker/0:2-eve  [nvmet]  [k] nvmet_passthru_execute_cmd

With this patch using blk_rq_bio_prep():
----------------------------------------------------
+    3.14%     0.02%  kworker/0:2-eve  [nvmet]  [k] nvmet_passthru_execute_cmd
+    3.26%     0.01%  kworker/0:2-eve  [nvmet]  [k] nvmet_passthru_execute_cmd
+    5.37%     0.01%  kworker/0:2-mm_  [nvmet]  [k] nvmet_passthru_execute_cmd
+    5.18%     0.02%  kworker/0:2-eve  [nvmet]  [k] nvmet_passthru_execute_cmd
+    4.84%     0.02%  kworker/0:2-mm_  [nvmet]  [k] nvmet_passthru_execute_cmd
+    4.87%     0.01%  kworker/0:2-mm_  [nvmet]  [k] nvmet_passthru_execute_cmd

Signed-off-by: Chaitanya Kulkarni
Reviewed-by: Logan Gunthorpe
---
 drivers/nvme/target/passthru.c | 8 ++------
 1 file changed, 2 insertions(+), 6 deletions(-)

diff --git a/drivers/nvme/target/passthru.c b/drivers/nvme/target/passthru.c
index 1c84dadfb38f..2b24205ee79d 100644
--- a/drivers/nvme/target/passthru.c
+++ b/drivers/nvme/target/passthru.c
@@ -189,7 +189,7 @@ static int nvmet_passthru_map_sg(struct nvmet_req *req, struct request *rq)
 {
 	struct scatterlist *sg;
 	struct bio *bio;
-	int i, ret;
+	int i;
 
 	if (req->sg_cnt > BIO_MAX_PAGES)
 		return -EINVAL;
@@ -206,11 +206,7 @@ static int nvmet_passthru_map_sg(struct nvmet_req *req, struct request *rq)
 		}
 	}
 
-	ret = blk_rq_append_bio(rq, &bio);
-	if (unlikely(ret)) {
-		bio_put(bio);
-		return ret;
-	}
+	blk_rq_bio_prep(rq, bio, req->sg_cnt);
 
 	return 0;
 }

From patchwork Tue Nov 10 02:24:05 2020
X-Patchwork-Id: 11892945
From: Chaitanya Kulkarni
To: linux-nvme@lists.infradead.org, linux-block@vger.kernel.org
Cc: axboe@kernel.dk, kbusch@kernel.org, sagi@grimberg.me, hch@lst.de, Chaitanya Kulkarni, Logan Gunthorpe
Subject: [PATCH V4 6/6] nvmet: use inline bio for passthru fast path
Date: Mon, 9 Nov 2020 18:24:05 -0800
Message-Id: <20201110022405.6707-7-chaitanya.kulkarni@wdc.com>
In-Reply-To: <20201110022405.6707-1-chaitanya.kulkarni@wdc.com>
References: <20201110022405.6707-1-chaitanya.kulkarni@wdc.com>

nvmet_passthru_execute_cmd() is a high-frequency function, and it currently uses bio_alloc(), which leads to a memory allocation from the fs pool for each I/O. For the NVMeOF nvmet_req we already have an inline_bvec allocated as part of request allocation, which can back a preallocated bio, since we already know the size of the request before the bio is allocated.

Introduce a bio member in the nvmet_req passthru anonymous union. In the fast path, check whether we can get away with the inline bvec and bio from nvmet_req via a bio_init() call before actually allocating with bio_alloc(). This avoids any new memory allocation under high memory pressure and replaces the extra work of allocation (bio_alloc()) with mere initialization (bio_init()) when the transfer length is <= NVMET_MAX_INLINE_DATA_LEN, which the user can configure at compile time.
Signed-off-by: Chaitanya Kulkarni
Reviewed-by: Logan Gunthorpe
---
 drivers/nvme/target/nvmet.h    |  1 +
 drivers/nvme/target/passthru.c | 12 +++++++++---
 2 files changed, 10 insertions(+), 3 deletions(-)

diff --git a/drivers/nvme/target/nvmet.h b/drivers/nvme/target/nvmet.h
index 2f9635273629..e89ec280e91a 100644
--- a/drivers/nvme/target/nvmet.h
+++ b/drivers/nvme/target/nvmet.h
@@ -332,6 +332,7 @@ struct nvmet_req {
 			struct work_struct work;
 		} f;
 		struct {
+			struct bio inline_bio;
 			struct request *rq;
 			struct work_struct work;
 			bool use_workqueue;
diff --git a/drivers/nvme/target/passthru.c b/drivers/nvme/target/passthru.c
index 2b24205ee79d..b9776fc8f08f 100644
--- a/drivers/nvme/target/passthru.c
+++ b/drivers/nvme/target/passthru.c
@@ -194,14 +194,20 @@ static int nvmet_passthru_map_sg(struct nvmet_req *req, struct request *rq)
 	if (req->sg_cnt > BIO_MAX_PAGES)
 		return -EINVAL;
 
-	bio = bio_alloc(GFP_KERNEL, req->sg_cnt);
-	bio->bi_end_io = bio_put;
+	if (req->transfer_len <= NVMET_MAX_INLINE_DATA_LEN) {
+		bio = &req->p.inline_bio;
+		bio_init(bio, req->inline_bvec, ARRAY_SIZE(req->inline_bvec));
+	} else {
+		bio = bio_alloc(GFP_KERNEL, min(req->sg_cnt, BIO_MAX_PAGES));
+		bio->bi_end_io = bio_put;
+	}
 
 	bio->bi_opf = req_op(rq);
 
 	for_each_sg(req->sg, sg, req->sg_cnt, i) {
 		if (bio_add_pc_page(rq->q, bio, sg_page(sg), sg->length,
 				    sg->offset) < sg->length) {
-			bio_put(bio);
+			if (bio != &req->p.inline_bio)
+				bio_put(bio);
 			return -EINVAL;
 		}
 	}