From patchwork Wed Nov 17 03:38:04 2021
X-Patchwork-Submitter: Jens Axboe
X-Patchwork-Id: 12623601
From: Jens Axboe
To: linux-block@vger.kernel.org
Cc: hch@infradead.org, Jens Axboe
Subject: [PATCH 1/4] block: add mq_ops->queue_rqs hook
Date: Tue, 16 Nov 2021 20:38:04 -0700
Message-Id: <20211117033807.185715-2-axboe@kernel.dk>
In-Reply-To: <20211117033807.185715-1-axboe@kernel.dk>
References: <20211117033807.185715-1-axboe@kernel.dk>
X-Mailing-List: linux-block@vger.kernel.org

If we have a list of requests in our plug list, send it to the driver in
one go, if possible. The driver must set mq_ops->queue_rqs() to support
this; if it does not, the usual one-by-one path is used.
Signed-off-by: Jens Axboe
---
 block/blk-mq.c         | 17 +++++++++++++++++
 include/linux/blk-mq.h |  8 ++++++++
 2 files changed, 25 insertions(+)

diff --git a/block/blk-mq.c b/block/blk-mq.c
index 9b4e79e2ac1e..005715206b16 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -2208,6 +2208,19 @@ static void blk_mq_plug_issue_direct(struct blk_plug *plug, bool from_schedule)
 	int queued = 0;
 	int errors = 0;
 
+	/*
+	 * Peek first request and see if we have a ->queue_rqs() hook. If we
+	 * do, we can dispatch the whole plug list in one go. We already know
+	 * at this point that all requests belong to the same queue, caller
+	 * must ensure that's the case.
+	 */
+	rq = rq_list_peek(&plug->mq_list);
+	if (rq->q->mq_ops->queue_rqs) {
+		rq->q->mq_ops->queue_rqs(&plug->mq_list);
+		if (rq_list_empty(plug->mq_list))
+			return;
+	}
+
 	while ((rq = rq_list_pop(&plug->mq_list))) {
 		bool last = rq_list_empty(plug->mq_list);
 		blk_status_t ret;
@@ -2256,6 +2269,10 @@ void blk_mq_flush_plug_list(struct blk_plug *plug, bool from_schedule)
 
 	if (!plug->multiple_queues && !plug->has_elevator && !from_schedule) {
 		blk_mq_plug_issue_direct(plug, false);
+		/*
+		 * Expected case, all requests got dispatched. If not, fall
+		 * through to individual dispatch of the remainder.
+		 */
 		if (rq_list_empty(plug->mq_list))
 			return;
 	}
diff --git a/include/linux/blk-mq.h b/include/linux/blk-mq.h
index 3ba1e750067b..897cf475e7eb 100644
--- a/include/linux/blk-mq.h
+++ b/include/linux/blk-mq.h
@@ -503,6 +503,14 @@ struct blk_mq_ops {
 	 */
 	void (*commit_rqs)(struct blk_mq_hw_ctx *);
 
+	/**
+	 * @queue_rqs: Queue a list of new requests. Driver is guaranteed
+	 * that each request belongs to the same queue. If the driver doesn't
+	 * empty the @rqlist completely, then the rest will be queued
+	 * individually by the block layer upon return.
+	 */
+	void (*queue_rqs)(struct request **rqlist);
+
 	/**
 	 * @get_budget: Reserve budget before queue request, once .queue_rq is
 	 * run, it is driver's responsibility to release the

From patchwork Wed Nov 17 03:38:05 2021
X-Patchwork-Submitter: Jens Axboe
X-Patchwork-Id: 12623603
From: Jens Axboe
To: linux-block@vger.kernel.org
Cc: hch@infradead.org, Jens Axboe
Subject: [PATCH 2/4] nvme: split command copy into a helper
Date: Tue, 16 Nov 2021 20:38:05 -0700
Message-Id: <20211117033807.185715-3-axboe@kernel.dk>
In-Reply-To: <20211117033807.185715-1-axboe@kernel.dk>
References: <20211117033807.185715-1-axboe@kernel.dk>
X-Mailing-List: linux-block@vger.kernel.org

We'll need it for batched submit as well.
Signed-off-by: Jens Axboe
Reviewed-by: Chaitanya Kulkarni
---
 drivers/nvme/host/pci.c | 14 ++++++++++----
 1 file changed, 10 insertions(+), 4 deletions(-)

diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c
index c89f74ea00d4..c33cd1177b37 100644
--- a/drivers/nvme/host/pci.c
+++ b/drivers/nvme/host/pci.c
@@ -501,6 +501,15 @@ static inline void nvme_write_sq_db(struct nvme_queue *nvmeq, bool write_sq)
 	nvmeq->last_sq_tail = nvmeq->sq_tail;
 }
 
+static inline void nvme_copy_cmd(struct nvme_queue *nvmeq,
+				 struct nvme_command *cmd)
+{
+	memcpy(nvmeq->sq_cmds + (nvmeq->sq_tail << nvmeq->sqes), cmd,
+	       sizeof(*cmd));
+	if (++nvmeq->sq_tail == nvmeq->q_depth)
+		nvmeq->sq_tail = 0;
+}
+
 /**
  * nvme_submit_cmd() - Copy a command into a queue and ring the doorbell
  * @nvmeq: The queue to use
@@ -511,10 +520,7 @@ static void nvme_submit_cmd(struct nvme_queue *nvmeq, struct nvme_command *cmd,
 			    bool write_sq)
 {
 	spin_lock(&nvmeq->sq_lock);
-	memcpy(nvmeq->sq_cmds + (nvmeq->sq_tail << nvmeq->sqes),
-	       cmd, sizeof(*cmd));
-	if (++nvmeq->sq_tail == nvmeq->q_depth)
-		nvmeq->sq_tail = 0;
+	nvme_copy_cmd(nvmeq, cmd);
 	nvme_write_sq_db(nvmeq, write_sq);
 	spin_unlock(&nvmeq->sq_lock);
 }

From patchwork Wed Nov 17 03:38:06 2021
X-Patchwork-Submitter: Jens Axboe
X-Patchwork-Id: 12623607
From: Jens Axboe
To: linux-block@vger.kernel.org
Cc: hch@infradead.org, Jens Axboe
Subject: [PATCH 3/4] nvme: separate command prep and issue
Date: Tue, 16 Nov 2021 20:38:06 -0700
Message-Id: <20211117033807.185715-4-axboe@kernel.dk>
In-Reply-To: <20211117033807.185715-1-axboe@kernel.dk>
References: <20211117033807.185715-1-axboe@kernel.dk>
X-Mailing-List: linux-block@vger.kernel.org

Add an nvme_prep_rq() helper to set up a command, and adapt
nvme_queue_rq() to use it.

Signed-off-by: Jens Axboe
---
 drivers/nvme/host/pci.c | 54 +++++++++++++++++++++++++----------------
 1 file changed, 33 insertions(+), 21 deletions(-)

diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c
index c33cd1177b37..d2b654fc3603 100644
--- a/drivers/nvme/host/pci.c
+++ b/drivers/nvme/host/pci.c
@@ -937,18 +937,10 @@ static blk_status_t nvme_map_metadata(struct nvme_dev *dev, struct request *req,
 	return BLK_STS_OK;
 }
 
-/*
- * NOTE: ns is NULL when called on the admin queue.
- */
-static blk_status_t nvme_queue_rq(struct blk_mq_hw_ctx *hctx,
-			 const struct blk_mq_queue_data *bd)
+static blk_status_t nvme_prep_rq(struct nvme_dev *dev, struct nvme_ns *ns,
+				 struct request *req, struct nvme_command *cmnd)
 {
-	struct nvme_ns *ns = hctx->queue->queuedata;
-	struct nvme_queue *nvmeq = hctx->driver_data;
-	struct nvme_dev *dev = nvmeq->dev;
-	struct request *req = bd->rq;
 	struct nvme_iod *iod = blk_mq_rq_to_pdu(req);
-	struct nvme_command *cmnd = &iod->cmd;
 	blk_status_t ret;
 
 	iod->aborted = false;
@@ -956,16 +948,6 @@ static blk_status_t nvme_queue_rq(struct blk_mq_hw_ctx *hctx,
 	iod->npages = -1;
 	iod->nents = 0;
 
-	/*
-	 * We should not need to do this, but we're still using this to
-	 * ensure we can drain requests on a dying queue.
-	 */
-	if (unlikely(!test_bit(NVMEQ_ENABLED, &nvmeq->flags)))
-		return BLK_STS_IOERR;
-
-	if (!nvme_check_ready(&dev->ctrl, req, true))
-		return nvme_fail_nonready_command(&dev->ctrl, req);
-
 	ret = nvme_setup_cmd(ns, req);
 	if (ret)
 		return ret;
@@ -983,7 +965,6 @@ static blk_status_t nvme_queue_rq(struct blk_mq_hw_ctx *hctx,
 	}
 
 	blk_mq_start_request(req);
-	nvme_submit_cmd(nvmeq, cmnd, bd->last);
 	return BLK_STS_OK;
 out_unmap_data:
 	nvme_unmap_data(dev, req);
@@ -992,6 +973,37 @@ static blk_status_t nvme_queue_rq(struct blk_mq_hw_ctx *hctx,
 	return ret;
 }
 
+/*
+ * NOTE: ns is NULL when called on the admin queue.
+ */
+static blk_status_t nvme_queue_rq(struct blk_mq_hw_ctx *hctx,
+				  const struct blk_mq_queue_data *bd)
+{
+	struct nvme_ns *ns = hctx->queue->queuedata;
+	struct nvme_queue *nvmeq = hctx->driver_data;
+	struct nvme_dev *dev = nvmeq->dev;
+	struct request *req = bd->rq;
+	struct nvme_iod *iod = blk_mq_rq_to_pdu(req);
+	blk_status_t ret;
+
+	/*
+	 * We should not need to do this, but we're still using this to
+	 * ensure we can drain requests on a dying queue.
+	 */
+	if (unlikely(!test_bit(NVMEQ_ENABLED, &nvmeq->flags)))
+		return BLK_STS_IOERR;
+
+	if (!nvme_check_ready(&dev->ctrl, req, true))
+		return nvme_fail_nonready_command(&dev->ctrl, req);
+
+	ret = nvme_prep_rq(dev, ns, req, &iod->cmd);
+	if (ret == BLK_STS_OK) {
+		nvme_submit_cmd(nvmeq, &iod->cmd, bd->last);
+		return BLK_STS_OK;
+	}
+	return ret;
+}
+
 static __always_inline void nvme_pci_unmap_rq(struct request *req)
 {
 	struct nvme_iod *iod = blk_mq_rq_to_pdu(req);

From patchwork Wed Nov 17 03:38:07 2021
X-Patchwork-Submitter: Jens Axboe
X-Patchwork-Id: 12623605
From: Jens Axboe
To: linux-block@vger.kernel.org
Cc: hch@infradead.org, Jens Axboe
Subject: [PATCH 4/4] nvme: add support for mq_ops->queue_rqs()
Date: Tue, 16 Nov 2021 20:38:07 -0700
Message-Id: <20211117033807.185715-5-axboe@kernel.dk>
In-Reply-To: <20211117033807.185715-1-axboe@kernel.dk>
References: <20211117033807.185715-1-axboe@kernel.dk>
X-Mailing-List: linux-block@vger.kernel.org

This enables the block layer to send us a full plug list of requests
that need submitting. The block layer guarantees that they all belong
to the same queue, but we do have to check the hardware queue mapping
for each request. If errors are encountered, leave them in the
passed-in list; the block layer will then handle them individually.

This is good for about a 4% improvement in peak performance, taking us
from 9.6M to 10M IOPS/core.

Signed-off-by: Jens Axboe
---
 drivers/nvme/host/pci.c | 67 +++++++++++++++++++++++++++++++++++++++++
 1 file changed, 67 insertions(+)

diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c
index d2b654fc3603..2eedd04b1f90 100644
--- a/drivers/nvme/host/pci.c
+++ b/drivers/nvme/host/pci.c
@@ -1004,6 +1004,72 @@ static blk_status_t nvme_queue_rq(struct blk_mq_hw_ctx *hctx,
 	return ret;
 }
 
+static void nvme_submit_cmds(struct nvme_queue *nvmeq, struct request **rqlist)
+{
+	spin_lock(&nvmeq->sq_lock);
+	while (!rq_list_empty(*rqlist)) {
+		struct request *req = rq_list_pop(rqlist);
+		struct nvme_iod *iod = blk_mq_rq_to_pdu(req);
+
+		nvme_copy_cmd(nvmeq, absolute_pointer(&iod->cmd));
+	}
+	nvme_write_sq_db(nvmeq, true);
+	spin_unlock(&nvmeq->sq_lock);
+}
+
+static void nvme_queue_rqs(struct request **rqlist)
+{
+	struct request *requeue_list = NULL, *req, *prev = NULL;
+	struct blk_mq_hw_ctx *hctx;
+	struct nvme_queue *nvmeq;
+	struct nvme_ns *ns;
+
+restart:
+	req = rq_list_peek(rqlist);
+	hctx = req->mq_hctx;
+	nvmeq = hctx->driver_data;
+	ns = hctx->queue->queuedata;
+
+	/*
+	 * We should not need to do this, but we're still using this to
+	 * ensure we can drain requests on a dying queue.
+	 */
+	if (unlikely(!test_bit(NVMEQ_ENABLED, &nvmeq->flags)))
+		return;
+
+	rq_list_for_each(rqlist, req) {
+		struct nvme_iod *iod = blk_mq_rq_to_pdu(req);
+		blk_status_t ret;
+
+		if (unlikely(!nvme_check_ready(&nvmeq->dev->ctrl, req, true)))
+			goto requeue;
+
+		if (req->mq_hctx != hctx) {
+			/* detach rest of list, and submit */
+			prev->rq_next = NULL;
+			nvme_submit_cmds(nvmeq, rqlist);
+			/* req now start of new list for this hw queue */
+			*rqlist = req;
+			goto restart;
+		}
+
+		hctx->tags->rqs[req->tag] = req;
+		ret = nvme_prep_rq(nvmeq->dev, ns, req, &iod->cmd);
+		if (ret == BLK_STS_OK) {
+			prev = req;
+			continue;
+		}
+requeue:
+		/* detach 'req' and add to remainder list */
+		if (prev)
+			prev->rq_next = req->rq_next;
+		rq_list_add(&requeue_list, req);
+	}
+
+	nvme_submit_cmds(nvmeq, rqlist);
+	*rqlist = requeue_list;
+}
+
 static __always_inline void nvme_pci_unmap_rq(struct request *req)
 {
 	struct nvme_iod *iod = blk_mq_rq_to_pdu(req);
@@ -1741,6 +1807,7 @@ static const struct blk_mq_ops nvme_mq_admin_ops = {
 
 static const struct blk_mq_ops nvme_mq_ops = {
 	.queue_rq	= nvme_queue_rq,
+	.queue_rqs	= nvme_queue_rqs,
 	.complete	= nvme_pci_complete_rq,
 	.commit_rqs	= nvme_commit_rqs,
 	.init_hctx	= nvme_init_hctx,