From patchwork Fri Dec 3 21:45:41 2021
From: Jens Axboe
To: linux-block@vger.kernel.org, linux-nvme@lists.infradead.org
Cc: Jens Axboe
Subject: [PATCH 1/4] block: add mq_ops->queue_rqs hook
Date: Fri, 3 Dec 2021 14:45:41 -0700
Message-Id: <20211203214544.343460-2-axboe@kernel.dk>
In-Reply-To: <20211203214544.343460-1-axboe@kernel.dk>
References: <20211203214544.343460-1-axboe@kernel.dk>

If we have a list of requests in our plug list, send it to the driver in
one go, if possible.
The driver must set mq_ops->queue_rqs() to support this; if not, the
usual one-by-one path is used.

Signed-off-by: Jens Axboe
---
 block/blk-mq.c         | 24 +++++++++++++++++++++---
 include/linux/blk-mq.h |  8 ++++++++
 2 files changed, 29 insertions(+), 3 deletions(-)

diff --git a/block/blk-mq.c b/block/blk-mq.c
index 22ec21aa0c22..9ac9174a2ba4 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -2513,6 +2513,7 @@ void blk_mq_flush_plug_list(struct blk_plug *plug, bool from_schedule)
 {
 	struct blk_mq_hw_ctx *this_hctx;
 	struct blk_mq_ctx *this_ctx;
+	struct request *rq;
 	unsigned int depth;
 	LIST_HEAD(list);
 
@@ -2521,7 +2522,26 @@ void blk_mq_flush_plug_list(struct blk_plug *plug, bool from_schedule)
 	plug->rq_count = 0;
 
 	if (!plug->multiple_queues && !plug->has_elevator && !from_schedule) {
-		blk_mq_run_dispatch_ops(plug->mq_list->q,
+		struct request_queue *q;
+
+		rq = plug->mq_list;
+		q = rq->q;
+
+		/*
+		 * Peek first request and see if we have a ->queue_rqs() hook.
+		 * If we do, we can dispatch the whole plug list in one go. We
+		 * already know at this point that all requests belong to the
+		 * same queue, caller must ensure that's the case.
+		 */
+		if (q->mq_ops->queue_rqs &&
+		    !(rq->mq_hctx->flags & BLK_MQ_F_TAG_QUEUE_SHARED)) {
+			blk_mq_run_dispatch_ops(q,
+				q->mq_ops->queue_rqs(&plug->mq_list));
+			if (rq_list_empty(plug->mq_list))
+				return;
+		}
+
+		blk_mq_run_dispatch_ops(q,
 				blk_mq_plug_issue_direct(plug, false));
 		if (rq_list_empty(plug->mq_list))
 			return;
@@ -2531,8 +2551,6 @@ void blk_mq_flush_plug_list(struct blk_plug *plug, bool from_schedule)
 	this_ctx = NULL;
 	depth = 0;
 	do {
-		struct request *rq;
-
 		rq = rq_list_pop(&plug->mq_list);
 
 		if (!this_hctx) {
diff --git a/include/linux/blk-mq.h b/include/linux/blk-mq.h
index ecdc049b52fa..cdd183757ea0 100644
--- a/include/linux/blk-mq.h
+++ b/include/linux/blk-mq.h
@@ -494,6 +494,14 @@ struct blk_mq_ops {
 	 */
 	void (*commit_rqs)(struct blk_mq_hw_ctx *);
 
+	/**
+	 * @queue_rqs: Queue a list of new requests. Driver is guaranteed
+	 * that each request belongs to the same queue. If the driver doesn't
+	 * empty the @rqlist completely, then the rest will be queued
+	 * individually by the block layer upon return.
+	 */
+	void (*queue_rqs)(struct request **rqlist);
+
 	/**
 	 * @get_budget: Reserve budget before queue request, once .queue_rq is
 	 * run, it is driver's responsibility to release the
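To make the ->queue_rqs() contract concrete, here is a minimal sketch of a
driver-side implementation. The my_drv_queue_rqs() and my_drv_issue() names
are invented for illustration; the rq_list helpers and BLK_STS_OK are the
ones this series itself uses. Anything left on @rqlist at return is
re-queued one-by-one by the block layer:

static void my_drv_queue_rqs(struct request **rqlist)
{
	struct request *requeue_list = NULL;

	while (!rq_list_empty(*rqlist)) {
		struct request *req = rq_list_pop(rqlist);

		/* on failure, park the request for the one-by-one fallback */
		if (my_drv_issue(req) != BLK_STS_OK)
			rq_list_add(&requeue_list, req);
	}

	/* hand any failed requests back to the block layer */
	*rqlist = requeue_list;
}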
From patchwork Fri Dec 3 21:45:42 2021
From: Jens Axboe
To: linux-block@vger.kernel.org, linux-nvme@lists.infradead.org
Cc: Jens Axboe, Chaitanya Kulkarni
Subject: [PATCH 2/4] nvme: split command copy into a helper
Date: Fri, 3 Dec 2021 14:45:42 -0700
Message-Id: <20211203214544.343460-3-axboe@kernel.dk>
In-Reply-To: <20211203214544.343460-1-axboe@kernel.dk>
References: <20211203214544.343460-1-axboe@kernel.dk>
We'll need it for batched submit as well.

Reviewed-by: Chaitanya Kulkarni
Signed-off-by: Jens Axboe
Reviewed-by: Hannes Reinecke
---
 drivers/nvme/host/pci.c | 14 ++++++++++----
 1 file changed, 10 insertions(+), 4 deletions(-)

diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c
index 8637538f3fd5..09ea21f75439 100644
--- a/drivers/nvme/host/pci.c
+++ b/drivers/nvme/host/pci.c
@@ -500,6 +500,15 @@ static inline void nvme_write_sq_db(struct nvme_queue *nvmeq, bool write_sq)
 	nvmeq->last_sq_tail = nvmeq->sq_tail;
 }
 
+static inline void nvme_sq_copy_cmd(struct nvme_queue *nvmeq,
+				    struct nvme_command *cmd)
+{
+	memcpy(nvmeq->sq_cmds + (nvmeq->sq_tail << nvmeq->sqes), cmd,
+	       sizeof(*cmd));
+	if (++nvmeq->sq_tail == nvmeq->q_depth)
+		nvmeq->sq_tail = 0;
+}
+
 /**
  * nvme_submit_cmd() - Copy a command into a queue and ring the doorbell
  * @nvmeq: The queue to use
@@ -510,10 +519,7 @@ static void nvme_submit_cmd(struct nvme_queue *nvmeq, struct nvme_command *cmd,
 			    bool write_sq)
 {
 	spin_lock(&nvmeq->sq_lock);
-	memcpy(nvmeq->sq_cmds + (nvmeq->sq_tail << nvmeq->sqes),
-	       cmd, sizeof(*cmd));
-	if (++nvmeq->sq_tail == nvmeq->q_depth)
-		nvmeq->sq_tail = 0;
+	nvme_sq_copy_cmd(nvmeq, cmd);
 	nvme_write_sq_db(nvmeq, write_sq);
 	spin_unlock(&nvmeq->sq_lock);
 }
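The point of splitting the copy out of nvme_submit_cmd() is that a batched
caller can take sq_lock once, copy any number of commands, and ring the
doorbell a single time. A sketch of that pattern (example_submit_batch()
and the flat cmds array are invented for illustration; patch 4 adds the
real, request-list based version as nvme_submit_cmds()):

static void example_submit_batch(struct nvme_queue *nvmeq,
				 struct nvme_command *cmds, int nr)
{
	int i;

	spin_lock(&nvmeq->sq_lock);
	for (i = 0; i < nr; i++)
		nvme_sq_copy_cmd(nvmeq, &cmds[i]);	/* copy only */
	nvme_write_sq_db(nvmeq, true);			/* one doorbell write */
	spin_unlock(&nvmeq->sq_lock);
}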
From patchwork Fri Dec 3 21:45:43 2021
From: Jens Axboe
To: linux-block@vger.kernel.org, linux-nvme@lists.infradead.org
Cc: Jens Axboe
Subject: [PATCH 3/4] nvme: separate command prep and issue
Date: Fri, 3 Dec 2021 14:45:43 -0700
Message-Id: <20211203214544.343460-4-axboe@kernel.dk>
In-Reply-To: <20211203214544.343460-1-axboe@kernel.dk>
References: <20211203214544.343460-1-axboe@kernel.dk>

Add an nvme_prep_rq() helper to set up a command, and adapt
nvme_queue_rq() to use it.

Signed-off-by: Jens Axboe
Reviewed-by: Hannes Reinecke
---
 drivers/nvme/host/pci.c | 57 ++++++++++++++++++++++++-----------------
 1 file changed, 33 insertions(+), 24 deletions(-)

diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c
index 09ea21f75439..6be6b1ab4285 100644
--- a/drivers/nvme/host/pci.c
+++ b/drivers/nvme/host/pci.c
@@ -918,52 +918,32 @@ static blk_status_t nvme_map_metadata(struct nvme_dev *dev, struct request *req,
 	return BLK_STS_OK;
 }
 
-/*
- * NOTE: ns is NULL when called on the admin queue.
- */
-static blk_status_t nvme_queue_rq(struct blk_mq_hw_ctx *hctx,
-			 const struct blk_mq_queue_data *bd)
+static blk_status_t nvme_prep_rq(struct nvme_dev *dev, struct request *req)
 {
-	struct nvme_ns *ns = hctx->queue->queuedata;
-	struct nvme_queue *nvmeq = hctx->driver_data;
-	struct nvme_dev *dev = nvmeq->dev;
-	struct request *req = bd->rq;
 	struct nvme_iod *iod = blk_mq_rq_to_pdu(req);
-	struct nvme_command *cmnd = &iod->cmd;
 	blk_status_t ret;
 
 	iod->aborted = 0;
 	iod->npages = -1;
 	iod->nents = 0;
 
-	/*
-	 * We should not need to do this, but we're still using this to
-	 * ensure we can drain requests on a dying queue.
-	 */
-	if (unlikely(!test_bit(NVMEQ_ENABLED, &nvmeq->flags)))
-		return BLK_STS_IOERR;
-
-	if (!nvme_check_ready(&dev->ctrl, req, true))
-		return nvme_fail_nonready_command(&dev->ctrl, req);
-
-	ret = nvme_setup_cmd(ns, req);
+	ret = nvme_setup_cmd(req->q->queuedata, req);
 	if (ret)
 		return ret;
 
 	if (blk_rq_nr_phys_segments(req)) {
-		ret = nvme_map_data(dev, req, cmnd);
+		ret = nvme_map_data(dev, req, &iod->cmd);
 		if (ret)
 			goto out_free_cmd;
 	}
 
 	if (blk_integrity_rq(req)) {
-		ret = nvme_map_metadata(dev, req, cmnd);
+		ret = nvme_map_metadata(dev, req, &iod->cmd);
 		if (ret)
 			goto out_unmap_data;
 	}
 
 	blk_mq_start_request(req);
-	nvme_submit_cmd(nvmeq, cmnd, bd->last);
 	return BLK_STS_OK;
 out_unmap_data:
 	nvme_unmap_data(dev, req);
@@ -972,6 +952,35 @@ static blk_status_t nvme_queue_rq(struct blk_mq_hw_ctx *hctx,
 	return ret;
 }
 
+/*
+ * NOTE: ns is NULL when called on the admin queue.
+ */
+static blk_status_t nvme_queue_rq(struct blk_mq_hw_ctx *hctx,
+				  const struct blk_mq_queue_data *bd)
+{
+	struct nvme_queue *nvmeq = hctx->driver_data;
+	struct nvme_dev *dev = nvmeq->dev;
+	struct request *req = bd->rq;
+	struct nvme_iod *iod = blk_mq_rq_to_pdu(req);
+	blk_status_t ret;
+
+	/*
+	 * We should not need to do this, but we're still using this to
+	 * ensure we can drain requests on a dying queue.
+	 */
+	if (unlikely(!test_bit(NVMEQ_ENABLED, &nvmeq->flags)))
+		return BLK_STS_IOERR;
+
+	if (unlikely(!nvme_check_ready(&dev->ctrl, req, true)))
+		return nvme_fail_nonready_command(&dev->ctrl, req);
+
+	ret = nvme_prep_rq(dev, req);
+	if (unlikely(ret))
+		return ret;
+	nvme_submit_cmd(nvmeq, &iod->cmd, bd->last);
+	return BLK_STS_OK;
+}
+
 static __always_inline void nvme_pci_unmap_rq(struct request *req)
 {
 	struct nvme_iod *iod = blk_mq_rq_to_pdu(req);
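The value of this split is that prep and issue become separately callable
steps. As a sketch (example_prep_then_issue() is an invented name, not part
of the patch), a caller can now run the two halves itself, which is exactly
the shape the batched path in the next patch exploits:

static blk_status_t example_prep_then_issue(struct nvme_queue *nvmeq,
					    struct request *req, bool last)
{
	struct nvme_iod *iod = blk_mq_rq_to_pdu(req);
	blk_status_t ret;

	ret = nvme_prep_rq(nvmeq->dev, req);	/* setup cmd, map data */
	if (ret != BLK_STS_OK)
		return ret;
	nvme_submit_cmd(nvmeq, &iod->cmd, last);	/* copy + doorbell */
	return BLK_STS_OK;
}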
From patchwork Fri Dec 3 21:45:44 2021
From: Jens Axboe
To: linux-block@vger.kernel.org, linux-nvme@lists.infradead.org
Cc: Jens Axboe
Subject: [PATCH 4/4] nvme: add support for mq_ops->queue_rqs()
Date: Fri, 3 Dec 2021 14:45:44 -0700
Message-Id: <20211203214544.343460-5-axboe@kernel.dk>
In-Reply-To: <20211203214544.343460-1-axboe@kernel.dk>
References: <20211203214544.343460-1-axboe@kernel.dk>

This enables the block layer to send us a full plug list of requests
that need submitting. The block layer guarantees that they all belong
to the same queue, but we do have to check the hardware queue mapping
for each request.

If errors are encountered, leave them in the passed-in list; the block
layer will then handle them individually.

This is good for about a 4% improvement in peak performance, taking us
from 9.6M to 10M IOPS/core.

Signed-off-by: Jens Axboe
Reviewed-by: Hannes Reinecke
---
 drivers/nvme/host/pci.c | 61 +++++++++++++++++++++++++++++++++++++++++
 1 file changed, 61 insertions(+)

diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c
index 6be6b1ab4285..197aa45ef7ef 100644
--- a/drivers/nvme/host/pci.c
+++ b/drivers/nvme/host/pci.c
@@ -981,6 +981,66 @@ static blk_status_t nvme_queue_rq(struct blk_mq_hw_ctx *hctx,
 	return BLK_STS_OK;
 }
 
+static void nvme_submit_cmds(struct nvme_queue *nvmeq, struct request **rqlist)
+{
+	spin_lock(&nvmeq->sq_lock);
+	while (!rq_list_empty(*rqlist)) {
+		struct request *req = rq_list_pop(rqlist);
+		struct nvme_iod *iod = blk_mq_rq_to_pdu(req);
+
+		memcpy(nvmeq->sq_cmds + (nvmeq->sq_tail << nvmeq->sqes),
+		       absolute_pointer(&iod->cmd), sizeof(iod->cmd));
+		if (++nvmeq->sq_tail == nvmeq->q_depth)
+			nvmeq->sq_tail = 0;
+	}
+	nvme_write_sq_db(nvmeq, true);
+	spin_unlock(&nvmeq->sq_lock);
+}
+
+static bool nvme_prep_rq_batch(struct nvme_queue *nvmeq, struct request *req)
+{
+	/*
+	 * We should not need to do this, but we're still using this to
+	 * ensure we can drain requests on a dying queue.
+	 */
+	if (unlikely(!test_bit(NVMEQ_ENABLED, &nvmeq->flags)))
+		return false;
+	if (unlikely(!nvme_check_ready(&nvmeq->dev->ctrl, req, true)))
+		return false;
+
+	req->mq_hctx->tags->rqs[req->tag] = req;
+	return nvme_prep_rq(nvmeq->dev, req) == BLK_STS_OK;
+}
+
+static void nvme_queue_rqs(struct request **rqlist)
+{
+	struct request *req = rq_list_peek(rqlist), *prev = NULL;
+	struct request *requeue_list = NULL;
+
+	do {
+		struct nvme_queue *nvmeq = req->mq_hctx->driver_data;
+
+		if (!nvme_prep_rq_batch(nvmeq, req)) {
+			/* detach 'req' and add to remainder list */
+			if (prev)
+				prev->rq_next = req->rq_next;
+			rq_list_add(&requeue_list, req);
+		} else {
+			prev = req;
+		}
+
+		req = rq_list_next(req);
+		if (!req || (prev && req->mq_hctx != prev->mq_hctx)) {
+			/* detach rest of list, and submit */
+			prev->rq_next = NULL;
+			nvme_submit_cmds(nvmeq, rqlist);
+			*rqlist = req;
+		}
+	} while (req);
+
+	*rqlist = requeue_list;
+}
+
 static __always_inline void nvme_pci_unmap_rq(struct request *req)
 {
 	struct nvme_iod *iod = blk_mq_rq_to_pdu(req);
@@ -1678,6 +1738,7 @@ static const struct blk_mq_ops nvme_mq_admin_ops = {
 
 static const struct blk_mq_ops nvme_mq_ops = {
 	.queue_rq	= nvme_queue_rq,
+	.queue_rqs	= nvme_queue_rqs,
 	.complete	= nvme_pci_complete_rq,
 	.commit_rqs	= nvme_commit_rqs,
 	.init_hctx	= nvme_init_hctx,
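For reference, the whole series leans on the rq_list helpers: a singly
linked list threaded through the requests themselves via rq_next, with
LIFO push semantics. Below is a standalone userspace mock (the types and
main() are invented; the helper semantics are inferred from how the
patches use them). It also illustrates the detail that rq_list_add()
rewrites rq_next, which is why a walker that moves a request to another
list must read the next pointer before pushing:

#include <assert.h>
#include <stddef.h>

struct request { struct request *rq_next; int tag; };

#define rq_list_empty(list)	((list) == NULL)

static struct request *rq_list_pop(struct request **list)
{
	struct request *rq = *list;

	if (rq)
		*list = rq->rq_next;
	return rq;
}

static void rq_list_add(struct request **list, struct request *rq)
{
	rq->rq_next = *list;	/* note: clobbers rq->rq_next */
	*list = rq;
}

int main(void)
{
	struct request a = { .tag = 0 }, b = { .tag = 1 }, c = { .tag = 2 };
	struct request *list = NULL, *other = NULL;
	struct request *req;

	/* LIFO push: list becomes c -> b -> a */
	rq_list_add(&list, &a);
	rq_list_add(&list, &b);
	rq_list_add(&list, &c);

	/* move the head to another list: pop first, then push */
	req = rq_list_pop(&list);	/* pops c, list is now b -> a */
	rq_list_add(&other, req);	/* c->rq_next now points into 'other' */

	assert(rq_list_pop(&list) == &b);
	assert(rq_list_pop(&list) == &a);
	assert(rq_list_empty(list));
	assert(rq_list_pop(&other) == &c);
	return 0;
}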