From patchwork Wed Nov 3 18:32:19 2021
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Jens Axboe
X-Patchwork-Id: 12601445
From: Jens Axboe
To: linux-block@vger.kernel.org
Cc: Jens Axboe
Subject: [PATCH 1/4] block: have plug stored requests hold references to the queue
Date: Wed, 3 Nov 2021 12:32:19 -0600
Message-Id: <20211103183222.180268-2-axboe@kernel.dk>
X-Mailer: git-send-email 2.33.1
In-Reply-To: <20211103183222.180268-1-axboe@kernel.dk>
References: <20211103183222.180268-1-axboe@kernel.dk>
List-ID: <linux-block.vger.kernel.org>

Requests that were stored in the cache deliberately didn't hold an enter
reference to the queue; instead, we grabbed one every time we pulled a
request out of there. That made for awkward logic on freeing the
remainder of the cached list, if needed, where we had to artificially
raise the queue usage count before each free.

Grab references up front for cached plug requests. That's safer, and
also more efficient.
Fixes: 47c122e35d7e ("block: pre-allocate requests if plug is started and is a batch")
Signed-off-by: Jens Axboe
---
 block/blk-core.c | 2 +-
 block/blk-mq.c   | 7 ++++---
 2 files changed, 5 insertions(+), 4 deletions(-)

diff --git a/block/blk-core.c b/block/blk-core.c
index fd389a16013c..c2d267b6f910 100644
--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -1643,7 +1643,7 @@ void blk_flush_plug(struct blk_plug *plug, bool from_schedule)
 		flush_plug_callbacks(plug, from_schedule);
 	if (!rq_list_empty(plug->mq_list))
 		blk_mq_flush_plug_list(plug, from_schedule);
-	if (unlikely(!from_schedule && plug->cached_rq))
+	if (unlikely(!rq_list_empty(plug->cached_rq)))
 		blk_mq_free_plug_rqs(plug);
 }
 
diff --git a/block/blk-mq.c b/block/blk-mq.c
index c68aa0a332e1..5498454c2164 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -410,7 +410,10 @@ __blk_mq_alloc_requests_batch(struct blk_mq_alloc_data *data,
 		tag_mask &= ~(1UL << i);
 		rq = blk_mq_rq_ctx_init(data, tags, tag, alloc_time_ns);
 		rq_list_add(data->cached_rq, rq);
+		nr++;
 	}
+	/* caller already holds a reference, add for remainder */
+	percpu_ref_get_many(&data->q->q_usage_counter, nr - 1);
 	data->nr_tags -= nr;
 
 	return rq_list_pop(data->cached_rq);
@@ -630,10 +633,8 @@ void blk_mq_free_plug_rqs(struct blk_plug *plug)
 {
 	struct request *rq;
 
-	while ((rq = rq_list_pop(&plug->cached_rq)) != NULL) {
-		percpu_ref_get(&rq->q->q_usage_counter);
+	while ((rq = rq_list_pop(&plug->cached_rq)) != NULL)
 		blk_mq_free_request(rq);
-	}
 }
 
 static void req_bio_endio(struct request *rq, struct bio *bio,

From patchwork Wed Nov 3 18:32:20 2021
X-Patchwork-Submitter: Jens Axboe
X-Patchwork-Id: 12601449
From: Jens Axboe
To: linux-block@vger.kernel.org
Cc: Jens Axboe
Subject: [PATCH 2/4] block: make blk_try_enter_queue() available for blk-mq
Date: Wed, 3 Nov 2021 12:32:20 -0600
Message-Id: <20211103183222.180268-3-axboe@kernel.dk>
In-Reply-To: <20211103183222.180268-1-axboe@kernel.dk>
References: <20211103183222.180268-1-axboe@kernel.dk>

Just a prep patch for shifting the queue enter logic.

Signed-off-by: Jens Axboe
---
 block/blk-core.c | 26 +-------------------------
 block/blk.h      | 25 +++++++++++++++++++++++++
 2 files changed, 26 insertions(+), 25 deletions(-)

diff --git a/block/blk-core.c b/block/blk-core.c
index c2d267b6f910..e00f5a2287cc 100644
--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -386,30 +386,6 @@ void blk_cleanup_queue(struct request_queue *q)
 }
 EXPORT_SYMBOL(blk_cleanup_queue);
 
-static bool blk_try_enter_queue(struct request_queue *q, bool pm)
-{
-	rcu_read_lock();
-	if (!percpu_ref_tryget_live_rcu(&q->q_usage_counter))
-		goto fail;
-
-	/*
-	 * The code that increments the pm_only counter must ensure that the
-	 * counter is globally visible before the queue is unfrozen.
-	 */
-	if (blk_queue_pm_only(q) &&
-	    (!pm || queue_rpm_status(q) == RPM_SUSPENDED))
-		goto fail_put;
-
-	rcu_read_unlock();
-	return true;
-
-fail_put:
-	blk_queue_exit(q);
-fail:
-	rcu_read_unlock();
-	return false;
-}
-
 /**
  * blk_queue_enter() - try to increase q->q_usage_counter
  * @q: request queue pointer
@@ -442,7 +418,7 @@ int blk_queue_enter(struct request_queue *q, blk_mq_req_flags_t flags)
 	return 0;
 }
 
-static inline int bio_queue_enter(struct bio *bio)
+int bio_queue_enter(struct bio *bio)
 {
 	struct request_queue *q = bdev_get_queue(bio->bi_bdev);
 
diff --git a/block/blk.h b/block/blk.h
index 7afffd548daf..f7371d3b1522 100644
--- a/block/blk.h
+++ b/block/blk.h
@@ -55,6 +55,31 @@ void blk_free_flush_queue(struct blk_flush_queue *q);
 void blk_freeze_queue(struct request_queue *q);
 void __blk_mq_unfreeze_queue(struct request_queue *q, bool force_atomic);
 void blk_queue_start_drain(struct request_queue *q);
+int bio_queue_enter(struct bio *bio);
+
+static inline bool blk_try_enter_queue(struct request_queue *q, bool pm)
+{
+	rcu_read_lock();
+	if (!percpu_ref_tryget_live_rcu(&q->q_usage_counter))
+		goto fail;
+
+	/*
+	 * The code that increments the pm_only counter must ensure that the
+	 * counter is globally visible before the queue is unfrozen.
+	 */
+	if (blk_queue_pm_only(q) &&
+	    (!pm || queue_rpm_status(q) == RPM_SUSPENDED))
+		goto fail_put;
+
+	rcu_read_unlock();
+	return true;
+
+fail_put:
+	blk_queue_exit(q);
+fail:
+	rcu_read_unlock();
+	return false;
+}
 
 #define BIO_INLINE_VECS 4
 struct bio_vec *bvec_alloc(mempool_t *pool, unsigned short *nr_vecs,

From patchwork Wed Nov 3 18:32:21 2021
X-Patchwork-Submitter: Jens Axboe
X-Patchwork-Id: 12601451
From: Jens Axboe
To: linux-block@vger.kernel.org
Cc: Jens Axboe
Subject: [PATCH 3/4] block: move queue enter logic into blk_mq_submit_bio()
Date: Wed, 3 Nov 2021 12:32:21 -0600
Message-Id: <20211103183222.180268-4-axboe@kernel.dk>
In-Reply-To: <20211103183222.180268-1-axboe@kernel.dk>
References: <20211103183222.180268-1-axboe@kernel.dk>

Retain the old logic for the fops based submit, but for our internal
blk_mq_submit_bio(), move the queue entering logic into the core
function itself.

We need to be a bit careful if going into the scheduler, as a scheduler
or queue mappings can arbitrarily change before we have entered the
queue. Have the bio scheduler mapping do that separately; it's a very
cheap operation compared to actually doing merge locking and lookups.

Signed-off-by: Jens Axboe
---
 block/blk-core.c     | 14 ++++++--------
 block/blk-mq-sched.c | 13 ++++++++++---
 block/blk-mq.c       | 28 ++++++++++++++++++----------
 3 files changed, 34 insertions(+), 21 deletions(-)

diff --git a/block/blk-core.c b/block/blk-core.c
index e00f5a2287cc..2b12a427ffa6 100644
--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -868,18 +868,16 @@ static void __submit_bio(struct bio *bio)
 {
 	struct gendisk *disk = bio->bi_bdev->bd_disk;
 
-	if (unlikely(bio_queue_enter(bio) != 0))
-		return;
-
 	if (!submit_bio_checks(bio) || !blk_crypto_bio_prep(&bio))
-		goto queue_exit;
+		return;
 	if (!disk->fops->submit_bio) {
 		blk_mq_submit_bio(bio);
-		return;
+	} else {
+		if (unlikely(bio_queue_enter(bio) != 0))
+			return;
+		disk->fops->submit_bio(bio);
+		blk_queue_exit(disk->queue);
 	}
-	disk->fops->submit_bio(bio);
-queue_exit:
-	blk_queue_exit(disk->queue);
 }
 
 /*
diff --git a/block/blk-mq-sched.c b/block/blk-mq-sched.c
index 4a6789e4398b..4be652fa38e7 100644
--- a/block/blk-mq-sched.c
+++ b/block/blk-mq-sched.c
@@ -370,15 +370,20 @@ bool blk_mq_sched_bio_merge(struct request_queue *q, struct bio *bio,
 	bool ret = false;
 	enum hctx_type type;
 
-	if (e && e->type->ops.bio_merge)
-		return e->type->ops.bio_merge(q, bio, nr_segs);
+	if (bio_queue_enter(bio))
+		return false;
+
+	if (e && e->type->ops.bio_merge) {
+		ret = e->type->ops.bio_merge(q, bio, nr_segs);
+		goto out_put;
+	}
 
 	ctx = blk_mq_get_ctx(q);
 	hctx = blk_mq_map_queue(q, bio->bi_opf, ctx);
 	type = hctx->type;
 	if (!(hctx->flags & BLK_MQ_F_SHOULD_MERGE) ||
 	    list_empty_careful(&ctx->rq_lists[type]))
-		return false;
+		goto out_put;
 
 	/* default per sw-queue merge */
 	spin_lock(&ctx->lock);
@@ -391,6 +396,8 @@ bool blk_mq_sched_bio_merge(struct request_queue *q, struct bio *bio,
 		ret = true;
 
 	spin_unlock(&ctx->lock);
+out_put:
+	blk_queue_exit(q);
 	return ret;
 }
 
diff --git a/block/blk-mq.c b/block/blk-mq.c
index 5498454c2164..4bc98c7264fa 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -2478,6 +2478,13 @@ static inline unsigned short blk_plug_max_rq_count(struct blk_plug *plug)
 	return BLK_MAX_REQUEST_COUNT;
 }
 
+static inline bool blk_mq_queue_enter(struct request_queue *q, struct bio *bio)
+{
+	if (!blk_try_enter_queue(q, false) && bio_queue_enter(bio))
+		return false;
+	return true;
+}
+
 /**
  * blk_mq_submit_bio - Create and send a request to block device.
  * @bio: Bio pointer.
@@ -2506,21 +2513,20 @@ void blk_mq_submit_bio(struct bio *bio)
 	__blk_queue_split(q, &bio, &nr_segs);
 	if (!bio_integrity_prep(bio))
-		goto queue_exit;
+		return;
 
 	if (!blk_queue_nomerges(q) && bio_mergeable(bio)) {
 		if (blk_attempt_plug_merge(q, bio, nr_segs, &same_queue_rq))
-			goto queue_exit;
+			return;
 		if (blk_mq_sched_bio_merge(q, bio, nr_segs))
-			goto queue_exit;
+			return;
 	}
 
-	rq_qos_throttle(q, bio);
-
 	plug = blk_mq_plug(q, bio);
 	if (plug && plug->cached_rq) {
 		rq = rq_list_pop(&plug->cached_rq);
 		INIT_LIST_HEAD(&rq->queuelist);
+		rq_qos_throttle(q, bio);
 	} else {
 		struct blk_mq_alloc_data data = {
 			.q	= q,
@@ -2528,6 +2534,11 @@ void blk_mq_submit_bio(struct bio *bio)
 			.cmd_flags	= bio->bi_opf,
 		};
 
+		if (unlikely(!blk_mq_queue_enter(q, bio)))
+			return;
+
+		rq_qos_throttle(q, bio);
+
 		if (plug) {
 			data.nr_tags = plug->nr_ios;
 			plug->nr_ios = 1;
@@ -2538,7 +2549,8 @@ void blk_mq_submit_bio(struct bio *bio)
 			rq_qos_cleanup(q, bio);
 			if (bio->bi_opf & REQ_NOWAIT)
 				bio_wouldblock_error(bio);
-			goto queue_exit;
+			blk_queue_exit(q);
+			return;
 		}
 	}
 
@@ -2621,10 +2633,6 @@ void blk_mq_submit_bio(struct bio *bio)
 		/* Default case. */
 		blk_mq_sched_insert_request(rq, false, true, true);
 	}
-
-	return;
-queue_exit:
-	blk_queue_exit(q);
 }
 
 static size_t order_to_size(unsigned int order)

From patchwork Wed Nov 3 18:32:22 2021
X-Patchwork-Submitter: Jens Axboe
X-Patchwork-Id: 12601447
From: Jens Axboe
To: linux-block@vger.kernel.org
Cc: Jens Axboe
Subject: [PATCH 4/4] block: move plug rq alloc into helper and ensure queue match
Date: Wed, 3 Nov 2021 12:32:22 -0600
Message-Id: <20211103183222.180268-5-axboe@kernel.dk>
In-Reply-To: <20211103183222.180268-1-axboe@kernel.dk>
References: <20211103183222.180268-1-axboe@kernel.dk>

We need to improve the logic here a bit, most importantly ensuring that
the request matches the current queue. If it doesn't, we cannot use it
and must fall back to normal request alloc.
Fixes: 47c122e35d7e ("block: pre-allocate requests if plug is started and is a batch")
Signed-off-by: Jens Axboe
---
 block/blk-mq.c | 25 ++++++++++++++++++++-----
 1 file changed, 20 insertions(+), 5 deletions(-)

diff --git a/block/blk-mq.c b/block/blk-mq.c
index 4bc98c7264fa..e92c36f2326a 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -2485,6 +2485,24 @@ static inline bool blk_mq_queue_enter(struct request_queue *q, struct bio *bio)
 	return true;
 }
 
+static inline struct request *blk_get_plug_request(struct request_queue *q,
+						   struct blk_plug *plug,
+						   struct bio *bio)
+{
+	struct request *rq;
+
+	if (plug && !rq_list_empty(plug->cached_rq)) {
+		rq = rq_list_peek(&plug->cached_rq);
+		if (rq->q == q) {
+			rq_qos_throttle(q, bio);
+			plug->cached_rq = rq_list_next(rq);
+			INIT_LIST_HEAD(&rq->queuelist);
+			return rq;
+		}
+	}
+	return NULL;
+}
+
 /**
  * blk_mq_submit_bio - Create and send a request to block device.
  * @bio: Bio pointer.
@@ -2523,11 +2541,8 @@ void blk_mq_submit_bio(struct bio *bio)
 	}
 
 	plug = blk_mq_plug(q, bio);
-	if (plug && plug->cached_rq) {
-		rq = rq_list_pop(&plug->cached_rq);
-		INIT_LIST_HEAD(&rq->queuelist);
-		rq_qos_throttle(q, bio);
-	} else {
+	rq = blk_get_plug_request(q, plug, bio);
+	if (!rq) {
		struct blk_mq_alloc_data data = {
			.q		= q,
			.nr_tags	= 1,