From patchwork Fri Mar 11 17:30:40 2022
X-Patchwork-Submitter: Jens Axboe
X-Patchwork-Id: 12778451
From: Jens Axboe
To: linux-block@vger.kernel.org
Cc: song@kernel.org, linux-raid@vger.kernel.org, llowrey@nuclearwinter.com,
    i400sjon@gmail.com, rogerheflin@gmail.com, Jens Axboe
Subject: [PATCH 1/2] block: ensure plug merging checks the correct queue at least once
Date: Fri, 11 Mar 2022 10:30:40 -0700
Message-Id: <20220311173041.165948-2-axboe@kernel.dk>
In-Reply-To: <20220311173041.165948-1-axboe@kernel.dk>
References: <20220311173041.165948-1-axboe@kernel.dk>

Song reports that a RAID rebuild workload runs much slower recently, and
it is seeing a lot less merging than it did previously. The reason is
that a previous commit reduced the amount of work we do for plug merging.
RAID rebuild interleaves requests between disks, so a last-entry check in
plug merging always misses a merge opportunity, since we always find a
different disk than the one we are looking for. Modify the logic so that
it is still a one-hit cache, but ensure that we check enough entries to
find the right target queue before giving up.

Fixes: d38a9c04c0d5 ("block: only check previous entry for plug merge attempt")
Reported-by: Song Liu
Tested-by: Song Liu
Signed-off-by: Jens Axboe
---
 block/blk-merge.c | 20 ++++++++++++++------
 1 file changed, 14 insertions(+), 6 deletions(-)

diff --git a/block/blk-merge.c b/block/blk-merge.c
index f5255991b773..8d8177f71ebd 100644
--- a/block/blk-merge.c
+++ b/block/blk-merge.c
@@ -1087,12 +1087,20 @@ bool blk_attempt_plug_merge(struct request_queue *q, struct bio *bio,
 	if (!plug || rq_list_empty(plug->mq_list))
 		return false;
 
-	/* check the previously added entry for a quick merge attempt */
-	rq = rq_list_peek(&plug->mq_list);
-	if (rq->q == q) {
-		if (blk_attempt_bio_merge(q, rq, bio, nr_segs, false) ==
-		    BIO_MERGE_OK)
-			return true;
+	rq_list_for_each(&plug->mq_list, rq) {
+		if (rq->q == q) {
+			if (blk_attempt_bio_merge(q, rq, bio, nr_segs, false) ==
+			    BIO_MERGE_OK)
+				return true;
+			break;
+		}
+
+		/*
+		 * Only keep iterating plug list for merges if we have multiple
+		 * queues
+		 */
+		if (!plug->multiple_queues)
+			break;
 	}
 	return false;
 }

From patchwork Fri Mar 11 17:30:41 2022
X-Patchwork-Submitter: Jens Axboe
X-Patchwork-Id: 12778452
From: Jens Axboe
To: linux-block@vger.kernel.org
Cc: song@kernel.org, linux-raid@vger.kernel.org, llowrey@nuclearwinter.com,
    i400sjon@gmail.com, rogerheflin@gmail.com, Jens Axboe
Subject: [PATCH 2/2] block: flush plug based on hardware and software queue order
Date: Fri, 11 Mar 2022 10:30:41 -0700
Message-Id: <20220311173041.165948-3-axboe@kernel.dk>
In-Reply-To: <20220311173041.165948-1-axboe@kernel.dk>
References: <20220311173041.165948-1-axboe@kernel.dk>

We used to sort the plug list if we had multiple queues before
dispatching requests to the IO scheduler. This usually isn't needed, but
for certain workloads that interleave requests to disks, it's less
efficient to process the plug list one-by-one if everything is
interleaved. Don't sort the list, but skip through it and flush out
entries that have the same target at the same time.
Fixes: df87eb0fce8f ("block: get rid of plug list sorting")
Reported-by: Song Liu
Tested-by: Song Liu
Signed-off-by: Jens Axboe
---
 block/blk-mq.c | 59 ++++++++++++++++++++++++--------------------------
 1 file changed, 28 insertions(+), 31 deletions(-)

diff --git a/block/blk-mq.c b/block/blk-mq.c
index 862d91c6112e..213bb5979bed 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -2573,13 +2573,36 @@ static void __blk_mq_flush_plug_list(struct request_queue *q,
 	q->mq_ops->queue_rqs(&plug->mq_list);
 }
 
+static void blk_mq_dispatch_plug_list(struct blk_plug *plug, bool from_sched)
+{
+	struct blk_mq_hw_ctx *this_hctx = NULL;
+	struct blk_mq_ctx *this_ctx = NULL;
+	struct request *requeue_list = NULL;
+	unsigned int depth = 0;
+	LIST_HEAD(list);
+
+	do {
+		struct request *rq = rq_list_pop(&plug->mq_list);
+
+		if (!this_hctx) {
+			this_hctx = rq->mq_hctx;
+			this_ctx = rq->mq_ctx;
+		} else if (this_hctx != rq->mq_hctx || this_ctx != rq->mq_ctx) {
+			rq_list_add(&requeue_list, rq);
+			continue;
+		}
+		list_add_tail(&rq->queuelist, &list);
+		depth++;
+	} while (!rq_list_empty(plug->mq_list));
+
+	plug->mq_list = requeue_list;
+	trace_block_unplug(this_hctx->queue, depth, !from_sched);
+	blk_mq_sched_insert_requests(this_hctx, this_ctx, &list, from_sched);
+}
+
 void blk_mq_flush_plug_list(struct blk_plug *plug, bool from_schedule)
 {
-	struct blk_mq_hw_ctx *this_hctx;
-	struct blk_mq_ctx *this_ctx;
 	struct request *rq;
-	unsigned int depth;
-	LIST_HEAD(list);
 
 	if (rq_list_empty(plug->mq_list))
 		return;
@@ -2615,35 +2638,9 @@ void blk_mq_flush_plug_list(struct blk_plug *plug, bool from_schedule)
 		return;
 	}
 
-	this_hctx = NULL;
-	this_ctx = NULL;
-	depth = 0;
 	do {
-		rq = rq_list_pop(&plug->mq_list);
-
-		if (!this_hctx) {
-			this_hctx = rq->mq_hctx;
-			this_ctx = rq->mq_ctx;
-		} else if (this_hctx != rq->mq_hctx || this_ctx != rq->mq_ctx) {
-			trace_block_unplug(this_hctx->queue, depth,
-					   !from_schedule);
-			blk_mq_sched_insert_requests(this_hctx, this_ctx,
-						     &list, from_schedule);
-			depth = 0;
-			this_hctx = rq->mq_hctx;
-			this_ctx = rq->mq_ctx;
-
-		}
-
-		list_add(&rq->queuelist, &list);
-		depth++;
+		blk_mq_dispatch_plug_list(plug, from_schedule);
 	} while (!rq_list_empty(plug->mq_list));
-
-	if (!list_empty(&list)) {
-		trace_block_unplug(this_hctx->queue, depth, !from_schedule);
-		blk_mq_sched_insert_requests(this_hctx, this_ctx, &list,
-					     from_schedule);
-	}
 }
 
 void blk_mq_try_issue_list_directly(struct blk_mq_hw_ctx *hctx,