From patchwork Sun Oct 17 01:37:35 2021
From: Jens Axboe
To: linux-block@vger.kernel.org
Cc: Jens Axboe
Subject: [PATCH 01/14] block: inline fast path of driver tag allocation
Date: Sat, 16 Oct 2021 19:37:35 -0600
Message-Id: <20211017013748.76461-2-axboe@kernel.dk>

If we don't use an IO scheduler or have shared tags, then we don't need
to call into this external function at all. This saves ~2% for such
a setup.

Signed-off-by: Jens Axboe
---
 block/blk-mq.c |  8 +++-----
 block/blk-mq.h | 15 ++++++++++++++-
 2 files changed, 17 insertions(+), 6 deletions(-)

diff --git a/block/blk-mq.c b/block/blk-mq.c
index 1bbe5de66c40..90bc93fe373e 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -1145,7 +1145,7 @@ static inline unsigned int queued_to_index(unsigned int queued)
 	return min(BLK_MQ_MAX_DISPATCH_ORDER - 1, ilog2(queued) + 1);
 }
 
-static bool __blk_mq_get_driver_tag(struct request *rq)
+static bool __blk_mq_alloc_driver_tag(struct request *rq)
 {
 	struct sbitmap_queue *bt = &rq->mq_hctx->tags->bitmap_tags;
 	unsigned int tag_offset = rq->mq_hctx->tags->nr_reserved_tags;
@@ -1169,11 +1169,9 @@ static bool __blk_mq_get_driver_tag(struct request *rq)
 	return true;
 }
 
-bool blk_mq_get_driver_tag(struct request *rq)
+bool __blk_mq_get_driver_tag(struct blk_mq_hw_ctx *hctx, struct request *rq)
 {
-	struct blk_mq_hw_ctx *hctx = rq->mq_hctx;
-
-	if (rq->tag == BLK_MQ_NO_TAG && !__blk_mq_get_driver_tag(rq))
+	if (rq->tag == BLK_MQ_NO_TAG && !__blk_mq_alloc_driver_tag(rq))
 		return false;
 
 	if ((hctx->flags & BLK_MQ_F_TAG_QUEUE_SHARED) &&
diff --git a/block/blk-mq.h b/block/blk-mq.h
index 8be447995106..ceed0a001c76 100644
--- a/block/blk-mq.h
+++ b/block/blk-mq.h
@@ -264,7 +264,20 @@ static inline void blk_mq_put_driver_tag(struct request *rq)
 	__blk_mq_put_driver_tag(rq->mq_hctx, rq);
 }
 
-bool blk_mq_get_driver_tag(struct request *rq);
+bool __blk_mq_get_driver_tag(struct blk_mq_hw_ctx *hctx, struct request *rq);
+
+static inline bool blk_mq_get_driver_tag(struct request *rq)
+{
+	struct blk_mq_hw_ctx *hctx = rq->mq_hctx;
+
+	if (rq->tag != BLK_MQ_NO_TAG &&
+	    !(hctx->flags & BLK_MQ_F_TAG_QUEUE_SHARED)) {
+		hctx->tags->rqs[rq->tag] = rq;
+		return true;
+	}
+
+	return __blk_mq_get_driver_tag(hctx, rq);
+}
 
 static inline void blk_mq_clear_mq_map(struct blk_mq_queue_map *qmap)
 {
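The shape of this optimization recurs throughout the series: the caller-visible
function becomes a static inline in the header that decides the common case from
data the caller already holds, and only a miss falls through to the out-of-line
helper. A minimal user-space sketch of that split (illustrative only; the types
and names below are invented, not kernel code):

#include <stdbool.h>
#include <stdio.h>

#define NO_TAG (-1)

struct fake_rq {
	int tag;		/* driver tag, NO_TAG if unassigned */
	bool queue_shared;	/* models BLK_MQ_F_TAG_QUEUE_SHARED */
};

/* out-of-line slow path: tag allocation and shared-queue accounting */
static bool get_tag_slowpath(struct fake_rq *rq)
{
	rq->tag = 42;		/* pretend the sbitmap allocation succeeded */
	return true;
}

/* inline fast path: tag already assigned, no shared-tag bookkeeping */
static inline bool get_tag(struct fake_rq *rq)
{
	if (rq->tag != NO_TAG && !rq->queue_shared)
		return true;
	return get_tag_slowpath(rq);
}

int main(void)
{
	struct fake_rq rq = { .tag = 7, .queue_shared = false };

	printf("fast path: %d\n", get_tag(&rq));	/* no function call */
	rq.tag = NO_TAG;
	printf("slow path: %d\n", get_tag(&rq));	/* falls through */
	return 0;
}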
From patchwork Sun Oct 17 01:37:36 2021
From: Jens Axboe
To: linux-block@vger.kernel.org
Cc: Jens Axboe
Subject: [PATCH 02/14] block: don't bother iter advancing a fully done bio
Date: Sat, 16 Oct 2021 19:37:36 -0600
Message-Id: <20211017013748.76461-3-axboe@kernel.dk>

If we're completing nbytes and nbytes is the size of the bio, don't
bother with calling into the iterator increment helpers. Just clear
the bio size and we're done.

Signed-off-by: Jens Axboe
Reviewed-by: Christoph Hellwig
---
 block/bio.c         | 15 ++-------------
 include/linux/bio.h | 24 ++++++++++++++++++++++--
 2 files changed, 24 insertions(+), 15 deletions(-)

diff --git a/block/bio.c b/block/bio.c
index a3c9ff23a036..2427e6fca942 100644
--- a/block/bio.c
+++ b/block/bio.c
@@ -1278,18 +1278,7 @@ int submit_bio_wait(struct bio *bio)
 }
 EXPORT_SYMBOL(submit_bio_wait);
 
-/**
- * bio_advance - increment/complete a bio by some number of bytes
- * @bio: bio to advance
- * @bytes: number of bytes to complete
- *
- * This updates bi_sector, bi_size and bi_idx; if the number of bytes to
- * complete doesn't align with a bvec boundary, then bv_len and bv_offset will
- * be updated on the last bvec as well.
- *
- * @bio will then represent the remaining, uncompleted portion of the io.
- */
-void bio_advance(struct bio *bio, unsigned bytes)
+void __bio_advance(struct bio *bio, unsigned bytes)
 {
 	if (bio_integrity(bio))
 		bio_integrity_advance(bio, bytes);
@@ -1297,7 +1286,7 @@ void bio_advance(struct bio *bio, unsigned bytes)
 		bio_crypt_advance(bio, bytes);
 	bio_advance_iter(bio, &bio->bi_iter, bytes);
 }
-EXPORT_SYMBOL(bio_advance);
+EXPORT_SYMBOL(__bio_advance);
 
 void bio_copy_data_iter(struct bio *dst, struct bvec_iter *dst_iter,
 			struct bio *src, struct bvec_iter *src_iter)
diff --git a/include/linux/bio.h b/include/linux/bio.h
index 62d684b7dd4c..9538f20ffaa5 100644
--- a/include/linux/bio.h
+++ b/include/linux/bio.h
@@ -119,6 +119,28 @@ static inline void bio_advance_iter_single(const struct bio *bio,
 	bvec_iter_advance_single(bio->bi_io_vec, iter, bytes);
 }
 
+void __bio_advance(struct bio *, unsigned bytes);
+
+/**
+ * bio_advance - increment/complete a bio by some number of bytes
+ * @bio: bio to advance
+ * @bytes: number of bytes to complete
+ *
+ * This updates bi_sector, bi_size and bi_idx; if the number of bytes to
+ * complete doesn't align with a bvec boundary, then bv_len and bv_offset will
+ * be updated on the last bvec as well.
+ *
+ * @bio will then represent the remaining, uncompleted portion of the io.
+ */
+static inline void bio_advance(struct bio *bio, unsigned int nbytes)
+{
+	if (nbytes == bio->bi_iter.bi_size) {
+		bio->bi_iter.bi_size = 0;
+		return;
+	}
+	__bio_advance(bio, nbytes);
+}
+
 #define __bio_for_each_segment(bvl, bio, iter, start)		\
 	for (iter = (start);						\
 	     (iter).bi_size &&						\
@@ -381,8 +403,6 @@ static inline int bio_iov_vecs_to_alloc(struct iov_iter *iter, int max_segs)
 struct request_queue;
 extern int submit_bio_wait(struct bio *bio);
 
-extern void bio_advance(struct bio *, unsigned);
-
 extern void bio_init(struct bio *bio, struct bio_vec *table,
 		     unsigned short max_vecs);
 extern void bio_uninit(struct bio *);
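The win here is that a completion covering the whole remaining size needs none
of the per-bvec iterator work; once the residual byte count is zero, the other
iterator fields no longer matter. A toy user-space model of the same
short-circuit (invented types, not kernel code):

#include <assert.h>
#include <stdio.h>

struct toy_iter {
	unsigned int size;	/* bytes left, like bi_iter.bi_size */
	unsigned int idx;	/* current vector, like bi_iter.bi_idx */
	unsigned int offset;	/* offset into the current vector */
};

/* stand-in for __bio_advance(): walks vectors for partial completions */
static void advance_slow(struct toy_iter *it, unsigned int bytes)
{
	it->size -= bytes;
	it->idx += 1;		/* pretend we crossed one vector boundary */
	it->offset = 0;
}

static void advance(struct toy_iter *it, unsigned int bytes)
{
	if (bytes == it->size) {	/* fully done: skip the iterator walk */
		it->size = 0;
		return;
	}
	advance_slow(it, bytes);
}

int main(void)
{
	struct toy_iter it = { .size = 4096, .idx = 0, .offset = 0 };

	advance(&it, 4096);		/* common case: one cheap store */
	assert(it.size == 0 && it.idx == 0);
	printf("size=%u idx=%u\n", it.size, it.idx);
	return 0;
}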
From patchwork Sun Oct 17 01:37:37 2021
From: Jens Axboe
To: linux-block@vger.kernel.org
Cc: Jens Axboe
Subject: [PATCH 03/14] block: remove useless caller argument to print_req_error()
Date: Sat, 16 Oct 2021 19:37:37 -0600
Message-Id: <20211017013748.76461-4-axboe@kernel.dk>

We have exactly one caller of this; just get rid of adding the useless
function name to the output.

Signed-off-by: Jens Axboe
Reviewed-by: Christoph Hellwig
---
 block/blk-core.c | 9 ++++-----
 1 file changed, 4 insertions(+), 5 deletions(-)

diff --git a/block/blk-core.c b/block/blk-core.c
index d5b0258dd218..2596327f07d6 100644
--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -217,8 +217,7 @@ int blk_status_to_errno(blk_status_t status)
 }
 EXPORT_SYMBOL_GPL(blk_status_to_errno);
 
-static void print_req_error(struct request *req, blk_status_t status,
-		const char *caller)
+static void print_req_error(struct request *req, blk_status_t status)
 {
 	int idx = (__force int)status;
 
@@ -226,9 +225,9 @@ static void print_req_error(struct request *req, blk_status_t status,
 		return;
 
 	printk_ratelimited(KERN_ERR
-		"%s: %s error, dev %s, sector %llu op 0x%x:(%s) flags 0x%x "
+		"%s error, dev %s, sector %llu op 0x%x:(%s) flags 0x%x "
 		"phys_seg %u prio class %u\n",
-		caller, blk_errors[idx].name,
+		blk_errors[idx].name,
 		req->rq_disk ? req->rq_disk->disk_name : "?",
 		blk_rq_pos(req), req_op(req), blk_op_str(req_op(req)),
 		req->cmd_flags & ~REQ_OP_MASK,
@@ -1464,7 +1463,7 @@ bool blk_update_request(struct request *req, blk_status_t error,
 
 	if (unlikely(error && !blk_rq_is_passthrough(req) &&
		     !(req->rq_flags & RQF_QUIET)))
-		print_req_error(req, error, __func__);
+		print_req_error(req, error);
 
 	blk_account_io_completion(req, nr_bytes);
From patchwork Sun Oct 17 01:37:38 2021
From: Jens Axboe
To: linux-block@vger.kernel.org
Cc: Jens Axboe
Subject: [PATCH 04/14] block: move update request helpers into blk-mq.c
Date: Sat, 16 Oct 2021 19:37:38 -0600
Message-Id: <20211017013748.76461-5-axboe@kernel.dk>

For some reason we still have them in blk-core, with the rest of the
request completion being in blk-mq. That causes an out-of-line call for
each completion. Move them into blk-mq.c instead, where they belong.

Signed-off-by: Jens Axboe
Reviewed-by: Christoph Hellwig
---
 block/blk-core.c | 146 +----------------------------------------------
 block/blk-mq.c   | 144 ++++++++++++++++++++++++++++++++++++++++++++++
 block/blk.h      |   1 +
 3 files changed, 146 insertions(+), 145 deletions(-)
diff --git a/block/blk-core.c b/block/blk-core.c
index 2596327f07d6..bdc03b80a8d0 100644
--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -217,7 +217,7 @@ int blk_status_to_errno(blk_status_t status)
 }
 EXPORT_SYMBOL_GPL(blk_status_to_errno);
 
-static void print_req_error(struct request *req, blk_status_t status)
+void blk_print_req_error(struct request *req, blk_status_t status)
 {
 	int idx = (__force int)status;
 
@@ -235,33 +235,6 @@ static void print_req_error(struct request *req, blk_status_t status)
 			IOPRIO_PRIO_CLASS(req->ioprio));
 }
 
-static void req_bio_endio(struct request *rq, struct bio *bio,
-			  unsigned int nbytes, blk_status_t error)
-{
-	if (error)
-		bio->bi_status = error;
-
-	if (unlikely(rq->rq_flags & RQF_QUIET))
-		bio_set_flag(bio, BIO_QUIET);
-
-	bio_advance(bio, nbytes);
-
-	if (req_op(rq) == REQ_OP_ZONE_APPEND && error == BLK_STS_OK) {
-		/*
-		 * Partial zone append completions cannot be supported as the
-		 * BIO fragments may end up not being written sequentially.
-		 */
-		if (bio->bi_iter.bi_size)
-			bio->bi_status = BLK_STS_IOERR;
-		else
-			bio->bi_iter.bi_sector = rq->__sector;
-	}
-
-	/* don't actually finish bio if it's part of flush sequence */
-	if (bio->bi_iter.bi_size == 0 && !(rq->rq_flags & RQF_FLUSH_SEQ))
-		bio_endio(bio);
-}
-
 void blk_dump_rq_flags(struct request *rq, char *msg)
 {
 	printk(KERN_INFO "%s: dev %s: flags=%llx\n", msg,
@@ -1304,17 +1277,6 @@ static void update_io_ticks(struct block_device *part, unsigned long now,
 	}
 }
 
-static void blk_account_io_completion(struct request *req, unsigned int bytes)
-{
-	if (req->part && blk_do_io_stat(req)) {
-		const int sgrp = op_stat_group(req_op(req));
-
-		part_stat_lock();
-		part_stat_add(req->part, sectors[sgrp], bytes >> 9);
-		part_stat_unlock();
-	}
-}
-
 void __blk_account_io_done(struct request *req, u64 now)
 {
 	const int sgrp = op_stat_group(req_op(req));
@@ -1423,112 +1385,6 @@ void blk_steal_bios(struct bio_list *list, struct request *rq)
 }
 EXPORT_SYMBOL_GPL(blk_steal_bios);
 
-/**
- * blk_update_request - Complete multiple bytes without completing the request
- * @req:      the request being processed
- * @error:    block status code
- * @nr_bytes: number of bytes to complete for @req
- *
- * Description:
- *     Ends I/O on a number of bytes attached to @req, but doesn't complete
- *     the request structure even if @req doesn't have leftover.
- *     If @req has leftover, sets it up for the next range of segments.
- *
- *     Passing the result of blk_rq_bytes() as @nr_bytes guarantees
- *     %false return from this function.
- *
- * Note:
- *	The RQF_SPECIAL_PAYLOAD flag is ignored on purpose in this function
- *	except in the consistency check at the end of this function.
- *
- * Return:
- *     %false - this request doesn't have any more data
- *     %true  - this request has more data
- **/
-bool blk_update_request(struct request *req, blk_status_t error,
-		unsigned int nr_bytes)
-{
-	int total_bytes;
-
-	trace_block_rq_complete(req, blk_status_to_errno(error), nr_bytes);
-
-	if (!req->bio)
-		return false;
-
-#ifdef CONFIG_BLK_DEV_INTEGRITY
-	if (blk_integrity_rq(req) && req_op(req) == REQ_OP_READ &&
-	    error == BLK_STS_OK)
-		req->q->integrity.profile->complete_fn(req, nr_bytes);
-#endif
-
-	if (unlikely(error && !blk_rq_is_passthrough(req) &&
-		     !(req->rq_flags & RQF_QUIET)))
-		print_req_error(req, error);
-
-	blk_account_io_completion(req, nr_bytes);
-
-	total_bytes = 0;
-	while (req->bio) {
-		struct bio *bio = req->bio;
-		unsigned bio_bytes = min(bio->bi_iter.bi_size, nr_bytes);
-
-		if (bio_bytes == bio->bi_iter.bi_size)
-			req->bio = bio->bi_next;
-
-		/* Completion has already been traced */
-		bio_clear_flag(bio, BIO_TRACE_COMPLETION);
-		req_bio_endio(req, bio, bio_bytes, error);
-
-		total_bytes += bio_bytes;
-		nr_bytes -= bio_bytes;
-
-		if (!nr_bytes)
-			break;
-	}
-
-	/*
-	 * completely done
-	 */
-	if (!req->bio) {
-		/*
-		 * Reset counters so that the request stacking driver
-		 * can find how many bytes remain in the request
-		 * later.
-		 */
-		req->__data_len = 0;
-		return false;
-	}
-
-	req->__data_len -= total_bytes;
-
-	/* update sector only for requests with clear definition of sector */
-	if (!blk_rq_is_passthrough(req))
-		req->__sector += total_bytes >> 9;
-
-	/* mixed attributes always follow the first bio */
-	if (req->rq_flags & RQF_MIXED_MERGE) {
-		req->cmd_flags &= ~REQ_FAILFAST_MASK;
-		req->cmd_flags |= req->bio->bi_opf & REQ_FAILFAST_MASK;
-	}
-
-	if (!(req->rq_flags & RQF_SPECIAL_PAYLOAD)) {
-		/*
-		 * If total number of sectors is less than the first segment
-		 * size, something has gone terribly wrong.
-		 */
-		if (blk_rq_bytes(req) < blk_rq_cur_bytes(req)) {
-			blk_dump_rq_flags(req, "request botched");
-			req->__data_len = blk_rq_cur_bytes(req);
-		}
-
-		/* recalculate the number of segments */
-		req->nr_phys_segments = blk_recalc_rq_segments(req);
-	}
-
-	return true;
-}
-EXPORT_SYMBOL_GPL(blk_update_request);
-
 #if ARCH_IMPLEMENTS_FLUSH_DCACHE_PAGE
 /**
  * rq_flush_dcache_pages - Helper function to flush all pages in a request
diff --git a/block/blk-mq.c b/block/blk-mq.c
index 90bc93fe373e..ffccc5f0f66a 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -613,6 +613,150 @@ void blk_mq_free_plug_rqs(struct blk_plug *plug)
 	}
 }
 
+static void req_bio_endio(struct request *rq, struct bio *bio,
+			  unsigned int nbytes, blk_status_t error)
+{
+	if (error)
+		bio->bi_status = error;
+
+	if (unlikely(rq->rq_flags & RQF_QUIET))
+		bio_set_flag(bio, BIO_QUIET);
+
+	bio_advance(bio, nbytes);
+
+	if (req_op(rq) == REQ_OP_ZONE_APPEND && error == BLK_STS_OK) {
+		/*
+		 * Partial zone append completions cannot be supported as the
+		 * BIO fragments may end up not being written sequentially.
+		 */
+		if (bio->bi_iter.bi_size)
+			bio->bi_status = BLK_STS_IOERR;
+		else
+			bio->bi_iter.bi_sector = rq->__sector;
+	}
+
+	/* don't actually finish bio if it's part of flush sequence */
+	if (bio->bi_iter.bi_size == 0 && !(rq->rq_flags & RQF_FLUSH_SEQ))
+		bio_endio(bio);
+}
+
+static void blk_account_io_completion(struct request *req, unsigned int bytes)
+{
+	if (req->part && blk_do_io_stat(req)) {
+		const int sgrp = op_stat_group(req_op(req));
+
+		part_stat_lock();
+		part_stat_add(req->part, sectors[sgrp], bytes >> 9);
+		part_stat_unlock();
+	}
+}
+
+/**
+ * blk_update_request - Complete multiple bytes without completing the request
+ * @req:      the request being processed
+ * @error:    block status code
+ * @nr_bytes: number of bytes to complete for @req
+ *
+ * Description:
+ *     Ends I/O on a number of bytes attached to @req, but doesn't complete
+ *     the request structure even if @req doesn't have leftover.
+ *     If @req has leftover, sets it up for the next range of segments.
+ *
+ *     Passing the result of blk_rq_bytes() as @nr_bytes guarantees
+ *     %false return from this function.
+ *
+ * Note:
+ *	The RQF_SPECIAL_PAYLOAD flag is ignored on purpose in this function
+ *	except in the consistency check at the end of this function.
+ *
+ * Return:
+ *     %false - this request doesn't have any more data
+ *     %true  - this request has more data
+ **/
+bool blk_update_request(struct request *req, blk_status_t error,
+		unsigned int nr_bytes)
+{
+	int total_bytes;
+
+	trace_block_rq_complete(req, blk_status_to_errno(error), nr_bytes);
+
+	if (!req->bio)
+		return false;
+
+#ifdef CONFIG_BLK_DEV_INTEGRITY
+	if (blk_integrity_rq(req) && req_op(req) == REQ_OP_READ &&
+	    error == BLK_STS_OK)
+		req->q->integrity.profile->complete_fn(req, nr_bytes);
+#endif
+
+	if (unlikely(error && !blk_rq_is_passthrough(req) &&
+		     !(req->rq_flags & RQF_QUIET)))
+		blk_print_req_error(req, error);
+
+	blk_account_io_completion(req, nr_bytes);
+
+	total_bytes = 0;
+	while (req->bio) {
+		struct bio *bio = req->bio;
+		unsigned bio_bytes = min(bio->bi_iter.bi_size, nr_bytes);
+
+		if (bio_bytes == bio->bi_iter.bi_size)
+			req->bio = bio->bi_next;
+
+		/* Completion has already been traced */
+		bio_clear_flag(bio, BIO_TRACE_COMPLETION);
+		req_bio_endio(req, bio, bio_bytes, error);
+
+		total_bytes += bio_bytes;
+		nr_bytes -= bio_bytes;
+
+		if (!nr_bytes)
+			break;
+	}
+
+	/*
+	 * completely done
+	 */
+	if (!req->bio) {
+		/*
+		 * Reset counters so that the request stacking driver
+		 * can find how many bytes remain in the request
+		 * later.
+		 */
+		req->__data_len = 0;
+		return false;
+	}
+
+	req->__data_len -= total_bytes;
+
+	/* update sector only for requests with clear definition of sector */
+	if (!blk_rq_is_passthrough(req))
+		req->__sector += total_bytes >> 9;
+
+	/* mixed attributes always follow the first bio */
+	if (req->rq_flags & RQF_MIXED_MERGE) {
+		req->cmd_flags &= ~REQ_FAILFAST_MASK;
+		req->cmd_flags |= req->bio->bi_opf & REQ_FAILFAST_MASK;
+	}
+
+	if (!(req->rq_flags & RQF_SPECIAL_PAYLOAD)) {
+		/*
+		 * If total number of sectors is less than the first segment
+		 * size, something has gone terribly wrong.
+		 */
+		if (blk_rq_bytes(req) < blk_rq_cur_bytes(req)) {
+			blk_dump_rq_flags(req, "request botched");
+			req->__data_len = blk_rq_cur_bytes(req);
+		}
+
+		/* recalculate the number of segments */
+		req->nr_phys_segments = blk_recalc_rq_segments(req);
+	}
+
+	return true;
+}
+EXPORT_SYMBOL_GPL(blk_update_request);
+
 inline void __blk_mq_end_request(struct request *rq, blk_status_t error)
 {
 	if (blk_mq_need_time_stamp(rq)) {
diff --git a/block/blk.h b/block/blk.h
index f6e61cebd6ae..fdfaa6896fc4 100644
--- a/block/blk.h
+++ b/block/blk.h
@@ -213,6 +213,7 @@ static inline void blk_integrity_del(struct gendisk *disk)
 
 unsigned long blk_rq_timeout(unsigned long timeout);
 void blk_add_timer(struct request *req);
+void blk_print_req_error(struct request *req, blk_status_t status);
 
 bool blk_attempt_plug_merge(struct request_queue *q, struct bio *bio,
 		unsigned int nr_segs, struct request **same_queue_rq);
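The out-of-line call the commit message refers to disappears because the
helpers become static in the same translation unit as their only callers,
where the compiler is free to fold them in. A toy single-file illustration of
that effect (invented names; build with -O2 and inspect the disassembly to see
the call vanish):

#include <stdio.h>

static unsigned int accounted_bytes;	/* stands in for the io stats */

/* same-TU static helper: with -O2, gcc/clang inline this into the caller */
static void account_completion(unsigned int bytes)
{
	accounted_bytes += bytes;
}

unsigned int complete_request(unsigned int bytes)
{
	account_completion(bytes);	/* no call instruction emitted */
	return accounted_bytes;
}

int main(void)
{
	printf("%u\n", complete_request(512));
	return 0;
}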
From patchwork Sun Oct 17 01:37:39 2021
From: Jens Axboe
To: linux-block@vger.kernel.org
Cc: Jens Axboe, Christoph Hellwig
Subject: [PATCH 05/14] block: don't call blk_status_to_errno() for success status
Date: Sat, 16 Oct 2021 19:37:39 -0600
Message-Id: <20211017013748.76461-6-axboe@kernel.dk>

We only need to call it to resolve the blk_status_t -> errno mapping if
the status is different from BLK_STS_OK. Check that it is before doing so.

Suggested-by: Christoph Hellwig
Signed-off-by: Jens Axboe
Signed-off-by: Christoph Hellwig
---
 block/blk-mq.c | 6 ++++--
 1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/block/blk-mq.c b/block/blk-mq.c
index ffccc5f0f66a..fa5b12200404 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -676,9 +676,11 @@ static void blk_account_io_completion(struct request *req, unsigned int bytes)
 bool blk_update_request(struct request *req, blk_status_t error,
 		unsigned int nr_bytes)
 {
-	int total_bytes;
+	int total_bytes, blk_errno = 0;
 
-	trace_block_rq_complete(req, blk_status_to_errno(error), nr_bytes);
+	if (unlikely(error != BLK_STS_OK))
+		blk_errno = blk_status_to_errno(error);
+	trace_block_rq_complete(req, blk_errno, nr_bytes);
 
 	if (!req->bio)
 		return false;
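This is the usual trick of keeping translation work off the common path and
telling the compiler which way the branch goes. A user-space sketch of the
same pattern (invented names and values, not the kernel's tables):

#include <stdio.h>

typedef unsigned char status_t;
#define STS_OK		0
#define STS_IOERR	10

/* stand-in for blk_status_to_errno(): a lookup worth avoiding */
static int status_to_errno(status_t s)
{
	return (s == STS_IOERR) ? -5 : 0;	/* -EIO */
}

static void trace_complete(status_t s)
{
	int err = 0;

	if (__builtin_expect(s != STS_OK, 0))	/* models unlikely() */
		err = status_to_errno(s);
	printf("completed, err=%d\n", err);
}

int main(void)
{
	trace_complete(STS_OK);		/* common case: no lookup at all */
	trace_complete(STS_IOERR);
	return 0;
}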
From patchwork Sun Oct 17 01:37:40 2021
From: Jens Axboe
To: linux-block@vger.kernel.org
Cc: Jens Axboe
Subject: [PATCH 06/14] block: store elevator state in request
Date: Sat, 16 Oct 2021 19:37:40 -0600
Message-Id: <20211017013748.76461-7-axboe@kernel.dk>

Add an rq private RQF_ELV flag, which tells the block layer that this
request was initialized on a queue that has an IO scheduler attached.
This allows for faster checking in the fast path, rather than having to
dereference rq->q later on.

Elevator switching does a full quiesce of the queue before detaching an
IO scheduler, so it's safe to cache this in the request itself.

Signed-off-by: Jens Axboe
---
 block/blk-mq-sched.h   | 27 ++++++++++++++++-----------
 block/blk-mq.c         | 20 +++++++++++---------
 include/linux/blk-mq.h |  2 ++
 3 files changed, 29 insertions(+), 20 deletions(-)
diff --git a/block/blk-mq-sched.h b/block/blk-mq-sched.h
index fe252278ed9a..98836106b25f 100644
--- a/block/blk-mq-sched.h
+++ b/block/blk-mq-sched.h
@@ -56,29 +56,34 @@ static inline bool blk_mq_sched_allow_merge(struct request_queue *q,
 					    struct request *rq,
 					    struct bio *bio)
 {
-	struct elevator_queue *e = q->elevator;
-
-	if (e && e->type->ops.allow_merge)
-		return e->type->ops.allow_merge(q, rq, bio);
+	if (rq->rq_flags & RQF_ELV) {
+		struct elevator_queue *e = q->elevator;
 
+		if (e->type->ops.allow_merge)
+			return e->type->ops.allow_merge(q, rq, bio);
+	}
 	return true;
 }
 
 static inline void blk_mq_sched_completed_request(struct request *rq, u64 now)
 {
-	struct elevator_queue *e = rq->q->elevator;
+	if (rq->rq_flags & RQF_ELV) {
+		struct elevator_queue *e = rq->q->elevator;
 
-	if (e && e->type->ops.completed_request)
-		e->type->ops.completed_request(rq, now);
+		if (e->type->ops.completed_request)
+			e->type->ops.completed_request(rq, now);
+	}
 }
 
 static inline void blk_mq_sched_requeue_request(struct request *rq)
 {
-	struct request_queue *q = rq->q;
-	struct elevator_queue *e = q->elevator;
+	if (rq->rq_flags & RQF_ELV) {
+		struct request_queue *q = rq->q;
+		struct elevator_queue *e = q->elevator;
 
-	if ((rq->rq_flags & RQF_ELVPRIV) && e && e->type->ops.requeue_request)
-		e->type->ops.requeue_request(rq);
+		if ((rq->rq_flags & RQF_ELVPRIV) && e->type->ops.requeue_request)
+			e->type->ops.requeue_request(rq);
+	}
 }
 
 static inline bool blk_mq_sched_has_work(struct blk_mq_hw_ctx *hctx)
diff --git a/block/blk-mq.c b/block/blk-mq.c
index fa5b12200404..5d22c228f6df 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -299,7 +299,7 @@ void blk_mq_wake_waiters(struct request_queue *q)
  */
 static inline bool blk_mq_need_time_stamp(struct request *rq)
 {
-	return (rq->rq_flags & (RQF_IO_STAT | RQF_STATS)) || rq->q->elevator;
+	return (rq->rq_flags & (RQF_IO_STAT | RQF_STATS | RQF_ELV));
 }
 
 static struct request *blk_mq_rq_ctx_init(struct blk_mq_alloc_data *data,
@@ -309,9 +309,11 @@ static struct request *blk_mq_rq_ctx_init(struct blk_mq_alloc_data *data,
 	struct request *rq = tags->static_rqs[tag];
 
 	if (data->q->elevator) {
+		rq->rq_flags = RQF_ELV;
 		rq->tag = BLK_MQ_NO_TAG;
 		rq->internal_tag = tag;
 	} else {
+		rq->rq_flags = 0;
 		rq->tag = tag;
 		rq->internal_tag = BLK_MQ_NO_TAG;
 	}
@@ -320,7 +322,6 @@ static struct request *blk_mq_rq_ctx_init(struct blk_mq_alloc_data *data,
 	rq->q = data->q;
 	rq->mq_ctx = data->ctx;
 	rq->mq_hctx = data->hctx;
-	rq->rq_flags = 0;
 	rq->cmd_flags = data->cmd_flags;
 	if (data->flags & BLK_MQ_REQ_PM)
 		rq->rq_flags |= RQF_PM;
@@ -356,11 +357,11 @@ static struct request *blk_mq_rq_ctx_init(struct blk_mq_alloc_data *data,
 	data->ctx->rq_dispatched[op_is_sync(data->cmd_flags)]++;
 	refcount_set(&rq->ref, 1);
 
-	if (!op_is_flush(data->cmd_flags)) {
+	if (!op_is_flush(data->cmd_flags) && (rq->rq_flags & RQF_ELV)) {
 		struct elevator_queue *e = data->q->elevator;
 
 		rq->elv.icq = NULL;
-		if (e && e->type->ops.prepare_request) {
+		if (e->type->ops.prepare_request) {
 			if (e->type->icq_cache)
 				blk_mq_sched_assign_ioc(rq);
 
@@ -575,12 +576,13 @@ static void __blk_mq_free_request(struct request *rq)
 void blk_mq_free_request(struct request *rq)
 {
 	struct request_queue *q = rq->q;
-	struct elevator_queue *e = q->elevator;
 	struct blk_mq_ctx *ctx = rq->mq_ctx;
 	struct blk_mq_hw_ctx *hctx = rq->mq_hctx;
 
-	if (rq->rq_flags & RQF_ELVPRIV) {
-		if (e && e->type->ops.finish_request)
+	if (rq->rq_flags & (RQF_ELVPRIV | RQF_ELV)) {
+		struct elevator_queue *e = q->elevator;
+
+		if (e->type->ops.finish_request)
 			e->type->ops.finish_request(rq);
 		if (rq->elv.icq) {
 			put_io_context(rq->elv.icq->ioc);
@@ -2239,7 +2241,7 @@ static blk_status_t __blk_mq_try_issue_directly(struct blk_mq_hw_ctx *hctx,
 		goto insert;
 	}
 
-	if (q->elevator && !bypass_insert)
+	if ((rq->rq_flags & RQF_ELV) && !bypass_insert)
 		goto insert;
 
 	budget_token = blk_mq_get_dispatch_budget(q);
@@ -2475,7 +2477,7 @@ void blk_mq_submit_bio(struct bio *bio)
 		}
 
 		blk_add_rq_to_plug(plug, rq);
-	} else if (q->elevator) {
+	} else if (rq->rq_flags & RQF_ELV) {
 		/* Insert the request at the IO scheduler queue */
 		blk_mq_sched_insert_request(rq, false, true, true);
 	} else if (plug && !blk_queue_nomerges(q)) {
diff --git a/include/linux/blk-mq.h b/include/linux/blk-mq.h
index a9c1d0882550..3a399aa372b5 100644
--- a/include/linux/blk-mq.h
+++ b/include/linux/blk-mq.h
@@ -55,6 +55,8 @@ typedef __u32 __bitwise req_flags_t;
 #define RQF_MQ_POLL_SLEPT	((__force req_flags_t)(1 << 20))
 /* ->timeout has been called, don't expire again */
 #define RQF_TIMED_OUT		((__force req_flags_t)(1 << 21))
+/* queue has elevator attached */
+#define RQF_ELV			((__force req_flags_t)(1 << 22))
 
 /* flags that prevent us from merging requests: */
 #define RQF_NOMERGE_FLAGS \
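The idea, modeled in user space below, is to snapshot a queue-level property
into the request's flag word at init time so hot paths test a word they
already hold instead of chasing rq->q->elevator. Invented types; as the
commit message notes, the real patch relies on the queue quiesce during
elevator switching to keep the cached bit valid:

#include <stdbool.h>
#include <stdio.h>

#define TOY_RQF_ELV	(1u << 0)

struct toy_queue {
	void *elevator;		/* non-NULL if a scheduler is attached */
};

struct toy_rq {
	struct toy_queue *q;
	unsigned int rq_flags;
};

static void rq_init(struct toy_rq *rq, struct toy_queue *q)
{
	rq->q = q;
	rq->rq_flags = q->elevator ? TOY_RQF_ELV : 0;	/* snapshot once */
}

static bool rq_has_elevator(const struct toy_rq *rq)
{
	return rq->rq_flags & TOY_RQF_ELV;	/* no pointer chase */
}

int main(void)
{
	struct toy_queue q = { .elevator = NULL };
	struct toy_rq rq;

	rq_init(&rq, &q);
	printf("elevator attached: %d\n", rq_has_elevator(&rq));
	return 0;
}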
From patchwork Sun Oct 17 01:37:41 2021
From: Jens Axboe
To: linux-block@vger.kernel.org
Cc: Jens Axboe
Subject: [PATCH 07/14] block: change plugging to use a singly linked list
Date: Sat, 16 Oct 2021 19:37:41 -0600
Message-Id: <20211017013748.76461-8-axboe@kernel.dk>

Use a singly linked list for the blk_plug. This saves 8 bytes in the
blk_plug struct, and makes for faster list manipulations than doubly
linked lists. As we don't use the doubly linked lists for anything,
singly linked is just fine.

This yields a bump in default (merging enabled) performance from 7.0M
to 7.1M IOPS, and ~7.5M IOPS with merging disabled.

Signed-off-by: Jens Axboe
---
 block/blk-core.c       |   5 +-
 block/blk-merge.c      |   8 +--
 block/blk-mq.c         | 156 +++++++++++++++++++++++++++--------------
 block/blk.h            |   2 +-
 include/linux/blkdev.h |   6 +-
 5 files changed, 113 insertions(+), 64 deletions(-)
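The rq_list_*() helpers used below come from blk-mq.h (added alongside the
cached-request support) and amount to a singly linked stack threaded through
rq->rq_next. Under that assumption, a self-contained model of their behavior
(toy struct; the kernel versions are macros over struct request):

#include <stddef.h>
#include <stdio.h>

struct toy_rq {
	int id;
	struct toy_rq *rq_next;
};

static int rq_list_empty(const struct toy_rq *list)
{
	return list == NULL;
}

static struct toy_rq *rq_list_peek(struct toy_rq **listptr)
{
	return *listptr;	/* head is the most recently added entry */
}

static void rq_list_add(struct toy_rq **listptr, struct toy_rq *rq)
{
	rq->rq_next = *listptr;	/* push at the head: O(1), no prev pointer */
	*listptr = rq;
}

static struct toy_rq *rq_list_pop(struct toy_rq **listptr)
{
	struct toy_rq *rq = *listptr;

	if (rq)
		*listptr = rq->rq_next;
	return rq;
}

int main(void)
{
	struct toy_rq a = { .id = 1 }, b = { .id = 2 };
	struct toy_rq *plug_list = NULL;

	rq_list_add(&plug_list, &a);
	rq_list_add(&plug_list, &b);	/* LIFO: head is now b */
	printf("head %d\n", rq_list_peek(&plug_list)->id);
	while (!rq_list_empty(plug_list))
		printf("pop %d\n", rq_list_pop(&plug_list)->id);
	return 0;
}

This LIFO ordering is why blk_attempt_plug_merge() below can check "the
previously added entry" with a single rq_list_peek().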
diff --git a/block/blk-core.c b/block/blk-core.c
index bdc03b80a8d0..fced71948162 100644
--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -1542,11 +1542,12 @@ void blk_start_plug_nr_ios(struct blk_plug *plug, unsigned short nr_ios)
 	if (tsk->plug)
 		return;
 
-	INIT_LIST_HEAD(&plug->mq_list);
+	plug->mq_list = NULL;
 	plug->cached_rq = NULL;
 	plug->nr_ios = min_t(unsigned short, nr_ios, BLK_MAX_REQUEST_COUNT);
 	plug->rq_count = 0;
 	plug->multiple_queues = false;
+	plug->has_elevator = false;
 	plug->nowait = false;
 	INIT_LIST_HEAD(&plug->cb_list);
 
@@ -1632,7 +1633,7 @@ void blk_flush_plug_list(struct blk_plug *plug, bool from_schedule)
 {
 	flush_plug_callbacks(plug, from_schedule);
 
-	if (!list_empty(&plug->mq_list))
+	if (!rq_list_empty(plug->mq_list))
 		blk_mq_flush_plug_list(plug, from_schedule);
 	if (unlikely(!from_schedule && plug->cached_rq))
 		blk_mq_free_plug_rqs(plug);
diff --git a/block/blk-merge.c b/block/blk-merge.c
index ec727234ac48..b3f3e689a5ac 100644
--- a/block/blk-merge.c
+++ b/block/blk-merge.c
@@ -1085,23 +1085,23 @@ static enum bio_merge_status blk_attempt_bio_merge(struct request_queue *q,
  * Caller must ensure !blk_queue_nomerges(q) beforehand.
  */
 bool blk_attempt_plug_merge(struct request_queue *q, struct bio *bio,
-		unsigned int nr_segs, struct request **same_queue_rq)
+		unsigned int nr_segs, bool *same_queue_rq)
 {
 	struct blk_plug *plug;
 	struct request *rq;
 
 	plug = blk_mq_plug(q, bio);
-	if (!plug || list_empty(&plug->mq_list))
+	if (!plug || rq_list_empty(plug->mq_list))
 		return false;
 
 	/* check the previously added entry for a quick merge attempt */
-	rq = list_last_entry(&plug->mq_list, struct request, queuelist);
+	rq = rq_list_peek(&plug->mq_list);
 	if (rq->q == q && same_queue_rq) {
 		/*
 		 * Only blk-mq multiple hardware queues case checks the rq in
 		 * the same queue, there should be only one such rq in a queue
 		 */
-		*same_queue_rq = rq;
+		*same_queue_rq = true;
 	}
 	if (blk_attempt_bio_merge(q, rq, bio, nr_segs, false) == BIO_MERGE_OK)
 		return true;
diff --git a/block/blk-mq.c b/block/blk-mq.c
index 5d22c228f6df..cd1249284c1f 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -19,7 +19,6 @@
 #include <linux/workqueue.h>
 #include <linux/smp.h>
 #include <linux/llist.h>
-#include <linux/list_sort.h>
 #include <linux/cpu.h>
 #include <linux/cache.h>
 #include <linux/sched/sysctl.h>
@@ -2118,54 +2117,100 @@ void blk_mq_insert_requests(struct blk_mq_hw_ctx *hctx, struct blk_mq_ctx *ctx,
 	spin_unlock(&ctx->lock);
 }
 
-static int plug_rq_cmp(void *priv, const struct list_head *a,
-		       const struct list_head *b)
+static void blk_mq_plug_issue_direct(struct blk_plug *plug, bool from_schedule)
 {
-	struct request *rqa = container_of(a, struct request, queuelist);
-	struct request *rqb = container_of(b, struct request, queuelist);
+	struct blk_mq_hw_ctx *hctx = NULL;
+	int queued = 0;
+	int errors = 0;
+
+	while (!rq_list_empty(plug->mq_list)) {
+		struct request *rq;
+		blk_status_t ret;
+
+		rq = rq_list_pop(&plug->mq_list);
 
-	if (rqa->mq_ctx != rqb->mq_ctx)
-		return rqa->mq_ctx > rqb->mq_ctx;
-	if (rqa->mq_hctx != rqb->mq_hctx)
-		return rqa->mq_hctx > rqb->mq_hctx;
+		if (!hctx)
+			hctx = rq->mq_hctx;
+		else if (hctx != rq->mq_hctx && hctx->queue->mq_ops->commit_rqs) {
+			trace_block_unplug(hctx->queue, queued, !from_schedule);
+			hctx->queue->mq_ops->commit_rqs(hctx);
+			queued = 0;
+			hctx = rq->mq_hctx;
+		}
+
+		ret = blk_mq_request_issue_directly(rq,
+						rq_list_empty(plug->mq_list));
+		if (ret != BLK_STS_OK) {
+			if (ret == BLK_STS_RESOURCE ||
+					ret == BLK_STS_DEV_RESOURCE) {
+				blk_mq_request_bypass_insert(rq, false,
+						rq_list_empty(plug->mq_list));
+				break;
+			}
+			blk_mq_end_request(rq, ret);
+			errors++;
+		} else
+			queued++;
+	}
 
-	return blk_rq_pos(rqa) > blk_rq_pos(rqb);
+	/*
+	 * If we didn't flush the entire list, we could have told
+	 * the driver there was more coming, but that turned out to
+	 * be a lie.
+	 */
+	if ((!rq_list_empty(plug->mq_list) || errors) &&
+	     hctx->queue->mq_ops->commit_rqs && queued)
+		hctx->queue->mq_ops->commit_rqs(hctx);
 }
 void blk_mq_flush_plug_list(struct blk_plug *plug, bool from_schedule)
 {
+	struct blk_mq_hw_ctx *this_hctx;
+	struct blk_mq_ctx *this_ctx;
+	unsigned int depth;
 	LIST_HEAD(list);
 
-	if (list_empty(&plug->mq_list))
+	if (rq_list_empty(plug->mq_list))
 		return;
-	list_splice_init(&plug->mq_list, &list);
-
-	if (plug->rq_count > 2 && plug->multiple_queues)
-		list_sort(NULL, &list, plug_rq_cmp);
-
 	plug->rq_count = 0;
 
+	if (!plug->multiple_queues && !plug->has_elevator) {
+		blk_mq_plug_issue_direct(plug, from_schedule);
+		if (rq_list_empty(plug->mq_list))
+			return;
+	}
+
+	this_hctx = NULL;
+	this_ctx = NULL;
+	depth = 0;
 	do {
-		struct list_head rq_list;
-		struct request *rq, *head_rq = list_entry_rq(list.next);
-		struct list_head *pos = &head_rq->queuelist; /* skip first */
-		struct blk_mq_hw_ctx *this_hctx = head_rq->mq_hctx;
-		struct blk_mq_ctx *this_ctx = head_rq->mq_ctx;
-		unsigned int depth = 1;
-
-		list_for_each_continue(pos, &list) {
-			rq = list_entry_rq(pos);
-			BUG_ON(!rq->q);
-			if (rq->mq_hctx != this_hctx || rq->mq_ctx != this_ctx)
-				break;
-			depth++;
+		struct request *rq;
+
+		rq = rq_list_pop(&plug->mq_list);
+
+		if (!this_hctx) {
+			this_hctx = rq->mq_hctx;
+			this_ctx = rq->mq_ctx;
+		} else if (this_hctx != rq->mq_hctx || this_ctx != rq->mq_ctx) {
+			trace_block_unplug(this_hctx->queue, depth,
+					   !from_schedule);
+			blk_mq_sched_insert_requests(this_hctx, this_ctx,
+						     &list, from_schedule);
+			depth = 0;
+			this_hctx = rq->mq_hctx;
+			this_ctx = rq->mq_ctx;
 		}
 
-		list_cut_before(&rq_list, &list, pos);
-		trace_block_unplug(head_rq->q, depth, !from_schedule);
-		blk_mq_sched_insert_requests(this_hctx, this_ctx, &rq_list,
+		list_add(&rq->queuelist, &list);
+		depth++;
+	} while (!rq_list_empty(plug->mq_list));
+
+	if (!list_empty(&list)) {
+		trace_block_unplug(this_hctx->queue, depth, !from_schedule);
+		blk_mq_sched_insert_requests(this_hctx, this_ctx, &list,
 					     from_schedule);
-	} while(!list_empty(&list));
+	}
 }
 
 static void blk_mq_bio_to_request(struct request *rq, struct bio *bio,
@@ -2345,16 +2390,17 @@ void blk_mq_try_issue_list_directly(struct blk_mq_hw_ctx *hctx,
 
 static void blk_add_rq_to_plug(struct blk_plug *plug, struct request *rq)
 {
-	list_add_tail(&rq->queuelist, &plug->mq_list);
-	plug->rq_count++;
-	if (!plug->multiple_queues && !list_is_singular(&plug->mq_list)) {
-		struct request *tmp;
+	if (!plug->multiple_queues) {
+		struct request *nxt = rq_list_peek(&plug->mq_list);
 
-		tmp = list_first_entry(&plug->mq_list, struct request,
-						queuelist);
-		if (tmp->q != rq->q)
+		if (nxt && nxt->q != rq->q)
 			plug->multiple_queues = true;
 	}
+	if (!plug->has_elevator && (rq->rq_flags & RQF_ELV))
+		plug->has_elevator = true;
+	rq->rq_next = NULL;
+	rq_list_add(&plug->mq_list, rq);
+	plug->rq_count++;
 }
 
 /*
@@ -2389,7 +2435,7 @@ void blk_mq_submit_bio(struct bio *bio)
 	const int is_flush_fua = op_is_flush(bio->bi_opf);
 	struct request *rq;
 	struct blk_plug *plug;
-	struct request *same_queue_rq = NULL;
+	bool same_queue_rq = false;
 	unsigned int nr_segs = 1;
 	blk_status_t ret;
@@ -2463,15 +2509,17 @@ void blk_mq_submit_bio(struct bio *bio)
 		 * IO may benefit a lot from plug merging.
 		 */
 		unsigned int request_count = plug->rq_count;
-		struct request *last = NULL;
+		struct request *tmp = NULL;
 
-		if (!request_count)
+		if (!request_count) {
 			trace_block_plug(q);
-		else
-			last = list_entry_rq(plug->mq_list.prev);
+		} else if (!blk_queue_nomerges(q)) {
+			tmp = rq_list_peek(&plug->mq_list);
+			if (blk_rq_bytes(tmp) < BLK_PLUG_FLUSH_SIZE)
+				tmp = NULL;
+		}
 
-		if (request_count >= blk_plug_max_rq_count(plug) || (last &&
-		    blk_rq_bytes(last) >= BLK_PLUG_FLUSH_SIZE)) {
+		if (request_count >= blk_plug_max_rq_count(plug) || tmp) {
 			blk_flush_plug_list(plug, false);
 			trace_block_plug(q);
 		}
@@ -2481,6 +2529,8 @@ void blk_mq_submit_bio(struct bio *bio)
 		/* Insert the request at the IO scheduler queue */
 		blk_mq_sched_insert_request(rq, false, true, true);
 	} else if (plug && !blk_queue_nomerges(q)) {
+		struct request *first_rq = NULL;
+
 		/*
 		 * We do limited plugging. If the bio can be merged, do that.
 		 * Otherwise the existing request in the plug list will be
@@ -2488,19 +2538,17 @@ void blk_mq_submit_bio(struct bio *bio)
 		 * The plug list might get flushed before this. If that happens,
 		 * the plug list is empty, and same_queue_rq is invalid.
 		 */
-		if (list_empty(&plug->mq_list))
-			same_queue_rq = NULL;
-		if (same_queue_rq) {
-			list_del_init(&same_queue_rq->queuelist);
+		if (!rq_list_empty(plug->mq_list) && same_queue_rq) {
+			first_rq = rq_list_pop(&plug->mq_list);
 			plug->rq_count--;
 		}
 		blk_add_rq_to_plug(plug, rq);
 		trace_block_plug(q);
 
-		if (same_queue_rq) {
+		if (first_rq) {
 			trace_block_unplug(q, 1, true);
-			blk_mq_try_issue_directly(same_queue_rq->mq_hctx,
-						  same_queue_rq);
+			blk_mq_try_issue_directly(first_rq->mq_hctx,
+						  first_rq);
 		}
 	} else if ((q->nr_hw_queues > 1 && is_sync) ||
 			!rq->mq_hctx->dispatch_busy) {
diff --git a/block/blk.h b/block/blk.h
index fdfaa6896fc4..c15d1ab224b8 100644
--- a/block/blk.h
+++ b/block/blk.h
@@ -216,7 +216,7 @@ void blk_add_timer(struct request *req);
 void blk_print_req_error(struct request *req, blk_status_t status);
 
 bool blk_attempt_plug_merge(struct request_queue *q, struct bio *bio,
-		unsigned int nr_segs, struct request **same_queue_rq);
+		unsigned int nr_segs, bool *same_queue_rq);
 bool blk_bio_list_merge(struct request_queue *q, struct list_head *list,
 			struct bio *bio, unsigned int nr_segs);
diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
index 43dda725dcae..c6a6402cb1a1 100644
--- a/include/linux/blkdev.h
+++ b/include/linux/blkdev.h
@@ -727,7 +727,7 @@ extern void blk_set_queue_dying(struct request_queue *);
  * schedule() where blk_schedule_flush_plug() is called.
  */
 struct blk_plug {
-	struct list_head mq_list; /* blk-mq requests */
+	struct request *mq_list; /* blk-mq requests */
 
 	/* if ios_left is > 1, we can batch tag/rq allocations */
 	struct request *cached_rq;
@@ -736,6 +736,7 @@ struct blk_plug {
 	unsigned short rq_count;
 
 	bool multiple_queues;
+	bool has_elevator;
 	bool nowait;
 
 	struct list_head cb_list; /* md requires an unplug callback */
@@ -776,8 +777,7 @@ static inline bool blk_needs_flush_plug(struct task_struct *tsk)
 	struct blk_plug *plug = tsk->plug;
 
 	return plug &&
-		(!list_empty(&plug->mq_list) ||
-		 !list_empty(&plug->cb_list));
+		(plug->mq_list || !list_empty(&plug->cb_list));
 }
 
 int blkdev_issue_flush(struct block_device *bdev);
  */
 struct blk_plug {
-	struct list_head mq_list; /* blk-mq requests */
+	struct request *mq_list; /* blk-mq requests */
 
 	/* if ios_left is > 1, we can batch tag/rq allocations */
 	struct request *cached_rq;
@@ -736,6 +736,7 @@ struct blk_plug {
 	unsigned short rq_count;
 
 	bool multiple_queues;
+	bool has_elevator;
 	bool nowait;
 
 	struct list_head cb_list; /* md requires an unplug callback */
@@ -776,8 +777,7 @@ static inline bool blk_needs_flush_plug(struct task_struct *tsk)
 	struct blk_plug *plug = tsk->plug;
 
 	return plug &&
-		(!list_empty(&plug->mq_list) ||
-		 !list_empty(&plug->cb_list));
+		(plug->mq_list || !list_empty(&plug->cb_list));
 }
 
 int blkdev_issue_flush(struct block_device *bdev);
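The win above hinges on plug->mq_list turning into a singly linked chain
threaded through rq->rq_next: adding and popping a request is one pointer
store each, and no list_head needs initializing per request. Below is a
userspace toy model of the rq_list_add()/rq_list_peek()/rq_list_pop()
helpers the diff leans on; the kernel's real versions are macros in the
block headers, and "struct req" here is only a stand-in for struct request:

    #include <assert.h>
    #include <stddef.h>

    struct req {
            int tag;
            struct req *rq_next;
    };

    /* Push onto the head: a plug list is LIFO, so no tail pointer needed. */
    static void rq_list_add(struct req **list, struct req *rq)
    {
            rq->rq_next = *list;
            *list = rq;
    }

    static struct req *rq_list_peek(struct req **list)
    {
            return *list;
    }

    static struct req *rq_list_pop(struct req **list)
    {
            struct req *rq = *list;

            if (rq)
                    *list = rq->rq_next;
            return rq;
    }

    int main(void)
    {
            struct req a = { .tag = 1 }, b = { .tag = 2 };
            struct req *list = NULL;

            rq_list_add(&list, &a);
            rq_list_add(&list, &b);
            assert(rq_list_peek(&list)->tag == 2);  /* most recently added */
            assert(rq_list_pop(&list) == &b);
            assert(rq_list_pop(&list) == &a);
            assert(list == NULL);                   /* list is now empty */
            return 0;
    }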
From patchwork Sun Oct 17 01:37:42 2021
X-Patchwork-Submitter: Jens Axboe
X-Patchwork-Id: 12564029
From: Jens Axboe
To: linux-block@vger.kernel.org
Cc: Jens Axboe
Subject: [PATCH 08/14] block: improve layout of struct request
Date: Sat, 16 Oct 2021 19:37:42 -0600
Message-Id: <20211017013748.76461-9-axboe@kernel.dk>
In-Reply-To: <20211017013748.76461-1-axboe@kernel.dk>
References: <20211017013748.76461-1-axboe@kernel.dk>

It's been a while since this was analyzed; move some members around to
flow better with the use case. Initial state up top, and queued state
after that. This improves my peak case by about 1.5%, from 7750K to
7900K IOPS.

Signed-off-by: Jens Axboe
Reviewed-by: Christoph Hellwig
---
 include/linux/blk-mq.h | 90 +++++++++++++++++++++---------------------
 1 file changed, 46 insertions(+), 44 deletions(-)

diff --git a/include/linux/blk-mq.h b/include/linux/blk-mq.h
index 3a399aa372b5..95c3bd3a008e 100644
--- a/include/linux/blk-mq.h
+++ b/include/linux/blk-mq.h
@@ -85,6 +85,8 @@ struct request {
 	int tag;
 	int internal_tag;
 
+	unsigned int timeout;
+
 	/* the following two fields are internal, NEVER access directly */
 	unsigned int __data_len;	/* total data len */
 	sector_t __sector;		/* sector cursor */
@@ -97,49 +99,6 @@ struct request {
 		struct request *rq_next;
 	};
 
-	/*
-	 * The hash is used inside the scheduler, and killed once the
-	 * request reaches the dispatch list. The ipi_list is only used
-	 * to queue the request for softirq completion, which is long
-	 * after the request has been unhashed (and even removed from
-	 * the dispatch list).
-	 */
-	union {
-		struct hlist_node hash;	/* merge hash */
-		struct llist_node ipi_list;
-	};
-
-	/*
-	 * The rb_node is only used inside the io scheduler, requests
-	 * are pruned when moved to the dispatch queue. So let the
-	 * completion_data share space with the rb_node.
-	 */
-	union {
-		struct rb_node rb_node;	/* sort/lookup */
-		struct bio_vec special_vec;
-		void *completion_data;
-		int error_count; /* for legacy drivers, don't use */
-	};
-
-	/*
-	 * Three pointers are available for the IO schedulers, if they need
-	 * more they have to dynamically allocate it. Flush requests are
-	 * never put on the IO scheduler. So let the flush fields share
-	 * space with the elevator data.
-	 */
-	union {
-		struct {
-			struct io_cq		*icq;
-			void			*priv[2];
-		} elv;
-
-		struct {
-			unsigned int		seq;
-			struct list_head	list;
-			rq_end_io_fn		*saved_end_io;
-		} flush;
-	};
-
 	struct gendisk *rq_disk;
 	struct block_device *part;
 #ifdef CONFIG_BLK_RQ_ALLOC_TIME
@@ -182,9 +141,52 @@ struct request {
 
 	enum mq_rq_state state;
 	refcount_t ref;
 
-	unsigned int timeout;
 	unsigned long deadline;
 
+	/*
+	 * The hash is used inside the scheduler, and killed once the
+	 * request reaches the dispatch list. The ipi_list is only used
+	 * to queue the request for softirq completion, which is long
+	 * after the request has been unhashed (and even removed from
+	 * the dispatch list).
+	 */
+	union {
+		struct hlist_node hash;	/* merge hash */
+		struct llist_node ipi_list;
+	};
+
+	/*
+	 * The rb_node is only used inside the io scheduler, requests
+	 * are pruned when moved to the dispatch queue. So let the
+	 * completion_data share space with the rb_node.
+	 */
+	union {
+		struct rb_node rb_node;	/* sort/lookup */
+		struct bio_vec special_vec;
+		void *completion_data;
+		int error_count; /* for legacy drivers, don't use */
+	};
+
+
+	/*
+	 * Three pointers are available for the IO schedulers, if they need
+	 * more they have to dynamically allocate it. Flush requests are
+	 * never put on the IO scheduler. So let the flush fields share
+	 * space with the elevator data.
+	 */
+	union {
+		struct {
+			struct io_cq		*icq;
+			void			*priv[2];
+		} elv;
+
+		struct {
+			unsigned int		seq;
+			struct list_head	list;
+			rq_end_io_fn		*saved_end_io;
+		} flush;
+	};
+
 	union {
 		struct __call_single_data csd;
 		u64 fifo_time;
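The idea of the reshuffle in miniature: members written on every allocation
and submission sit in the first cachelines, while scheduler-only state is
pushed behind the state/refcount/deadline group. Against a real vmlinux one
would check the resulting layout with pahole; the hypothetical toy struct
below just shows how offsetof()/sizeof() let you eyeball such a hot/cold
split (illustrative types, not the kernel's struct request):

    #include <stdio.h>
    #include <stddef.h>

    struct toy_request {
            /* hot: touched on every allocation / submission */
            void *q;
            unsigned int cmd_flags;
            int tag;
            unsigned int timeout;

            /* cold: only meaningful once an IO scheduler owns the request */
            struct {
                    void *icq;
                    void *priv[2];
            } elv;
    };

    int main(void)
    {
            printf("tag at offset %zu, cold elv block at %zu, total %zu bytes\n",
                   offsetof(struct toy_request, tag),
                   offsetof(struct toy_request, elv),
                   sizeof(struct toy_request));
            return 0;
    }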
From patchwork Sun Oct 17 01:37:43 2021
X-Patchwork-Submitter: Jens Axboe
X-Patchwork-Id: 12564031
From: Jens Axboe
To: linux-block@vger.kernel.org
Cc: Jens Axboe
Subject: [PATCH 09/14] block: only mark bio as tracked if it really is tracked
Date: Sat, 16 Oct 2021 19:37:43 -0600
Message-Id: <20211017013748.76461-10-axboe@kernel.dk>
In-Reply-To: <20211017013748.76461-1-axboe@kernel.dk>
References: <20211017013748.76461-1-axboe@kernel.dk>

We set BIO_TRACKED unconditionally when rq_qos_throttle() is called, even
though we may not even have an rq_qos handler. Only mark it as TRACKED if
it really is potentially tracked. This saves considerable time for the
case where the bio isn't tracked:

     2.64%  -1.65%  [kernel.vmlinux]  [k] bio_endio

Signed-off-by: Jens Axboe
Reviewed-by: Christoph Hellwig
---
 block/blk-rq-qos.h | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/block/blk-rq-qos.h b/block/blk-rq-qos.h
index f000f83e0621..3cfbc8668cba 100644
--- a/block/blk-rq-qos.h
+++ b/block/blk-rq-qos.h
@@ -189,9 +189,10 @@ static inline void rq_qos_throttle(struct request_queue *q, struct bio *bio)
 	 * BIO_TRACKED lets controllers know that a bio went through the
 	 * normal rq_qos path.
 	 */
-	bio_set_flag(bio, BIO_TRACKED);
-	if (q->rq_qos)
+	if (q->rq_qos) {
+		bio_set_flag(bio, BIO_TRACKED);
 		__rq_qos_throttle(q->rq_qos, bio);
+	}
 }
 
 static inline void rq_qos_track(struct request_queue *q, struct request *rq,
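The pattern is generic: don't set a bookkeeping flag unless something
downstream will actually consume it, so the common completion path never
enters the tracked branch. A self-contained toy model of that shape, with
illustrative names rather than the kernel API:

    #include <stdio.h>

    #define BIO_TRACKED (1u << 0)

    struct bio   { unsigned int flags; };
    struct queue { void *rq_qos; /* NULL when no rq_qos policy attached */ };

    static void throttle(struct queue *q, struct bio *bio)
    {
            if (q->rq_qos) {
                    bio->flags |= BIO_TRACKED; /* only mark when tracked */
                    /* the real __rq_qos_throttle() call would go here */
            }
    }

    static void bio_end(struct bio *bio)
    {
            if (bio->flags & BIO_TRACKED)
                    printf("rq_qos completion work\n");
            else
                    printf("fast completion, no rq_qos work\n");
    }

    int main(void)
    {
            struct queue q = { .rq_qos = NULL };
            struct bio bio = { 0 };

            throttle(&q, &bio);
            bio_end(&bio);  /* takes the fast path */
            return 0;
    }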
From patchwork Sun Oct 17 01:37:44 2021
X-Patchwork-Submitter: Jens Axboe
X-Patchwork-Id: 12564033
From: Jens Axboe
To: linux-block@vger.kernel.org
Cc: Jens Axboe
Subject: [PATCH 10/14] block: move blk_mq_tag_to_rq() inline
Date: Sat, 16 Oct 2021 19:37:44 -0600
Message-Id: <20211017013748.76461-11-axboe@kernel.dk>
In-Reply-To: <20211017013748.76461-1-axboe@kernel.dk>
References: <20211017013748.76461-1-axboe@kernel.dk>

This is in the fast path of driver issue or completion, and it's a single
array index operation. Move it inline to avoid a function call for it.
This does mean making struct blk_mq_tags block layer public, but there's
not really much in there.

Signed-off-by: Jens Axboe
---
 block/blk-mq-tag.h     | 23 -----------------------
 block/blk-mq.c         | 11 -----------
 include/linux/blk-mq.h | 35 ++++++++++++++++++++++++++++++++++-
 3 files changed, 34 insertions(+), 35 deletions(-)

diff --git a/block/blk-mq-tag.h b/block/blk-mq-tag.h
index 71c2f7d8e9b7..e617c7220626 100644
--- a/block/blk-mq-tag.h
+++ b/block/blk-mq-tag.h
@@ -4,29 +4,6 @@
 
 struct blk_mq_alloc_data;
 
-/*
- * Tag address space map.
- */
-struct blk_mq_tags {
-	unsigned int nr_tags;
-	unsigned int nr_reserved_tags;
-
-	atomic_t active_queues;
-
-	struct sbitmap_queue bitmap_tags;
-	struct sbitmap_queue breserved_tags;
-
-	struct request **rqs;
-	struct request **static_rqs;
-	struct list_head page_list;
-
-	/*
-	 * used to clear request reference in rqs[] before freeing one
-	 * request pool
-	 */
-	spinlock_t lock;
-};
-
 extern struct blk_mq_tags *blk_mq_init_tags(unsigned int nr_tags,
 					unsigned int reserved_tags,
 					int node, int alloc_policy);
 
diff --git a/block/blk-mq.c b/block/blk-mq.c
index cd1249284c1f..064fdeeb1be5 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -1069,17 +1069,6 @@ void blk_mq_delay_kick_requeue_list(struct request_queue *q,
 }
 EXPORT_SYMBOL(blk_mq_delay_kick_requeue_list);
 
-struct request *blk_mq_tag_to_rq(struct blk_mq_tags *tags, unsigned int tag)
-{
-	if (tag < tags->nr_tags) {
-		prefetch(tags->rqs[tag]);
-		return tags->rqs[tag];
-	}
-
-	return NULL;
-}
-EXPORT_SYMBOL(blk_mq_tag_to_rq);
-
 static bool blk_mq_rq_inflight(struct blk_mq_hw_ctx *hctx, struct request *rq,
 			       void *priv, bool reserved)
 {
diff --git a/include/linux/blk-mq.h b/include/linux/blk-mq.h
index 95c3bd3a008e..1dccea9505e5 100644
--- a/include/linux/blk-mq.h
+++ b/include/linux/blk-mq.h
@@ -685,7 +685,40 @@ struct request *blk_mq_alloc_request(struct request_queue *q, unsigned int op,
 struct request *blk_mq_alloc_request_hctx(struct request_queue *q,
 		unsigned int op, blk_mq_req_flags_t flags,
 		unsigned int hctx_idx);
-struct request *blk_mq_tag_to_rq(struct blk_mq_tags *tags, unsigned int tag);
+
+/*
+ * Tag address space map.
+ */
+struct blk_mq_tags {
+	unsigned int nr_tags;
+	unsigned int nr_reserved_tags;
+
+	atomic_t active_queues;
+
+	struct sbitmap_queue bitmap_tags;
+	struct sbitmap_queue breserved_tags;
+
+	struct request **rqs;
+	struct request **static_rqs;
+	struct list_head page_list;
+
+	/*
+	 * used to clear request reference in rqs[] before freeing one
+	 * request pool
+	 */
+	spinlock_t lock;
+};
+
+static inline struct request *blk_mq_tag_to_rq(struct blk_mq_tags *tags,
+					       unsigned int tag)
+{
+	if (tag < tags->nr_tags) {
+		prefetch(tags->rqs[tag]);
+		return tags->rqs[tag];
+	}
+
+	return NULL;
+}
 
 enum {
 	BLK_MQ_UNIQUE_TAG_BITS = 16,
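With the structure visible to callers, the lookup is nothing but a bounds
check and an array dereference, which is why inlining it pays off. A toy
userspace model of the same tag-map shape (types are stand-ins; note that
an rqs[] slot can legitimately be NULL when no request is in flight for
that tag):

    #include <assert.h>
    #include <stddef.h>

    struct req { int tag; };

    struct tags {
            unsigned int nr_tags;
            struct req **rqs;       /* indexed by driver tag */
    };

    static inline struct req *tag_to_rq(const struct tags *tags,
                                        unsigned int tag)
    {
            if (tag < tags->nr_tags)
                    return tags->rqs[tag];  /* may be NULL if idle */
            return NULL;                    /* out-of-range tag */
    }

    int main(void)
    {
            struct req r = { .tag = 3 };
            struct req *map[8] = { [3] = &r };
            struct tags tags = { .nr_tags = 8, .rqs = map };

            assert(tag_to_rq(&tags, 3) == &r);
            assert(tag_to_rq(&tags, 42) == NULL);
            return 0;
    }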
From patchwork Sun Oct 17 01:37:45 2021
X-Patchwork-Submitter: Jens Axboe
X-Patchwork-Id: 12564035
From: Jens Axboe
To: linux-block@vger.kernel.org
Cc: Jens Axboe
Subject: [PATCH 11/14] block: optimize blk_mq_rq_ctx_init()
Date: Sat, 16 Oct 2021 19:37:45 -0600
Message-Id: <20211017013748.76461-12-axboe@kernel.dk>
In-Reply-To: <20211017013748.76461-1-axboe@kernel.dk>
References: <20211017013748.76461-1-axboe@kernel.dk>

blk_mq_rq_ctx_init() takes ~4.7% of CPU on a peak performance run; with
this patch that drops to ~3.7%.
Signed-off-by: Pavel Begunkov
[axboe: rebase and move queuelist init]
Signed-off-by: Jens Axboe
Reviewed-by: Christoph Hellwig
---
 block/blk-mq.c | 64 ++++++++++++++++++++++++--------------------------
 1 file changed, 31 insertions(+), 33 deletions(-)

diff --git a/block/blk-mq.c b/block/blk-mq.c
index 064fdeeb1be5..0d05ac4a3b50 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -304,63 +304,62 @@ static inline bool blk_mq_need_time_stamp(struct request *rq)
 static struct request *blk_mq_rq_ctx_init(struct blk_mq_alloc_data *data,
 		unsigned int tag, u64 alloc_time_ns)
 {
+	struct blk_mq_ctx *ctx = data->ctx;
+	struct blk_mq_hw_ctx *hctx = data->hctx;
+	struct request_queue *q = data->q;
 	struct blk_mq_tags *tags = blk_mq_tags_from_data(data);
 	struct request *rq = tags->static_rqs[tag];
+	unsigned int cmd_flags = data->cmd_flags;
+	u64 start_time_ns = 0;
 
-	if (data->q->elevator) {
-		rq->rq_flags = RQF_ELV;
-		rq->tag = BLK_MQ_NO_TAG;
-		rq->internal_tag = tag;
-	} else {
+	rq->q = q;
+	rq->mq_ctx = ctx;
+	rq->mq_hctx = hctx;
+	rq->cmd_flags = cmd_flags;
+	data->ctx->rq_dispatched[op_is_sync(data->cmd_flags)]++;
+	data->hctx->queued++;
+	if (!q->elevator) {
 		rq->rq_flags = 0;
 		rq->tag = tag;
 		rq->internal_tag = BLK_MQ_NO_TAG;
+	} else {
+		rq->rq_flags = RQF_ELV;
+		rq->tag = BLK_MQ_NO_TAG;
+		rq->internal_tag = tag;
 	}
-
-	/* csd/requeue_work/fifo_time is initialized before use */
-	rq->q = data->q;
-	rq->mq_ctx = data->ctx;
-	rq->mq_hctx = data->hctx;
-	rq->cmd_flags = data->cmd_flags;
-	if (data->flags & BLK_MQ_REQ_PM)
-		rq->rq_flags |= RQF_PM;
-	if (blk_queue_io_stat(data->q))
-		rq->rq_flags |= RQF_IO_STAT;
+	rq->timeout = 0;
 	INIT_LIST_HEAD(&rq->queuelist);
-	INIT_HLIST_NODE(&rq->hash);
-	RB_CLEAR_NODE(&rq->rb_node);
 	rq->rq_disk = NULL;
 	rq->part = NULL;
 #ifdef CONFIG_BLK_RQ_ALLOC_TIME
 	rq->alloc_time_ns = alloc_time_ns;
 #endif
-	if (blk_mq_need_time_stamp(rq))
-		rq->start_time_ns = ktime_get_ns();
-	else
-		rq->start_time_ns = 0;
 	rq->io_start_time_ns = 0;
 	rq->stats_sectors = 0;
 	rq->nr_phys_segments = 0;
 #if defined(CONFIG_BLK_DEV_INTEGRITY)
 	rq->nr_integrity_segments = 0;
 #endif
-	blk_crypto_rq_set_defaults(rq);
-	/* tag was already set */
-	WRITE_ONCE(rq->deadline, 0);
-
-	rq->timeout = 0;
-
 	rq->end_io = NULL;
 	rq->end_io_data = NULL;
-
-	data->ctx->rq_dispatched[op_is_sync(data->cmd_flags)]++;
+	if (data->flags & BLK_MQ_REQ_PM)
+		rq->rq_flags |= RQF_PM;
+	if (blk_queue_io_stat(data->q))
+		rq->rq_flags |= RQF_IO_STAT;
+	if (blk_mq_need_time_stamp(rq))
+		start_time_ns = ktime_get_ns();
+	rq->start_time_ns = start_time_ns;
+	blk_crypto_rq_set_defaults(rq);
 	refcount_set(&rq->ref, 1);
+	WRITE_ONCE(rq->deadline, 0);
 
-	if (!op_is_flush(data->cmd_flags) && (rq->rq_flags & RQF_ELV)) {
-		struct elevator_queue *e = data->q->elevator;
+	if (rq->rq_flags & RQF_ELV) {
+		struct elevator_queue *e = q->elevator;
 
 		rq->elv.icq = NULL;
-		if (e->type->ops.prepare_request) {
+		RB_CLEAR_NODE(&rq->rb_node);
+		INIT_HLIST_NODE(&rq->hash);
+		if (!op_is_flush(cmd_flags) && e->type->ops.prepare_request) {
 			if (e->type->icq_cache)
 				blk_mq_sched_assign_ioc(rq);
 
@@ -369,7 +368,6 @@ static struct request *blk_mq_rq_ctx_init(struct blk_mq_alloc_data *data,
 		}
 	}
 
-	data->hctx->queued++;
 	return rq;
 }
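The shape of the optimization: hoist repeated data->... loads into locals,
store fields roughly in declaration order so the writes walk the struct
sequentially, and only touch elevator-only state when the request actually
has an elevator attached. A compilable toy sketch of that shape, with toy
types rather than the kernel's:

    #include <stdbool.h>
    #include <stddef.h>

    struct toy_rq {
            void *q;
            unsigned int cmd_flags;
            int tag;
            void *icq;              /* cold: scheduler-only */
    };

    struct alloc_data {
            void *q;
            unsigned int cmd_flags;
            bool has_elevator;
    };

    static void rq_init(struct toy_rq *rq, const struct alloc_data *data,
                        int tag)
    {
            unsigned int cmd_flags = data->cmd_flags; /* load input once */

            /* stores in declaration order: sequential writes into the rq */
            rq->q = data->q;
            rq->cmd_flags = cmd_flags;
            rq->tag = tag;

            /* skip cold state entirely on queues without a scheduler */
            if (data->has_elevator)
                    rq->icq = NULL;
    }

    int main(void)
    {
            struct alloc_data data = { .q = NULL, .cmd_flags = 1,
                                       .has_elevator = false };
            struct toy_rq rq;

            rq_init(&rq, &data, 7);
            return rq.tag == 7 ? 0 : 1;
    }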
From patchwork Sun Oct 17 01:37:46 2021
X-Patchwork-Submitter: Jens Axboe
X-Patchwork-Id: 12564037
From: Jens Axboe
To: linux-block@vger.kernel.org
Cc: Jens Axboe
Subject: [PATCH 12/14] block: align blkdev_dio inlined bio to a cacheline
Date: Sat, 16 Oct 2021 19:37:46 -0600
Message-Id: <20211017013748.76461-13-axboe@kernel.dk>
In-Reply-To: <20211017013748.76461-1-axboe@kernel.dk>
References: <20211017013748.76461-1-axboe@kernel.dk>

The bio is designed to align on a cacheline, which it does not when
inlined in blkdev_dio like this, and we get all sorts of unreliable and
funky results from that.
Signed-off-by: Jens Axboe
---
 block/fops.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/block/fops.c b/block/fops.c
index 1d4f862950bb..7eebd590342b 100644
--- a/block/fops.c
+++ b/block/fops.c
@@ -137,7 +137,7 @@ struct blkdev_dio {
 	size_t			size;
 	atomic_t		ref;
 	unsigned int		flags;
-	struct bio		bio;
+	struct bio		bio ____cacheline_aligned_in_smp;
 };
 
 static struct bio_set blkdev_dio_pool;
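____cacheline_aligned_in_smp forces the embedded bio to start on a
cacheline boundary instead of wherever the preceding members happen to
end. A userspace stand-in for the effect, using the GCC/Clang aligned
attribute in place of the kernel macro and assuming a 64-byte cacheline;
the char array is only a placeholder for struct bio:

    #include <stdio.h>
    #include <stddef.h>

    #define CACHELINE 64

    struct dio_unaligned {
            size_t size;
            unsigned int flags;
            char bio[40];           /* lands at whatever offset falls out */
    };

    struct dio_aligned {
            size_t size;
            unsigned int flags;
            char bio[40] __attribute__((aligned(CACHELINE)));
    };

    int main(void)
    {
            printf("bio offset: unaligned=%zu aligned=%zu\n",
                   offsetof(struct dio_unaligned, bio),
                   offsetof(struct dio_aligned, bio));
            return 0;
    }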
From patchwork Sun Oct 17 01:37:47 2021
X-Patchwork-Submitter: Jens Axboe
X-Patchwork-Id: 12564039
From: Jens Axboe
To: linux-block@vger.kernel.org
Cc: Jens Axboe
Subject: [PATCH 13/14] block: remove debugfs blk_mq_ctx dispatched/merged/completed attributes
Date: Sat, 16 Oct 2021 19:37:47 -0600
Message-Id: <20211017013748.76461-14-axboe@kernel.dk>
In-Reply-To: <20211017013748.76461-1-axboe@kernel.dk>
References: <20211017013748.76461-1-axboe@kernel.dk>

These were added as part of early days debugging for blk-mq, and they
are not really useful anymore. Rather than spend cycles updating them,
just get rid of them.

As a bonus, this shrinks the per-cpu software queue size from 256b to
192b. That's a whole cacheline less.

Signed-off-by: Jens Axboe
Reviewed-by: Christoph Hellwig
---
 block/blk-mq-debugfs.c | 54 ------------------------------------------
 block/blk-mq-sched.c   |  5 +---
 block/blk-mq.c         |  3 ---
 block/blk-mq.h         |  7 ------
 4 files changed, 1 insertion(+), 68 deletions(-)

diff --git a/block/blk-mq-debugfs.c b/block/blk-mq-debugfs.c
index 409a8256d9ff..928a16af9175 100644
--- a/block/blk-mq-debugfs.c
+++ b/block/blk-mq-debugfs.c
@@ -663,57 +663,6 @@ CTX_RQ_SEQ_OPS(default, HCTX_TYPE_DEFAULT);
 CTX_RQ_SEQ_OPS(read, HCTX_TYPE_READ);
 CTX_RQ_SEQ_OPS(poll, HCTX_TYPE_POLL);
 
-static int ctx_dispatched_show(void *data, struct seq_file *m)
-{
-	struct blk_mq_ctx *ctx = data;
-
-	seq_printf(m, "%lu %lu\n", ctx->rq_dispatched[1], ctx->rq_dispatched[0]);
-	return 0;
-}
-
-static ssize_t ctx_dispatched_write(void *data, const char __user *buf,
-				    size_t count, loff_t *ppos)
-{
-	struct blk_mq_ctx *ctx = data;
-
-	ctx->rq_dispatched[0] = ctx->rq_dispatched[1] = 0;
-	return count;
-}
-
-static int ctx_merged_show(void *data, struct seq_file *m)
-{
-	struct blk_mq_ctx *ctx = data;
-
-	seq_printf(m, "%lu\n", ctx->rq_merged);
-	return 0;
-}
-
-static ssize_t ctx_merged_write(void *data, const char __user *buf,
-				size_t count, loff_t *ppos)
-{
-	struct blk_mq_ctx *ctx = data;
-
-	ctx->rq_merged = 0;
-	return count;
-}
-
-static int ctx_completed_show(void *data, struct seq_file *m)
-{
-	struct blk_mq_ctx *ctx = data;
-
-	seq_printf(m, "%lu %lu\n", ctx->rq_completed[1], ctx->rq_completed[0]);
-	return 0;
-}
-
-static ssize_t ctx_completed_write(void *data, const char __user *buf,
-				   size_t count, loff_t *ppos)
-{
-	struct blk_mq_ctx *ctx = data;
-
-	ctx->rq_completed[0] = ctx->rq_completed[1] = 0;
-	return count;
-}
-
 static int blk_mq_debugfs_show(struct seq_file *m, void *v)
 {
 	const struct blk_mq_debugfs_attr *attr = m->private;
@@ -803,9 +752,6 @@ static const struct blk_mq_debugfs_attr blk_mq_debugfs_ctx_attrs[] = {
 	{"default_rq_list", 0400, .seq_ops = &ctx_default_rq_list_seq_ops},
 	{"read_rq_list", 0400, .seq_ops = &ctx_read_rq_list_seq_ops},
 	{"poll_rq_list", 0400, .seq_ops = &ctx_poll_rq_list_seq_ops},
-	{"dispatched", 0600, ctx_dispatched_show, ctx_dispatched_write},
-	{"merged", 0600, ctx_merged_show, ctx_merged_write},
-	{"completed", 0600, ctx_completed_show, ctx_completed_write},
 	{},
 };
 
diff --git a/block/blk-mq-sched.c b/block/blk-mq-sched.c
index efc1374b8831..e85b7556b096 100644
--- a/block/blk-mq-sched.c
+++ b/block/blk-mq-sched.c
@@ -387,13 +387,10 @@ bool __blk_mq_sched_bio_merge(struct request_queue *q, struct bio *bio,
 	 * potentially merge with. Currently includes a hand-wavy stop
 	 * count of 8, to not spend too much time checking for merges.
 	 */
-	if (blk_bio_list_merge(q, &ctx->rq_lists[type], bio, nr_segs)) {
-		ctx->rq_merged++;
+	if (blk_bio_list_merge(q, &ctx->rq_lists[type], bio, nr_segs))
 		ret = true;
-	}
 
 	spin_unlock(&ctx->lock);
-
 	return ret;
 }
 
diff --git a/block/blk-mq.c b/block/blk-mq.c
index 0d05ac4a3b50..4d91b74ce67a 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -316,7 +316,6 @@ static struct request *blk_mq_rq_ctx_init(struct blk_mq_alloc_data *data,
 	rq->mq_ctx = ctx;
 	rq->mq_hctx = hctx;
 	rq->cmd_flags = cmd_flags;
-	data->ctx->rq_dispatched[op_is_sync(data->cmd_flags)]++;
 	data->hctx->queued++;
 	if (!q->elevator) {
 		rq->rq_flags = 0;
@@ -573,7 +572,6 @@ static void __blk_mq_free_request(struct request *rq)
 void blk_mq_free_request(struct request *rq)
 {
 	struct request_queue *q = rq->q;
-	struct blk_mq_ctx *ctx = rq->mq_ctx;
 	struct blk_mq_hw_ctx *hctx = rq->mq_hctx;
 
 	if (rq->rq_flags & (RQF_ELVPRIV | RQF_ELV)) {
@@ -587,7 +585,6 @@ void blk_mq_free_request(struct request *rq)
 		}
 	}
 
-	ctx->rq_completed[rq_is_sync(rq)]++;
 	if (rq->rq_flags & RQF_MQ_INFLIGHT)
 		__blk_mq_dec_active_requests(hctx);
 
diff --git a/block/blk-mq.h b/block/blk-mq.h
index ceed0a001c76..c12554c58abd 100644
--- a/block/blk-mq.h
+++ b/block/blk-mq.h
@@ -25,13 +25,6 @@ struct blk_mq_ctx {
 	unsigned short index_hw[HCTX_MAX_TYPES];
 	struct blk_mq_hw_ctx *hctxs[HCTX_MAX_TYPES];
 
-	/* incremented at dispatch time */
-	unsigned long rq_dispatched[2];
-	unsigned long rq_merged;
-
-	/* incremented at completion time */
-	unsigned long ____cacheline_aligned_in_smp rq_completed[2];
-
 	struct request_queue *queue;
 	struct blk_mq_ctxs *ctxs;
 	struct kobject kobj;
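The whole-cacheline saving follows from rq_completed[] carrying the
____cacheline_aligned_in_smp annotation: one aligned member pads the space
before it and rounds the struct size up to a multiple of a cacheline, so
dropping it recovers more than the counters themselves. A toy demonstration
of that rounding, with illustrative fields and the aligned attribute
standing in for the kernel macro:

    #include <stdio.h>

    #define CACHELINE 64

    struct ctx_with_counters {
            void *queue;
            unsigned long rq_dispatched[2];
            unsigned long rq_merged;
            /* forced onto its own cacheline, like the removed member */
            unsigned long rq_completed[2] __attribute__((aligned(CACHELINE)));
    };

    struct ctx_without_counters {
            void *queue;
    };

    int main(void)
    {
            printf("with counters: %zu bytes, without: %zu bytes\n",
                   sizeof(struct ctx_with_counters),
                   sizeof(struct ctx_without_counters));
            return 0;
    }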
From patchwork Sun Oct 17 01:37:48 2021
X-Patchwork-Submitter: Jens Axboe
X-Patchwork-Id: 12564041
From: Jens Axboe
To: linux-block@vger.kernel.org
Cc: Jens Axboe
Subject: [PATCH 14/14] block: remove some blk_mq_hw_ctx debugfs entries
Date: Sat, 16 Oct 2021 19:37:48 -0600
Message-Id: <20211017013748.76461-15-axboe@kernel.dk>
In-Reply-To: <20211017013748.76461-1-axboe@kernel.dk>
References: <20211017013748.76461-1-axboe@kernel.dk>

Just like the blk_mq_ctx counterparts, we've got a bunch of counters in
here that are only for debugfs and are of questionable value. They are:

- dispatched, a bucketed (log2) count of how many requests were
  dispatched in one go

- poll_{considered,invoked,success}, which track poll success rates.
  We're confident in the iopoll implementation at this point, don't
  bother tracking these.

As a bonus, this shrinks each hardware queue from 576 bytes to 512 bytes,
dropping a whole cacheline.
Signed-off-by: Jens Axboe
Reviewed-by: Christoph Hellwig
---
 block/blk-mq-debugfs.c | 67 ------------------------------------------
 block/blk-mq.c         | 16 ----------
 include/linux/blk-mq.h | 10 -------
 3 files changed, 93 deletions(-)

diff --git a/block/blk-mq-debugfs.c b/block/blk-mq-debugfs.c
index 928a16af9175..68ca5d21cda7 100644
--- a/block/blk-mq-debugfs.c
+++ b/block/blk-mq-debugfs.c
@@ -529,70 +529,6 @@ static int hctx_sched_tags_bitmap_show(void *data, struct seq_file *m)
 	return res;
 }
 
-static int hctx_io_poll_show(void *data, struct seq_file *m)
-{
-	struct blk_mq_hw_ctx *hctx = data;
-
-	seq_printf(m, "considered=%lu\n", hctx->poll_considered);
-	seq_printf(m, "invoked=%lu\n", hctx->poll_invoked);
-	seq_printf(m, "success=%lu\n", hctx->poll_success);
-	return 0;
-}
-
-static ssize_t hctx_io_poll_write(void *data, const char __user *buf,
-				  size_t count, loff_t *ppos)
-{
-	struct blk_mq_hw_ctx *hctx = data;
-
-	hctx->poll_considered = hctx->poll_invoked = hctx->poll_success = 0;
-	return count;
-}
-
-static int hctx_dispatched_show(void *data, struct seq_file *m)
-{
-	struct blk_mq_hw_ctx *hctx = data;
-	int i;
-
-	seq_printf(m, "%8u\t%lu\n", 0U, hctx->dispatched[0]);
-
-	for (i = 1; i < BLK_MQ_MAX_DISPATCH_ORDER - 1; i++) {
-		unsigned int d = 1U << (i - 1);
-
-		seq_printf(m, "%8u\t%lu\n", d, hctx->dispatched[i]);
-	}
-
-	seq_printf(m, "%8u+\t%lu\n", 1U << (i - 1), hctx->dispatched[i]);
-	return 0;
-}
-
-static ssize_t hctx_dispatched_write(void *data, const char __user *buf,
-				     size_t count, loff_t *ppos)
-{
-	struct blk_mq_hw_ctx *hctx = data;
-	int i;
-
-	for (i = 0; i < BLK_MQ_MAX_DISPATCH_ORDER; i++)
-		hctx->dispatched[i] = 0;
-	return count;
-}
-
-static int hctx_queued_show(void *data, struct seq_file *m)
-{
-	struct blk_mq_hw_ctx *hctx = data;
-
-	seq_printf(m, "%lu\n", hctx->queued);
-	return 0;
-}
-
-static ssize_t hctx_queued_write(void *data, const char __user *buf,
-				 size_t count, loff_t *ppos)
-{
-	struct blk_mq_hw_ctx *hctx = data;
-
-	hctx->queued = 0;
-	return count;
-}
-
 static int hctx_run_show(void *data, struct seq_file *m)
 {
 	struct blk_mq_hw_ctx *hctx = data;
@@ -738,9 +674,6 @@ static const struct blk_mq_debugfs_attr blk_mq_debugfs_hctx_attrs[] = {
 	{"tags_bitmap", 0400, hctx_tags_bitmap_show},
 	{"sched_tags", 0400, hctx_sched_tags_show},
 	{"sched_tags_bitmap", 0400, hctx_sched_tags_bitmap_show},
-	{"io_poll", 0600, hctx_io_poll_show, hctx_io_poll_write},
-	{"dispatched", 0600, hctx_dispatched_show, hctx_dispatched_write},
-	{"queued", 0600, hctx_queued_show, hctx_queued_write},
 	{"run", 0600, hctx_run_show, hctx_run_write},
 	{"active", 0400, hctx_active_show},
 	{"dispatch_busy", 0400, hctx_dispatch_busy_show},
diff --git a/block/blk-mq.c b/block/blk-mq.c
index 4d91b74ce67a..2197cfbf081f 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -316,7 +316,6 @@ static struct request *blk_mq_rq_ctx_init(struct blk_mq_alloc_data *data,
 	rq->mq_ctx = ctx;
 	rq->mq_hctx = hctx;
 	rq->cmd_flags = cmd_flags;
-	data->hctx->queued++;
 	if (!q->elevator) {
 		rq->rq_flags = 0;
 		rq->tag = tag;
@@ -1268,14 +1267,6 @@ struct request *blk_mq_dequeue_from_ctx(struct blk_mq_hw_ctx *hctx,
 	return data.rq;
 }
 
-static inline unsigned int queued_to_index(unsigned int queued)
-{
-	if (!queued)
-		return 0;
-
-	return min(BLK_MQ_MAX_DISPATCH_ORDER - 1, ilog2(queued) + 1);
-}
-
 static bool __blk_mq_alloc_driver_tag(struct request *rq)
 {
 	struct sbitmap_queue *bt = &rq->mq_hctx->tags->bitmap_tags;
@@ -1597,8 +1588,6 @@ bool blk_mq_dispatch_rq_list(struct blk_mq_hw_ctx *hctx, struct list_head *list,
 	if (!list_empty(&zone_list))
 		list_splice_tail_init(&zone_list, list);
 
-	hctx->dispatched[queued_to_index(queued)]++;
-
 	/*
 	 * If we didn't flush the entire list, we could have told the driver
 	 * there was more coming, but that turned out to be a lie.
 	 */
@@ -4212,14 +4201,9 @@ static int blk_mq_poll_classic(struct request_queue *q, blk_qc_t cookie,
 	long state = get_current_state();
 	int ret;
 
-	hctx->poll_considered++;
-
 	do {
-		hctx->poll_invoked++;
-
 		ret = q->mq_ops->poll(hctx);
 		if (ret > 0) {
-			hctx->poll_success++;
 			__set_current_state(TASK_RUNNING);
 			return ret;
 		}
diff --git a/include/linux/blk-mq.h b/include/linux/blk-mq.h
index 1dccea9505e5..9fc868abcc81 100644
--- a/include/linux/blk-mq.h
+++ b/include/linux/blk-mq.h
@@ -341,9 +341,6 @@ struct blk_mq_hw_ctx {
 	unsigned long		queued;
 	/** @run: Number of dispatched requests. */
 	unsigned long		run;
-#define BLK_MQ_MAX_DISPATCH_ORDER	7
-	/** @dispatched: Number of dispatch requests by queue. */
-	unsigned long		dispatched[BLK_MQ_MAX_DISPATCH_ORDER];
 
 	/** @numa_node: NUMA node the storage adapter has been connected to. */
 	unsigned int		numa_node;
@@ -363,13 +360,6 @@ struct blk_mq_hw_ctx {
 	/** @kobj: Kernel object for sysfs. */
 	struct kobject		kobj;
 
-	/** @poll_considered: Count times blk_mq_poll() was called. */
-	unsigned long		poll_considered;
-	/** @poll_invoked: Count how many requests blk_mq_poll() polled. */
-	unsigned long		poll_invoked;
-	/** @poll_success: Count how many polled requests were completed. */
-	unsigned long		poll_success;
-
 #ifdef CONFIG_BLK_DEBUG_FS
 	/**
 	 * @debugfs_dir: debugfs directory for this hardware queue. Named