From patchwork Thu Jan 5 00:20:04 2023
X-Patchwork-Submitter: Tejun Heo
X-Patchwork-Id: 13089224
From: Tejun Heo
To: axboe@kernel.dk, josef@toxicpanda.com, hch@lst.de
Cc: linux-block@vger.kernel.org, linux-kernel@vger.kernel.org, Tejun Heo
Subject: [PATCH 1/4] blkcg: Drop unnecessary RCU read [un]locks from blkg_conf_prep/finish()
Date: Wed, 4 Jan 2023 14:20:04 -1000
Message-Id: <20230105002007.157497-2-tj@kernel.org>
In-Reply-To: <20230105002007.157497-1-tj@kernel.org>
References: <20230105002007.157497-1-tj@kernel.org>

Holding the queue lock now implies RCU read lock, so no need to use
rcu_read_[un]lock() explicitly. This shouldn't cause any behavior changes.

While at it, drop __acquires() annotation on the queue lock too. The
__acquires() part was already out of sync and it doesn't catch anything
that lockdep can't.

Signed-off-by: Tejun Heo
---
 block/blk-cgroup.c | 11 ++---------
 1 file changed, 2 insertions(+), 9 deletions(-)

diff --git a/block/blk-cgroup.c b/block/blk-cgroup.c
index ce6a2b7d3dfb..99674e23cf88 100644
--- a/block/blk-cgroup.c
+++ b/block/blk-cgroup.c
@@ -672,12 +672,11 @@ struct block_device *blkcg_conf_open_bdev(char **inputp)
  *
  * Parse per-blkg config update from @input and initialize @ctx with the
  * result. @ctx->blkg points to the blkg to be updated and @ctx->body the
- * part of @input following MAJ:MIN. This function returns with RCU read
- * lock and queue lock held and must be paired with blkg_conf_finish().
+ * part of @input following MAJ:MIN. This function returns with queue lock
+ * held and must be paired with blkg_conf_finish().
  */
 int blkg_conf_prep(struct blkcg *blkcg, const struct blkcg_policy *pol,
		   char *input, struct blkg_conf_ctx *ctx)
-	__acquires(rcu) __acquires(&bdev->bd_queue->queue_lock)
 {
	struct block_device *bdev;
	struct gendisk *disk;
@@ -699,7 +698,6 @@ int blkg_conf_prep(struct blkcg *blkcg, const struct blkcg_policy *pol,
	if (ret)
		goto fail;

-	rcu_read_lock();
	spin_lock_irq(&q->queue_lock);

	if (!blkcg_policy_enabled(q, pol)) {
@@ -728,7 +726,6 @@ int blkg_conf_prep(struct blkcg *blkcg, const struct blkcg_policy *pol,

		/* Drop locks to do new blkg allocation with GFP_KERNEL. */
		spin_unlock_irq(&q->queue_lock);
-		rcu_read_unlock();

		new_blkg = blkg_alloc(pos, disk, GFP_KERNEL);
		if (unlikely(!new_blkg)) {
@@ -742,7 +739,6 @@ int blkg_conf_prep(struct blkcg *blkcg, const struct blkcg_policy *pol,
			goto fail_exit_queue;
		}

-		rcu_read_lock();
		spin_lock_irq(&q->queue_lock);

		if (!blkcg_policy_enabled(q, pol)) {
@@ -778,7 +774,6 @@ int blkg_conf_prep(struct blkcg *blkcg, const struct blkcg_policy *pol,
	radix_tree_preload_end();
 fail_unlock:
	spin_unlock_irq(&q->queue_lock);
-	rcu_read_unlock();
 fail_exit_queue:
	blk_queue_exit(q);
 fail:
@@ -805,10 +800,8 @@ EXPORT_SYMBOL_GPL(blkg_conf_prep);
  * with blkg_conf_prep().
  */
 void blkg_conf_finish(struct blkg_conf_ctx *ctx)
-	__releases(&ctx->bdev->bd_queue->queue_lock) __releases(rcu)
 {
	spin_unlock_irq(&bdev_get_queue(ctx->bdev)->queue_lock);
-	rcu_read_unlock();
	blkdev_put_no_open(ctx->bdev);
 }
 EXPORT_SYMBOL_GPL(blkg_conf_finish);
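For reference, a minimal sketch of a policy config writer against
blkg_conf_prep()/blkg_conf_finish() after this change, modeled on the existing
writers in blk-throttle.c. blkcg_policy_example and example_apply() are
made-up placeholders; the point is that only the queue lock, not the RCU read
lock, is held between prep and finish:

static ssize_t example_set_limit(struct kernfs_open_file *of, char *buf,
				 size_t nbytes, loff_t off)
{
	struct blkcg *blkcg = css_to_blkcg(of_css(of));
	struct blkg_conf_ctx ctx;
	u64 v;
	int ret;

	/* returns with only ctx.bdev's queue lock held, no rcu_read_lock() */
	ret = blkg_conf_prep(blkcg, &blkcg_policy_example, buf, &ctx);
	if (ret)
		return ret;

	ret = -EINVAL;
	if (sscanf(ctx.body, "%llu", &v) == 1) {
		example_apply(ctx.blkg, v);	/* still under the queue lock */
		ret = 0;
	}

	/* drops the queue lock and puts the bdev */
	blkg_conf_finish(&ctx);
	return ret ?: nbytes;
}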
From patchwork Thu Jan 5 00:20:05 2023
X-Patchwork-Submitter: Tejun Heo
X-Patchwork-Id: 13089225
From: Tejun Heo
To: axboe@kernel.dk, josef@toxicpanda.com, hch@lst.de
Cc: linux-block@vger.kernel.org, linux-kernel@vger.kernel.org, Tejun Heo
Subject: [PATCH 2/4] blkcg: Restructure blkg_conf_prep() and friends
Date: Wed, 4 Jan 2023 14:20:05 -1000
Message-Id: <20230105002007.157497-3-tj@kernel.org>
In-Reply-To: <20230105002007.157497-1-tj@kernel.org>
References: <20230105002007.157497-1-tj@kernel.org>

We want to support lazy init of rq-qos policies so that iolatency is enabled
lazily on configuration instead of gendisk initialization. The way the blkg
config helpers are structured now is a bit awkward for that. Let's
restructure:

* blkcg_conf_open_bdev() is renamed to blkg_conf_open_bdev(). The blkcg_
  prefix was used because the bdev opening step is blkg-independent. However,
  the distinction is too subtle and confuses more than it helps. Let's switch
  to the blkg prefix so that it's consistent with the type and the other
  helper names.

* struct blkg_conf_ctx now remembers the original input string and is always
  initialized by the new blkg_conf_init().

* blkg_conf_open_bdev() is updated to take a pointer to blkg_conf_ctx like
  blkg_conf_prep() and can be called multiple times safely. Instead of
  modifying the double pointer to the input string directly,
  blkg_conf_open_bdev() now sets blkg_conf_ctx->body.

* blkg_conf_finish() is renamed to blkg_conf_exit() for symmetry and now must
  be called on all blkg_conf_ctx's which were initialized with
  blkg_conf_init().

Combined, this allows users to either open the bdev first or do everything at
once with blkg_conf_prep(), which will help implement lazy init of rq-qos
policies. Users are updated accordingly. No behavior change is intended by
this patch.

Signed-off-by: Tejun Heo
Cc: Josef Bacik
Cc: Christoph Hellwig
---
 block/blk-cgroup.c    | 105 +++++++++++++++++++++++++++---------------
 block/blk-cgroup.h    |  10 ++--
 block/blk-iocost.c    |  58 +++++++++++++----------
 block/blk-iolatency.c |   8 ++--
 block/blk-throttle.c  |  16 ++++---
 5 files changed, 122 insertions(+), 75 deletions(-)
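Before the diff, a sketch of how a config writer looks with the restructured
helpers described above. blkcg_policy_example and example_apply() are
hypothetical placeholders; the blkg_conf_*() calls are the ones introduced or
renamed by this patch:

static ssize_t example_write(struct kernfs_open_file *of, char *buf,
			     size_t nbytes, loff_t off)
{
	struct blkcg *blkcg = css_to_blkcg(of_css(of));
	struct blkg_conf_ctx ctx;
	int ret;

	blkg_conf_init(&ctx, buf);

	/* optional: resolve only the MAJ:MIN prefix to get at the bdev */
	ret = blkg_conf_open_bdev(&ctx);
	if (ret)
		goto out;

	/* reuses the bdev opened above; returns with the queue lock held */
	ret = blkg_conf_prep(blkcg, &blkcg_policy_example, &ctx);
	if (ret)
		goto out;

	ret = example_apply(ctx.blkg, ctx.body);
out:
	/* must be called whether or not prep succeeded */
	blkg_conf_exit(&ctx);
	return ret ?: nbytes;
}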
diff --git a/block/blk-cgroup.c b/block/blk-cgroup.c
index 99674e23cf88..d8e0625cd12d 100644
--- a/block/blk-cgroup.c
+++ b/block/blk-cgroup.c
@@ -626,68 +626,92 @@ u64 __blkg_prfill_u64(struct seq_file *sf, struct blkg_policy_data *pd, u64 v)
 EXPORT_SYMBOL_GPL(__blkg_prfill_u64);

 /**
- * blkcg_conf_open_bdev - parse and open bdev for per-blkg config update
- * @inputp: input string pointer
+ * blkg_conf_init - initialize a blkg_conf_ctx
+ * @ctx: blkg_conf_ctx to initialize
+ * @input: input string
  *
- * Parse the device node prefix part, MAJ:MIN, of per-blkg config update
- * from @input and get and return the matching bdev. *@inputp is
- * updated to point past the device node prefix. Returns an ERR_PTR()
- * value on error.
+ * Initialize @ctx which can be used to parse blkg config input string @input.
+ * Once initialized, @ctx can be used with blkg_conf_open_bdev() and
+ * blkg_conf_prep(), and must be cleaned up with blkg_conf_exit().
+ */
+void blkg_conf_init(struct blkg_conf_ctx *ctx, char *input)
+{
+	*ctx = (struct blkg_conf_ctx){ .input = input };
+}
+EXPORT_SYMBOL_GPL(blkg_conf_init);
+
+/**
+ * blkg_conf_open_bdev - parse and open bdev for per-blkg config update
+ * @ctx: blkg_conf_ctx initialized with blkg_conf_init()
  *
- * Use this function iff blkg_conf_prep() can't be used for some reason.
+ * Parse the device node prefix part, MAJ:MIN, of per-blkg config update from
+ * @ctx->input and get and store the matching bdev in @ctx->bdev. @ctx->body is
+ * set to point past the device node prefix.
+ *
+ * This function may be called multiple times on @ctx and the extra calls become
+ * NOOPs. blkg_conf_prep() implicitly calls this function. Use this function
+ * explicitly if bdev access is needed without resolving the blkcg / policy part
+ * of @ctx->input. Returns -errno on error.
  */
-struct block_device *blkcg_conf_open_bdev(char **inputp)
+int blkg_conf_open_bdev(struct blkg_conf_ctx *ctx)
 {
-	char *input = *inputp;
+	char *input = ctx->input;
	unsigned int major, minor;
	struct block_device *bdev;
	int key_len;

+	if (ctx->bdev)
+		return 0;
+
	if (sscanf(input, "%u:%u%n", &major, &minor, &key_len) != 2)
-		return ERR_PTR(-EINVAL);
+		return -EINVAL;

	input += key_len;
	if (!isspace(*input))
-		return ERR_PTR(-EINVAL);
+		return -EINVAL;
	input = skip_spaces(input);

	bdev = blkdev_get_no_open(MKDEV(major, minor));
	if (!bdev)
-		return ERR_PTR(-ENODEV);
+		return -ENODEV;
	if (bdev_is_partition(bdev)) {
		blkdev_put_no_open(bdev);
-		return ERR_PTR(-ENODEV);
+		return -ENODEV;
	}

-	*inputp = input;
-	return bdev;
+	ctx->body = input;
+	ctx->bdev = bdev;
+	return 0;
 }

 /**
  * blkg_conf_prep - parse and prepare for per-blkg config update
  * @blkcg: target block cgroup
  * @pol: target policy
- * @input: input string
- * @ctx: blkg_conf_ctx to be filled
+ * @ctx: blkg_conf_ctx initialized with blkg_conf_init()
+ *
+ * Parse per-blkg config update from @ctx->input and initialize @ctx
+ * accordingly. On success, @ctx->body points to the part of @ctx->input
+ * following MAJ:MIN, @ctx->bdev points to the target block device and
+ * @ctx->blkg to the blkg being configured.
  *
- * Parse per-blkg config update from @input and initialize @ctx with the
- * result. @ctx->blkg points to the blkg to be updated and @ctx->body the
- * part of @input following MAJ:MIN. This function returns with queue lock
- * held and must be paired with blkg_conf_finish().
+ * blkg_conf_open_bdev() may be called on @ctx beforehand. On success, this
+ * function returns with queue lock held and must be followed by
+ * blkg_conf_exit().
  */
 int blkg_conf_prep(struct blkcg *blkcg, const struct blkcg_policy *pol,
-		   char *input, struct blkg_conf_ctx *ctx)
+		   struct blkg_conf_ctx *ctx)
 {
-	struct block_device *bdev;
	struct gendisk *disk;
	struct request_queue *q;
	struct blkcg_gq *blkg;
	int ret;

-	bdev = blkcg_conf_open_bdev(&input);
-	if (IS_ERR(bdev))
-		return PTR_ERR(bdev);
-	disk = bdev->bd_disk;
+	ret = blkg_conf_open_bdev(ctx);
+	if (ret)
+		return ret;
+
+	disk = ctx->bdev->bd_disk;
	q = disk->queue;

	/*
@@ -765,9 +789,7 @@ int blkg_conf_prep(struct blkcg *blkcg, const struct blkcg_policy *pol,
	}
 success:
	blk_queue_exit(q);
-	ctx->bdev = bdev;
	ctx->blkg = blkg;
-	ctx->body = input;
	return 0;

 fail_preloaded:
@@ -777,7 +799,6 @@ int blkg_conf_prep(struct blkcg *blkcg, const struct blkcg_policy *pol,
 fail_exit_queue:
	blk_queue_exit(q);
 fail:
-	blkdev_put_no_open(bdev);
	/*
	 * If queue was bypassing, we should retry. Do so after a
	 * short msleep(). It isn't strictly necessary but queue
@@ -793,18 +814,26 @@ int blkg_conf_prep(struct blkcg *blkcg, const struct blkcg_policy *pol,
 EXPORT_SYMBOL_GPL(blkg_conf_prep);

 /**
- * blkg_conf_finish - finish up per-blkg config update
- * @ctx: blkg_conf_ctx initialized by blkg_conf_prep()
+ * blkg_conf_exit - clean up per-blkg config update
+ * @ctx: blkg_conf_ctx initialized with blkg_conf_init()
  *
- * Finish up after per-blkg config update. This function must be paired
- * with blkg_conf_prep().
+ * Clean up after per-blkg config update. This function must be called on all
+ * blkg_conf_ctx's initialized with blkg_conf_init().
  */
-void blkg_conf_finish(struct blkg_conf_ctx *ctx)
+void blkg_conf_exit(struct blkg_conf_ctx *ctx)
 {
-	spin_unlock_irq(&bdev_get_queue(ctx->bdev)->queue_lock);
-	blkdev_put_no_open(ctx->bdev);
+	if (ctx->blkg) {
+		spin_unlock_irq(&bdev_get_queue(ctx->bdev)->queue_lock);
+		ctx->blkg = NULL;
+	}
+
+	if (ctx->bdev) {
+		blkdev_put_no_open(ctx->bdev);
+		ctx->body = NULL;
+		ctx->bdev = NULL;
+	}
 }
-EXPORT_SYMBOL_GPL(blkg_conf_finish);
+EXPORT_SYMBOL_GPL(blkg_conf_exit);

 static void blkg_iostat_set(struct blkg_iostat *dst, struct blkg_iostat *src)
 {
diff --git a/block/blk-cgroup.h b/block/blk-cgroup.h
index 1e94e404eaa8..fe09e8b4c2a8 100644
--- a/block/blk-cgroup.h
+++ b/block/blk-cgroup.h
@@ -208,15 +208,17 @@ void blkcg_print_blkgs(struct seq_file *sf, struct blkcg *blkcg,
 u64 __blkg_prfill_u64(struct seq_file *sf, struct blkg_policy_data *pd, u64 v);

 struct blkg_conf_ctx {
+	char *input;
+	char *body;
	struct block_device *bdev;
	struct blkcg_gq *blkg;
-	char *body;
 };

-struct block_device *blkcg_conf_open_bdev(char **inputp);
+void blkg_conf_init(struct blkg_conf_ctx *ctx, char *input);
+int blkg_conf_open_bdev(struct blkg_conf_ctx *ctx);
 int blkg_conf_prep(struct blkcg *blkcg, const struct blkcg_policy *pol,
-		   char *input, struct blkg_conf_ctx *ctx);
-void blkg_conf_finish(struct blkg_conf_ctx *ctx);
+		   struct blkg_conf_ctx *ctx);
+void blkg_conf_exit(struct blkg_conf_ctx *ctx);

 /**
  * bio_issue_as_root_blkg - see if this bio needs to be issued as root blkg
diff --git a/block/blk-iocost.c b/block/blk-iocost.c
index 6955605629e4..22a3639a7a05 100644
--- a/block/blk-iocost.c
+++ b/block/blk-iocost.c
@@ -3091,9 +3091,11 @@ static ssize_t ioc_weight_write(struct kernfs_open_file *of, char *buf,
		return nbytes;
	}

-	ret = blkg_conf_prep(blkcg, &blkcg_policy_iocost, buf, &ctx);
+	blkg_conf_init(&ctx, buf);
+
+	ret = blkg_conf_prep(blkcg, &blkcg_policy_iocost, &ctx);
	if (ret)
-		return ret;
+		goto err;

	iocg = blkg_to_iocg(ctx.blkg);

@@ -3112,12 +3114,14 @@ static ssize_t ioc_weight_write(struct kernfs_open_file *of, char *buf,
	weight_updated(iocg, &now);
	spin_unlock(&iocg->ioc->lock);

-	blkg_conf_finish(&ctx);
+	blkg_conf_exit(&ctx);
	return nbytes;

 einval:
-	blkg_conf_finish(&ctx);
-	return -EINVAL;
+	ret = -EINVAL;
+err:
+	blkg_conf_exit(&ctx);
+	return ret;
 }

 static u64 ioc_qos_prfill(struct seq_file *sf, struct blkg_policy_data *pd,
@@ -3172,19 +3176,22 @@ static const match_table_t qos_tokens = {
 static ssize_t ioc_qos_write(struct kernfs_open_file *of, char *input,
			     size_t nbytes, loff_t off)
 {
-	struct block_device *bdev;
+	struct blkg_conf_ctx ctx;
	struct gendisk *disk;
	struct ioc *ioc;
	u32 qos[NR_QOS_PARAMS];
	bool enable, user;
-	char *p;
+	char *body, *p;
	int ret;

-	bdev = blkcg_conf_open_bdev(&input);
-	if (IS_ERR(bdev))
-		return PTR_ERR(bdev);
+	blkg_conf_init(&ctx, input);

-	disk = bdev->bd_disk;
+	ret = blkg_conf_open_bdev(&ctx);
+	if (ret)
+		goto err;
+
+	body = ctx.body;
+	disk = ctx.bdev->bd_disk;
	ioc = q_to_ioc(disk->queue);
	if (!ioc) {
		ret = blk_iocost_init(disk);
@@ -3201,7 +3208,7 @@ static ssize_t ioc_qos_write(struct kernfs_open_file *of, char *input,
	enable = ioc->enabled;
	user = ioc->user_qos_params;

-	while ((p = strsep(&input, " \t\n"))) {
+	while ((p = strsep(&body, " \t\n"))) {
		substring_t args[MAX_OPT_ARGS];
		char buf[32];
		int tok;
@@ -3290,7 +3297,7 @@ static ssize_t ioc_qos_write(struct kernfs_open_file *of, char *input,
	blk_mq_unquiesce_queue(disk->queue);
	blk_mq_unfreeze_queue(disk->queue);

-	blkdev_put_no_open(bdev);
+	blkg_conf_exit(&ctx);
	return nbytes;
 einval:
	spin_unlock_irq(&ioc->lock);
@@ -3300,7 +3307,7 @@ static ssize_t ioc_qos_write(struct kernfs_open_file *of, char *input,

	ret = -EINVAL;
 err:
-	blkdev_put_no_open(bdev);
+	blkg_conf_exit(&ctx);
	return ret;
 }

@@ -3351,22 +3358,25 @@ static const match_table_t i_lcoef_tokens = {
 static ssize_t ioc_cost_model_write(struct kernfs_open_file *of, char *input,
				    size_t nbytes, loff_t off)
 {
-	struct block_device *bdev;
+	struct blkg_conf_ctx ctx;
	struct request_queue *q;
	struct ioc *ioc;
	u64 u[NR_I_LCOEFS];
	bool user;
-	char *p;
+	char *body, *p;
	int ret;

-	bdev = blkcg_conf_open_bdev(&input);
-	if (IS_ERR(bdev))
-		return PTR_ERR(bdev);
+	blkg_conf_init(&ctx, input);
+
+	ret = blkg_conf_open_bdev(&ctx);
+	if (ret)
+		goto err;

-	q = bdev_get_queue(bdev);
+	body = ctx.body;
+	q = bdev_get_queue(ctx.bdev);
	ioc = q_to_ioc(q);
	if (!ioc) {
-		ret = blk_iocost_init(bdev->bd_disk);
+		ret = blk_iocost_init(ctx.bdev->bd_disk);
		if (ret)
			goto err;
		ioc = q_to_ioc(q);
@@ -3379,7 +3389,7 @@ static ssize_t ioc_cost_model_write(struct kernfs_open_file *of, char *input,
	memcpy(u, ioc->params.i_lcoefs, sizeof(u));
	user = ioc->user_cost_model;

-	while ((p = strsep(&input, " \t\n"))) {
+	while ((p = strsep(&body, " \t\n"))) {
		substring_t args[MAX_OPT_ARGS];
		char buf[32];
		int tok;
@@ -3426,7 +3436,7 @@ static ssize_t ioc_cost_model_write(struct kernfs_open_file *of, char *input,
	blk_mq_unquiesce_queue(q);
	blk_mq_unfreeze_queue(q);

-	blkdev_put_no_open(bdev);
+	blkg_conf_exit(&ctx);
	return nbytes;

 einval:
@@ -3437,7 +3447,7 @@ static ssize_t ioc_cost_model_write(struct kernfs_open_file *of, char *input,

	ret = -EINVAL;
 err:
-	blkdev_put_no_open(bdev);
+	blkg_conf_exit(&ctx);
	return ret;
 }

diff --git a/block/blk-iolatency.c b/block/blk-iolatency.c
index ecdc10741836..3b3667f397a9 100644
--- a/block/blk-iolatency.c
+++ b/block/blk-iolatency.c
@@ -842,9 +842,11 @@ static ssize_t iolatency_set_limit(struct kernfs_open_file *of, char *buf,
	u64 oldval;
	int ret;

-	ret = blkg_conf_prep(blkcg, &blkcg_policy_iolatency, buf, &ctx);
+	blkg_conf_init(&ctx, buf);
+
+	ret = blkg_conf_prep(blkcg, &blkcg_policy_iolatency, &ctx);
	if (ret)
-		return ret;
+		goto out;

	iolat = blkg_to_lat(ctx.blkg);
	p = ctx.body;
@@ -880,7 +882,7 @@ static ssize_t iolatency_set_limit(struct kernfs_open_file *of, char *buf,
		iolatency_clear_scaling(blkg);
	ret = 0;
 out:
-	blkg_conf_finish(&ctx);
+	blkg_conf_exit(&ctx);
	return ret ?: nbytes;
 }

diff --git a/block/blk-throttle.c b/block/blk-throttle.c
index 6fb5a2f9e1ee..75841d1d9bf4 100644
--- a/block/blk-throttle.c
+++ b/block/blk-throttle.c
@@ -1369,9 +1369,11 @@ static ssize_t tg_set_conf(struct kernfs_open_file *of,
	int ret;
	u64 v;

-	ret = blkg_conf_prep(blkcg, &blkcg_policy_throtl, buf, &ctx);
+	blkg_conf_init(&ctx, buf);
+
+	ret = blkg_conf_prep(blkcg, &blkcg_policy_throtl, &ctx);
	if (ret)
-		return ret;
+		goto out_finish;

	ret = -EINVAL;
	if (sscanf(ctx.body, "%llu", &v) != 1)
@@ -1390,7 +1392,7 @@ static ssize_t tg_set_conf(struct kernfs_open_file *of,
	tg_conf_updated(tg, false);
	ret = 0;
 out_finish:
-	blkg_conf_finish(&ctx);
+	blkg_conf_exit(&ctx);
	return ret ?: nbytes;
 }

@@ -1562,9 +1564,11 @@ static ssize_t tg_set_limit(struct kernfs_open_file *of,
	int ret;
	int index = of_cft(of)->private;

-	ret = blkg_conf_prep(blkcg, &blkcg_policy_throtl, buf, &ctx);
+	blkg_conf_init(&ctx, buf);
+
+	ret = blkg_conf_prep(blkcg, &blkcg_policy_throtl, &ctx);
	if (ret)
-		return ret;
+		goto out_finish;

	tg = blkg_to_tg(ctx.blkg);
	tg_update_carryover(tg);
@@ -1663,7 +1667,7 @@ static ssize_t tg_set_limit(struct kernfs_open_file *of,
			  tg->td->limit_valid[LIMIT_LOW]);
	ret = 0;
 out_finish:
-	blkg_conf_finish(&ctx);
+	blkg_conf_exit(&ctx);
	return ret ?: nbytes;
 }

From patchwork Thu Jan 5 00:20:06 2023
X-Patchwork-Submitter: Tejun Heo
X-Patchwork-Id: 13089226
From: Tejun Heo
To: axboe@kernel.dk, josef@toxicpanda.com, hch@lst.de
Cc: linux-block@vger.kernel.org, linux-kernel@vger.kernel.org, Tejun Heo
Subject: [PATCH 3/4] blk-iolatency: s/blkcg_rq_qos/iolat_rq_qos/
Date: Wed, 4 Jan 2023 14:20:06 -1000
Message-Id: <20230105002007.157497-4-tj@kernel.org>
In-Reply-To: <20230105002007.157497-1-tj@kernel.org>
References: <20230105002007.157497-1-tj@kernel.org>

The name was too generic given that there are multiple blkcg rq-qos
policies.

Signed-off-by: Tejun Heo
Cc: Josef Bacik
---
 block/blk-iolatency.c | 2 +-
 block/blk-rq-qos.h    | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/block/blk-iolatency.c b/block/blk-iolatency.c
index 3b3667f397a9..3601345808d2 100644
--- a/block/blk-iolatency.c
+++ b/block/blk-iolatency.c
@@ -976,7 +976,7 @@ static void iolatency_pd_init(struct blkg_policy_data *pd)
 {
	struct iolatency_grp *iolat = pd_to_lat(pd);
	struct blkcg_gq *blkg = lat_to_blkg(iolat);
-	struct rq_qos *rqos = blkcg_rq_qos(blkg->q);
+	struct rq_qos *rqos = iolat_rq_qos(blkg->q);
	struct blk_iolatency *blkiolat = BLKIOLATENCY(rqos);
	u64 now = ktime_to_ns(ktime_get());
	int cpu;
diff --git a/block/blk-rq-qos.h b/block/blk-rq-qos.h
index 1ef1f7d4bc3c..27f004fae66b 100644
--- a/block/blk-rq-qos.h
+++ b/block/blk-rq-qos.h
@@ -74,7 +74,7 @@ static inline struct rq_qos *wbt_rq_qos(struct request_queue *q)
	return rq_qos_id(q, RQ_QOS_WBT);
 }

-static inline struct rq_qos *blkcg_rq_qos(struct request_queue *q)
+static inline struct rq_qos *iolat_rq_qos(struct request_queue *q)
 {
	return rq_qos_id(q, RQ_QOS_LATENCY);
 }

From patchwork Thu Jan 5 00:20:07 2023
X-Patchwork-Submitter: Tejun Heo
X-Patchwork-Id: 13089227
From: Tejun Heo
To: axboe@kernel.dk, josef@toxicpanda.com, hch@lst.de
Cc: linux-block@vger.kernel.org, linux-kernel@vger.kernel.org, Tejun Heo
Subject: [PATCH 4/4] blk-iolatency: Make initialization lazy
Date: Wed, 4 Jan 2023 14:20:07 -1000
Message-Id: <20230105002007.157497-5-tj@kernel.org>
In-Reply-To: <20230105002007.157497-1-tj@kernel.org>
References: <20230105002007.157497-1-tj@kernel.org>

Other rq_qos policies such as wbt and iocost are lazy-initialized when they
are configured for the first time for the device, but iolatency is
initialized unconditionally from blkcg_init_disk() during gendisk init. Lazy
init is beneficial because rq_qos policies add runtime overhead once
initialized, as every IO has to walk all registered rq_qos callbacks.

This patch switches iolatency to lazy initialization too, so that it only
registers its rq_qos policy when it is first configured.

Note that there is a known race condition between blkcg config file writes
and del_gendisk(), and this patch makes iolatency susceptible to it by
exposing the init path to race against the deletion path. However, that
problem already exists in iocost and is being worked on.
Signed-off-by: Tejun Heo
Cc: Josef Bacik
Cc: Christoph Hellwig
---
 block/blk-cgroup.c    |  8 --------
 block/blk-iolatency.c | 29 ++++++++++++++++++++++++++++-
 block/blk.h           |  6 ------
 3 files changed, 28 insertions(+), 15 deletions(-)

diff --git a/block/blk-cgroup.c b/block/blk-cgroup.c
index d8e0625cd12d..844579aff363 100644
--- a/block/blk-cgroup.c
+++ b/block/blk-cgroup.c
@@ -33,7 +33,6 @@
 #include "blk-cgroup.h"
 #include "blk-ioprio.h"
 #include "blk-throttle.h"
-#include "blk-rq-qos.h"

 /*
  * blkcg_pol_mutex protects blkcg_policy[] and policy [de]activation.
@@ -1322,14 +1321,8 @@ int blkcg_init_disk(struct gendisk *disk)
	if (ret)
		goto err_ioprio_exit;

-	ret = blk_iolatency_init(disk);
-	if (ret)
-		goto err_throtl_exit;
-
	return 0;

-err_throtl_exit:
-	blk_throtl_exit(disk);
 err_ioprio_exit:
	blk_ioprio_exit(disk);
 err_destroy_all:
@@ -1345,7 +1338,6 @@ int blkcg_init_disk(struct gendisk *disk)
 void blkcg_exit_disk(struct gendisk *disk)
 {
	blkg_destroy_all(disk);
-	rq_qos_exit(disk->queue);
	blk_throtl_exit(disk);
 }

diff --git a/block/blk-iolatency.c b/block/blk-iolatency.c
index 3601345808d2..3484393dbc4a 100644
--- a/block/blk-iolatency.c
+++ b/block/blk-iolatency.c
@@ -755,7 +755,7 @@ static void blkiolatency_enable_work_fn(struct work_struct *work)
	}
 }

-int blk_iolatency_init(struct gendisk *disk)
+static int blk_iolatency_init(struct gendisk *disk)
 {
	struct request_queue *q = disk->queue;
	struct blk_iolatency *blkiolat;
@@ -830,6 +830,29 @@ static void iolatency_clear_scaling(struct blkcg_gq *blkg)
	}
 }

+static int blk_iolatency_try_init(struct blkg_conf_ctx *ctx)
+{
+	static DEFINE_MUTEX(init_mutex);
+	int ret;
+
+	ret = blkg_conf_open_bdev(ctx);
+	if (ret)
+		return ret;
+
+	/*
+	 * blk_iolatency_init() may fail after rq_qos_add() succeeds which can
+	 * confuse iolat_rq_qos() test. Make the test and init atomic.
+	 */
+	mutex_lock(&init_mutex);
+
+	if (!iolat_rq_qos(ctx->bdev->bd_queue))
+		ret = blk_iolatency_init(ctx->bdev->bd_disk);
+
+	mutex_unlock(&init_mutex);
+
+	return ret;
+}
+
 static ssize_t iolatency_set_limit(struct kernfs_open_file *of, char *buf,
				   size_t nbytes, loff_t off)
 {
@@ -844,6 +867,10 @@ static ssize_t iolatency_set_limit(struct kernfs_open_file *of, char *buf,

	blkg_conf_init(&ctx, buf);

+	ret = blk_iolatency_try_init(&ctx);
+	if (ret)
+		goto out;
+
	ret = blkg_conf_prep(blkcg, &blkcg_policy_iolatency, &ctx);
	if (ret)
		goto out;
diff --git a/block/blk.h b/block/blk.h
index 4c3b3325219a..78f1706cddca 100644
--- a/block/blk.h
+++ b/block/blk.h
@@ -392,12 +392,6 @@ static inline struct bio *blk_queue_bounce(struct bio *bio,
	return bio;
 }

-#ifdef CONFIG_BLK_CGROUP_IOLATENCY
-int blk_iolatency_init(struct gendisk *disk);
-#else
-static inline int blk_iolatency_init(struct gendisk *disk) { return 0; };
-#endif
-
 #ifdef CONFIG_BLK_DEV_ZONED
 void disk_free_zone_bitmaps(struct gendisk *disk);
 void disk_clear_zone_settings(struct gendisk *disk);
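To make the locking rationale explicit, the lazy-init guard added above boils
down to a "probe, then initialize under a private mutex" pattern. A sketch of
that pattern in isolation; the example_* names are placeholders, while
blkg_conf_open_bdev(), DEFINE_MUTEX() and the mutex calls are the real
interfaces used by the patch:

static int example_lazy_init(struct blkg_conf_ctx *ctx)
{
	static DEFINE_MUTEX(init_mutex);
	int ret;

	/* need the bdev resolved before the queue can be probed */
	ret = blkg_conf_open_bdev(ctx);
	if (ret)
		return ret;

	/*
	 * Probe and initialize under one mutex so that an init which fails
	 * partway through can't be mistaken for a fully registered policy
	 * by a concurrent probe.
	 */
	mutex_lock(&init_mutex);
	if (!example_rq_qos(ctx->bdev->bd_queue))	/* placeholder probe */
		ret = example_policy_init(ctx->bdev->bd_disk);	/* placeholder init */
	mutex_unlock(&init_mutex);

	return ret;
}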