From patchwork Fri Sep 16 07:19:37 2022
X-Patchwork-Submitter: Yu Kuai
X-Patchwork-Id: 12978194
From: Yu Kuai
To: tj@kernel.org, axboe@kernel.dk, paolo.valente@linaro.org, jack@suse.cz
Cc: cgroups@vger.kernel.org, linux-block@vger.kernel.org,
    linux-kernel@vger.kernel.org, yukuai3@huawei.com,
    yukuai1@huaweicloud.com, yi.zhang@huawei.com
Subject: [patch v11 1/6] block, bfq: support to track if bfqq has pending requests
Date: Fri, 16 Sep 2022 15:19:37 +0800
Message-Id: <20220916071942.214222-2-yukuai1@huaweicloud.com>
In-Reply-To: <20220916071942.214222-1-yukuai1@huaweicloud.com>
References: <20220916071942.214222-1-yukuai1@huaweicloud.com>
X-Mailing-List: linux-block@vger.kernel.org

From: Yu Kuai

If an entity belongs to a bfqq, then entity->in_groups_with_pending_reqs
is currently unused. Use it to track whether the bfqq has pending
requests, by updating it from the callers of weights_tree insertion and
removal.
Signed-off-by: Yu Kuai
Reviewed-by: Jan Kara
---
 block/bfq-iosched.c |  1 +
 block/bfq-iosched.h |  2 ++
 block/bfq-wf2q.c    | 24 ++++++++++++++++++++++--
 3 files changed, 25 insertions(+), 2 deletions(-)

diff --git a/block/bfq-iosched.c b/block/bfq-iosched.c
index f769c90744fd..0dcae2f52896 100644
--- a/block/bfq-iosched.c
+++ b/block/bfq-iosched.c
@@ -6261,6 +6261,7 @@ static void bfq_completed_request(struct bfq_queue *bfqq, struct bfq_data *bfqd)
                 */
                bfqq->budget_timeout = jiffies;

+               bfq_del_bfqq_in_groups_with_pending_reqs(bfqq);
                bfq_weights_tree_remove(bfqd, bfqq);
        }

diff --git a/block/bfq-iosched.h b/block/bfq-iosched.h
index 64ee618064ba..44e08b194749 100644
--- a/block/bfq-iosched.h
+++ b/block/bfq-iosched.h
@@ -1082,6 +1082,8 @@ void bfq_requeue_bfqq(struct bfq_data *bfqd, struct bfq_queue *bfqq,
                      bool expiration);
 void bfq_del_bfqq_busy(struct bfq_queue *bfqq, bool expiration);
 void bfq_add_bfqq_busy(struct bfq_queue *bfqq);
+void bfq_add_bfqq_in_groups_with_pending_reqs(struct bfq_queue *bfqq);
+void bfq_del_bfqq_in_groups_with_pending_reqs(struct bfq_queue *bfqq);

 /* --------------- end of interface of B-WF2Q+ ---------------- */

diff --git a/block/bfq-wf2q.c b/block/bfq-wf2q.c
index 8fc3da4c23bb..bd8f4ed84848 100644
--- a/block/bfq-wf2q.c
+++ b/block/bfq-wf2q.c
@@ -1646,6 +1646,22 @@ void bfq_requeue_bfqq(struct bfq_data *bfqd, struct bfq_queue *bfqq,
                         bfqq == bfqd->in_service_queue, expiration);
 }

+void bfq_add_bfqq_in_groups_with_pending_reqs(struct bfq_queue *bfqq)
+{
+       struct bfq_entity *entity = &bfqq->entity;
+
+       if (!entity->in_groups_with_pending_reqs)
+               entity->in_groups_with_pending_reqs = true;
+}
+
+void bfq_del_bfqq_in_groups_with_pending_reqs(struct bfq_queue *bfqq)
+{
+       struct bfq_entity *entity = &bfqq->entity;
+
+       if (entity->in_groups_with_pending_reqs)
+               entity->in_groups_with_pending_reqs = false;
+}
+
 /*
  * Called when the bfqq no longer has requests pending, remove it from
  * the service tree. As a special case, it can be invoked during an
@@ -1668,8 +1684,10 @@ void bfq_del_bfqq_busy(struct bfq_queue *bfqq, bool expiration)

        bfq_deactivate_bfqq(bfqd, bfqq, true, expiration);

-       if (!bfqq->dispatched)
+       if (!bfqq->dispatched) {
+               bfq_del_bfqq_in_groups_with_pending_reqs(bfqq);
                bfq_weights_tree_remove(bfqd, bfqq);
+       }
 }

 /*
@@ -1686,10 +1704,12 @@ void bfq_add_bfqq_busy(struct bfq_queue *bfqq)
        bfq_mark_bfqq_busy(bfqq);
        bfqd->busy_queues[bfqq->ioprio_class - 1]++;

-       if (!bfqq->dispatched)
+       if (!bfqq->dispatched) {
+               bfq_add_bfqq_in_groups_with_pending_reqs(bfqq);
                if (bfqq->wr_coeff == 1)
                        bfq_weights_tree_add(bfqd, bfqq,
                                             &bfqd->queue_weights_tree);
+       }

        if (bfqq->wr_coeff > 1)
                bfqd->wr_busy_queues++;
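The two new helpers only toggle a per-queue flag; it is their callers
(bfq_add_bfqq_busy(), bfq_del_bfqq_busy() and bfq_completed_request())
that decide when a queue counts as having pending requests. The
stand-alone sketch below is one way to picture the intended invariant.
The toy_* type, helper names and the trace are simplified stand-ins
invented for illustration, not the kernel structures, and the three
hooks only approximate where BFQ calls the new helpers.

#include <assert.h>
#include <stdbool.h>
#include <stdio.h>

struct toy_bfqq {
        int queued;                        /* requests queued in the scheduler */
        int dispatched;                    /* dispatched but not yet completed */
        bool in_groups_with_pending_reqs;  /* the flag this patch starts using */
};

/* ~ bfq_add_bfqq_busy(): queue becomes busy with nothing in flight */
static void toy_add_busy(struct toy_bfqq *q)
{
        if (!q->dispatched)
                q->in_groups_with_pending_reqs = true;
}

/* ~ bfq_del_bfqq_busy(): queue leaves the busy list */
static void toy_del_busy(struct toy_bfqq *q)
{
        if (!q->dispatched)
                q->in_groups_with_pending_reqs = false;
}

/* ~ bfq_completed_request(): a dispatched request completes */
static void toy_complete(struct toy_bfqq *q)
{
        q->dispatched--;
        if (!q->dispatched && !q->queued)
                q->in_groups_with_pending_reqs = false;
}

int main(void)
{
        struct toy_bfqq q = { 0, 0, false };

        q.queued = 1;
        toy_add_busy(&q);                  /* first request arrives */
        assert(q.in_groups_with_pending_reqs);

        q.queued = 0;
        q.dispatched = 1;
        toy_del_busy(&q);                  /* dispatched, still pending */
        assert(q.in_groups_with_pending_reqs);

        toy_complete(&q);                  /* last request completes */
        assert(!q.in_groups_with_pending_reqs);
        printf("flag set exactly while the queue has pending requests\n");
        return 0;
}

The point of the trace is that the flag stays set across the window in
which requests are dispatched but not yet completed, which is what the
later patches in the series build their per-group accounting on.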
From patchwork Fri Sep 16 07:19:38 2022
X-Patchwork-Submitter: Yu Kuai
X-Patchwork-Id: 12978193
From: Yu Kuai
To: tj@kernel.org, axboe@kernel.dk, paolo.valente@linaro.org, jack@suse.cz
Cc: cgroups@vger.kernel.org, linux-block@vger.kernel.org,
    linux-kernel@vger.kernel.org, yukuai3@huawei.com,
    yukuai1@huaweicloud.com, yi.zhang@huawei.com
Subject: [patch v11 2/6] block, bfq: record how many queues have pending requests
Date: Fri, 16 Sep 2022 15:19:38 +0800
Message-Id: <20220916071942.214222-3-yukuai1@huaweicloud.com>
In-Reply-To: <20220916071942.214222-1-yukuai1@huaweicloud.com>
References: <20220916071942.214222-1-yukuai1@huaweicloud.com>
X-Mailing-List: linux-block@vger.kernel.org

From: Yu Kuai

Prepare to refactor the counting of 'num_groups_with_pending_reqs'.

Add a counter to bfq_group, and update it both while tracking whether a
bfqq has pending requests and when bfq_bfqq_move() is called.

Signed-off-by: Yu Kuai
Reviewed-by: Jan Kara
---
 block/bfq-cgroup.c  | 10 ++++++++++
 block/bfq-iosched.h |  1 +
 block/bfq-wf2q.c    | 12 ++++++++++--
 3 files changed, 21 insertions(+), 2 deletions(-)

diff --git a/block/bfq-cgroup.c b/block/bfq-cgroup.c
index 144bca006463..4c37398e0b99 100644
--- a/block/bfq-cgroup.c
+++ b/block/bfq-cgroup.c
@@ -552,6 +552,7 @@ static void bfq_pd_init(struct blkg_policy_data *pd)
                                   */
        bfqg->bfqd = bfqd;
        bfqg->active_entities = 0;
+       bfqg->num_queues_with_pending_reqs = 0;
        bfqg->online = true;
        bfqg->rq_pos_tree = RB_ROOT;
 }
@@ -641,6 +642,7 @@ void bfq_bfqq_move(struct bfq_data *bfqd, struct bfq_queue *bfqq,
 {
        struct bfq_entity *entity = &bfqq->entity;
        struct bfq_group *old_parent = bfqq_group(bfqq);
+       bool has_pending_reqs = false;

        /*
         * No point to move bfqq to the same group, which can happen when
@@ -661,6 +663,11 @@ void bfq_bfqq_move(struct bfq_data *bfqd, struct bfq_queue *bfqq,
         */
        bfqq->ref++;

+       if (entity->in_groups_with_pending_reqs) {
+               has_pending_reqs = true;
+               bfq_del_bfqq_in_groups_with_pending_reqs(bfqq);
+       }
+
        /* If bfqq is empty, then bfq_bfqq_expire also invokes
         * bfq_del_bfqq_busy, thereby removing bfqq and its entity
         * from data structures related to current group. Otherwise we
@@ -688,6 +695,9 @@ void bfq_bfqq_move(struct bfq_data *bfqd, struct bfq_queue *bfqq,
        /* pin down bfqg and its associated blkg  */
        bfqg_and_blkg_get(bfqg);

+       if (has_pending_reqs)
+               bfq_add_bfqq_in_groups_with_pending_reqs(bfqq);
+
        if (bfq_bfqq_busy(bfqq)) {
                if (unlikely(!bfqd->nonrot_with_queueing))
                        bfq_pos_tree_add_move(bfqd, bfqq);
diff --git a/block/bfq-iosched.h b/block/bfq-iosched.h
index 44e08b194749..338ff5418ea8 100644
--- a/block/bfq-iosched.h
+++ b/block/bfq-iosched.h
@@ -943,6 +943,7 @@ struct bfq_group {
        struct bfq_entity *my_entity;

        int active_entities;
+       int num_queues_with_pending_reqs;

        struct rb_root rq_pos_tree;

diff --git a/block/bfq-wf2q.c b/block/bfq-wf2q.c
index bd8f4ed84848..5549ccf09cd2 100644
--- a/block/bfq-wf2q.c
+++ b/block/bfq-wf2q.c
@@ -1650,16 +1650,24 @@ void bfq_add_bfqq_in_groups_with_pending_reqs(struct bfq_queue *bfqq)
 {
        struct bfq_entity *entity = &bfqq->entity;

-       if (!entity->in_groups_with_pending_reqs)
+       if (!entity->in_groups_with_pending_reqs) {
                entity->in_groups_with_pending_reqs = true;
+#ifdef CONFIG_BFQ_GROUP_IOSCHED
+               bfqq_group(bfqq)->num_queues_with_pending_reqs++;
+#endif
+       }
 }

 void bfq_del_bfqq_in_groups_with_pending_reqs(struct bfq_queue *bfqq)
 {
        struct bfq_entity *entity = &bfqq->entity;

-       if (entity->in_groups_with_pending_reqs)
+       if (entity->in_groups_with_pending_reqs) {
                entity->in_groups_with_pending_reqs = false;
+#ifdef CONFIG_BFQ_GROUP_IOSCHED
+               bfqq_group(bfqq)->num_queues_with_pending_reqs--;
+#endif
+       }
 }

 /*
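bfq_bfqq_move() above saves the flag, drops the queue from the old
group's counter, re-parents the queue, and then re-adds it to the new
group's counter. A compile-and-run sketch of that pattern follows; the
toy_* types and helpers are invented stand-ins for illustration, and
only the del/move/add ordering mirrors the patch.

#include <assert.h>
#include <stdio.h>

struct toy_group { int num_queues_with_pending_reqs; };
struct toy_queue { struct toy_group *parent; int has_pending; };

/* ~ bfq_del_bfqq_in_groups_with_pending_reqs() */
static void toy_del_pending(struct toy_queue *q)
{
        if (q->has_pending) {
                q->has_pending = 0;
                q->parent->num_queues_with_pending_reqs--;
        }
}

/* ~ bfq_add_bfqq_in_groups_with_pending_reqs() */
static void toy_add_pending(struct toy_queue *q)
{
        if (!q->has_pending) {
                q->has_pending = 1;
                q->parent->num_queues_with_pending_reqs++;
        }
}

/* ~ the bfq_bfqq_move() pattern after this patch: uncount in the old
 * group, re-parent, then count in the new group. */
static void toy_move_queue(struct toy_queue *q, struct toy_group *new_parent)
{
        int had_pending = q->has_pending;

        if (had_pending)
                toy_del_pending(q);

        q->parent = new_parent;         /* stands in for the real re-parenting */

        if (had_pending)
                toy_add_pending(q);
}

int main(void)
{
        struct toy_group g1 = { 0 }, g2 = { 0 };
        struct toy_queue q = { &g1, 0 };

        toy_add_pending(&q);            /* queue gets its first pending request */
        toy_move_queue(&q, &g2);        /* ~ cgroup change while requests are in flight */

        assert(g1.num_queues_with_pending_reqs == 0);
        assert(g2.num_queues_with_pending_reqs == 1);
        printf("per-group counters stay consistent across the move\n");
        return 0;
}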
From patchwork Fri Sep 16 07:19:39 2022
X-Patchwork-Submitter: Yu Kuai
X-Patchwork-Id: 12978192
From: Yu Kuai
To: tj@kernel.org, axboe@kernel.dk, paolo.valente@linaro.org, jack@suse.cz
Cc: cgroups@vger.kernel.org, linux-block@vger.kernel.org,
    linux-kernel@vger.kernel.org, yukuai3@huawei.com,
    yukuai1@huaweicloud.com, yi.zhang@huawei.com
Subject: [patch v11 3/6] block, bfq: refactor the counting of 'num_groups_with_pending_reqs'
Date: Fri, 16 Sep 2022 15:19:39 +0800
Message-Id: <20220916071942.214222-4-yukuai1@huaweicloud.com>
In-Reply-To: <20220916071942.214222-1-yukuai1@huaweicloud.com>
References: <20220916071942.214222-1-yukuai1@huaweicloud.com>
X-Mailing-List: linux-block@vger.kernel.org

From: Yu Kuai

Currently, bfq can't handle sync I/O concurrently as long as it is not
issued from the root group. This is because
'bfqd->num_groups_with_pending_reqs > 0' is always true in
bfq_asymmetric_scenario().

How a bfqg is counted into 'num_groups_with_pending_reqs':

Before this patch:
 1) The root group is never counted.
 2) A bfqg is counted if it or any of its child bfqgs has pending
    requests.
 3) A bfqg stops being counted only when it and all of its child bfqgs
    have completed all their requests.

After this patch:
 1) The root group is counted.
 2) A bfqg is counted if it has pending requests.
 3) A bfqg stops being counted when it has completed all its requests.

With this change, the case where only one group is activated can be
detected, and the next patch will allow concurrent sync I/O in that
case.

Signed-off-by: Yu Kuai
Reviewed-by: Jan Kara
---
 block/bfq-iosched.c | 42 ------------------------------------------
 block/bfq-iosched.h | 18 +++++++++---------
 block/bfq-wf2q.c    | 23 ++++++++---------------
 3 files changed, 17 insertions(+), 66 deletions(-)

diff --git a/block/bfq-iosched.c b/block/bfq-iosched.c
index 0dcae2f52896..970b302a7a3e 100644
--- a/block/bfq-iosched.c
+++ b/block/bfq-iosched.c
@@ -970,48 +970,6 @@ void __bfq_weights_tree_remove(struct bfq_data *bfqd,
 void bfq_weights_tree_remove(struct bfq_data *bfqd,
                             struct bfq_queue *bfqq)
 {
-       struct bfq_entity *entity = bfqq->entity.parent;
-
-       for_each_entity(entity) {
-               struct bfq_sched_data *sd = entity->my_sched_data;
-
-               if (sd->next_in_service || sd->in_service_entity) {
-                       /*
-                        * entity is still active, because either
-                        * next_in_service or in_service_entity is not
-                        * NULL (see the comments on the definition of
-                        * next_in_service for details on why
-                        * in_service_entity must be checked too).
-                        *
-                        * As a consequence, its parent entities are
-                        * active as well, and thus this loop must
-                        * stop here.
-                        */
-                       break;
-               }
-
-               /*
-                * The decrement of num_groups_with_pending_reqs is
-                * not performed immediately upon the deactivation of
-                * entity, but it is delayed to when it also happens
-                * that the first leaf descendant bfqq of entity gets
-                * all its pending requests completed. The following
-                * instructions perform this delayed decrement, if
-                * needed. See the comments on
-                * num_groups_with_pending_reqs for details.
-                */
-               if (entity->in_groups_with_pending_reqs) {
-                       entity->in_groups_with_pending_reqs = false;
-                       bfqd->num_groups_with_pending_reqs--;
-               }
-       }
-
-       /*
-        * Next function is invoked last, because it causes bfqq to be
-        * freed if the following holds: bfqq is not in service and
-        * has no dispatched request. DO NOT use bfqq after the next
-        * function invocation.
-        */
        __bfq_weights_tree_remove(bfqd, bfqq,
                                  &bfqd->queue_weights_tree);
 }
diff --git a/block/bfq-iosched.h b/block/bfq-iosched.h
index 338ff5418ea8..257acb54c6dc 100644
--- a/block/bfq-iosched.h
+++ b/block/bfq-iosched.h
@@ -496,27 +496,27 @@ struct bfq_data {
        struct rb_root_cached queue_weights_tree;

        /*
-        * Number of groups with at least one descendant process that
+        * Number of groups with at least one process that
         * has at least one request waiting for completion. Note that
         * this accounts for also requests already dispatched, but not
         * yet completed. Therefore this number of groups may differ
         * (be larger) than the number of active groups, as a group is
         * considered active only if its corresponding entity has
-        * descendant queues with at least one request queued. This
+        * queues with at least one request queued. This
         * number is used to decide whether a scenario is symmetric.
         * For a detailed explanation see comments on the computation
         * of the variable asymmetric_scenario in the function
         * bfq_better_to_idle().
         *
         * However, it is hard to compute this number exactly, for
-        * groups with multiple descendant processes. Consider a group
-        * that is inactive, i.e., that has no descendant process with
+        * groups with multiple processes. Consider a group
+        * that is inactive, i.e., that has no process with
         * pending I/O inside BFQ queues. Then suppose that
         * num_groups_with_pending_reqs is still accounting for this
-        * group, because the group has descendant processes with some
+        * group, because the group has processes with some
         * I/O request still in flight. num_groups_with_pending_reqs
         * should be decremented when the in-flight request of the
-        * last descendant process is finally completed (assuming that
+        * last process is finally completed (assuming that
         * nothing else has changed for the group in the meantime, in
         * terms of composition of the group and active/inactive state of child
         * groups and processes). To accomplish this, an additional
@@ -525,7 +525,7 @@ struct bfq_data {
         * we resort to the following tradeoff between simplicity and
         * accuracy: for an inactive group that is still counted in
         * num_groups_with_pending_reqs, we decrement
-        * num_groups_with_pending_reqs when the first descendant
+        * num_groups_with_pending_reqs when the first
         * process of the group remains with no request waiting for
         * completion.
         *
@@ -533,12 +533,12 @@ struct bfq_data {
         * carefulness: to avoid multiple decrements, we flag a group,
         * more precisely an entity representing a group, as still
         * counted in num_groups_with_pending_reqs when it becomes
-        * inactive. Then, when the first descendant queue of the
+        * inactive. Then, when the first queue of the
         * entity remains with no request waiting for completion,
         * num_groups_with_pending_reqs is decremented, and this flag
         * is reset. After this flag is reset for the entity,
         * num_groups_with_pending_reqs won't be decremented any
-        * longer in case a new descendant queue of the entity remains
+        * longer in case a new queue of the entity remains
         * with no request waiting for completion.
         */
        unsigned int num_groups_with_pending_reqs;

diff --git a/block/bfq-wf2q.c b/block/bfq-wf2q.c
index 5549ccf09cd2..5e8224c96921 100644
--- a/block/bfq-wf2q.c
+++ b/block/bfq-wf2q.c
@@ -984,19 +984,6 @@ static void __bfq_activate_entity(struct bfq_entity *entity,
                entity->on_st_or_in_serv = true;
        }

-#ifdef CONFIG_BFQ_GROUP_IOSCHED
-       if (!bfq_entity_to_bfqq(entity)) { /* bfq_group */
-               struct bfq_group *bfqg =
-                       container_of(entity, struct bfq_group, entity);
-               struct bfq_data *bfqd = bfqg->bfqd;
-
-               if (!entity->in_groups_with_pending_reqs) {
-                       entity->in_groups_with_pending_reqs = true;
-                       bfqd->num_groups_with_pending_reqs++;
-               }
-       }
-#endif
-
        bfq_update_fin_time_enqueue(entity, st, backshifted);
 }

@@ -1653,7 +1640,8 @@ void bfq_add_bfqq_in_groups_with_pending_reqs(struct bfq_queue *bfqq)
        if (!entity->in_groups_with_pending_reqs) {
                entity->in_groups_with_pending_reqs = true;
 #ifdef CONFIG_BFQ_GROUP_IOSCHED
-               bfqq_group(bfqq)->num_queues_with_pending_reqs++;
+               if (!(bfqq_group(bfqq)->num_queues_with_pending_reqs++))
+                       bfqq->bfqd->num_groups_with_pending_reqs++;
 #endif
        }
 }
@@ -1665,7 +1653,8 @@ void bfq_del_bfqq_in_groups_with_pending_reqs(struct bfq_queue *bfqq)
        if (entity->in_groups_with_pending_reqs) {
                entity->in_groups_with_pending_reqs = false;
 #ifdef CONFIG_BFQ_GROUP_IOSCHED
-               bfqq_group(bfqq)->num_queues_with_pending_reqs--;
+               if (!(--bfqq_group(bfqq)->num_queues_with_pending_reqs))
+                       bfqq->bfqd->num_groups_with_pending_reqs--;
 #endif
        }
 }
@@ -1694,6 +1683,10 @@ void bfq_del_bfqq_busy(struct bfq_queue *bfqq, bool expiration)

        if (!bfqq->dispatched) {
                bfq_del_bfqq_in_groups_with_pending_reqs(bfqq);
+               /*
+                * Next function is invoked last, because it causes bfqq to be
+                * freed. DO NOT use bfqq after the next function invocation.
+                */
                bfq_weights_tree_remove(bfqd, bfqq);
        }
 }
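The effect of the refactored rule is easy to check with a small
stand-alone model. The toy_* structures below are simplified stand-ins
rather than bfq_data/bfq_group, but the first-queue-in/last-queue-out
logic is the same as in bfq_add_bfqq_in_groups_with_pending_reqs() and
bfq_del_bfqq_in_groups_with_pending_reqs() above.

#include <stdio.h>

struct toy_bfqd  { int num_groups_with_pending_reqs; };
struct toy_group { struct toy_bfqd *bfqd; int num_queues_with_pending_reqs; };

/* First queue in the group that gets pending requests: count the group. */
static void queue_gets_pending(struct toy_group *g)
{
        if (!(g->num_queues_with_pending_reqs++))
                g->bfqd->num_groups_with_pending_reqs++;
}

/* Last queue in the group that loses its pending requests: uncount it. */
static void queue_loses_pending(struct toy_group *g)
{
        if (!(--g->num_queues_with_pending_reqs))
                g->bfqd->num_groups_with_pending_reqs--;
}

int main(void)
{
        struct toy_bfqd d = { 0 };
        struct toy_group root = { &d, 0 }, child = { &d, 0 };

        queue_gets_pending(&root);      /* sync I/O issued from the root group  */
        queue_gets_pending(&child);     /* sync I/O issued from one child group */
        printf("groups with pending reqs: %d\n", d.num_groups_with_pending_reqs); /* 2 */

        queue_loses_pending(&child);    /* child group completes all its requests */
        printf("groups with pending reqs: %d\n", d.num_groups_with_pending_reqs); /* 1 */
        return 0;
}

With I/O pending in only one group (root included), the device-wide
counter stays at 1, which is exactly the situation the next patch
detects with its '> 1' check.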
From patchwork Fri Sep 16 07:19:40 2022
X-Patchwork-Submitter: Yu Kuai
X-Patchwork-Id: 12978198
From: Yu Kuai
To: tj@kernel.org, axboe@kernel.dk, paolo.valente@linaro.org, jack@suse.cz
Cc: cgroups@vger.kernel.org, linux-block@vger.kernel.org,
    linux-kernel@vger.kernel.org, yukuai3@huawei.com,
    yukuai1@huaweicloud.com, yi.zhang@huawei.com
Subject: [patch v11 4/6] block, bfq: do not idle if only one group is activated
Date: Fri, 16 Sep 2022 15:19:40 +0800
Message-Id: <20220916071942.214222-5-yukuai1@huaweicloud.com>
In-Reply-To: <20220916071942.214222-1-yukuai1@huaweicloud.com>
References: <20220916071942.214222-1-yukuai1@huaweicloud.com>
X-Mailing-List: linux-block@vger.kernel.org

From: Yu Kuai

Now that the root group is counted in 'num_groups_with_pending_reqs',
'num_groups_with_pending_reqs > 0' is always true in
bfq_asymmetric_scenario(). Thus change the condition to '> 1'.

As a side effect, this change enables concurrent sync I/O when only one
group is activated.

Signed-off-by: Yu Kuai
Reviewed-by: Jan Kara
---
 block/bfq-iosched.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/block/bfq-iosched.c b/block/bfq-iosched.c
index 970b302a7a3e..6d95b0e488a8 100644
--- a/block/bfq-iosched.c
+++ b/block/bfq-iosched.c
@@ -820,7 +820,7 @@ bfq_pos_tree_add_move(struct bfq_data *bfqd, struct bfq_queue *bfqq)
  * much easier to maintain the needed state:
  * 1) all active queues have the same weight,
  * 2) all active queues belong to the same I/O-priority class,
- * 3) there are no active groups.
+ * 3) there is at most one active group.
  * In particular, the last condition is always true if hierarchical
  * support or the cgroups interface are not enabled, thus no state
  * needs to be maintained in this case.
@@ -852,7 +852,7 @@ static bool bfq_asymmetric_scenario(struct bfq_data *bfqd,

        return varied_queue_weights || multiple_classes_busy
 #ifdef CONFIG_BFQ_GROUP_IOSCHED
-              || bfqd->num_groups_with_pending_reqs > 0
+              || bfqd->num_groups_with_pending_reqs > 1
 #endif
                ;
 }
From patchwork Fri Sep 16 07:19:41 2022
X-Patchwork-Submitter: Yu Kuai
X-Patchwork-Id: 12978197
From: Yu Kuai
To: tj@kernel.org, axboe@kernel.dk, paolo.valente@linaro.org, jack@suse.cz
Cc: cgroups@vger.kernel.org, linux-block@vger.kernel.org,
    linux-kernel@vger.kernel.org, yukuai3@huawei.com,
    yukuai1@huaweicloud.com, yi.zhang@huawei.com
Subject: [patch v11 5/6] block, bfq: cleanup bfq_weights_tree add/remove apis
Date: Fri, 16 Sep 2022 15:19:41 +0800
Message-Id: <20220916071942.214222-6-yukuai1@huaweicloud.com>
In-Reply-To: <20220916071942.214222-1-yukuai1@huaweicloud.com>
References: <20220916071942.214222-1-yukuai1@huaweicloud.com>
X-Mailing-List: linux-block@vger.kernel.org

From: Yu Kuai

Both 'bfq_data' and 'rb_root_cached' can be accessed through
'bfq_queue', so pass only 'bfq_queue' as a parameter.

Signed-off-by: Yu Kuai
Reviewed-by: Jan Kara
Acked-by: Paolo Valente
---
 block/bfq-iosched.c | 19 +++++++++----------
 block/bfq-iosched.h | 10 +++-------
 block/bfq-wf2q.c    | 18 ++++++------------
 3 files changed, 18 insertions(+), 29 deletions(-)

diff --git a/block/bfq-iosched.c b/block/bfq-iosched.c
index 6d95b0e488a8..4ad4fa0dad4a 100644
--- a/block/bfq-iosched.c
+++ b/block/bfq-iosched.c
@@ -870,9 +870,9 @@ static bool bfq_asymmetric_scenario(struct bfq_data *bfqd,
  * In most scenarios, the rate at which nodes are created/destroyed
  * should be low too.
  */
-void bfq_weights_tree_add(struct bfq_data *bfqd, struct bfq_queue *bfqq,
-                         struct rb_root_cached *root)
+void bfq_weights_tree_add(struct bfq_queue *bfqq)
 {
+       struct rb_root_cached *root = &bfqq->bfqd->queue_weights_tree;
        struct bfq_entity *entity = &bfqq->entity;
        struct rb_node **new = &(root->rb_root.rb_node), *parent = NULL;
        bool leftmost = true;
@@ -944,13 +944,14 @@ void bfq_weights_tree_add(struct bfq_data *bfqd, struct bfq_queue *bfqq,
  * See the comments to the function bfq_weights_tree_add() for considerations
  * about overhead.
  */
-void __bfq_weights_tree_remove(struct bfq_data *bfqd,
-                              struct bfq_queue *bfqq,
-                              struct rb_root_cached *root)
+void __bfq_weights_tree_remove(struct bfq_queue *bfqq)
 {
+       struct rb_root_cached *root;
+
        if (!bfqq->weight_counter)
                return;

+       root = &bfqq->bfqd->queue_weights_tree;
        bfqq->weight_counter->num_active--;
        if (bfqq->weight_counter->num_active > 0)
                goto reset_entity_pointer;
@@ -967,11 +968,9 @@ void __bfq_weights_tree_remove(struct bfq_data *bfqd,
  * Invoke __bfq_weights_tree_remove on bfqq and decrement the number
  * of active groups for each queue's inactive parent entity.
  */
-void bfq_weights_tree_remove(struct bfq_data *bfqd,
-                            struct bfq_queue *bfqq)
+void bfq_weights_tree_remove(struct bfq_queue *bfqq)
 {
-       __bfq_weights_tree_remove(bfqd, bfqq,
-                                 &bfqd->queue_weights_tree);
+       __bfq_weights_tree_remove(bfqq);
 }

 /*
@@ -6220,7 +6219,7 @@ static void bfq_completed_request(struct bfq_queue *bfqq, struct bfq_data *bfqd)
                bfqq->budget_timeout = jiffies;

                bfq_del_bfqq_in_groups_with_pending_reqs(bfqq);
-               bfq_weights_tree_remove(bfqd, bfqq);
+               bfq_weights_tree_remove(bfqq);
        }

        now_ns = ktime_get_ns();
diff --git a/block/bfq-iosched.h b/block/bfq-iosched.h
index 257acb54c6dc..4bb58ab0c90a 100644
--- a/block/bfq-iosched.h
+++ b/block/bfq-iosched.h
@@ -973,13 +973,9 @@ struct bfq_queue *bic_to_bfqq(struct bfq_io_cq *bic, bool is_sync);
 void bic_set_bfqq(struct bfq_io_cq *bic, struct bfq_queue *bfqq, bool is_sync);
 struct bfq_data *bic_to_bfqd(struct bfq_io_cq *bic);
 void bfq_pos_tree_add_move(struct bfq_data *bfqd, struct bfq_queue *bfqq);
-void bfq_weights_tree_add(struct bfq_data *bfqd, struct bfq_queue *bfqq,
-                         struct rb_root_cached *root);
-void __bfq_weights_tree_remove(struct bfq_data *bfqd,
-                              struct bfq_queue *bfqq,
-                              struct rb_root_cached *root);
-void bfq_weights_tree_remove(struct bfq_data *bfqd,
-                            struct bfq_queue *bfqq);
+void bfq_weights_tree_add(struct bfq_queue *bfqq);
+void __bfq_weights_tree_remove(struct bfq_queue *bfqq);
+void bfq_weights_tree_remove(struct bfq_queue *bfqq);
 void bfq_bfqq_expire(struct bfq_data *bfqd, struct bfq_queue *bfqq,
                     bool compensate, enum bfqq_expiration reason);
 void bfq_put_queue(struct bfq_queue *bfqq);
diff --git a/block/bfq-wf2q.c b/block/bfq-wf2q.c
index 5e8224c96921..124aaea6196e 100644
--- a/block/bfq-wf2q.c
+++ b/block/bfq-wf2q.c
@@ -707,7 +707,6 @@ __bfq_entity_update_weight_prio(struct bfq_service_tree *old_st,
        struct bfq_queue *bfqq = bfq_entity_to_bfqq(entity);
        unsigned int prev_weight, new_weight;
        struct bfq_data *bfqd = NULL;
-       struct rb_root_cached *root;
 #ifdef CONFIG_BFQ_GROUP_IOSCHED
        struct bfq_sched_data *sd;
        struct bfq_group *bfqg;
@@ -770,19 +769,15 @@ __bfq_entity_update_weight_prio(struct bfq_service_tree *old_st,
         * queue, remove the entity from its old weight counter (if
         * there is a counter associated with the entity).
         */
-       if (prev_weight != new_weight && bfqq) {
-               root = &bfqd->queue_weights_tree;
-               __bfq_weights_tree_remove(bfqd, bfqq, root);
-       }
+       if (prev_weight != new_weight && bfqq)
+               __bfq_weights_tree_remove(bfqq);
        entity->weight = new_weight;
        /*
         * Add the entity, if it is not a weight-raised queue,
         * to the counter associated with its new weight.
         */
-       if (prev_weight != new_weight && bfqq && bfqq->wr_coeff == 1) {
-               /* If we get here, root has been initialized. */
-               bfq_weights_tree_add(bfqd, bfqq, root);
-       }
+       if (prev_weight != new_weight && bfqq && bfqq->wr_coeff == 1)
+               bfq_weights_tree_add(bfqq);

        new_st->wsum += entity->weight;

@@ -1687,7 +1682,7 @@ void bfq_del_bfqq_busy(struct bfq_queue *bfqq, bool expiration)
                 * Next function is invoked last, because it causes bfqq to be
                 * freed. DO NOT use bfqq after the next function invocation.
                 */
-               bfq_weights_tree_remove(bfqd, bfqq);
+               bfq_weights_tree_remove(bfqq);
        }
 }

@@ -1708,8 +1703,7 @@ void bfq_add_bfqq_busy(struct bfq_queue *bfqq)
        if (!bfqq->dispatched) {
                bfq_add_bfqq_in_groups_with_pending_reqs(bfqq);
                if (bfqq->wr_coeff == 1)
-                       bfq_weights_tree_add(bfqd, bfqq,
-                                            &bfqd->queue_weights_tree);
+                       bfq_weights_tree_add(bfqq);
        }

        if (bfqq->wr_coeff > 1)
From patchwork Fri Sep 16 07:19:42 2022
X-Patchwork-Submitter: Yu Kuai
X-Patchwork-Id: 12978196
From: Yu Kuai
To: tj@kernel.org, axboe@kernel.dk, paolo.valente@linaro.org, jack@suse.cz
Cc: cgroups@vger.kernel.org, linux-block@vger.kernel.org,
    linux-kernel@vger.kernel.org, yukuai3@huawei.com,
    yukuai1@huaweicloud.com, yi.zhang@huawei.com
Subject: [patch v11 6/6] block, bfq: cleanup __bfq_weights_tree_remove()
Date: Fri, 16 Sep 2022 15:19:42 +0800
Message-Id: <20220916071942.214222-7-yukuai1@huaweicloud.com>
In-Reply-To: <20220916071942.214222-1-yukuai1@huaweicloud.com>
References: <20220916071942.214222-1-yukuai1@huaweicloud.com>
X-Mailing-List: linux-block@vger.kernel.org

From: Yu Kuai

__bfq_weights_tree_remove() is now the same as
bfq_weights_tree_remove(), so merge the two.

Signed-off-by: Yu Kuai
Reviewed-by: Jan Kara
---
 block/bfq-iosched.c | 11 +----------
 block/bfq-iosched.h |  1 -
 block/bfq-wf2q.c    |  2 +-
 3 files changed, 2 insertions(+), 12 deletions(-)

diff --git a/block/bfq-iosched.c b/block/bfq-iosched.c
index 4ad4fa0dad4a..c14fb6b2a46d 100644
--- a/block/bfq-iosched.c
+++ b/block/bfq-iosched.c
@@ -944,7 +944,7 @@ void bfq_weights_tree_add(struct bfq_queue *bfqq)
  * See the comments to the function bfq_weights_tree_add() for considerations
  * about overhead.
  */
-void __bfq_weights_tree_remove(struct bfq_queue *bfqq)
+void bfq_weights_tree_remove(struct bfq_queue *bfqq)
 {
        struct rb_root_cached *root;

@@ -964,15 +964,6 @@ void __bfq_weights_tree_remove(struct bfq_queue *bfqq)
        bfq_put_queue(bfqq);
 }

-/*
- * Invoke __bfq_weights_tree_remove on bfqq and decrement the number
- * of active groups for each queue's inactive parent entity.
- */
-void bfq_weights_tree_remove(struct bfq_queue *bfqq)
-{
-       __bfq_weights_tree_remove(bfqq);
-}
-
 /*
  * Return expired entry, or NULL to just start from scratch in rbtree.
  */
diff --git a/block/bfq-iosched.h b/block/bfq-iosched.h
index 4bb58ab0c90a..7795aaf4454f 100644
--- a/block/bfq-iosched.h
+++ b/block/bfq-iosched.h
@@ -974,7 +974,6 @@ void bic_set_bfqq(struct bfq_io_cq *bic, struct bfq_queue *bfqq, bool is_sync);
 struct bfq_data *bic_to_bfqd(struct bfq_io_cq *bic);
 void bfq_pos_tree_add_move(struct bfq_data *bfqd, struct bfq_queue *bfqq);
 void bfq_weights_tree_add(struct bfq_queue *bfqq);
-void __bfq_weights_tree_remove(struct bfq_queue *bfqq);
 void bfq_weights_tree_remove(struct bfq_queue *bfqq);
 void bfq_bfqq_expire(struct bfq_data *bfqd, struct bfq_queue *bfqq,
                     bool compensate, enum bfqq_expiration reason);
diff --git a/block/bfq-wf2q.c b/block/bfq-wf2q.c
index 124aaea6196e..5a02cb94d86e 100644
--- a/block/bfq-wf2q.c
+++ b/block/bfq-wf2q.c
@@ -770,7 +770,7 @@ __bfq_entity_update_weight_prio(struct bfq_service_tree *old_st,
         * there is a counter associated with the entity).
         */
        if (prev_weight != new_weight && bfqq)
-               __bfq_weights_tree_remove(bfqq);
+               bfq_weights_tree_remove(bfqq);
        entity->weight = new_weight;
        /*
         * Add the entity, if it is not a weight-raised queue,