From patchwork Wed Jun 1 11:43:37 2022
From: Yu Kuai
Subject: [PATCH -next v9 1/4] block, bfq: support to track if bfqq has pending requests
Date: Wed, 1 Jun 2022 19:43:37 +0800
Message-ID: <20220601114340.949953-2-yukuai3@huawei.com>
In-Reply-To: <20220601114340.949953-1-yukuai3@huawei.com>

If an entity belongs to a bfqq, entity->in_groups_with_pending_reqs is
currently unused. This patch uses it to track whether the bfqq has
pending requests, by updating it from the callers of weights-tree
insertion and removal.
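To make the invariant concrete: the flag is only flipped on the 0 -> 1
and 1 -> 0 transitions of a queue's pending work, at the same call
sites that insert into or remove from the weights tree. Below is a
minimal standalone C sketch of that idea; the types and helpers are
simplified stand-ins, not BFQ's actual structures:

#include <stdbool.h>
#include <stdio.h>

/* Simplified stand-in for bfq_queue and its entity. */
struct queue {
	int queued;       /* requests queued, not yet dispatched */
	int dispatched;   /* requests dispatched, not yet completed */
	bool in_groups_with_pending_reqs;
};

/* Mirrors bfq_add_bfqq_in_groups_with_pending_reqs(): idempotent set. */
static void add_pending(struct queue *q)
{
	if (!q->in_groups_with_pending_reqs)
		q->in_groups_with_pending_reqs = true;
}

/* Mirrors bfq_del_bfqq_in_groups_with_pending_reqs(): idempotent clear. */
static void del_pending(struct queue *q)
{
	if (q->in_groups_with_pending_reqs)
		q->in_groups_with_pending_reqs = false;
}

int main(void)
{
	struct queue q = { 0, 0, false };

	q.queued++;                  /* first request arrives ...       */
	add_pending(&q);             /* ... queue becomes pending       */

	q.queued--; q.dispatched++;  /* request dispatched: still pending */

	q.dispatched--;              /* request completes ...           */
	if (!q.queued && !q.dispatched)
		del_pending(&q);     /* ... no work left: clear flag    */

	printf("pending=%d\n", q.in_groups_with_pending_reqs);
	return 0;
}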
Signed-off-by: Yu Kuai
Reviewed-by: Jan Kara
---
 block/bfq-iosched.c |  1 +
 block/bfq-iosched.h |  2 ++
 block/bfq-wf2q.c    | 24 ++++++++++++++++++++++--
 3 files changed, 25 insertions(+), 2 deletions(-)

diff --git a/block/bfq-iosched.c b/block/bfq-iosched.c
index 0d46cb728bbf..0ec21018daba 100644
--- a/block/bfq-iosched.c
+++ b/block/bfq-iosched.c
@@ -6263,6 +6263,7 @@ static void bfq_completed_request(struct bfq_queue *bfqq, struct bfq_data *bfqd)
 		 */
 		bfqq->budget_timeout = jiffies;
 
+		bfq_del_bfqq_in_groups_with_pending_reqs(bfqq);
 		bfq_weights_tree_remove(bfqd, bfqq);
 	}
 
diff --git a/block/bfq-iosched.h b/block/bfq-iosched.h
index ca8177d7bf7c..69c7d15417e5 100644
--- a/block/bfq-iosched.h
+++ b/block/bfq-iosched.h
@@ -1080,6 +1080,8 @@ void bfq_requeue_bfqq(struct bfq_data *bfqd, struct bfq_queue *bfqq,
 void bfq_del_bfqq_busy(struct bfq_data *bfqd, struct bfq_queue *bfqq,
 		       bool expiration);
 void bfq_add_bfqq_busy(struct bfq_data *bfqd, struct bfq_queue *bfqq);
+void bfq_add_bfqq_in_groups_with_pending_reqs(struct bfq_queue *bfqq);
+void bfq_del_bfqq_in_groups_with_pending_reqs(struct bfq_queue *bfqq);
 
 /* --------------- end of interface of B-WF2Q+ ---------------- */
 
diff --git a/block/bfq-wf2q.c b/block/bfq-wf2q.c
index f8eb340381cf..5704a0315cf2 100644
--- a/block/bfq-wf2q.c
+++ b/block/bfq-wf2q.c
@@ -1647,6 +1647,22 @@ void bfq_requeue_bfqq(struct bfq_data *bfqd, struct bfq_queue *bfqq,
 			 bfqq == bfqd->in_service_queue, expiration);
 }
 
+void bfq_add_bfqq_in_groups_with_pending_reqs(struct bfq_queue *bfqq)
+{
+	struct bfq_entity *entity = &bfqq->entity;
+
+	if (!entity->in_groups_with_pending_reqs)
+		entity->in_groups_with_pending_reqs = true;
+}
+
+void bfq_del_bfqq_in_groups_with_pending_reqs(struct bfq_queue *bfqq)
+{
+	struct bfq_entity *entity = &bfqq->entity;
+
+	if (entity->in_groups_with_pending_reqs)
+		entity->in_groups_with_pending_reqs = false;
+}
+
 /*
  * Called when the bfqq no longer has requests pending, remove it from
  * the service tree. As a special case, it can be invoked during an
@@ -1668,8 +1684,10 @@ void bfq_del_bfqq_busy(struct bfq_data *bfqd, struct bfq_queue *bfqq,
 
 	bfq_deactivate_bfqq(bfqd, bfqq, true, expiration);
 
-	if (!bfqq->dispatched)
+	if (!bfqq->dispatched) {
+		bfq_del_bfqq_in_groups_with_pending_reqs(bfqq);
 		bfq_weights_tree_remove(bfqd, bfqq);
+	}
 }
 
 /*
@@ -1684,10 +1702,12 @@ void bfq_add_bfqq_busy(struct bfq_data *bfqd, struct bfq_queue *bfqq)
 	bfq_mark_bfqq_busy(bfqq);
 	bfqd->busy_queues[bfqq->ioprio_class - 1]++;
 
-	if (!bfqq->dispatched)
+	if (!bfqq->dispatched) {
+		bfq_add_bfqq_in_groups_with_pending_reqs(bfqq);
 		if (bfqq->wr_coeff == 1)
 			bfq_weights_tree_add(bfqd, bfqq,
 					     &bfqd->queue_weights_tree);
+	}
 
 	if (bfqq->wr_coeff > 1)
 		bfqd->wr_busy_queues++;

From patchwork Wed Jun 1 11:43:38 2022
From: Yu Kuai
Subject: [PATCH -next v9 2/4] block, bfq: record how many queues have pending requests
Date: Wed, 1 Jun 2022 19:43:38 +0800
Message-ID: <20220601114340.949953-3-yukuai3@huawei.com>
In-Reply-To: <20220601114340.949953-1-yukuai3@huawei.com>

Prepare to refactor the counting of 'num_groups_with_pending_reqs'.
Add a counter in bfq_group, and update it both while tracking whether a
bfqq has pending requests and when bfq_bfqq_move() is called.
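The pairing is easiest to see in isolation: the per-queue flag added in
the previous patch now guards a per-group counter, so a queue can never
be counted into its group twice no matter how often the helpers are
called. A simplified standalone C sketch of that guard (hypothetical
types, not the kernel's):

#include <stdbool.h>

struct group {
	int num_queues_with_pending_reqs;
};

struct queue {
	bool in_groups_with_pending_reqs;
	struct group *parent;
};

/* The flag makes both helpers idempotent: only the first add and the
 * last del of a pending period touch the group counter. */
static void add_pending(struct queue *q)
{
	if (!q->in_groups_with_pending_reqs) {
		q->in_groups_with_pending_reqs = true;
		q->parent->num_queues_with_pending_reqs++;
	}
}

static void del_pending(struct queue *q)
{
	if (q->in_groups_with_pending_reqs) {
		q->in_groups_with_pending_reqs = false;
		q->parent->num_queues_with_pending_reqs--;
	}
}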
Signed-off-by: Yu Kuai
Reviewed-by: Jan Kara
---
 block/bfq-cgroup.c  | 10 ++++++++++
 block/bfq-iosched.h |  1 +
 block/bfq-wf2q.c    | 12 ++++++++++--
 3 files changed, 21 insertions(+), 2 deletions(-)

diff --git a/block/bfq-cgroup.c b/block/bfq-cgroup.c
index 09574af83566..88c0c320ddf3 100644
--- a/block/bfq-cgroup.c
+++ b/block/bfq-cgroup.c
@@ -557,6 +557,7 @@ static void bfq_pd_init(struct blkg_policy_data *pd)
 	 */
 	bfqg->bfqd = bfqd;
 	bfqg->active_entities = 0;
+	bfqg->num_queues_with_pending_reqs = 0;
 	bfqg->online = true;
 	bfqg->rq_pos_tree = RB_ROOT;
 }
@@ -646,6 +647,7 @@ void bfq_bfqq_move(struct bfq_data *bfqd, struct bfq_queue *bfqq,
 {
 	struct bfq_entity *entity = &bfqq->entity;
 	struct bfq_group *old_parent = bfqq_group(bfqq);
+	bool has_pending_reqs = false;
 
 	/*
 	 * No point to move bfqq to the same group, which can happen when
@@ -666,6 +668,11 @@ void bfq_bfqq_move(struct bfq_data *bfqd, struct bfq_queue *bfqq,
 	 */
 	bfqq->ref++;
 
+	if (entity->in_groups_with_pending_reqs) {
+		has_pending_reqs = true;
+		bfq_del_bfqq_in_groups_with_pending_reqs(bfqq);
+	}
+
 	/* If bfqq is empty, then bfq_bfqq_expire also invokes
 	 * bfq_del_bfqq_busy, thereby removing bfqq and its entity
 	 * from data structures related to current group. Otherwise we
@@ -693,6 +700,9 @@ void bfq_bfqq_move(struct bfq_data *bfqd, struct bfq_queue *bfqq,
 	/* pin down bfqg and its associated blkg */
 	bfqg_and_blkg_get(bfqg);
 
+	if (has_pending_reqs)
+		bfq_add_bfqq_in_groups_with_pending_reqs(bfqq);
+
 	if (bfq_bfqq_busy(bfqq)) {
 		if (unlikely(!bfqd->nonrot_with_queueing))
 			bfq_pos_tree_add_move(bfqd, bfqq);
diff --git a/block/bfq-iosched.h b/block/bfq-iosched.h
index 69c7d15417e5..de2446a9b7ab 100644
--- a/block/bfq-iosched.h
+++ b/block/bfq-iosched.h
@@ -943,6 +943,7 @@ struct bfq_group {
 	struct bfq_entity *my_entity;
 
 	int active_entities;
+	int num_queues_with_pending_reqs;
 
 	struct rb_root rq_pos_tree;
 
diff --git a/block/bfq-wf2q.c b/block/bfq-wf2q.c
index 5704a0315cf2..6f36f3fe5cc8 100644
--- a/block/bfq-wf2q.c
+++ b/block/bfq-wf2q.c
@@ -1651,16 +1651,24 @@ void bfq_add_bfqq_in_groups_with_pending_reqs(struct bfq_queue *bfqq)
 {
 	struct bfq_entity *entity = &bfqq->entity;
 
-	if (!entity->in_groups_with_pending_reqs)
+	if (!entity->in_groups_with_pending_reqs) {
 		entity->in_groups_with_pending_reqs = true;
+#ifdef CONFIG_BFQ_GROUP_IOSCHED
+		bfqq_group(bfqq)->num_queues_with_pending_reqs++;
+#endif
+	}
 }
 
 void bfq_del_bfqq_in_groups_with_pending_reqs(struct bfq_queue *bfqq)
 {
 	struct bfq_entity *entity = &bfqq->entity;
 
-	if (entity->in_groups_with_pending_reqs)
+	if (entity->in_groups_with_pending_reqs) {
 		entity->in_groups_with_pending_reqs = false;
+#ifdef CONFIG_BFQ_GROUP_IOSCHED
+		bfqq_group(bfqq)->num_queues_with_pending_reqs--;
+#endif
+	}
 }
 
 /*
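One subtlety in the bfq_bfqq_move() hunk above: when a queue with
pending requests migrates, the flag has to be taken off the old group's
counter before reparenting and put back on the new group's counter
afterwards, or the old group's count leaks. Reusing the simplified
queue/group types and add_pending()/del_pending() helpers from the
sketch above (again, not the kernel code), the shape of the move is:

static void move_queue(struct queue *q, struct group *new_parent)
{
	bool had_pending = q->in_groups_with_pending_reqs;

	if (had_pending)
		del_pending(q);        /* old group's counter drops */

	q->parent = new_parent;        /* reparent the queue */

	if (had_pending)
		add_pending(q);        /* new group's counter rises */
}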
From patchwork Wed Jun 1 11:43:39 2022
From: Yu Kuai
Subject: [PATCH -next v9 3/4] block, bfq: refactor the counting of 'num_groups_with_pending_reqs'
Date: Wed, 1 Jun 2022 19:43:39 +0800
Message-ID: <20220601114340.949953-4-yukuai3@huawei.com>
In-Reply-To: <20220601114340.949953-1-yukuai3@huawei.com>

Currently, BFQ can't serve sync I/O concurrently as long as it is not
issued from the root group, because 'bfqd->num_groups_with_pending_reqs
> 0' is always true in bfq_asymmetric_scenario().

How a bfqg is counted into 'num_groups_with_pending_reqs':

Before this patch:
 1) The root group is never counted.
 2) A bfqg is counted once it or any of its child bfqgs has pending
    requests.
 3) A bfqg stops being counted only when it and all of its child bfqgs
    have completed all their requests.

After this patch:
 1) The root group is counted.
 2) A bfqg is counted once it has pending requests.
 3) A bfqg stops being counted when it has completed all its requests.

With this change, the case in which only one group is activated can be
detected, and the next patch will support concurrent sync I/O in that
case.
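In effect the new scheme is a two-level refcount: the global group
count is driven purely by the per-group queue count's 0 -> 1 and 1 -> 0
transitions, so a group is counted exactly while it has at least one
queue with pending requests, root group included, with no walk over
descendants. A standalone C sketch of the whole chain (hypothetical
simplified types, mirroring the bfq-wf2q.c hunks below):

#include <stdbool.h>

struct scheduler {
	int num_groups_with_pending_reqs;
};

struct group {
	int num_queues_with_pending_reqs;
	struct scheduler *sched;
};

struct queue {
	bool in_groups_with_pending_reqs;
	struct group *parent;
};

static void add_pending(struct queue *q)
{
	if (!q->in_groups_with_pending_reqs) {
		q->in_groups_with_pending_reqs = true;
		/* first pending queue in this group: count the group */
		if (!(q->parent->num_queues_with_pending_reqs++))
			q->parent->sched->num_groups_with_pending_reqs++;
	}
}

static void del_pending(struct queue *q)
{
	if (q->in_groups_with_pending_reqs) {
		q->in_groups_with_pending_reqs = false;
		/* last pending queue leaves this group: uncount it */
		if (!(--q->parent->num_queues_with_pending_reqs))
			q->parent->sched->num_groups_with_pending_reqs--;
	}
}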
Signed-off-by: Yu Kuai
Reviewed-by: Jan Kara
---
 block/bfq-iosched.c | 42 ------------------------------------------
 block/bfq-iosched.h | 18 +++++++++---------
 block/bfq-wf2q.c    | 19 ++++---------------
 3 files changed, 13 insertions(+), 66 deletions(-)

diff --git a/block/bfq-iosched.c b/block/bfq-iosched.c
index 0ec21018daba..03b04892440c 100644
--- a/block/bfq-iosched.c
+++ b/block/bfq-iosched.c
@@ -970,48 +970,6 @@ void __bfq_weights_tree_remove(struct bfq_data *bfqd,
 void bfq_weights_tree_remove(struct bfq_data *bfqd,
 			     struct bfq_queue *bfqq)
 {
-	struct bfq_entity *entity = bfqq->entity.parent;
-
-	for_each_entity(entity) {
-		struct bfq_sched_data *sd = entity->my_sched_data;
-
-		if (sd->next_in_service || sd->in_service_entity) {
-			/*
-			 * entity is still active, because either
-			 * next_in_service or in_service_entity is not
-			 * NULL (see the comments on the definition of
-			 * next_in_service for details on why
-			 * in_service_entity must be checked too).
-			 *
-			 * As a consequence, its parent entities are
-			 * active as well, and thus this loop must
-			 * stop here.
-			 */
-			break;
-		}
-
-		/*
-		 * The decrement of num_groups_with_pending_reqs is
-		 * not performed immediately upon the deactivation of
-		 * entity, but it is delayed to when it also happens
-		 * that the first leaf descendant bfqq of entity gets
-		 * all its pending requests completed. The following
-		 * instructions perform this delayed decrement, if
-		 * needed. See the comments on
-		 * num_groups_with_pending_reqs for details.
-		 */
-		if (entity->in_groups_with_pending_reqs) {
-			entity->in_groups_with_pending_reqs = false;
-			bfqd->num_groups_with_pending_reqs--;
-		}
-	}
-
-	/*
-	 * Next function is invoked last, because it causes bfqq to be
-	 * freed if the following holds: bfqq is not in service and
-	 * has no dispatched request. DO NOT use bfqq after the next
-	 * function invocation.
-	 */
 	__bfq_weights_tree_remove(bfqd, bfqq,
 				  &bfqd->queue_weights_tree);
 }

diff --git a/block/bfq-iosched.h b/block/bfq-iosched.h
index de2446a9b7ab..f0fce94583e4 100644
--- a/block/bfq-iosched.h
+++ b/block/bfq-iosched.h
@@ -496,27 +496,27 @@ struct bfq_data {
 	struct rb_root_cached queue_weights_tree;
 
 	/*
-	 * Number of groups with at least one descendant process that
+	 * Number of groups with at least one process that
 	 * has at least one request waiting for completion. Note that
 	 * this accounts for also requests already dispatched, but not
 	 * yet completed. Therefore this number of groups may differ
 	 * (be larger) than the number of active groups, as a group is
 	 * considered active only if its corresponding entity has
-	 * descendant queues with at least one request queued. This
+	 * queues with at least one request queued. This
 	 * number is used to decide whether a scenario is symmetric.
 	 * For a detailed explanation see comments on the computation
 	 * of the variable asymmetric_scenario in the function
 	 * bfq_better_to_idle().
 	 *
 	 * However, it is hard to compute this number exactly, for
-	 * groups with multiple descendant processes. Consider a group
-	 * that is inactive, i.e., that has no descendant process with
+	 * groups with multiple processes. Consider a group
+	 * that is inactive, i.e., that has no process with
 	 * pending I/O inside BFQ queues. Then suppose that
 	 * num_groups_with_pending_reqs is still accounting for this
-	 * group, because the group has descendant processes with some
+	 * group, because the group has processes with some
 	 * I/O request still in flight. num_groups_with_pending_reqs
 	 * should be decremented when the in-flight request of the
-	 * last descendant process is finally completed (assuming that
+	 * last process is finally completed (assuming that
 	 * nothing else has changed for the group in the meantime, in
 	 * terms of composition of the group and active/inactive state of child
 	 * groups and processes). To accomplish this, an additional
@@ -525,7 +525,7 @@ struct bfq_data {
 	 * we resort to the following tradeoff between simplicity and
 	 * accuracy: for an inactive group that is still counted in
 	 * num_groups_with_pending_reqs, we decrement
-	 * num_groups_with_pending_reqs when the first descendant
+	 * num_groups_with_pending_reqs when the first
 	 * process of the group remains with no request waiting for
 	 * completion.
 	 *
@@ -533,12 +533,12 @@ struct bfq_data {
 	 * carefulness: to avoid multiple decrements, we flag a group,
 	 * more precisely an entity representing a group, as still
 	 * counted in num_groups_with_pending_reqs when it becomes
-	 * inactive. Then, when the first descendant queue of the
+	 * inactive. Then, when the first queue of the
 	 * entity remains with no request waiting for completion,
 	 * num_groups_with_pending_reqs is decremented, and this flag
 	 * is reset. After this flag is reset for the entity,
 	 * num_groups_with_pending_reqs won't be decremented any
-	 * longer in case a new descendant queue of the entity remains
+	 * longer in case a new queue of the entity remains
 	 * with no request waiting for completion.
 	 */
 	unsigned int num_groups_with_pending_reqs;
 
diff --git a/block/bfq-wf2q.c b/block/bfq-wf2q.c
index 6f36f3fe5cc8..9c2842bedf97 100644
--- a/block/bfq-wf2q.c
+++ b/block/bfq-wf2q.c
@@ -984,19 +984,6 @@ static void __bfq_activate_entity(struct bfq_entity *entity,
 		entity->on_st_or_in_serv = true;
 	}
 
-#ifdef CONFIG_BFQ_GROUP_IOSCHED
-	if (!bfq_entity_to_bfqq(entity)) { /* bfq_group */
-		struct bfq_group *bfqg =
-			container_of(entity, struct bfq_group, entity);
-		struct bfq_data *bfqd = bfqg->bfqd;
-
-		if (!entity->in_groups_with_pending_reqs) {
-			entity->in_groups_with_pending_reqs = true;
-			bfqd->num_groups_with_pending_reqs++;
-		}
-	}
-#endif
-
 	bfq_update_fin_time_enqueue(entity, st, backshifted);
 }
 
@@ -1654,7 +1641,8 @@ void bfq_add_bfqq_in_groups_with_pending_reqs(struct bfq_queue *bfqq)
 	if (!entity->in_groups_with_pending_reqs) {
 		entity->in_groups_with_pending_reqs = true;
 #ifdef CONFIG_BFQ_GROUP_IOSCHED
-		bfqq_group(bfqq)->num_queues_with_pending_reqs++;
+		if (!(bfqq_group(bfqq)->num_queues_with_pending_reqs++))
+			bfqq->bfqd->num_groups_with_pending_reqs++;
 #endif
 	}
 }
 
@@ -1666,7 +1654,8 @@ void bfq_del_bfqq_in_groups_with_pending_reqs(struct bfq_queue *bfqq)
 	if (entity->in_groups_with_pending_reqs) {
 		entity->in_groups_with_pending_reqs = false;
 #ifdef CONFIG_BFQ_GROUP_IOSCHED
-		bfqq_group(bfqq)->num_queues_with_pending_reqs--;
+		if (!(--bfqq_group(bfqq)->num_queues_with_pending_reqs))
+			bfqq->bfqd->num_groups_with_pending_reqs--;
 #endif
 	}
 }
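Continuing the simplified sketch from the previous patch (add
<stdio.h> for the prints): with any number of pending queues confined
to a single group, the global count saturates at 1, which is exactly
the situation the next patch exploits.

int main(void)
{
	struct scheduler sched = { 0 };
	struct group root = { 0, &sched };
	struct queue q1 = { false, &root };
	struct queue q2 = { false, &root };

	add_pending(&q1);
	add_pending(&q2);
	/* two pending queues, one group: the group count stays at 1 */
	printf("groups with pending reqs: %d\n",
	       sched.num_groups_with_pending_reqs);   /* prints 1 */

	del_pending(&q1);
	del_pending(&q2);
	printf("groups with pending reqs: %d\n",
	       sched.num_groups_with_pending_reqs);   /* prints 0 */
	return 0;
}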
From patchwork Wed Jun 1 11:43:40 2022
From: Yu Kuai
Subject: [PATCH -next v9 4/4] block, bfq: do not idle if only one group is activated
Date: Wed, 1 Jun 2022 19:43:40 +0800
Message-ID: <20220601114340.949953-5-yukuai3@huawei.com>
In-Reply-To: <20220601114340.949953-1-yukuai3@huawei.com>

Now that the root group is counted into 'num_groups_with_pending_reqs',
'num_groups_with_pending_reqs > 0' is always true in
bfq_asymmetric_scenario(). Change the condition to '> 1'.

As a side benefit, this change enables concurrent sync I/O when only
one group is activated.

Signed-off-by: Yu Kuai
Reviewed-by: Jan Kara
---
 block/bfq-iosched.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/block/bfq-iosched.c b/block/bfq-iosched.c
index 03b04892440c..a2aa243505dc 100644
--- a/block/bfq-iosched.c
+++ b/block/bfq-iosched.c
@@ -820,7 +820,7 @@ bfq_pos_tree_add_move(struct bfq_data *bfqd, struct bfq_queue *bfqq)
  * much easier to maintain the needed state:
  * 1) all active queues have the same weight,
  * 2) all active queues belong to the same I/O-priority class,
- * 3) there are no active groups.
+ * 3) there is at most one active group.
  * In particular, the last condition is always true if hierarchical
  * support or the cgroups interface are not enabled, thus no state
  * needs to be maintained in this case.
@@ -852,7 +852,7 @@ static bool bfq_asymmetric_scenario(struct bfq_data *bfqd,
 
 	return varied_queue_weights || multiple_classes_busy
 #ifdef CONFIG_BFQ_GROUP_IOSCHED
-	       || bfqd->num_groups_with_pending_reqs > 0
+	       || bfqd->num_groups_with_pending_reqs > 1
 #endif
 		;
 }
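For completeness, a hypothetical condensed form of the check this patch
ends up with, again on the simplified scheduler type from the earlier
sketches (the real bfq_asymmetric_scenario() derives the first two
inputs from the weights tree and the busy-queue counters): the scenario
is symmetric, so idling can be skipped, when queue weights are uniform,
a single I/O-priority class is busy, and at most one group, now
including the root group in the count, has pending requests.

static bool asymmetric_scenario(const struct scheduler *sched,
				bool varied_queue_weights,
				bool multiple_classes_busy)
{
	return varied_queue_weights || multiple_classes_busy ||
	       sched->num_groups_with_pending_reqs > 1;
}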