From patchwork Sat Nov 27 10:11:32 2021
X-Patchwork-Submitter: Yu Kuai
X-Patchwork-Id: 12642201
From: Yu Kuai
Subject: [PATCH RFC 9/9] block, bfq: decrease 'num_groups_with_pending_reqs'
 earlier
Date: Sat, 27 Nov 2021 18:11:32 +0800
Message-ID: <20211127101132.486806-10-yukuai3@huawei.com>
In-Reply-To: <20211127101132.486806-1-yukuai3@huawei.com>
References: <20211127101132.486806-1-yukuai3@huawei.com>
X-Mailing-List: linux-block@vger.kernel.org

Currently 'num_groups_with_pending_reqs' is not decreased when a group
itself no longer has any pending requests, as long as any of its child
groups still has pending requests. The decrement is delayed until none
of the child groups has any pending requests. For example:

1) t1 issues sync io on the root group, t2 and t3 issue sync io on the
   same child group. num_groups_with_pending_reqs is 2 now.

2) t1 stops issuing io, but num_groups_with_pending_reqs is still 2,
   so io from t2 and t3 still can't be handled concurrently.

Fix the problem by decreasing 'num_groups_with_pending_reqs'
immediately upon the deactivation of the last entity of the group that
has pending requests.

Signed-off-by: Yu Kuai
---
 block/bfq-iosched.c | 58 ++++++++++++++++-----------------------------
 block/bfq-iosched.h | 16 ++++++-------
 2 files changed, 29 insertions(+), 45 deletions(-)

diff --git a/block/bfq-iosched.c b/block/bfq-iosched.c
index 4239b3996e23..55925e1ee85d 100644
--- a/block/bfq-iosched.c
+++ b/block/bfq-iosched.c
@@ -873,6 +873,26 @@ void __bfq_weights_tree_remove(struct bfq_data *bfqd,
 	bfq_put_queue(bfqq);
 }
 
+static void decrease_groups_with_pending_reqs(struct bfq_data *bfqd,
+					      struct bfq_queue *bfqq)
+{
+#ifdef CONFIG_BFQ_GROUP_IOSCHED
+	struct bfq_entity *entity = bfqq->entity.parent;
+	struct bfq_group *bfqg = container_of(entity, struct bfq_group, entity);
+
+	/*
+	 * The decrement of num_groups_with_pending_reqs is performed
+	 * immediately upon the deactivation of the last entity that
+	 * has pending requests.
+	 */
+	if (!bfqg->num_entities_with_pending_reqs &&
+	    entity->in_groups_with_pending_reqs) {
+		entity->in_groups_with_pending_reqs = false;
+		bfqd->num_groups_with_pending_reqs--;
+	}
+#endif
+}
+
 /*
  * Invoke __bfq_weights_tree_remove on bfqq and decrement the number
  * of active groups for each queue's inactive parent entity.
@@ -880,46 +900,10 @@ void __bfq_weights_tree_remove(struct bfq_data *bfqd,
 void bfq_weights_tree_remove(struct bfq_data *bfqd,
 			     struct bfq_queue *bfqq)
 {
-	struct bfq_entity *entity = bfqq->entity.parent;
-
 	bfqq->ref++;
 	__bfq_weights_tree_remove(bfqd, bfqq,
 				  &bfqd->queue_weights_tree);
-
-	for_each_entity(entity) {
-		struct bfq_sched_data *sd = entity->my_sched_data;
-
-		if (sd && (sd->next_in_service || sd->in_service_entity)) {
-			/*
-			 * entity is still active, because either
-			 * next_in_service or in_service_entity is not
-			 * NULL (see the comments on the definition of
-			 * next_in_service for details on why
-			 * in_service_entity must be checked too).
-			 *
-			 * As a consequence, its parent entities are
-			 * active as well, and thus this loop must
-			 * stop here.
-			 */
-			break;
-		}
-
-		/*
-		 * The decrement of num_groups_with_pending_reqs is
-		 * not performed immediately upon the deactivation of
-		 * entity, but it is delayed to when it also happens
-		 * that the first leaf descendant bfqq of entity gets
-		 * all its pending requests completed. The following
-		 * instructions perform this delayed decrement, if
-		 * needed. See the comments on
-		 * num_groups_with_pending_reqs for details.
-		 */
-		if (entity->in_groups_with_pending_reqs) {
-			entity->in_groups_with_pending_reqs = false;
-			bfqd->num_groups_with_pending_reqs--;
-		}
-	}
-
+	decrease_groups_with_pending_reqs(bfqd, bfqq);
 	bfq_put_queue(bfqq);
 }
 
diff --git a/block/bfq-iosched.h b/block/bfq-iosched.h
index df08bff89a70..7ae11f62900b 100644
--- a/block/bfq-iosched.h
+++ b/block/bfq-iosched.h
@@ -493,7 +493,7 @@ struct bfq_data {
 	struct rb_root_cached queue_weights_tree;
 
 	/*
-	 * Number of groups with at least one descendant process that
+	 * Number of groups with at least one process that
 	 * has at least one request waiting for completion. Note that
 	 * this accounts for also requests already dispatched, but not
 	 * yet completed. Therefore this number of groups may differ
@@ -506,14 +506,14 @@ struct bfq_data {
 	 * bfq_better_to_idle().
 	 *
 	 * However, it is hard to compute this number exactly, for
-	 * groups with multiple descendant processes. Consider a group
-	 * that is inactive, i.e., that has no descendant process with
+	 * groups with multiple processes. Consider a group
+	 * that is inactive, i.e., that has no process with
 	 * pending I/O inside BFQ queues. Then suppose that
 	 * num_groups_with_pending_reqs is still accounting for this
-	 * group, because the group has descendant processes with some
+	 * group, because the group has processes with some
 	 * I/O request still in flight. num_groups_with_pending_reqs
 	 * should be decremented when the in-flight request of the
-	 * last descendant process is finally completed (assuming that
+	 * last process is finally completed (assuming that
 	 * nothing else has changed for the group in the meantime, in
 	 * terms of composition of the group and active/inactive state of child
 	 * groups and processes). To accomplish this, an additional
@@ -522,7 +522,7 @@ struct bfq_data {
 	 * we resort to the following tradeoff between simplicity and
 	 * accuracy: for an inactive group that is still counted in
 	 * num_groups_with_pending_reqs, we decrement
-	 * num_groups_with_pending_reqs when the first descendant
+	 * num_groups_with_pending_reqs when the last
 	 * process of the group remains with no request waiting for
 	 * completion.
 	 *
@@ -530,12 +530,12 @@ struct bfq_data {
 	 * carefulness: to avoid multiple decrements, we flag a group,
 	 * more precisely an entity representing a group, as still
	 * counted in num_groups_with_pending_reqs when it becomes
-	 * inactive. Then, when the first descendant queue of the
+	 * inactive. Then, when the last queue of the
 	 * entity remains with no request waiting for completion,
 	 * num_groups_with_pending_reqs is decremented, and this flag
 	 * is reset. After this flag is reset for the entity,
 	 * num_groups_with_pending_reqs won't be decremented any
-	 * longer in case a new descendant queue of the entity remains
+	 * longer in case a new queue of the entity remains
 	 * with no request waiting for completion.
 	 */
 	unsigned int num_groups_with_pending_reqs;