From patchwork Sat May 14 09:05:15 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Yu Kuai X-Patchwork-Id: 12849730 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id F3B17C433FE for ; Sat, 14 May 2022 08:51:41 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S230226AbiENIvj (ORCPT ); Sat, 14 May 2022 04:51:39 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:45792 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S230085AbiENIvi (ORCPT ); Sat, 14 May 2022 04:51:38 -0400 Received: from szxga08-in.huawei.com (szxga08-in.huawei.com [45.249.212.255]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 69EB0E0F9; Sat, 14 May 2022 01:51:36 -0700 (PDT) Received: from kwepemi100004.china.huawei.com (unknown [172.30.72.55]) by szxga08-in.huawei.com (SkyGuard) with ESMTP id 4L0fNV3jFDz1JBsb; Sat, 14 May 2022 16:50:18 +0800 (CST) Received: from kwepemm600009.china.huawei.com (7.193.23.164) by kwepemi100004.china.huawei.com (7.221.188.70) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.24; Sat, 14 May 2022 16:51:34 +0800 Received: from huawei.com (10.175.127.227) by kwepemm600009.china.huawei.com (7.193.23.164) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.24; Sat, 14 May 2022 16:51:33 +0800 From: Yu Kuai To: , , , CC: , , , , Subject: [PATCH -next 1/8] block, bfq: cleanup bfq_weights_tree add/remove apis Date: Sat, 14 May 2022 17:05:15 +0800 Message-ID: <20220514090522.1669270-2-yukuai3@huawei.com> X-Mailer: git-send-email 2.31.1 In-Reply-To: <20220514090522.1669270-1-yukuai3@huawei.com> References: <20220514090522.1669270-1-yukuai3@huawei.com> MIME-Version: 1.0 X-Originating-IP: [10.175.127.227] X-ClientProxiedBy: dggems702-chm.china.huawei.com (10.3.19.179) To kwepemm600009.china.huawei.com (7.193.23.164) X-CFilter-Loop: Reflected Precedence: bulk List-ID: X-Mailing-List: linux-block@vger.kernel.org They already pass 'bfqd' as the first parameter, there is no need to pass 'bfqd->queue_weights_tree' as another parameter. Signed-off-by: Yu Kuai Reviewed-by: Jan Kara --- block/bfq-iosched.c | 12 ++++++------ block/bfq-iosched.h | 10 +++------- block/bfq-wf2q.c | 16 +++++----------- 3 files changed, 14 insertions(+), 24 deletions(-) diff --git a/block/bfq-iosched.c b/block/bfq-iosched.c index d32b8340510a..bcbe78d71143 100644 --- a/block/bfq-iosched.c +++ b/block/bfq-iosched.c @@ -870,9 +870,9 @@ static bool bfq_asymmetric_scenario(struct bfq_data *bfqd, * In most scenarios, the rate at which nodes are created/destroyed * should be low too. */ -void bfq_weights_tree_add(struct bfq_data *bfqd, struct bfq_queue *bfqq, - struct rb_root_cached *root) +void bfq_weights_tree_add(struct bfq_data *bfqd, struct bfq_queue *bfqq) { + struct rb_root_cached *root = &bfqd->queue_weights_tree; struct bfq_entity *entity = &bfqq->entity; struct rb_node **new = &(root->rb_root.rb_node), *parent = NULL; bool leftmost = true; @@ -945,12 +945,13 @@ void bfq_weights_tree_add(struct bfq_data *bfqd, struct bfq_queue *bfqq, * about overhead. 
*/ void __bfq_weights_tree_remove(struct bfq_data *bfqd, - struct bfq_queue *bfqq, - struct rb_root_cached *root) + struct bfq_queue *bfqq) { + struct rb_root_cached *root; if (!bfqq->weight_counter) return; + root = &bfqd->queue_weights_tree; bfqq->weight_counter->num_active--; if (bfqq->weight_counter->num_active > 0) goto reset_entity_pointer; @@ -970,8 +971,7 @@ void __bfq_weights_tree_remove(struct bfq_data *bfqd, void bfq_weights_tree_remove(struct bfq_data *bfqd, struct bfq_queue *bfqq) { - __bfq_weights_tree_remove(bfqd, bfqq, - &bfqd->queue_weights_tree); + __bfq_weights_tree_remove(bfqd, bfqq); } /* diff --git a/block/bfq-iosched.h b/block/bfq-iosched.h index c3bd8ab169e0..0a3415abb994 100644 --- a/block/bfq-iosched.h +++ b/block/bfq-iosched.h @@ -932,13 +932,9 @@ struct bfq_queue *bic_to_bfqq(struct bfq_io_cq *bic, bool is_sync); void bic_set_bfqq(struct bfq_io_cq *bic, struct bfq_queue *bfqq, bool is_sync); struct bfq_data *bic_to_bfqd(struct bfq_io_cq *bic); void bfq_pos_tree_add_move(struct bfq_data *bfqd, struct bfq_queue *bfqq); -void bfq_weights_tree_add(struct bfq_data *bfqd, struct bfq_queue *bfqq, - struct rb_root_cached *root); -void __bfq_weights_tree_remove(struct bfq_data *bfqd, - struct bfq_queue *bfqq, - struct rb_root_cached *root); -void bfq_weights_tree_remove(struct bfq_data *bfqd, - struct bfq_queue *bfqq); +void bfq_weights_tree_add(struct bfq_data *bfqd, struct bfq_queue *bfqq); +void __bfq_weights_tree_remove(struct bfq_data *bfqd, struct bfq_queue *bfqq); +void bfq_weights_tree_remove(struct bfq_data *bfqd, struct bfq_queue *bfqq); void bfq_bfqq_expire(struct bfq_data *bfqd, struct bfq_queue *bfqq, bool compensate, enum bfqq_expiration reason); void bfq_put_queue(struct bfq_queue *bfqq); diff --git a/block/bfq-wf2q.c b/block/bfq-wf2q.c index 42464e6ff40c..ccd227fed1c3 100644 --- a/block/bfq-wf2q.c +++ b/block/bfq-wf2q.c @@ -727,7 +727,6 @@ __bfq_entity_update_weight_prio(struct bfq_service_tree *old_st, struct bfq_queue *bfqq = bfq_entity_to_bfqq(entity); unsigned int prev_weight, new_weight; struct bfq_data *bfqd = NULL; - struct rb_root_cached *root; #ifdef CONFIG_BFQ_GROUP_IOSCHED struct bfq_sched_data *sd; struct bfq_group *bfqg; @@ -790,19 +789,15 @@ __bfq_entity_update_weight_prio(struct bfq_service_tree *old_st, * queue, remove the entity from its old weight counter (if * there is a counter associated with the entity). */ - if (prev_weight != new_weight && bfqq) { - root = &bfqd->queue_weights_tree; - __bfq_weights_tree_remove(bfqd, bfqq, root); - } + if (prev_weight != new_weight && bfqq) + __bfq_weights_tree_remove(bfqd, bfqq); entity->weight = new_weight; /* * Add the entity, if it is not a weight-raised queue, * to the counter associated with its new weight. */ - if (prev_weight != new_weight && bfqq && bfqq->wr_coeff == 1) { - /* If we get here, root has been initialized. 
*/ - bfq_weights_tree_add(bfqd, bfqq, root); - } + if (prev_weight != new_weight && bfqq && bfqq->wr_coeff == 1) + bfq_weights_tree_add(bfqd, bfqq); new_st->wsum += entity->weight; @@ -1695,8 +1690,7 @@ void bfq_add_bfqq_busy(struct bfq_data *bfqd, struct bfq_queue *bfqq) if (!bfqq->dispatched) if (bfqq->wr_coeff == 1) - bfq_weights_tree_add(bfqd, bfqq, - &bfqd->queue_weights_tree); + bfq_weights_tree_add(bfqd, bfqq); if (bfqq->wr_coeff > 1) bfqd->wr_busy_queues++; From patchwork Sat May 14 09:05:16 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Yu Kuai X-Patchwork-Id: 12849731 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 4AA41C433F5 for ; Sat, 14 May 2022 08:51:44 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S230313AbiENIvl (ORCPT ); Sat, 14 May 2022 04:51:41 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:45798 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S230090AbiENIvj (ORCPT ); Sat, 14 May 2022 04:51:39 -0400 Received: from szxga02-in.huawei.com (szxga02-in.huawei.com [45.249.212.188]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 72EBA11A0E; Sat, 14 May 2022 01:51:37 -0700 (PDT) Received: from kwepemi100002.china.huawei.com (unknown [172.30.72.56]) by szxga02-in.huawei.com (SkyGuard) with ESMTP id 4L0fLh19hwzGp4t; Sat, 14 May 2022 16:48:44 +0800 (CST) Received: from kwepemm600009.china.huawei.com (7.193.23.164) by kwepemi100002.china.huawei.com (7.221.188.188) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.24; Sat, 14 May 2022 16:51:35 +0800 Received: from huawei.com (10.175.127.227) by kwepemm600009.china.huawei.com (7.193.23.164) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.24; Sat, 14 May 2022 16:51:34 +0800 From: Yu Kuai To: , , , CC: , , , , Subject: [PATCH -next 2/8] block, bfq: cleanup __bfq_weights_tree_remove() Date: Sat, 14 May 2022 17:05:16 +0800 Message-ID: <20220514090522.1669270-3-yukuai3@huawei.com> X-Mailer: git-send-email 2.31.1 In-Reply-To: <20220514090522.1669270-1-yukuai3@huawei.com> References: <20220514090522.1669270-1-yukuai3@huawei.com> MIME-Version: 1.0 X-Originating-IP: [10.175.127.227] X-ClientProxiedBy: dggems702-chm.china.huawei.com (10.3.19.179) To kwepemm600009.china.huawei.com (7.193.23.164) X-CFilter-Loop: Reflected Precedence: bulk List-ID: X-Mailing-List: linux-block@vger.kernel.org It's the same with bfq_weights_tree_remove() now. Signed-off-by: Yu Kuai Reviewed-by: Jan Kara --- block/bfq-iosched.c | 13 +------------ block/bfq-iosched.h | 1 - block/bfq-wf2q.c | 2 +- 3 files changed, 2 insertions(+), 14 deletions(-) diff --git a/block/bfq-iosched.c b/block/bfq-iosched.c index bcbe78d71143..1d0141c450c0 100644 --- a/block/bfq-iosched.c +++ b/block/bfq-iosched.c @@ -944,8 +944,7 @@ void bfq_weights_tree_add(struct bfq_data *bfqd, struct bfq_queue *bfqq) * See the comments to the function bfq_weights_tree_add() for considerations * about overhead. 
*/ -void __bfq_weights_tree_remove(struct bfq_data *bfqd, - struct bfq_queue *bfqq) +void bfq_weights_tree_remove(struct bfq_data *bfqd, struct bfq_queue *bfqq) { struct rb_root_cached *root; if (!bfqq->weight_counter) @@ -964,16 +963,6 @@ void __bfq_weights_tree_remove(struct bfq_data *bfqd, bfq_put_queue(bfqq); } -/* - * Invoke __bfq_weights_tree_remove on bfqq and decrement the number - * of active groups for each queue's inactive parent entity. - */ -void bfq_weights_tree_remove(struct bfq_data *bfqd, - struct bfq_queue *bfqq) -{ - __bfq_weights_tree_remove(bfqd, bfqq); -} - /* * Return expired entry, or NULL to just start from scratch in rbtree. */ diff --git a/block/bfq-iosched.h b/block/bfq-iosched.h index 0a3415abb994..bc54b9824b1e 100644 --- a/block/bfq-iosched.h +++ b/block/bfq-iosched.h @@ -933,7 +933,6 @@ void bic_set_bfqq(struct bfq_io_cq *bic, struct bfq_queue *bfqq, bool is_sync); struct bfq_data *bic_to_bfqd(struct bfq_io_cq *bic); void bfq_pos_tree_add_move(struct bfq_data *bfqd, struct bfq_queue *bfqq); void bfq_weights_tree_add(struct bfq_data *bfqd, struct bfq_queue *bfqq); -void __bfq_weights_tree_remove(struct bfq_data *bfqd, struct bfq_queue *bfqq); void bfq_weights_tree_remove(struct bfq_data *bfqd, struct bfq_queue *bfqq); void bfq_bfqq_expire(struct bfq_data *bfqd, struct bfq_queue *bfqq, bool compensate, enum bfqq_expiration reason); diff --git a/block/bfq-wf2q.c b/block/bfq-wf2q.c index ccd227fed1c3..2f3fb45a32c3 100644 --- a/block/bfq-wf2q.c +++ b/block/bfq-wf2q.c @@ -790,7 +790,7 @@ __bfq_entity_update_weight_prio(struct bfq_service_tree *old_st, * there is a counter associated with the entity). */ if (prev_weight != new_weight && bfqq) - __bfq_weights_tree_remove(bfqd, bfqq); + bfq_weights_tree_remove(bfqd, bfqq); entity->weight = new_weight; /* * Add the entity, if it is not a weight-raised queue, From patchwork Sat May 14 09:05:17 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Yu Kuai X-Patchwork-Id: 12849733 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 8BCB6C4332F for ; Sat, 14 May 2022 08:51:44 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S230303AbiENIvk (ORCPT ); Sat, 14 May 2022 04:51:40 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:45800 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S230102AbiENIvj (ORCPT ); Sat, 14 May 2022 04:51:39 -0400 Received: from szxga02-in.huawei.com (szxga02-in.huawei.com [45.249.212.188]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id A66C911C17; Sat, 14 May 2022 01:51:37 -0700 (PDT) Received: from kwepemi100005.china.huawei.com (unknown [172.30.72.57]) by szxga02-in.huawei.com (SkyGuard) with ESMTP id 4L0fLh6GVBzGpbN; Sat, 14 May 2022 16:48:44 +0800 (CST) Received: from kwepemm600009.china.huawei.com (7.193.23.164) by kwepemi100005.china.huawei.com (7.221.188.155) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.24; Sat, 14 May 2022 16:51:35 +0800 Received: from huawei.com (10.175.127.227) by kwepemm600009.china.huawei.com (7.193.23.164) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.24; Sat, 14 May 2022 16:51:35 +0800 From: Yu Kuai To: , , , CC: , , , , 
Subject: [PATCH -next 3/8] block, bfq: factor out code to update 'active_entities' Date: Sat, 14 May 2022 17:05:17 +0800 Message-ID: <20220514090522.1669270-4-yukuai3@huawei.com> X-Mailer: git-send-email 2.31.1 In-Reply-To: <20220514090522.1669270-1-yukuai3@huawei.com> References: <20220514090522.1669270-1-yukuai3@huawei.com> MIME-Version: 1.0 X-Originating-IP: [10.175.127.227] X-ClientProxiedBy: dggems702-chm.china.huawei.com (10.3.19.179) To kwepemm600009.china.huawei.com (7.193.23.164) X-CFilter-Loop: Reflected Precedence: bulk List-ID: X-Mailing-List: linux-block@vger.kernel.org Current code is a bit ugly and hard to read. Signed-off-by: Yu Kuai Reviewed-by: Jan Kara --- block/bfq-wf2q.c | 61 +++++++++++++++++++++++++----------------------- 1 file changed, 32 insertions(+), 29 deletions(-) diff --git a/block/bfq-wf2q.c b/block/bfq-wf2q.c index 2f3fb45a32c3..c58568a4b009 100644 --- a/block/bfq-wf2q.c +++ b/block/bfq-wf2q.c @@ -230,6 +230,26 @@ static void bfq_dec_busy_queues(struct bfq_queue *bfqq) bfqq->bfqd->num_groups_with_busy_queues--; } +static void bfq_inc_active_entities(struct bfq_entity *entity) +{ + struct bfq_sched_data *sd = entity->sched_data; + struct bfq_group *bfqg = container_of(sd, struct bfq_group, sched_data); + struct bfq_data *bfqd = (struct bfq_data *)bfqg->bfqd; + + if (bfqg != bfqd->root_group) + bfqg->active_entities++; +} + +static void bfq_dec_active_entities(struct bfq_entity *entity) +{ + struct bfq_sched_data *sd = entity->sched_data; + struct bfq_group *bfqg = container_of(sd, struct bfq_group, sched_data); + struct bfq_data *bfqd = (struct bfq_data *)bfqg->bfqd; + + if (bfqg != bfqd->root_group) + bfqg->active_entities--; +} + #else /* CONFIG_BFQ_GROUP_IOSCHED */ static bool bfq_update_parent_budget(struct bfq_entity *next_in_service) @@ -250,6 +270,14 @@ static void bfq_dec_busy_queues(struct bfq_queue *bfqq) { } +static void bfq_inc_active_entities(struct bfq_entity *entity) +{ +} + +static void bfq_dec_active_entities(struct bfq_entity *entity) +{ +} + #endif /* CONFIG_BFQ_GROUP_IOSCHED */ /* @@ -476,11 +504,6 @@ static void bfq_active_insert(struct bfq_service_tree *st, { struct bfq_queue *bfqq = bfq_entity_to_bfqq(entity); struct rb_node *node = &entity->rb_node; -#ifdef CONFIG_BFQ_GROUP_IOSCHED - struct bfq_sched_data *sd = NULL; - struct bfq_group *bfqg = NULL; - struct bfq_data *bfqd = NULL; -#endif bfq_insert(&st->active, entity); @@ -491,17 +514,10 @@ static void bfq_active_insert(struct bfq_service_tree *st, bfq_update_active_tree(node); -#ifdef CONFIG_BFQ_GROUP_IOSCHED - sd = entity->sched_data; - bfqg = container_of(sd, struct bfq_group, sched_data); - bfqd = (struct bfq_data *)bfqg->bfqd; -#endif if (bfqq) list_add(&bfqq->bfqq_list, &bfqq->bfqd->active_list); -#ifdef CONFIG_BFQ_GROUP_IOSCHED - if (bfqg != bfqd->root_group) - bfqg->active_entities++; -#endif + + bfq_inc_active_entities(entity); } /** @@ -578,29 +594,16 @@ static void bfq_active_extract(struct bfq_service_tree *st, { struct bfq_queue *bfqq = bfq_entity_to_bfqq(entity); struct rb_node *node; -#ifdef CONFIG_BFQ_GROUP_IOSCHED - struct bfq_sched_data *sd = NULL; - struct bfq_group *bfqg = NULL; - struct bfq_data *bfqd = NULL; -#endif node = bfq_find_deepest(&entity->rb_node); bfq_extract(&st->active, entity); if (node) bfq_update_active_tree(node); - -#ifdef CONFIG_BFQ_GROUP_IOSCHED - sd = entity->sched_data; - bfqg = container_of(sd, struct bfq_group, sched_data); - bfqd = (struct bfq_data *)bfqg->bfqd; -#endif if (bfqq) list_del(&bfqq->bfqq_list); -#ifdef 
CONFIG_BFQ_GROUP_IOSCHED - if (bfqg != bfqd->root_group) - bfqg->active_entities--; -#endif + + bfq_dec_active_entities(entity); } /** From patchwork Sat May 14 09:05:18 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Yu Kuai X-Patchwork-Id: 12849734 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 18DB5C433FE for ; Sat, 14 May 2022 08:51:45 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S230326AbiENIvm (ORCPT ); Sat, 14 May 2022 04:51:42 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:45808 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S230131AbiENIvj (ORCPT ); Sat, 14 May 2022 04:51:39 -0400 Received: from szxga02-in.huawei.com (szxga02-in.huawei.com [45.249.212.188]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 6AEC116592; Sat, 14 May 2022 01:51:38 -0700 (PDT) Received: from kwepemi100001.china.huawei.com (unknown [172.30.72.55]) by szxga02-in.huawei.com (SkyGuard) with ESMTP id 4L0fLj4R9qzGpd9; Sat, 14 May 2022 16:48:45 +0800 (CST) Received: from kwepemm600009.china.huawei.com (7.193.23.164) by kwepemi100001.china.huawei.com (7.221.188.215) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.24; Sat, 14 May 2022 16:51:36 +0800 Received: from huawei.com (10.175.127.227) by kwepemm600009.china.huawei.com (7.193.23.164) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.24; Sat, 14 May 2022 16:51:35 +0800 From: Yu Kuai To: , , , CC: , , , , Subject: [PATCH -next 4/8] block, bfq: don't declare 'bfqd' as type 'void *' in bfq_group Date: Sat, 14 May 2022 17:05:18 +0800 Message-ID: <20220514090522.1669270-5-yukuai3@huawei.com> X-Mailer: git-send-email 2.31.1 In-Reply-To: <20220514090522.1669270-1-yukuai3@huawei.com> References: <20220514090522.1669270-1-yukuai3@huawei.com> MIME-Version: 1.0 X-Originating-IP: [10.175.127.227] X-ClientProxiedBy: dggems702-chm.china.huawei.com (10.3.19.179) To kwepemm600009.china.huawei.com (7.193.23.164) X-CFilter-Loop: Reflected Precedence: bulk List-ID: X-Mailing-List: linux-block@vger.kernel.org Prevent unnecessary format conversion for bfqg->bfqd in multiple places. 
Signed-off-by: Yu Kuai Reviewed-by: Jan Kara --- block/bfq-cgroup.c | 2 +- block/bfq-iosched.h | 2 +- block/bfq-wf2q.c | 8 +++----- 3 files changed, 5 insertions(+), 7 deletions(-) diff --git a/block/bfq-cgroup.c b/block/bfq-cgroup.c index 4d516879d9fa..b4e39ab4ad17 100644 --- a/block/bfq-cgroup.c +++ b/block/bfq-cgroup.c @@ -224,7 +224,7 @@ void bfqg_stats_update_io_add(struct bfq_group *bfqg, struct bfq_queue *bfqq, { blkg_rwstat_add(&bfqg->stats.queued, op, 1); bfqg_stats_end_empty_time(&bfqg->stats); - if (!(bfqq == ((struct bfq_data *)bfqg->bfqd)->in_service_queue)) + if (!(bfqq == bfqg->bfqd->in_service_queue)) bfqg_stats_set_start_group_wait_time(bfqg, bfqq_group(bfqq)); } diff --git a/block/bfq-iosched.h b/block/bfq-iosched.h index bc54b9824b1e..d57e4848f57f 100644 --- a/block/bfq-iosched.h +++ b/block/bfq-iosched.h @@ -894,7 +894,7 @@ struct bfq_group { struct bfq_entity entity; struct bfq_sched_data sched_data; - void *bfqd; + struct bfq_data *bfqd; struct bfq_queue *async_bfqq[2][IOPRIO_NR_LEVELS]; struct bfq_queue *async_idle_bfqq; diff --git a/block/bfq-wf2q.c b/block/bfq-wf2q.c index c58568a4b009..15b97687493a 100644 --- a/block/bfq-wf2q.c +++ b/block/bfq-wf2q.c @@ -234,9 +234,8 @@ static void bfq_inc_active_entities(struct bfq_entity *entity) { struct bfq_sched_data *sd = entity->sched_data; struct bfq_group *bfqg = container_of(sd, struct bfq_group, sched_data); - struct bfq_data *bfqd = (struct bfq_data *)bfqg->bfqd; - if (bfqg != bfqd->root_group) + if (bfqg != bfqg->bfqd->root_group) bfqg->active_entities++; } @@ -244,9 +243,8 @@ static void bfq_dec_active_entities(struct bfq_entity *entity) { struct bfq_sched_data *sd = entity->sched_data; struct bfq_group *bfqg = container_of(sd, struct bfq_group, sched_data); - struct bfq_data *bfqd = (struct bfq_data *)bfqg->bfqd; - if (bfqg != bfqd->root_group) + if (bfqg != bfqg->bfqd->root_group) bfqg->active_entities--; } @@ -741,7 +739,7 @@ __bfq_entity_update_weight_prio(struct bfq_service_tree *old_st, else { sd = entity->my_sched_data; bfqg = container_of(sd, struct bfq_group, sched_data); - bfqd = (struct bfq_data *)bfqg->bfqd; + bfqd = bfqg->bfqd; } #endif From patchwork Sat May 14 09:05:19 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Yu Kuai X-Patchwork-Id: 12849735 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id F1DC3C43217 for ; Sat, 14 May 2022 08:51:45 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S230106AbiENIvo (ORCPT ); Sat, 14 May 2022 04:51:44 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:45834 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S230288AbiENIvk (ORCPT ); Sat, 14 May 2022 04:51:40 -0400 Received: from szxga02-in.huawei.com (szxga02-in.huawei.com [45.249.212.188]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 246DF24F30; Sat, 14 May 2022 01:51:39 -0700 (PDT) Received: from kwepemi500021.china.huawei.com (unknown [172.30.72.54]) by szxga02-in.huawei.com (SkyGuard) with ESMTP id 4L0fPB0ZDhzhZ0k; Sat, 14 May 2022 16:50:54 +0800 (CST) Received: from kwepemm600009.china.huawei.com (7.193.23.164) by kwepemi500021.china.huawei.com (7.221.188.245) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.24; 
Sat, 14 May 2022 16:51:37 +0800 Received: from huawei.com (10.175.127.227) by kwepemm600009.china.huawei.com (7.193.23.164) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.24; Sat, 14 May 2022 16:51:36 +0800 From: Yu Kuai To: , , , CC: , , , , Subject: [PATCH -next 5/8] block, bfq: cleanup bfq_activate_requeue_entity() Date: Sat, 14 May 2022 17:05:19 +0800 Message-ID: <20220514090522.1669270-6-yukuai3@huawei.com> X-Mailer: git-send-email 2.31.1 In-Reply-To: <20220514090522.1669270-1-yukuai3@huawei.com> References: <20220514090522.1669270-1-yukuai3@huawei.com> MIME-Version: 1.0 X-Originating-IP: [10.175.127.227] X-ClientProxiedBy: dggems702-chm.china.huawei.com (10.3.19.179) To kwepemm600009.china.huawei.com (7.193.23.164) X-CFilter-Loop: Reflected Precedence: bulk List-ID: X-Mailing-List: linux-block@vger.kernel.org Just make the code a little cleaner by removing the unnecessary variable 'sd'. Signed-off-by: Yu Kuai Reviewed-by: Jan Kara --- block/bfq-wf2q.c | 14 +++++--------- 1 file changed, 5 insertions(+), 9 deletions(-) diff --git a/block/bfq-wf2q.c b/block/bfq-wf2q.c index 15b97687493a..da189c732376 100644 --- a/block/bfq-wf2q.c +++ b/block/bfq-wf2q.c @@ -1085,12 +1085,12 @@ static void __bfq_requeue_entity(struct bfq_entity *entity) } static void __bfq_activate_requeue_entity(struct bfq_entity *entity, - struct bfq_sched_data *sd, bool non_blocking_wait_rq) { struct bfq_service_tree *st = bfq_entity_service_tree(entity); - if (sd->in_service_entity == entity || entity->tree == &st->active) + if (entity->sched_data->in_service_entity == entity || + entity->tree == &st->active) /* * in service or already queued on the active tree, * requeue or reposition @@ -1122,14 +1122,10 @@ static void bfq_activate_requeue_entity(struct bfq_entity *entity, bool non_blocking_wait_rq, bool requeue, bool expiration) { - struct bfq_sched_data *sd; - for_each_entity(entity) { - sd = entity->sched_data; - __bfq_activate_requeue_entity(entity, sd, non_blocking_wait_rq); - - if (!bfq_update_next_in_service(sd, entity, expiration) && - !requeue) + __bfq_activate_requeue_entity(entity, non_blocking_wait_rq); + if (!bfq_update_next_in_service(entity->sched_data, entity, + expiration) && !requeue) break; } } From patchwork Sat May 14 09:05:20 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Yu Kuai X-Patchwork-Id: 12849732 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id C0600C43219 for ; Sat, 14 May 2022 08:51:45 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S230335AbiENIvn (ORCPT ); Sat, 14 May 2022 04:51:43 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:45864 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S230312AbiENIvk (ORCPT ); Sat, 14 May 2022 04:51:40 -0400 Received: from szxga02-in.huawei.com (szxga02-in.huawei.com [45.249.212.188]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id D1E3511A0E; Sat, 14 May 2022 01:51:39 -0700 (PDT) Received: from kwepemi100013.china.huawei.com (unknown [172.30.72.57]) by szxga02-in.huawei.com (SkyGuard) with ESMTP id 4L0fPB5WvLzhYpV; Sat, 14 May 2022 16:50:54 +0800 (CST) Received: from kwepemm600009.china.huawei.com (7.193.23.164) by kwepemi100013.china.huawei.com 
(7.221.188.136) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.24; Sat, 14 May 2022 16:51:38 +0800 Received: from huawei.com (10.175.127.227) by kwepemm600009.china.huawei.com (7.193.23.164) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.24; Sat, 14 May 2022 16:51:37 +0800 From: Yu Kuai To: , , , CC: , , , , Subject: [PATCH -next 6/8] block, bfq: remove dead code for updating 'rq_in_driver' Date: Sat, 14 May 2022 17:05:20 +0800 Message-ID: <20220514090522.1669270-7-yukuai3@huawei.com> X-Mailer: git-send-email 2.31.1 In-Reply-To: <20220514090522.1669270-1-yukuai3@huawei.com> References: <20220514090522.1669270-1-yukuai3@huawei.com> MIME-Version: 1.0 X-Originating-IP: [10.175.127.227] X-ClientProxiedBy: dggems702-chm.china.huawei.com (10.3.19.179) To kwepemm600009.china.huawei.com (7.193.23.164) X-CFilter-Loop: Reflected Precedence: bulk List-ID: X-Mailing-List: linux-block@vger.kernel.org Such code is not even compiled, since it is inside the "#if 0" macro. Signed-off-by: Yu Kuai Reviewed-by: Jan Kara --- block/bfq-iosched.c | 16 ---------------- 1 file changed, 16 deletions(-) diff --git a/block/bfq-iosched.c b/block/bfq-iosched.c index 1d0141c450c0..e36a16684fb4 100644 --- a/block/bfq-iosched.c +++ b/block/bfq-iosched.c @@ -2322,22 +2322,6 @@ static sector_t get_sdist(sector_t last_pos, struct request *rq) return 0; } -#if 0 /* Still not clear if we can do without next two functions */ -static void bfq_activate_request(struct request_queue *q, struct request *rq) -{ - struct bfq_data *bfqd = q->elevator->elevator_data; - - bfqd->rq_in_driver++; -} - -static void bfq_deactivate_request(struct request_queue *q, struct request *rq) -{ - struct bfq_data *bfqd = q->elevator->elevator_data; - - bfqd->rq_in_driver--; -} -#endif - static void bfq_remove_request(struct request_queue *q, struct request *rq) { From patchwork Sat May 14 09:05:21 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Yu Kuai X-Patchwork-Id: 12849736 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id ABF57C433EF for ; Sat, 14 May 2022 08:51:55 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S230498AbiENIvv (ORCPT ); Sat, 14 May 2022 04:51:51 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:45954 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S230333AbiENIvn (ORCPT ); Sat, 14 May 2022 04:51:43 -0400 Received: from szxga01-in.huawei.com (szxga01-in.huawei.com [45.249.212.187]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 2155A55A7; Sat, 14 May 2022 01:51:41 -0700 (PDT) Received: from kwepemi500024.china.huawei.com (unknown [172.30.72.55]) by szxga01-in.huawei.com (SkyGuard) with ESMTP id 4L0fPR4M64zgYhd; Sat, 14 May 2022 16:51:07 +0800 (CST) Received: from kwepemm600009.china.huawei.com (7.193.23.164) by kwepemi500024.china.huawei.com (7.221.188.100) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.24; Sat, 14 May 2022 16:51:38 +0800 Received: from huawei.com (10.175.127.227) by kwepemm600009.china.huawei.com (7.193.23.164) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 
15.1.2375.24; Sat, 14 May 2022 16:51:38 +0800 From: Yu Kuai To: , , , CC: , , , , Subject: [PATCH -next 7/8] block, bfq: cleanup bfq_bfqq_update_budg_for_activation() Date: Sat, 14 May 2022 17:05:21 +0800 Message-ID: <20220514090522.1669270-8-yukuai3@huawei.com> X-Mailer: git-send-email 2.31.1 In-Reply-To: <20220514090522.1669270-1-yukuai3@huawei.com> References: <20220514090522.1669270-1-yukuai3@huawei.com> MIME-Version: 1.0 X-Originating-IP: [10.175.127.227] X-ClientProxiedBy: dggems702-chm.china.huawei.com (10.3.19.179) To kwepemm600009.china.huawei.com (7.193.23.164) X-CFilter-Loop: Reflected Precedence: bulk List-ID: X-Mailing-List: linux-block@vger.kernel.org It will only be called from bfq_bfqq_handle_idle_busy_switch() in a specific code branch, so there is no need to precalculate 'bfqq_wants_to_preempt' each time bfq_bfqq_handle_idle_busy_switch() is called. Signed-off-by: Yu Kuai --- block/bfq-iosched.c | 32 +++++++------------------------- 1 file changed, 7 insertions(+), 25 deletions(-) diff --git a/block/bfq-iosched.c b/block/bfq-iosched.c index e36a16684fb4..1e57d76c8dd3 100644 --- a/block/bfq-iosched.c +++ b/block/bfq-iosched.c @@ -1555,10 +1555,11 @@ static int bfq_min_budget(struct bfq_data *bfqd) * responsibility of handling the above case 2. */ static bool bfq_bfqq_update_budg_for_activation(struct bfq_data *bfqd, - struct bfq_queue *bfqq, - bool arrived_in_time) + struct bfq_queue *bfqq) { struct bfq_entity *entity = &bfqq->entity; + bool arrived_in_time = ktime_get_ns() <= bfqq->ttime.last_end_request + + bfqd->bfq_slice_idle * 3; /* * In the next compound condition, we check also whether there @@ -1567,7 +1568,7 @@ static bool bfq_bfqq_update_budg_for_activation(struct bfq_data *bfqd, * would be expired immediately after being selected for * service. This would only cause useless overhead. */ - if (bfq_bfqq_non_blocking_wait_rq(bfqq) && arrived_in_time && + if (arrived_in_time && bfq_bfqq_non_blocking_wait_rq(bfqq) && bfq_bfqq_budget_left(bfqq) > 0) { /* * We do not clear the flag non_blocking_wait_rq here, as @@ -1768,17 +1769,7 @@ static void bfq_bfqq_handle_idle_busy_switch(struct bfq_data *bfqd, bool *interactive) { bool soft_rt, in_burst, wr_or_deserves_wr, - bfqq_wants_to_preempt, - idle_for_long_time = bfq_bfqq_idle_for_long_time(bfqd, bfqq), - /* - * See the comments on - * bfq_bfqq_update_budg_for_activation for - * details on the usage of the next variable. - */ - arrived_in_time = ktime_get_ns() <= - bfqq->ttime.last_end_request + - bfqd->bfq_slice_idle * 3; - + idle_for_long_time = bfq_bfqq_idle_for_long_time(bfqd, bfqq); /* * bfqq deserves to be weight-raised if: @@ -1816,14 +1807,6 @@ static void bfq_bfqq_handle_idle_busy_switch(struct bfq_data *bfqd, (bfqq->bic || RQ_BIC(rq)->stably_merged) && (*interactive || soft_rt))); - /* - * Using the last flag, update budget and check whether bfqq - * may want to preempt the in-service queue. - */ - bfqq_wants_to_preempt = - bfq_bfqq_update_budg_for_activation(bfqd, bfqq, - arrived_in_time); - /* * If bfqq happened to be activated in a burst, but has been * idle for much more than an interactive queue, then we @@ -1879,8 +1862,7 @@ static void bfq_bfqq_handle_idle_busy_switch(struct bfq_data *bfqd, * guarantees or throughput. As for guarantees, we care * explicitly about two cases. The first is that bfqq has to * recover a service hole, as explained in the comments on - * bfq_bfqq_update_budg_for_activation(), i.e., that - * bfqq_wants_to_preempt is true. 
However, if bfqq does not + * carry time-critical I/O, then bfqq's bandwidth is less * important than that of queues that carry time-critical I/O. * So, as a further constraint, we consider this case only if @@ -1918,7 +1900,7 @@ static void bfq_bfqq_handle_idle_busy_switch(struct bfq_data *bfqd, * (2) this switch of bfqq to busy changes the scenario. */ if (bfqd->in_service_queue && - ((bfqq_wants_to_preempt && + ((bfq_bfqq_update_budg_for_activation(bfqd, bfqq) && bfqq->wr_coeff >= bfqd->in_service_queue->wr_coeff) || bfq_bfqq_higher_class_or_weight(bfqq, bfqd->in_service_queue) || !bfq_better_to_idle(bfqd->in_service_queue)) && From patchwork Sat May 14 09:05:22 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Yu Kuai X-Patchwork-Id: 12849737 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 054EEC433EF for ; Sat, 14 May 2022 08:52:19 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S230358AbiENIwR (ORCPT ); Sat, 14 May 2022 04:52:17 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:46018 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S230350AbiENIvp (ORCPT ); Sat, 14 May 2022 04:51:45 -0400 Received: from szxga03-in.huawei.com (szxga03-in.huawei.com [45.249.212.189]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 86604252AB; Sat, 14 May 2022 01:51:42 -0700 (PDT) Received: from kwepemi500023.china.huawei.com (unknown [172.30.72.56]) by szxga03-in.huawei.com (SkyGuard) with ESMTP id 4L0fJT0s0jzCsWd; Sat, 14 May 2022 16:46:49 +0800 (CST) Received: from kwepemm600009.china.huawei.com (7.193.23.164) by kwepemi500023.china.huawei.com (7.221.188.76) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.24; Sat, 14 May 2022 16:51:39 +0800 Received: from huawei.com (10.175.127.227) by kwepemm600009.china.huawei.com (7.193.23.164) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.24; Sat, 14 May 2022 16:51:38 +0800 From: Yu Kuai To: , , , CC: , , , , Subject: [PATCH -next 8/8] block, bfq: cleanup bfq_bfqq_handle_idle_busy_switch() Date: Sat, 14 May 2022 17:05:22 +0800 Message-ID: <20220514090522.1669270-9-yukuai3@huawei.com> X-Mailer: git-send-email 2.31.1 In-Reply-To: <20220514090522.1669270-1-yukuai3@huawei.com> References: <20220514090522.1669270-1-yukuai3@huawei.com> MIME-Version: 1.0 X-Originating-IP: [10.175.127.227] X-ClientProxiedBy: dggems702-chm.china.huawei.com (10.3.19.179) To kwepemm600009.china.huawei.com (7.193.23.164) X-CFilter-Loop: Reflected Precedence: bulk List-ID: X-Mailing-List: linux-block@vger.kernel.org 'wr_or_deserves_wr' is only used in bfq_update_bfqq_wr_on_rq_arrival(), which is only called from bfq_bfqq_handle_idle_busy_switch() in a specific code branch, thus there is no need to precalculate 'wr_or_deserves_wr' each time bfq_bfqq_handle_idle_busy_switch() is called. 
Signed-off-by: Yu Kuai --- block/bfq-iosched.c | 110 +++++++++++++++++++++++++------------------- 1 file changed, 62 insertions(+), 48 deletions(-) diff --git a/block/bfq-iosched.c b/block/bfq-iosched.c index 1e57d76c8dd3..cea8cb3f5ee2 100644 --- a/block/bfq-iosched.c +++ b/block/bfq-iosched.c @@ -1624,15 +1624,65 @@ static unsigned long bfq_smallest_from_now(void) return jiffies - MAX_JIFFY_OFFSET; } +/* + * bfqq deserves to be weight-raised if: + * - it is sync, + * - it does not belong to a large burst, + * - it has been idle for enough time or is soft real-time, + * - is linked to a bfq_io_cq (it is not shared in any sense), + * - has a default weight (otherwise we assume the user wanted + * to control its weight explicitly) + * + * Merged bfq_queues are kept out of weight-raising + * (low-latency) mechanisms. The reason is that these queues + * are usually created for non-interactive and + * non-soft-real-time tasks. Yet this is not the case for + * stably-merged queues. These queues are merged just because + * they are created shortly after each other. So they may + * easily serve the I/O of an interactive or soft-real time + * application, if the application happens to spawn multiple + * processes. So let also stably-merged queued enjoy weight + * raising. + */ +static bool bfqq_wr_or_deserves_wr(struct bfq_data *bfqd, + struct bfq_queue *bfqq, + struct request *rq, + bool interactive, bool soft_rt) +{ + if (!bfqd->low_latency) + return false; + + if (bfqq->wr_coeff > 1) + return true; + + if (!bfq_bfqq_sync(bfqq)) + return false; + + if (!bfqq->bic && !RQ_BIC(rq)->stably_merged) + return false; + + if (!interactive && !soft_rt) + return false; + + return true; +} + static void bfq_update_bfqq_wr_on_rq_arrival(struct bfq_data *bfqd, struct bfq_queue *bfqq, unsigned int old_wr_coeff, - bool wr_or_deserves_wr, - bool interactive, - bool in_burst, - bool soft_rt) -{ - if (old_wr_coeff == 1 && wr_or_deserves_wr) { + struct request *rq, + bool interactive) +{ + bool in_burst = bfq_bfqq_in_large_burst(bfqq); + bool soft_rt = bfqd->bfq_wr_max_softrt_rate > 0 && + !BFQQ_TOTALLY_SEEKY(bfqq) && + !in_burst && + time_is_before_jiffies(bfqq->soft_rt_next_start) && + bfqq->dispatched == 0 && + bfqq->entity.new_weight == 40; + + if (old_wr_coeff == 1 && + bfqq_wr_or_deserves_wr(bfqd, bfqq, rq, interactive, soft_rt)) { /* start a weight-raising period */ if (interactive) { bfqq->service_from_wr = 0; @@ -1674,9 +1724,9 @@ static void bfq_update_bfqq_wr_on_rq_arrival(struct bfq_data *bfqd, if (interactive) { /* update wr coeff and duration */ bfqq->wr_coeff = bfqd->bfq_wr_coeff; bfqq->wr_cur_max_time = bfq_wr_duration(bfqd); - } else if (in_burst) + } else if (in_burst) { bfqq->wr_coeff = 1; - else if (soft_rt) { + } else if (soft_rt) { /* * The application is now or still meeting the * requirements for being deemed soft rt. 
We @@ -1768,44 +1818,11 @@ static void bfq_bfqq_handle_idle_busy_switch(struct bfq_data *bfqd, struct request *rq, bool *interactive) { - bool soft_rt, in_burst, wr_or_deserves_wr, - idle_for_long_time = bfq_bfqq_idle_for_long_time(bfqd, bfqq); + bool in_burst = bfq_bfqq_in_large_burst(bfqq); + bool idle_for_long_time = bfq_bfqq_idle_for_long_time(bfqd, bfqq); - /* - * bfqq deserves to be weight-raised if: - * - it is sync, - * - it does not belong to a large burst, - * - it has been idle for enough time or is soft real-time, - * - is linked to a bfq_io_cq (it is not shared in any sense), - * - has a default weight (otherwise we assume the user wanted - * to control its weight explicitly) - */ - in_burst = bfq_bfqq_in_large_burst(bfqq); - soft_rt = bfqd->bfq_wr_max_softrt_rate > 0 && - !BFQQ_TOTALLY_SEEKY(bfqq) && - !in_burst && - time_is_before_jiffies(bfqq->soft_rt_next_start) && - bfqq->dispatched == 0 && - bfqq->entity.new_weight == 40; *interactive = !in_burst && idle_for_long_time && bfqq->entity.new_weight == 40; - /* - * Merged bfq_queues are kept out of weight-raising - * (low-latency) mechanisms. The reason is that these queues - * are usually created for non-interactive and - * non-soft-real-time tasks. Yet this is not the case for - * stably-merged queues. These queues are merged just because - * they are created shortly after each other. So they may - * easily serve the I/O of an interactive or soft-real time - * application, if the application happens to spawn multiple - * processes. So let also stably-merged queued enjoy weight - * raising. - */ - wr_or_deserves_wr = bfqd->low_latency && - (bfqq->wr_coeff > 1 || - (bfq_bfqq_sync(bfqq) && - (bfqq->bic || RQ_BIC(rq)->stably_merged) && - (*interactive || soft_rt))); /* * If bfqq happened to be activated in a burst, but has been @@ -1840,11 +1857,8 @@ static void bfq_bfqq_handle_idle_busy_switch(struct bfq_data *bfqd, if (time_is_before_jiffies(bfqq->split_time + bfqd->bfq_wr_min_idle_time)) { bfq_update_bfqq_wr_on_rq_arrival(bfqd, bfqq, - old_wr_coeff, - wr_or_deserves_wr, - *interactive, - in_burst, - soft_rt); + old_wr_coeff, rq, + *interactive); if (old_wr_coeff != bfqq->wr_coeff) bfqq->entity.prio_changed = 1;