
[v7,9/9] blk-throttle: clean up flag 'THROTL_TG_PENDING'

Message ID 20220802140415.2960284-10-yukuai1@huaweicloud.com (mailing list archive)
State New, archived
Series bugfix and cleanup for blk-throttle

Commit Message

Yu Kuai Aug. 2, 2022, 2:04 p.m. UTC
From: Yu Kuai <yukuai3@huawei.com>

All related operations happen under 'queue_lock', so there is no need
for the flag; we only need to make sure that throtl_enqueue_tg() is
called when the first bio is throttled, and that throtl_dequeue_tg() is
called when the last throttled bio is dispatched. There are no
functional changes in this patch.

Signed-off-by: Yu Kuai <yukuai3@huawei.com>
---
 block/blk-throttle.c | 22 ++++++++--------------
 block/blk-throttle.h |  7 +++----
 2 files changed, 11 insertions(+), 18 deletions(-)
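
In short, the patch derives "on the parent's pending tree" from the
nr_queued[] counters instead of tracking it in THROTL_TG_PENDING. The toy
userspace model below illustrates that invariant (illustration only --
struct toy_sq and the toy_* names are made up, not kernel code):

#include <stdbool.h>
#include <stdio.h>

enum { READ = 0, WRITE = 1 };

/* Stand-in for struct throtl_service_queue plus the on-tree state. */
struct toy_sq {
	int nr_queued[2];	/* throttled bios per direction */
	bool on_pending_tree;	/* models add to / erase from pending tree */
};

/* Enqueue only on the empty -> non-empty edge, i.e. when the first
 * bio is throttled (mirrors throtl_add_bio_tg() after the patch). */
static void toy_queue_bio(struct toy_sq *sq, int rw)
{
	if (!sq->nr_queued[READ] && !sq->nr_queued[WRITE])
		sq->on_pending_tree = true;	/* throtl_enqueue_tg() */
	sq->nr_queued[rw]++;
}

/* Dequeue only on the non-empty -> empty edge, i.e. when the last
 * throttled bio is dispatched. */
static void toy_dispatch_bio(struct toy_sq *sq, int rw)
{
	sq->nr_queued[rw]--;
	if (!sq->nr_queued[READ] && !sq->nr_queued[WRITE])
		sq->on_pending_tree = false;	/* throtl_dequeue_tg() */
}

int main(void)
{
	struct toy_sq sq = { { 0, 0 }, false };

	toy_queue_bio(&sq, READ);
	toy_queue_bio(&sq, WRITE);
	toy_dispatch_bio(&sq, READ);
	printf("pending=%d queued=%d\n", sq.on_pending_tree,	/* 1 1 */
	       sq.nr_queued[READ] + sq.nr_queued[WRITE]);
	toy_dispatch_bio(&sq, WRITE);
	printf("pending=%d queued=%d\n", sq.on_pending_tree,	/* 0 0 */
	       sq.nr_queued[READ] + sq.nr_queued[WRITE]);
	return 0;
}

This "on the tree <=> has queued bios" equivalence is what lets the patch
test sq->nr_queued[] wherever THROTL_TG_PENDING used to be tested.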

Comments

Tejun Heo Aug. 16, 2022, 8:14 p.m. UTC | #1
On Tue, Aug 02, 2022 at 10:04:15PM +0800, Yu Kuai wrote:
> From: Yu Kuai <yukuai3@huawei.com>
> 
> All related operations happen under 'queue_lock', so there is no need
> for the flag; we only need to make sure that throtl_enqueue_tg() is
> called when the first bio is throttled, and that throtl_dequeue_tg() is
> called when the last throttled bio is dispatched. There are no
> functional changes in this patch.

I don't know whether this is better or not. It's minutely fewer lines of code
but also makes the code a bit more fragile. I'm ambivalent. At any rate,
please move these trivial patches to the head of the series or post them
separately.

Thanks.
Yu Kuai Aug. 17, 2022, 1:45 a.m. UTC | #2
Hi, Tejun!

On 2022/08/17 4:14, Tejun Heo wrote:
> On Tue, Aug 02, 2022 at 10:04:15PM +0800, Yu Kuai wrote:
>> From: Yu Kuai <yukuai3@huawei.com>
>>
>> All related operations happen under 'queue_lock', so there is no need
>> for the flag; we only need to make sure that throtl_enqueue_tg() is
>> called when the first bio is throttled, and that throtl_dequeue_tg() is
>> called when the last throttled bio is dispatched. There are no
>> functional changes in this patch.
> 
> I don't know whether this is better or not. It's minutely fewer lines of code
> but also makes the code a bit more fragile. I'm ambivalent. At any rate,
> please move these trivial patches to the head of the series or post them
> separately.

Can I ask why you think this patch makes the code a bit more fragile?

By the way, I'll post these trivial patches separately.

Thanks,
Kuai
> 
> Thanks.
>
Tejun Heo Aug. 17, 2022, 5:54 p.m. UTC | #3
Hello,

On Wed, Aug 17, 2022 at 09:45:13AM +0800, Yu Kuai wrote:
> > I don't know whether this is better or not. It's minutely fewer lines of code
> > but also makes the code a bit more fragile. I'm ambivalent. At any rate,
> > please move these trivial patches to the head of the series or post them
> > separately.
> 
> Can I ask why you think this patch makes the code a bit more fragile?

It's just one step further removed. Before, the flag was trivially in sync
with the on-queue status. After, the relationship is more indirect and
easier to break accidentally. Not that it's a major problem. Just not sure
what the benefit of the change is.
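
To see the "one step removed" point concretely, compare the two forms of
the check in tg_conf_updated(), taken from the hunk at the bottom of this
page:

	/* Before: the invariant is stated directly by a dedicated flag. */
	if (tg->flags & THROTL_TG_PENDING) {
		tg_update_disptime(tg);
		throtl_schedule_next_dispatch(sq->parent_sq, true);
	}

	/* After: the same fact is derived from two counters, so every
	 * queueing path must keep the empty <-> non-empty edges correct
	 * for this test to remain equivalent. */
	if (sq->nr_queued[READ] || sq->nr_queued[WRITE]) {
		tg_update_disptime(tg);
		throtl_schedule_next_dispatch(sq->parent_sq, true);
	}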

> By the way, I'll post these trivial patches separately.

Sounds great.

Thanks.
Yu Kuai Aug. 18, 2022, 9:29 a.m. UTC | #4
Hi, Tejun!

On 2022/08/18 1:54, Tejun Heo wrote:
> Hello,
> 
> On Wed, Aug 17, 2022 at 09:45:13AM +0800, Yu Kuai wrote:
>>> I don't know whether this is better or not. It's minutely fewer lines of code
>>> but also makes the code a bit more fragile. I'm ambivalent. At any rate,
>>> please move these trivial patches to the head of the series or post them
>>> separately.
>>
>> Can I ask why you think this patch makes the code a bit more fragile?
> 
> It's just one step further removed. Before, the flag was trivially in sync
> with the on-queue status. After, the relationship is more indirect and
> easier to break accidentally. Not that it's a major problem. Just not sure
> what the benefit of the change is.

If you are worried about that, I can keep the flag; the last two
patches would then be a cleanup:

Before, the flag is set and cleared frequently as each bio is
handled.

After, the flag is only set when the first bio is throttled, and it
is cleared when the last throttled bio is dispatched.

Of course, if you think this cleanup is not necessary, I'll drop the
last two patches.
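
A sketch of what that variant could look like (hypothetical, not code
from this series): THROTL_TG_PENDING stays, but it is only toggled on the
empty <-> non-empty edges, so it remains trivially in sync with the
pending-tree state:

	/* queueing side: first throttled bio sets the flag */
	if (!sq->nr_queued[READ] && !sq->nr_queued[WRITE]) {
		throtl_enqueue_tg(tg);
		tg->flags |= THROTL_TG_PENDING;
	}
	sq->nr_queued[rw]++;

	/* dispatch side: last throttled bio clears the flag */
	if (!sq->nr_queued[READ] && !sq->nr_queued[WRITE]) {
		throtl_dequeue_tg(tg);
		tg->flags &= ~THROTL_TG_PENDING;
	}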

Thanks,
Kuai
> 
>> By the way, I'll post these trivial patches separately.
> 
> Sounds great.
> 
> Thanks.
>
Tejun Heo Aug. 19, 2022, 5:35 p.m. UTC | #5
On Thu, Aug 18, 2022 at 05:29:39PM +0800, Yu Kuai wrote:
> Hi, Tejun!
> 
> On 2022/08/18 1:54, Tejun Heo wrote:
> > Hello,
> > 
> > On Wed, Aug 17, 2022 at 09:45:13AM +0800, Yu Kuai wrote:
> > > > I don't know whether this is better or not. It's minutely fewer lines of code
> > > > but also makes the code a bit more fragile. I'm ambivalent. At any rate,
> > > > please move these trivial patches to the head of the series or post them
> > > > separately.
> > > 
> > > Can I ask why you think this patch makes the code a bit more fragile?
> > 
> > It's just one step further removed. Before, the flag was trivially in sync
> > with the on-queue status. After, the relationship is more indirect and
> > easier to break accidentally. Not that it's a major problem. Just not sure
> > what the benefit of the change is.
> 
> If you are worried about that, I can keep the flag; the last two
> patches would then be a cleanup:

I wasn't necessarily worried. It's more that I couldn't tell why the code is
better afterwards. Maybe update the commit message to explain why the new
code is better?

Thanks.

Patch

diff --git a/block/blk-throttle.c b/block/blk-throttle.c
index 6b2096e95221..778c0131adb1 100644
--- a/block/blk-throttle.c
+++ b/block/blk-throttle.c
@@ -561,23 +561,16 @@  static void tg_service_queue_add(struct throtl_grp *tg)
 
 static void throtl_enqueue_tg(struct throtl_grp *tg)
 {
-	if (!(tg->flags & THROTL_TG_PENDING)) {
-		tg_service_queue_add(tg);
-		tg->flags |= THROTL_TG_PENDING;
-		tg->service_queue.parent_sq->nr_pending++;
-	}
+	tg_service_queue_add(tg);
+	tg->service_queue.parent_sq->nr_pending++;
 }
 
 static void throtl_dequeue_tg(struct throtl_grp *tg)
 {
-	if (tg->flags & THROTL_TG_PENDING) {
-		struct throtl_service_queue *parent_sq =
-			tg->service_queue.parent_sq;
+	struct throtl_service_queue *parent_sq = tg->service_queue.parent_sq;
 
-		throtl_rb_erase(&tg->rb_node, parent_sq);
-		--parent_sq->nr_pending;
-		tg->flags &= ~THROTL_TG_PENDING;
-	}
+	throtl_rb_erase(&tg->rb_node, parent_sq);
+	--parent_sq->nr_pending;
 }
 
 /* Call with queue lock held */
@@ -1026,8 +1019,9 @@  static void throtl_add_bio_tg(struct bio *bio, struct throtl_qnode *qn,
 
 	throtl_qnode_add_bio(bio, qn, &sq->queued[rw]);
 
+	if (!sq->nr_queued[READ] && !sq->nr_queued[WRITE])
+		throtl_enqueue_tg(tg);
 	sq->nr_queued[rw]++;
-	throtl_enqueue_tg(tg);
 }
 
 static void tg_update_disptime(struct throtl_grp *tg)
@@ -1382,7 +1376,7 @@  static void tg_conf_updated(struct throtl_grp *tg, bool global)
 	throtl_start_new_slice(tg, READ, false);
 	throtl_start_new_slice(tg, WRITE, false);
 
-	if (tg->flags & THROTL_TG_PENDING) {
+	if (sq->nr_queued[READ] || sq->nr_queued[WRITE]) {
 		tg_update_disptime(tg);
 		throtl_schedule_next_dispatch(sq->parent_sq, true);
 	}
diff --git a/block/blk-throttle.h b/block/blk-throttle.h
index c9545616ba12..2ae5ac8fe76e 100644
--- a/block/blk-throttle.h
+++ b/block/blk-throttle.h
@@ -53,10 +53,9 @@  struct throtl_service_queue {
 };
 
 enum tg_state_flags {
-	THROTL_TG_PENDING	= 1 << 0,	/* on parent's pending tree */
-	THROTL_TG_WAS_EMPTY	= 1 << 1,	/* bio_lists[] became non-empty */
-	THROTL_TG_HAS_IOPS_LIMIT = 1 << 2,	/* tg has iops limit */
-	THROTL_TG_CANCELING	= 1 << 3,	/* starts to cancel bio */
+	THROTL_TG_WAS_EMPTY	= 1 << 0,	/* bio_lists[] became non-empty */
+	THROTL_TG_HAS_IOPS_LIMIT = 1 << 1,	/* tg has iops limit */
+	THROTL_TG_CANCELING	= 1 << 2,	/* starts to cancel bio */
 };
 
 enum {