
[2/3] blk-mq: clear 'active_queues' immediately when 'nr_active' is decreased to 0

Message ID 20201226102808.2534966-3-yukuai3@huawei.com (mailing list archive)
State New, archived
Series fix the performance fluctuation due to shared tagset

Commit Message

Yu Kuai Dec. 26, 2020, 10:28 a.m. UTC
Currently, 'active_queues' is only cleared after a queue has seen no
IO for 5 seconds. Thus, if multiple hardware queues share a tag set
and some queues only occasionally issue a small amount of IO, other
queues might not be able to get enough tags even though the
utilization of the total tag space is less than 100%, because
'active_queues' is never decreased in the meantime.

Fix this by clearing 'active_queues' immediately when 'nr_active' is
decreased to 0.
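
For reference, the reason a stale 'active_queues' starves other queues
is the fair-share check done at tag allocation time. Below is a
minimal sketch of that calculation (simplified from hctx_may_queue()
in block/blk-mq-tag.c; the function and parameter names here are
illustrative, not the kernel's):

	/*
	 * Each active queue is allowed roughly
	 * total_depth / active_queues tags, so a stale
	 * 'active_queues' count shrinks every queue's share even
	 * when most of those queues no longer issue IO.
	 */
	static bool may_queue(unsigned int total_depth,
			      unsigned int active_queues,
			      unsigned int my_active_requests)
	{
		unsigned int depth;

		if (!active_queues)
			return true;

		/* divide the tag space evenly, with a small minimum */
		depth = (total_depth + active_queues - 1) / active_queues;
		if (depth < 4)
			depth = 4;

		return my_active_requests < depth;
	}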

Signed-off-by: Hou Tao <houtao1@huawei.com>
Signed-off-by: Yu Kuai <yukuai3@huawei.com>
---
 block/blk-mq.h | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

Patch

diff --git a/block/blk-mq.h b/block/blk-mq.h
index f7212bacfa56..228c5c442be4 100644
--- a/block/blk-mq.h
+++ b/block/blk-mq.h
@@ -212,8 +212,10 @@  static inline void __blk_mq_dec_active_requests(struct blk_mq_hw_ctx *hctx)
 {
 	if (blk_mq_is_sbitmap_shared(hctx->flags))
 		atomic_dec(&hctx->queue->nr_active_requests_shared_sbitmap);
-	else if (!atomic_dec_return(&hctx->nr_active))
+	else if (!atomic_dec_return(&hctx->nr_active)) {
+		blk_mq_tag_idle(hctx);
 		blk_mq_dtag_idle(hctx);
+	}
 }
 
 static inline int __blk_mq_active_requests(struct blk_mq_hw_ctx *hctx)
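
For context, blk_mq_tag_idle() on this path roughly amounts to the
following (a sketch modeled on __blk_mq_tag_idle() in
block/blk-mq-tag.c; blk_mq_dtag_idle() is introduced by patch 1/3 of
this series and is assumed to follow the same pattern):

	/*
	 * Clear this hctx's "active" mark and drop it from the
	 * shared count, so the remaining queues immediately see a
	 * larger fair share. The test_and_clear prevents the
	 * decrement from running twice if the queue is idled on
	 * two paths at once.
	 */
	static void tag_idle(struct blk_mq_hw_ctx *hctx)
	{
		if (!test_and_clear_bit(BLK_MQ_S_TAG_ACTIVE, &hctx->state))
			return;

		atomic_dec(&hctx->tags->active_queues);
		blk_mq_tag_wakeup_all(hctx->tags, false);
	}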