
[1/8] block: Provide blk_mq_sched_get_icq()

Message ID: 20211125133645.27483-1-jack@suse.cz
State: New, archived
Series: bfq: Limit number of allocated scheduler tags per cgroup

Commit Message

Jan Kara Nov. 25, 2021, 1:36 p.m. UTC
Currently we look up the ICQ only after the request is allocated. However,
BFQ will want to decide how many scheduler tags it allows a given bfq queue
(effectively a process) to consume based on cgroup weight. So provide a
function blk_mq_sched_get_icq() so that BFQ can look up the ICQ earlier.

Acked-by: Paolo Valente <paolo.valente@linaro.org>
Signed-off-by: Jan Kara <jack@suse.cz>
---
 block/blk-mq-sched.c | 26 +++++++++++++++-----------
 block/blk-mq-sched.h |  1 +
 2 files changed, 16 insertions(+), 11 deletions(-)
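
For context, here is a minimal sketch of how a consumer such as BFQ could use
the new helper from a ->limit_depth()-style hook, before any scheduler tag is
allocated. The function name and the depth logic are illustrative only (the
actual per-cgroup limiting arrives in patch 4/8 of this series);
blk_mq_sched_get_icq() and struct blk_mq_alloc_data come from the patch and
the existing block layer:

    /*
     * Illustrative sketch, not part of this patch: look up the ICQ before
     * a scheduler tag is allocated, so the scheduler can decide how many
     * tags this process (bfq queue) may consume.
     */
    static void bfq_limit_depth_sketch(struct blk_mq_alloc_data *data)
    {
            struct io_cq *icq = blk_mq_sched_get_icq(data->q);

            if (!icq)
                    return; /* no IO context: keep the default depth */

            /*
             * With the ICQ known this early, BFQ can map it to its
             * per-process queue and shrink data->shallow_depth based
             * on the cgroup weight.
             */
    }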

Comments

Jens Axboe Nov. 25, 2021, 4:04 p.m. UTC | #1
On Thu, 25 Nov 2021 14:36:34 +0100, Jan Kara wrote:
> Currently we look up the ICQ only after the request is allocated. However,
> BFQ will want to decide how many scheduler tags it allows a given bfq queue
> (effectively a process) to consume based on cgroup weight. So provide a
> function blk_mq_sched_get_icq() so that BFQ can look up the ICQ earlier.

Applied, thanks!

[1/8] block: Provide blk_mq_sched_get_icq()
      commit: 4896c4e64ba5d5d5acdbcf68c5910dd4f6d8fa62
[2/8] bfq: Track number of allocated requests in bfq_entity
      commit: 421165c5bb2e7c7480290a229ea7a24512237494
[3/8] bfq: Store full bitmap depth in bfq_data
      commit: e0ef40059557df144110865953ea4c0b87c11ac5
[4/8] bfq: Limit number of requests consumed by each cgroup
      commit: 3d7a7c45e29d5d1f5a9622557acb47443e8b6e28
[5/8] bfq: Limit waker detection in time
      commit: d7eb68e3958fc91711f5df981c517fec9da35c42
[6/8] bfq: Provide helper to generate bfqq name
      commit: 2bbd0f81ac7050bfd537437a65579d49bc2128c1
[7/8] bfq: Log waker detections
      commit: e330e2ab2c40e624029cf208c9505cad2b3c81fd
[8/8] bfq: Do not let waker requests skip proper accounting
      commit: b488606166844e7fb03e5995dbc9d608bbd57c05

Best regards,
Jens Axboe

Patch

diff --git a/block/blk-mq-sched.c b/block/blk-mq-sched.c
index b942b38000e5..98c6a97729f2 100644
--- a/block/blk-mq-sched.c
+++ b/block/blk-mq-sched.c
@@ -18,9 +18,8 @@ 
 #include "blk-mq-tag.h"
 #include "blk-wbt.h"
 
-void blk_mq_sched_assign_ioc(struct request *rq)
+struct io_cq *blk_mq_sched_get_icq(struct request_queue *q)
 {
-	struct request_queue *q = rq->q;
 	struct io_context *ioc;
 	struct io_cq *icq;
 
@@ -28,22 +27,27 @@  void blk_mq_sched_assign_ioc(struct request *rq)
 	if (unlikely(!current->io_context))
 		create_task_io_context(current, GFP_ATOMIC, q->node);
 
-	/*
-	 * May not have an IO context if it's a passthrough request
-	 */
+	/* May not have an IO context if context creation failed */
 	ioc = current->io_context;
 	if (!ioc)
-		return;
+		return NULL;
 
 	spin_lock_irq(&q->queue_lock);
 	icq = ioc_lookup_icq(ioc, q);
 	spin_unlock_irq(&q->queue_lock);
+	if (icq)
+		return icq;
+	return ioc_create_icq(ioc, q, GFP_ATOMIC);
+}
+EXPORT_SYMBOL(blk_mq_sched_get_icq);
 
-	if (!icq) {
-		icq = ioc_create_icq(ioc, q, GFP_ATOMIC);
-		if (!icq)
-			return;
-	}
+void blk_mq_sched_assign_ioc(struct request *rq)
+{
+	struct io_cq *icq;
+
+	icq = blk_mq_sched_get_icq(rq->q);
+	if (!icq)
+		return;
 	get_io_context(icq->ioc);
 	rq->elv.icq = icq;
 }
diff --git a/block/blk-mq-sched.h b/block/blk-mq-sched.h
index 25d1034952b6..add651ec06da 100644
--- a/block/blk-mq-sched.h
+++ b/block/blk-mq-sched.h
@@ -8,6 +8,7 @@ 
 
 #define MAX_SCHED_RQ (16 * BLKDEV_DEFAULT_RQ)
 
+struct io_cq *blk_mq_sched_get_icq(struct request_queue *q);
 void blk_mq_sched_assign_ioc(struct request *rq);
 
 bool blk_mq_sched_try_merge(struct request_queue *q, struct bio *bio,
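
For reference, the two hunks above combine into the following shape for the
rewritten functions in block/blk-mq-sched.c (reconstructed from the diff; the
unchanged context between the hunks is restored from the surrounding file,
and the short comments are editorial):

    struct io_cq *blk_mq_sched_get_icq(struct request_queue *q)
    {
            struct io_context *ioc;
            struct io_cq *icq;

            /* create task io_context, if we don't have one already */
            if (unlikely(!current->io_context))
                    create_task_io_context(current, GFP_ATOMIC, q->node);

            /* May not have an IO context if context creation failed */
            ioc = current->io_context;
            if (!ioc)
                    return NULL;

            /* Look up the ICQ under the queue lock; create it if missing. */
            spin_lock_irq(&q->queue_lock);
            icq = ioc_lookup_icq(ioc, q);
            spin_unlock_irq(&q->queue_lock);
            if (icq)
                    return icq;
            return ioc_create_icq(ioc, q, GFP_ATOMIC);
    }
    EXPORT_SYMBOL(blk_mq_sched_get_icq);

    void blk_mq_sched_assign_ioc(struct request *rq)
    {
            struct io_cq *icq;

            icq = blk_mq_sched_get_icq(rq->q);
            if (!icq)
                    return;

            /* Pin the io_context while the request references its ICQ. */
            get_io_context(icq->ioc);
            rq->elv.icq = icq;
    }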