
[V5,6/7] blk-mq-sched: improve dispatching from sw queue

Message ID 20170930102720.30219-7-ming.lei@redhat.com (mailing list archive)
State Not Applicable

Commit Message

Ming Lei Sept. 30, 2017, 10:27 a.m. UTC
SCSI devices use a host-wide tagset, and the shared
driver tag space is often quite big. Meanwhile there is
also a per-LUN queue depth (.cmd_per_lun), which is
often small.

So lots of requests may stay in the sw queue, and we
always flush all of those belonging to the same hw queue
and dispatch them all to the driver. Unfortunately it is
easy to run into queue busy because of the small per-LUN
queue depth. Once these requests are flushed out, they
have to stay in hctx->dispatch, no bio can be merged
into them any more, and sequential IO performance is
hurt.

This patch improves dispatching from the sw queue when
there is a per-request-queue queue depth, by taking
requests one by one from the sw queue, just like an IO
scheduler does.

Reviewed-by: Omar Sandoval <osandov@fb.com>
Reviewed-by: Bart Van Assche <bart.vanassche@wdc.com>
Tested-by: Oleksandr Natalenko <oleksandr@natalenko.name>
Tested-by: Tom Nguyen <tom81094@gmail.com>
Tested-by: Paolo Valente <paolo.valente@linaro.org>
Signed-off-by: Ming Lei <ming.lei@redhat.com>
---
 block/blk-mq-sched.c   | 53 ++++++++++++++++++++++++++++++++++++++++++++++++--
 include/linux/blk-mq.h |  2 ++
 2 files changed, 53 insertions(+), 2 deletions(-)
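
To make the imbalance concrete: a SCSI host typically advertises a
large shared tag space via .can_queue while capping each LUN much
lower via .cmd_per_lun. A minimal sketch, with purely illustrative
numbers (not taken from this series):

#include <scsi/scsi_host.h>

/*
 * Illustrative only: ~1024 tags shared host-wide, but each LUN
 * accepts just a few outstanding commands. Flushing a whole sw
 * queue at such a LUN quickly hits BLK_STS_RESOURCE, and the
 * remainder is stranded on hctx->dispatch.
 */
static struct scsi_host_template example_sht = {
	.can_queue	= 1024,	/* shared driver tag space */
	.cmd_per_lun	= 3,	/* per-LUN queue depth */
};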

Comments

Christoph Hellwig Oct. 3, 2017, 9:05 a.m. UTC | #1
On Sat, Sep 30, 2017 at 06:27:19PM +0800, Ming Lei wrote:
> SCSI devices use a host-wide tagset, and the shared
> driver tag space is often quite big. Meanwhile there is
> also a per-LUN queue depth (.cmd_per_lun), which is
> often small.
> 
> So lots of requests may stay in the sw queue, and we
> always flush all of those belonging to the same hw queue
> and dispatch them all to the driver. Unfortunately it is
> easy to run into queue busy because of the small per-LUN
> queue depth. Once these requests are flushed out, they
> have to stay in hctx->dispatch, no bio can be merged
> into them any more, and sequential IO performance is
> hurt.
> 
> This patch improves dispatching from the sw queue when
> there is a per-request-queue queue depth, by taking
> requests one by one from the sw queue, just like an IO
> scheduler does.

Once again, use your line-length real estate for your changelogs.

> +static void blk_mq_do_dispatch_ctx(struct request_queue *q,
> +				   struct blk_mq_hw_ctx *hctx)
> +{
> +	LIST_HEAD(rq_list);
> +	struct blk_mq_ctx *ctx = READ_ONCE(hctx->dispatch_from);
> +	bool dispatched;
> +
> +	do {
> +		struct request *rq;
> +
> +		rq = blk_mq_dequeue_from_ctx(hctx, ctx);

This probably should be merged with the patch that introduces
blk_mq_dequeue_from_ctx.

> +		if (!rq)
> +			break;
> +		list_add(&rq->queuelist, &rq_list);

Btw, do we really need to return a request from blk_mq_dequeue_from_ctx,
or would it be easier to just add it directly to the list passed as
an argument?  I did wonder that about the existing code already.
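
For what it's worth, a sketch of that alternative: the helper below is
hypothetical (its name and signature are invented here, not part of
this series), dequeuing straight onto the caller's list:

/*
 * Hypothetical helper, not part of this series: dequeue one request
 * from the given sw queue directly onto the caller's list and report
 * whether anything was found.
 */
static bool blk_mq_dequeue_from_ctx_to_list(struct blk_mq_hw_ctx *hctx,
					    struct blk_mq_ctx *ctx,
					    struct list_head *list)
{
	struct request *rq = blk_mq_dequeue_from_ctx(hctx, ctx);

	if (!rq)
		return false;
	list_add(&rq->queuelist, list);
	return true;
}

One wrinkle: the caller still needs rq->mq_ctx to advance the
round-robin cursor, which is an argument for keeping the
request-returning form for now.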

> +
> +		/* round robin for fair dispatch */
> +		ctx = blk_mq_next_ctx(hctx, rq->mq_ctx);
> +
> +		dispatched = blk_mq_dispatch_rq_list(q, &rq_list);
> +	} while (dispatched);
> +
> +	if (!dispatched)
> +		WRITE_ONCE(hctx->dispatch_from, ctx);

No need for the dispatched variable; just write this as:

	} while (blk_mq_dispatch_rq_list(q, &rq_list));

	WRITE_ONCE(hctx->dispatch_from, ctx);

and do an early return instead of a break from inside the loop.
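
Putting that together, the suggested shape would be roughly the sketch
below (not the final committed code; note it also avoids reading
dispatched before anything has been assigned to it, which the posted
version does when the very first dequeue comes back empty):

static void blk_mq_do_dispatch_ctx(struct request_queue *q,
				   struct blk_mq_hw_ctx *hctx)
{
	LIST_HEAD(rq_list);
	struct blk_mq_ctx *ctx = READ_ONCE(hctx->dispatch_from);

	do {
		struct request *rq;

		rq = blk_mq_dequeue_from_ctx(hctx, ctx);
		if (!rq)
			return;	/* all sw queues drained */
		list_add(&rq->queuelist, &rq_list);

		/* round robin for fair dispatch */
		ctx = blk_mq_next_ctx(hctx, rq->mq_ctx);
	} while (blk_mq_dispatch_rq_list(q, &rq_list));

	/* out of resource: remember where to resume next run */
	WRITE_ONCE(hctx->dispatch_from, ctx);
}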

> +	} else if (!has_sched_dispatch && !q->queue_depth) {
> +		/*
> +		 * If there is no per-request_queue depth, we flush
> +		 * all requests in this hw queue; otherwise we pick
> +		 * up requests one by one from the sw queue, to
> +		 * avoid messing up I/O merging when dispatch runs
> +		 * out of resources, which is easily triggered by
> +		 * the per-request_queue queue depth.
> +		 */

Use your 80-column real estate for comments.
Ming Lei Oct. 9, 2017, 10:15 a.m. UTC | #2
On Tue, Oct 03, 2017 at 02:05:27AM -0700, Christoph Hellwig wrote:
> On Sat, Sep 30, 2017 at 06:27:19PM +0800, Ming Lei wrote:
> > SCSI devices use a host-wide tagset, and the shared
> > driver tag space is often quite big. Meanwhile there is
> > also a per-LUN queue depth (.cmd_per_lun), which is
> > often small.
> > 
> > So lots of requests may stay in the sw queue, and we
> > always flush all of those belonging to the same hw queue
> > and dispatch them all to the driver. Unfortunately it is
> > easy to run into queue busy because of the small per-LUN
> > queue depth. Once these requests are flushed out, they
> > have to stay in hctx->dispatch, no bio can be merged
> > into them any more, and sequential IO performance is
> > hurt.
> > 
> > This patch improves dispatching from the sw queue when
> > there is a per-request-queue queue depth, by taking
> > requests one by one from the sw queue, just like an IO
> > scheduler does.
> 
> Once again, use your line-length real estate for your changelogs.

OK.

> 
> > +static void blk_mq_do_dispatch_ctx(struct request_queue *q,
> > +				   struct blk_mq_hw_ctx *hctx)
> > +{
> > +	LIST_HEAD(rq_list);
> > +	struct blk_mq_ctx *ctx = READ_ONCE(hctx->dispatch_from);
> > +	bool dispatched;
> > +
> > +	do {
> > +		struct request *rq;
> > +
> > +		rq = blk_mq_dequeue_from_ctx(hctx, ctx);
> 
> This probably should be merged with the patch that introduces
> blk_mq_dequeue_from_ctx.

OK.

> 
> > +		if (!rq)
> > +			break;
> > +		list_add(&rq->queuelist, &rq_list);
> 
> Btw, do we really need to return a request from blk_mq_dequeue_from_ctx,
> or would it be easier to just add it directly to the list passed as
> an argument?  I did wonder that about the existing code already.

I suggest keeping them as they are in this patch; we can clean them
all up in a follow-up patch.

> 
> > +
> > +		/* round robin for fair dispatch */
> > +		ctx = blk_mq_next_ctx(hctx, rq->mq_ctx);
> > +
> > +		dispatched = blk_mq_dispatch_rq_list(q, &rq_list);
> > +	} while (dispatched);
> > +
> > +	if (!dispatched)
> > +		WRITE_ONCE(hctx->dispatch_from, ctx);
> 
> No need for the dispatched variable; just write this as:
> 
> 	} while (blk_mq_dispatch_rq_list(q, &rq_list));
> 
> 	WRITE_ONCE(hctx->dispatch_from, ctx);
> 
> and do an early return instead of a break from inside the loop.

OK.

> 
> > +	} else if (!has_sched_dispatch && !q->queue_depth) {
> > +		/*
> > +		 * If there is no per-request_queue depth, we flush
> > +		 * all requests in this hw queue; otherwise we pick
> > +		 * up requests one by one from the sw queue, to
> > +		 * avoid messing up I/O merging when dispatch runs
> > +		 * out of resources, which is easily triggered by
> > +		 * the per-request_queue queue depth.
> > +		 */
> 
> Use your 80-column real estate for comments.

OK.

Patch

diff --git a/block/blk-mq-sched.c b/block/blk-mq-sched.c
index 538f363f39ca..3ba112d9dc15 100644
--- a/block/blk-mq-sched.c
+++ b/block/blk-mq-sched.c
@@ -105,6 +105,42 @@  static void blk_mq_do_dispatch_sched(struct request_queue *q,
 	} while (blk_mq_dispatch_rq_list(q, &rq_list));
 }
 
+static struct blk_mq_ctx *blk_mq_next_ctx(struct blk_mq_hw_ctx *hctx,
+					  struct blk_mq_ctx *ctx)
+{
+	unsigned idx = ctx->index_hw;
+
+	if (++idx == hctx->nr_ctx)
+		idx = 0;
+
+	return hctx->ctxs[idx];
+}
+
+static void blk_mq_do_dispatch_ctx(struct request_queue *q,
+				   struct blk_mq_hw_ctx *hctx)
+{
+	LIST_HEAD(rq_list);
+	struct blk_mq_ctx *ctx = READ_ONCE(hctx->dispatch_from);
+	bool dispatched;
+
+	do {
+		struct request *rq;
+
+		rq = blk_mq_dequeue_from_ctx(hctx, ctx);
+		if (!rq)
+			break;
+		list_add(&rq->queuelist, &rq_list);
+
+		/* round robin for fair dispatch */
+		ctx = blk_mq_next_ctx(hctx, rq->mq_ctx);
+
+		dispatched = blk_mq_dispatch_rq_list(q, &rq_list);
+	} while (dispatched);
+
+	if (!dispatched)
+		WRITE_ONCE(hctx->dispatch_from, ctx);
+}
+
 void blk_mq_sched_dispatch_requests(struct blk_mq_hw_ctx *hctx)
 {
 	struct request_queue *q = hctx->queue;
@@ -142,18 +178,31 @@  void blk_mq_sched_dispatch_requests(struct blk_mq_hw_ctx *hctx)
 	if (!list_empty(&rq_list)) {
 		blk_mq_sched_mark_restart_hctx(hctx);
 		do_sched_dispatch = blk_mq_dispatch_rq_list(q, &rq_list);
-	} else if (!has_sched_dispatch) {
+	} else if (!has_sched_dispatch && !q->queue_depth) {
+		/*
+		 * If there is no per-request_queue depth, we flush
+		 * all requests in this hw queue; otherwise we pick
+		 * up requests one by one from the sw queue, to
+		 * avoid messing up I/O merging when dispatch runs
+		 * out of resources, which is easily triggered by
+		 * the per-request_queue queue depth.
+		 */
 		blk_mq_flush_busy_ctxs(hctx, &rq_list);
 		blk_mq_dispatch_rq_list(q, &rq_list);
 	}
 
+	if (!do_sched_dispatch)
+		return;
+
 	/*
 	 * We want to dispatch from the scheduler if there was nothing
 	 * on the dispatch list or we were able to dispatch from the
 	 * dispatch list.
 	 */
-	if (do_sched_dispatch && has_sched_dispatch)
+	if (has_sched_dispatch)
 		blk_mq_do_dispatch_sched(q, e, hctx);
+	else
+		blk_mq_do_dispatch_ctx(q, hctx);
 }
 
 bool blk_mq_sched_try_merge(struct request_queue *q, struct bio *bio,
diff --git a/include/linux/blk-mq.h b/include/linux/blk-mq.h
index 2747469cedaf..fccabe00fb55 100644
--- a/include/linux/blk-mq.h
+++ b/include/linux/blk-mq.h
@@ -30,6 +30,8 @@  struct blk_mq_hw_ctx {
 
 	struct sbitmap		ctx_map;
 
+	struct blk_mq_ctx	*dispatch_from;
+
 	struct blk_mq_ctx	**ctxs;
 	unsigned int		nr_ctx;
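
A design note on the new field, based on the hunks above: dispatch_from
is only a round-robin cursor recording which sw queue to drain first on
the next run. It is accessed without hctx->lock, so READ_ONCE() and
WRITE_ONCE() are used to keep the compiler from tearing or fusing the
loads and stores; a racy or stale value can cost a little fairness but
never correctness, which is why no heavier synchronization is needed.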