Message ID | 20170911161647.GA55061@redhat.com (mailing list archive)
---|---
State | New, archived
On 09/11/2017 10:16 AM, Mike Snitzer wrote:
> Here is v2 that should obviate the need to rename blk_mq_insert_request
> (by using bools to control run_queue and async).
>
> As for inserting directly into dispatch, if that can be done that is
> great but I'd prefer to have that be a follow-up optimization. This
> fixes the regression in question, and does so in well-known terms.
>
> What do you think?

I think it looks reasonable. My only concern is the use of the software
queues. Depending on the scheduler, they may or may not be used. I'd
need to review the code, but my first thought is that this would break
if you use blk_mq_insert_request() on a device that is managed by
mq-deadline or bfq, for instance. Schedulers are free to use the
software queues, but they are also free to ignore them and use internal
queuing.

Looking at the code, it looks like this was changed slightly at some
point: we always flush the software queues if any of them contain
requests. So it's probably fine.

My earlier suggestion was to use just hctx->dispatch for the IO and
bypass the software queues completely. The use case for the dispatch
list is the same, regardless of whether the device has a scheduler
attached or not.
On Mon, Sep 11 2017 at 4:51pm -0400,
Jens Axboe <axboe@kernel.dk> wrote:

> On 09/11/2017 10:16 AM, Mike Snitzer wrote:
> > Here is v2 that should obviate the need to rename blk_mq_insert_request
> > (by using bools to control run_queue and async).
> >
> > As for inserting directly into dispatch, if that can be done that is
> > great but I'd prefer to have that be a follow-up optimization. This
> > fixes the regression in question, and does so in well-known terms.
> >
> > What do you think?
>
> I think it looks reasonable. My only concern is the use of the software
> queues. Depending on the scheduler, they may or may not be used. I'd
> need to review the code, but my first thought is that this would break
> if you use blk_mq_insert_request() on a device that is managed by
> mq-deadline or bfq, for instance. Schedulers are free to use the
> software queues, but they are also free to ignore them and use internal
> queuing.
>
> Looking at the code, looks like this was changed slightly at some point,
> we always flush the software queues, if any of them contain requests. So
> it's probably fine.

OK good, but is that too brittle to rely on? Something that might change
in the future?

> My earlier suggestion to use just hctx->dispatch for the IO and bypass
> the software queues completely. The use case for the dispatch list is
> the same, regardless of whether the device has a scheduler attached or
> not.

I'm missing how these details relate to the goal of bypassing any
scheduler that might be attached. Are you saying the attached elevator
would still get in the way?

Looking at blk_mq_sched_insert_request(), submission when an elevator
isn't attached is exactly what I made blk_mq_insert_request() do (which
is exactly what it did in the past).

In the case of DM multipath, nothing else should be submitting IO to the
device, so the elevator shouldn't be used -- the only interface for
submitting IO would be blk_mq_insert_request().

So even if a scheduler is attached, it should be bypassed, right?

Mike
diff --git a/block/blk-core.c b/block/blk-core.c
index dbecbf4..9085013 100644
--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -2330,7 +2330,7 @@ blk_status_t blk_insert_cloned_request(struct request_queue *q, struct request *
 	if (q->mq_ops) {
 		if (blk_queue_io_stat(q))
 			blk_account_io_start(rq, true);
-		blk_mq_sched_insert_request(rq, false, true, false, false);
+		blk_mq_insert_request(rq, true, false);
 		return BLK_STS_OK;
 	}
diff --git a/block/blk-mq.c b/block/blk-mq.c
index 4603b11..05d9f7c 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -1357,6 +1357,25 @@ void __blk_mq_insert_request(struct blk_mq_hw_ctx *hctx, struct request *rq,
 	blk_mq_hctx_mark_pending(hctx, ctx);
 }
 
+static inline void blk_mq_queue_io(struct blk_mq_hw_ctx *hctx,
+				   struct blk_mq_ctx *ctx,
+				   struct request *rq)
+{
+	spin_lock(&ctx->lock);
+	__blk_mq_insert_request(hctx, rq, false);
+	spin_unlock(&ctx->lock);
+}
+
+void blk_mq_insert_request(struct request *rq, bool run_queue, bool async)
+{
+	struct blk_mq_ctx *ctx = rq->mq_ctx;
+	struct blk_mq_hw_ctx *hctx = blk_mq_map_queue(rq->q, ctx->cpu);
+
+	blk_mq_queue_io(hctx, ctx, rq);
+	if (run_queue)
+		blk_mq_run_hw_queue(hctx, async);
+}
+
 void blk_mq_insert_requests(struct blk_mq_hw_ctx *hctx, struct blk_mq_ctx *ctx,
 			    struct list_head *list)
@@ -1450,15 +1469,6 @@ static inline bool hctx_allow_merges(struct blk_mq_hw_ctx *hctx)
 		!blk_queue_nomerges(hctx->queue);
 }
 
-static inline void blk_mq_queue_io(struct blk_mq_hw_ctx *hctx,
-				   struct blk_mq_ctx *ctx,
-				   struct request *rq)
-{
-	spin_lock(&ctx->lock);
-	__blk_mq_insert_request(hctx, rq, false);
-	spin_unlock(&ctx->lock);
-}
-
 static blk_qc_t request_to_qc_t(struct blk_mq_hw_ctx *hctx, struct request *rq)
 {
 	if (rq->tag != -1)
diff --git a/block/blk-mq.h b/block/blk-mq.h
index 60b01c0..01067b2 100644
--- a/block/blk-mq.h
+++ b/block/blk-mq.h
@@ -54,6 +54,7 @@ int blk_mq_alloc_rqs(struct blk_mq_tag_set *set, struct blk_mq_tags *tags,
  */
 void __blk_mq_insert_request(struct blk_mq_hw_ctx *hctx, struct request *rq,
 			     bool at_head);
+void blk_mq_insert_request(struct request *rq, bool run_queue, bool async);
 void blk_mq_insert_requests(struct blk_mq_hw_ctx *hctx, struct blk_mq_ctx *ctx,
 			    struct list_head *list);