Message ID | 1491839696-24783-2-git-send-email-axboe@fb.com (mailing list archive) |
---|---|
State | New, archived |
Looks good,
Reviewed-by: Christoph Hellwig <hch@lst.de>
On Mon, 2017-04-10 at 09:54 -0600, Jens Axboe wrote:
> They serve the exact same purpose. Get rid of the non-delayed
> work variant, and just run it without delay for the normal case.

Reviewed-by: Bart Van Assche <Bart.VanAssche@sandisk.com>
On Mon, 2017-04-10 at 09:54 -0600, Jens Axboe wrote:
>  void blk_mq_stop_hw_queue(struct blk_mq_hw_ctx *hctx)
>  {
> -	cancel_work(&hctx->run_work);
> +	cancel_delayed_work(&hctx->run_work);
>  	cancel_delayed_work(&hctx->delay_work);
>  	set_bit(BLK_MQ_S_STOPPED, &hctx->state);
>  }

Hello Jens,

I would like to change the above cancel_*work() calls into cancel_*work_sync()
calls because this code is used when e.g. switching between I/O schedulers, and
no .queue_rq() calls must be ongoing while switching between schedulers. Do you
want to integrate that change into this patch, or do you want me to post a
separate patch? In the latter case, should I start from your for-next branch
to develop that patch, or from your for-next branch plus this patch series?

Thanks,

Bart.
On 04/11/2017 12:00 PM, Bart Van Assche wrote:
> On Mon, 2017-04-10 at 09:54 -0600, Jens Axboe wrote:
>>  void blk_mq_stop_hw_queue(struct blk_mq_hw_ctx *hctx)
>>  {
>> -	cancel_work(&hctx->run_work);
>> +	cancel_delayed_work(&hctx->run_work);
>>  	cancel_delayed_work(&hctx->delay_work);
>>  	set_bit(BLK_MQ_S_STOPPED, &hctx->state);
>>  }
>
> Hello Jens,
>
> I would like to change the above cancel_*work() calls into cancel_*work_sync()
> calls because this code is used when e.g. switching between I/O schedulers and
> no .queue_rq() calls must be ongoing while switching between schedulers. Do you
> want to integrate that change into this patch or do you want me to post a
> separate patch? In the latter case, should I start from your for-next branch
> to develop that patch or from your for-next branch + this patch series?

I agree, we should make it _sync(). I'll just make the edit in the patch
when I send it out again. I was waiting for further comments on patch 3/3.
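For context, the follow-up change Bart proposes and Jens agrees to would look roughly like this. This is a sketch of the agreed direction against this patch, not the final committed version:

```c
void blk_mq_stop_hw_queue(struct blk_mq_hw_ctx *hctx)
{
	/*
	 * Sketch of the proposed follow-up: the _sync() variants not only
	 * remove a pending work item but also wait for a currently
	 * executing one to finish, so no .queue_rq() call issued by
	 * run_work or delay_work can still be in flight on return.
	 */
	cancel_delayed_work_sync(&hctx->run_work);
	cancel_delayed_work_sync(&hctx->delay_work);
	set_bit(BLK_MQ_S_STOPPED, &hctx->state);
}
```

The difference matters for the scheduler-switch case Bart describes: cancel_delayed_work() alone can return while the work function is still running on another CPU.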
On Fri, 2017-04-14 at 14:02 -0600, Jens Axboe wrote:
> I was waiting for further comments on patch 3/3.
Hello Jens,
Patch 3/3 is probably fine, but I hope that you understand that the introduction
of a new race condition does not make me enthusiastic. Should your explanation of
why that race is harmless perhaps be added as a comment?
Bart.
On Mon, Apr 10, 2017 at 09:54:54AM -0600, Jens Axboe wrote:
> They serve the exact same purpose. Get rid of the non-delayed
> work variant, and just run it without delay for the normal case.
>
> Signed-off-by: Jens Axboe <axboe@fb.com>
> ---
>  block/blk-core.c       |  2 +-
>  block/blk-mq.c         | 27 ++++++---------------------
>  include/linux/blk-mq.h |  3 +--
>  3 files changed, 8 insertions(+), 24 deletions(-)
>
> diff --git a/block/blk-core.c b/block/blk-core.c
> index 8654aa0cef6d..d58541e4dc7b 100644
> --- a/block/blk-core.c
> +++ b/block/blk-core.c
> @@ -269,7 +269,7 @@ void blk_sync_queue(struct request_queue *q)
>  		int i;
>
>  		queue_for_each_hw_ctx(q, hctx, i) {
> -			cancel_work_sync(&hctx->run_work);
> +			cancel_delayed_work_sync(&hctx->run_work);
>  			cancel_delayed_work_sync(&hctx->delay_work);
>  		}
>  	} else {
> diff --git a/block/blk-mq.c b/block/blk-mq.c
> index e2ef7b460924..7afba6ab5a96 100644
> --- a/block/blk-mq.c
> +++ b/block/blk-mq.c
> @@ -1168,13 +1168,9 @@ static void __blk_mq_delay_run_hw_queue(struct blk_mq_hw_ctx *hctx, bool async,
>  		put_cpu();
>  	}
>
> -	if (msecs == 0)
> -		kblockd_schedule_work_on(blk_mq_hctx_next_cpu(hctx),
> -					 &hctx->run_work);
> -	else
> -		kblockd_schedule_delayed_work_on(blk_mq_hctx_next_cpu(hctx),
> -						 &hctx->delayed_run_work,
> -						 msecs_to_jiffies(msecs));
> +	kblockd_schedule_delayed_work_on(blk_mq_hctx_next_cpu(hctx),
> +					 &hctx->run_work,
> +					 msecs_to_jiffies(msecs));
>  }
>
>  void blk_mq_delay_run_hw_queue(struct blk_mq_hw_ctx *hctx, unsigned long msecs)
> @@ -1226,7 +1222,7 @@ EXPORT_SYMBOL(blk_mq_queue_stopped);
>
>  void blk_mq_stop_hw_queue(struct blk_mq_hw_ctx *hctx)
>  {
> -	cancel_work(&hctx->run_work);
> +	cancel_delayed_work(&hctx->run_work);
>  	cancel_delayed_work(&hctx->delay_work);
>  	set_bit(BLK_MQ_S_STOPPED, &hctx->state);
>  }
> @@ -1284,17 +1280,7 @@ static void blk_mq_run_work_fn(struct work_struct *work)
>  {
>  	struct blk_mq_hw_ctx *hctx;
>
> -	hctx = container_of(work, struct blk_mq_hw_ctx, run_work);
> -
> -	__blk_mq_run_hw_queue(hctx);
> -}
> -
> -static void blk_mq_delayed_run_work_fn(struct work_struct *work)
> -{
> -	struct blk_mq_hw_ctx *hctx;
> -
> -	hctx = container_of(work, struct blk_mq_hw_ctx, delayed_run_work.work);
> -
> +	hctx = container_of(work, struct blk_mq_hw_ctx, run_work.work);
>  	__blk_mq_run_hw_queue(hctx);
>  }
>
> @@ -1899,8 +1885,7 @@ static int blk_mq_init_hctx(struct request_queue *q,
>  	if (node == NUMA_NO_NODE)
>  		node = hctx->numa_node = set->numa_node;
>
> -	INIT_WORK(&hctx->run_work, blk_mq_run_work_fn);
> -	INIT_DELAYED_WORK(&hctx->delayed_run_work, blk_mq_delayed_run_work_fn);
> +	INIT_DELAYED_WORK(&hctx->run_work, blk_mq_run_work_fn);
>  	INIT_DELAYED_WORK(&hctx->delay_work, blk_mq_delay_work_fn);
>  	spin_lock_init(&hctx->lock);
>  	INIT_LIST_HEAD(&hctx->dispatch);
> diff --git a/include/linux/blk-mq.h b/include/linux/blk-mq.h
> index d75de612845d..2b4573a9ccf4 100644
> --- a/include/linux/blk-mq.h
> +++ b/include/linux/blk-mq.h
> @@ -15,7 +15,7 @@ struct blk_mq_hw_ctx {
>  		unsigned long		state;		/* BLK_MQ_S_* flags */
>  	} ____cacheline_aligned_in_smp;
>
> -	struct work_struct	run_work;
> +	struct delayed_work	run_work;
>  	cpumask_var_t		cpumask;
>  	int			next_cpu;
>  	int			next_cpu_batch;
> @@ -51,7 +51,6 @@ struct blk_mq_hw_ctx {
>
>  	atomic_t		nr_active;
>
> -	struct delayed_work	delayed_run_work;
>  	struct delayed_work	delay_work;
>
>  	struct hlist_node	cpuhp_dead;
> --
> 2.7.4

Reviewed-by: Ming Lei <ming.lei@redhat.com>

Thanks,
Ming
diff --git a/block/blk-core.c b/block/blk-core.c
index 8654aa0cef6d..d58541e4dc7b 100644
--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -269,7 +269,7 @@ void blk_sync_queue(struct request_queue *q)
 		int i;
 
 		queue_for_each_hw_ctx(q, hctx, i) {
-			cancel_work_sync(&hctx->run_work);
+			cancel_delayed_work_sync(&hctx->run_work);
 			cancel_delayed_work_sync(&hctx->delay_work);
 		}
 	} else {
diff --git a/block/blk-mq.c b/block/blk-mq.c
index e2ef7b460924..7afba6ab5a96 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -1168,13 +1168,9 @@ static void __blk_mq_delay_run_hw_queue(struct blk_mq_hw_ctx *hctx, bool async,
 		put_cpu();
 	}
 
-	if (msecs == 0)
-		kblockd_schedule_work_on(blk_mq_hctx_next_cpu(hctx),
-					 &hctx->run_work);
-	else
-		kblockd_schedule_delayed_work_on(blk_mq_hctx_next_cpu(hctx),
-						 &hctx->delayed_run_work,
-						 msecs_to_jiffies(msecs));
+	kblockd_schedule_delayed_work_on(blk_mq_hctx_next_cpu(hctx),
+					 &hctx->run_work,
+					 msecs_to_jiffies(msecs));
 }
 
 void blk_mq_delay_run_hw_queue(struct blk_mq_hw_ctx *hctx, unsigned long msecs)
@@ -1226,7 +1222,7 @@ EXPORT_SYMBOL(blk_mq_queue_stopped);
 
 void blk_mq_stop_hw_queue(struct blk_mq_hw_ctx *hctx)
 {
-	cancel_work(&hctx->run_work);
+	cancel_delayed_work(&hctx->run_work);
 	cancel_delayed_work(&hctx->delay_work);
 	set_bit(BLK_MQ_S_STOPPED, &hctx->state);
 }
@@ -1284,17 +1280,7 @@ static void blk_mq_run_work_fn(struct work_struct *work)
 {
 	struct blk_mq_hw_ctx *hctx;
 
-	hctx = container_of(work, struct blk_mq_hw_ctx, run_work);
-
-	__blk_mq_run_hw_queue(hctx);
-}
-
-static void blk_mq_delayed_run_work_fn(struct work_struct *work)
-{
-	struct blk_mq_hw_ctx *hctx;
-
-	hctx = container_of(work, struct blk_mq_hw_ctx, delayed_run_work.work);
-
+	hctx = container_of(work, struct blk_mq_hw_ctx, run_work.work);
 	__blk_mq_run_hw_queue(hctx);
 }
 
@@ -1899,8 +1885,7 @@ static int blk_mq_init_hctx(struct request_queue *q,
 	if (node == NUMA_NO_NODE)
 		node = hctx->numa_node = set->numa_node;
 
-	INIT_WORK(&hctx->run_work, blk_mq_run_work_fn);
-	INIT_DELAYED_WORK(&hctx->delayed_run_work, blk_mq_delayed_run_work_fn);
+	INIT_DELAYED_WORK(&hctx->run_work, blk_mq_run_work_fn);
 	INIT_DELAYED_WORK(&hctx->delay_work, blk_mq_delay_work_fn);
 	spin_lock_init(&hctx->lock);
 	INIT_LIST_HEAD(&hctx->dispatch);
diff --git a/include/linux/blk-mq.h b/include/linux/blk-mq.h
index d75de612845d..2b4573a9ccf4 100644
--- a/include/linux/blk-mq.h
+++ b/include/linux/blk-mq.h
@@ -15,7 +15,7 @@ struct blk_mq_hw_ctx {
 		unsigned long		state;		/* BLK_MQ_S_* flags */
 	} ____cacheline_aligned_in_smp;
 
-	struct work_struct	run_work;
+	struct delayed_work	run_work;
 	cpumask_var_t		cpumask;
 	int			next_cpu;
 	int			next_cpu_batch;
@@ -51,7 +51,6 @@ struct blk_mq_hw_ctx {
 
 	atomic_t		nr_active;
 
-	struct delayed_work	delayed_run_work;
 	struct delayed_work	delay_work;
 
 	struct hlist_node	cpuhp_dead;
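The container_of() change in blk_mq_run_work_fn() is the one non-obvious hunk: struct delayed_work embeds a struct work_struct as its .work member, and the workqueue callback receives a pointer to that embedded member, so the member path must now include it. A sketch of why (struct layout abbreviated):

```c
struct delayed_work {
	struct work_struct	work;	/* the callback gets a pointer to this */
	struct timer_list	timer;
	/* ... */
};

static void blk_mq_run_work_fn(struct work_struct *work)
{
	struct blk_mq_hw_ctx *hctx;

	/*
	 * 'work' points at hctx->run_work.work, so name the embedded
	 * member in the container_of() path.
	 */
	hctx = container_of(work, struct blk_mq_hw_ctx, run_work.work);
	__blk_mq_run_hw_queue(hctx);
}
```

Using plain `run_work` here, as the old code did for the work_struct field, would compute a wrong containing pointer once the field becomes a delayed_work, which is why the hunk changes both lines together.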
They serve the exact same purpose. Get rid of the non-delayed
work variant, and just run it without delay for the normal case.

Signed-off-by: Jens Axboe <axboe@fb.com>
---
 block/blk-core.c       |  2 +-
 block/blk-mq.c         | 27 ++++++---------------------
 include/linux/blk-mq.h |  3 +--
 3 files changed, 8 insertions(+), 24 deletions(-)
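The "run it without delay" part works because queueing a delayed_work with a zero timeout queues the work immediately rather than arming a timer, so the normal case loses nothing by going through the delayed path. A simplified sketch of the relevant workqueue-side logic (not a verbatim copy of the kernel source):

```c
static void __queue_delayed_work(int cpu, struct workqueue_struct *wq,
				 struct delayed_work *dwork,
				 unsigned long delay)
{
	if (!delay) {
		/* Zero delay: queue the work directly, no timer involved. */
		__queue_work(cpu, wq, &dwork->work);
		return;
	}
	/* Otherwise arm dwork->timer to queue the work after 'delay' jiffies. */
	/* ... */
}
```

Since msecs_to_jiffies(0) is 0, the single kblockd_schedule_delayed_work_on() call in the patch covers both the immediate and the delayed case.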