Message ID | 1489078694.2597.5.camel@sandisk.com (mailing list archive) |
---|---|
State | New, archived |
On Fri, Mar 10, 2017 at 12:58 AM, Bart Van Assche <Bart.VanAssche@sandisk.com> wrote:
> On Thu, 2017-03-09 at 21:02 +0800, Ming Lei wrote:
>> Before commit 780db2071a ("blk-mq: decouple blk-mq freezing
>> from generic bypassing"), the dying flag was checked before
>> entering the queue. Tejun converted that check into one on
>> .mq_freeze_depth, assuming the counter is increased just after
>> the dying flag is set. Unfortunately we don't do that in
>> blk_set_queue_dying().
>>
>> This patch calls blk_mq_freeze_queue_start() for blk-mq in
>> blk_set_queue_dying(), so that we can block new I/O from coming
>> in once the queue is set as dying.
>>
>> Given blk_set_queue_dying() is always called in the remove path
>> of a block device, and the queue will be cleaned up later, we
>> don't need to worry about undoing the counter.
>>
>> Cc: Tejun Heo <tj@kernel.org>
>> Signed-off-by: Ming Lei <tom.leiming@gmail.com>
>> ---
>>  block/blk-core.c | 7 +++++--
>>  1 file changed, 5 insertions(+), 2 deletions(-)
>>
>> diff --git a/block/blk-core.c b/block/blk-core.c
>> index 0eeb99ef654f..559487e58296 100644
>> --- a/block/blk-core.c
>> +++ b/block/blk-core.c
>> @@ -500,9 +500,12 @@ void blk_set_queue_dying(struct request_queue *q)
>>  	queue_flag_set(QUEUE_FLAG_DYING, q);
>>  	spin_unlock_irq(q->queue_lock);
>>
>> -	if (q->mq_ops)
>> +	if (q->mq_ops) {
>>  		blk_mq_wake_waiters(q);
>> -	else {
>> +
>> +		/* block new I/O coming */
>> +		blk_mq_freeze_queue_start(q);
>> +	} else {
>>  		struct request_list *rl;
>>
>>  		spin_lock_irq(q->queue_lock);
>
> The comment above blk_mq_freeze_queue_start() should explain more clearly
> why that call is needed. Additionally, I think this patch makes the
> blk_freeze_queue() call in blk_cleanup_queue() superfluous. How about the
> (entirely untested) patch below?

The comment "block new I/O coming" has been added; let me know what
else is needed, :-)

I don't think we need to wait in blk_set_queue_dying(); the purpose of
this patch is to block new I/O from coming in once dying is set, as
pointed out in the comment. The change in blk_cleanup_queue() isn't
necessary either, since that is exactly where we should drain the
queue.

Thanks,
Ming Lei
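For context: blk_mq_freeze_queue_start() blocks new I/O because every
submitter has to pass through blk_queue_enter(), which can only take a
percpu reference on q_usage_counter while the queue is unfrozen. The
entry gate of this era looked roughly like the sketch below (an
approximation, not a verbatim copy of block/blk-core.c):

int blk_queue_enter(struct request_queue *q, bool nowait)
{
	while (true) {
		int ret;

		/* Fast path: succeeds unless the queue has been frozen. */
		if (percpu_ref_tryget_live(&q->q_usage_counter))
			return 0;

		if (nowait)
			return -EBUSY;

		/* Sleep until the queue is unfrozen or marked dying. */
		ret = wait_event_interruptible(q->mq_freeze_wq,
				!atomic_read(&q->mq_freeze_depth) ||
				blk_queue_dying(q));
		if (blk_queue_dying(q))
			return -ENODEV;
		if (ret)
			return ret;
	}
}

So once the usage counter has been killed, nowait submitters fail fast
with -EBUSY, and everyone else waits on mq_freeze_wq, where a dying
queue produces -ENODEV rather than letting new I/O through.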
diff --git a/block/blk-core.c b/block/blk-core.c
index 1086dac8724c..3ce48f2d65cf 100644
--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -500,6 +500,12 @@ void blk_set_queue_dying(struct request_queue *q)
 	queue_flag_set(QUEUE_FLAG_DYING, q);
 	spin_unlock_irq(q->queue_lock);
 
+	/*
+	 * Force blk_queue_enter() and blk_mq_queue_enter() to check the
+	 * "dying" flag.
+	 */
+	blk_freeze_queue(q);
+
 	if (q->mq_ops)
 		blk_mq_wake_waiters(q);
 	else {
@@ -555,7 +561,7 @@ void blk_cleanup_queue(struct request_queue *q)
 	 * Drain all requests queued before DYING marking. Set DEAD flag to
 	 * prevent that q->request_fn() gets invoked after draining finished.
 	 */
-	blk_freeze_queue(q);
+	WARN_ON_ONCE(!atomic_read(&q->mq_freeze_depth));
 	spin_lock_irq(lock);
 	if (!q->mq_ops)
 		__blk_drain_queue(q, true);
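The disagreement between the two patches comes down to which freeze
primitive is used: blk_mq_freeze_queue_start() only stops new I/O from
entering, while blk_freeze_queue() additionally waits until all
in-flight references are gone. A rough sketch of both, approximating
the helpers of this era (not verbatim; see block/blk-mq.c):

/* Non-blocking half: stop new I/O, do not wait for in-flight I/O. */
void blk_mq_freeze_queue_start(struct request_queue *q)
{
	int freeze_depth;

	freeze_depth = atomic_inc_return(&q->mq_freeze_depth);
	if (freeze_depth == 1) {
		/* Make percpu_ref_tryget_live() fail for new submitters. */
		percpu_ref_kill(&q->q_usage_counter);
		blk_mq_run_hw_queues(q, false);
	}
}

/* Blocking variant: also drain, waiting for q_usage_counter to reach zero. */
void blk_freeze_queue(struct request_queue *q)
{
	blk_mq_freeze_queue_start(q);
	blk_mq_freeze_queue_wait(q);
}

With that split, Ming's patch takes only the non-blocking half in
blk_set_queue_dying() and leaves the drain to blk_cleanup_queue(),
while Bart's version freezes and waits earlier, turning the call in
blk_cleanup_queue() into a WARN_ON_ONCE() sanity check.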