Message ID: 20190308174006.5032-4-keith.busch@intel.com (mailing list archive)
State: New, archived
Series: [1/5] blk-mq: Export reading mq request state
On Fri, 2019-03-08 at 10:40 -0700, Keith Busch wrote:
> End the entered requests on a quiesced queue directly rather than flush
> them through the low level driver's queue_rq().
>
> Signed-off-by: Keith Busch <keith.busch@intel.com>
> ---
>  drivers/nvme/host/core.c | 10 ++++++++--
>  1 file changed, 8 insertions(+), 2 deletions(-)
>
> diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
> index cc5d9a83d5af..7095406bb293 100644
> --- a/drivers/nvme/host/core.c
> +++ b/drivers/nvme/host/core.c
> @@ -94,6 +94,13 @@ static void nvme_put_subsystem(struct nvme_subsystem *subsys);
>  static void nvme_remove_invalid_namespaces(struct nvme_ctrl *ctrl,
>  					unsigned nsid);
>
> +static bool nvme_fail_request(struct blk_mq_hw_ctx *hctx, struct request *req,
> +			      void *data, bool reserved)
> +{
> +	blk_mq_end_request(req, BLK_STS_IOERR);
> +	return true;
> +}

Calling blk_mq_end_request() from outside the .queue_rq() or .complete()
callback functions is wrong. Did you perhaps want to call
blk_mq_complete_request()?

Bart.
On Fri, Mar 08, 2019 at 10:15:27AM -0800, Bart Van Assche wrote:
> On Fri, 2019-03-08 at 10:40 -0700, Keith Busch wrote:
> > End the entered requests on a quiesced queue directly rather than flush
> > them through the low level driver's queue_rq().
> >
> > Signed-off-by: Keith Busch <keith.busch@intel.com>
> > ---
> >  drivers/nvme/host/core.c | 10 ++++++++--
> >  1 file changed, 8 insertions(+), 2 deletions(-)
> >
> > diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
> > index cc5d9a83d5af..7095406bb293 100644
> > --- a/drivers/nvme/host/core.c
> > +++ b/drivers/nvme/host/core.c
> > @@ -94,6 +94,13 @@ static void nvme_put_subsystem(struct nvme_subsystem *subsys);
> >  static void nvme_remove_invalid_namespaces(struct nvme_ctrl *ctrl,
> >  					unsigned nsid);
> >
> > +static bool nvme_fail_request(struct blk_mq_hw_ctx *hctx, struct request *req,
> > +			      void *data, bool reserved)
> > +{
> > +	blk_mq_end_request(req, BLK_STS_IOERR);
> > +	return true;
> > +}
>
> Calling blk_mq_end_request() from outside the .queue_rq() or .complete()
> callback functions is wrong. Did you perhaps want to call
> blk_mq_complete_request()?

This callback can only see requests in MQ_RQ_IDLE state, and
blk_mq_end_request() is the correct way to end those that never entered
a driver's queue_rq().
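[Editor's note: the distinction Keith draws here — ending an idle request directly versus completing an in-flight one — can be sketched as a small userspace model. This is not kernel code; the names mirror the blk-mq state machine for illustration only, and the status values are placeholders.]

```c
/* Hypothetical userspace model of the blk-mq request states under
 * discussion. Nothing here is the real kernel implementation. */
#include <assert.h>

#define BLK_STS_OK    0
#define BLK_STS_IOERR 10

enum mq_rq_state { MQ_RQ_IDLE, MQ_RQ_IN_FLIGHT, MQ_RQ_COMPLETE };

struct request {
	enum mq_rq_state state;
	int status;
};

/* Models blk_mq_start_request(): the driver's ->queue_rq() marks the
 * request in flight before handing it to hardware. */
static void start_request(struct request *rq)
{
	rq->state = MQ_RQ_IN_FLIGHT;
}

/* Models blk_mq_end_request(): legitimate for a request that never
 * entered ->queue_rq() (still MQ_RQ_IDLE), since no driver or hardware
 * context can race with it. */
static void end_request(struct request *rq, int status)
{
	rq->status = status;
	rq->state = MQ_RQ_COMPLETE;
}

/* Models the nvme_fail_request() callback from the patch: on a quiesced
 * queue it only ever sees idle requests, so ending directly is safe. */
static int fail_request(struct request *rq)
{
	if (rq->state != MQ_RQ_IDLE)
		return 0;	/* in flight: would race with the driver */
	end_request(rq, BLK_STS_IOERR);
	return 1;
}
```

An in-flight request, by contrast, would have to go through the completion path (blk_mq_complete_request() in the real kernel), which is exactly the case Bart is worried about.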
On Fri, 2019-03-08 at 11:19 -0700, Keith Busch wrote:
> On Fri, Mar 08, 2019 at 10:15:27AM -0800, Bart Van Assche wrote:
> > On Fri, 2019-03-08 at 10:40 -0700, Keith Busch wrote:
> > > End the entered requests on a quiesced queue directly rather than flush
> > > them through the low level driver's queue_rq().
> > >
> > > Signed-off-by: Keith Busch <keith.busch@intel.com>
> > > ---
> > >  drivers/nvme/host/core.c | 10 ++++++++--
> > >  1 file changed, 8 insertions(+), 2 deletions(-)
> > >
> > > diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
> > > index cc5d9a83d5af..7095406bb293 100644
> > > --- a/drivers/nvme/host/core.c
> > > +++ b/drivers/nvme/host/core.c
> > > @@ -94,6 +94,13 @@ static void nvme_put_subsystem(struct nvme_subsystem *subsys);
> > >  static void nvme_remove_invalid_namespaces(struct nvme_ctrl *ctrl,
> > >  					unsigned nsid);
> > >
> > > +static bool nvme_fail_request(struct blk_mq_hw_ctx *hctx, struct request *req,
> > > +			      void *data, bool reserved)
> > > +{
> > > +	blk_mq_end_request(req, BLK_STS_IOERR);
> > > +	return true;
> > > +}
> >
> > Calling blk_mq_end_request() from outside the .queue_rq() or .complete()
> > callback functions is wrong. Did you perhaps want to call
> > blk_mq_complete_request()?
>
> This callback can only see requests in MQ_RQ_IDLE state, and
> blk_mq_end_request() is the correct way to end those that never entered
> a driver's queue_rq().

Hi Keith,

What guarantees that nvme_fail_request() only sees requests in the idle
state? From block/blk-mq-tag.c:

/**
 * blk_mq_queue_tag_busy_iter - iterate over all requests with a driver tag
 * [ ... ]
 */

Thanks,

Bart.
On Fri, Mar 08, 2019 at 01:54:06PM -0800, Bart Van Assche wrote:
> On Fri, 2019-03-08 at 11:19 -0700, Keith Busch wrote:
> > On Fri, Mar 08, 2019 at 10:15:27AM -0800, Bart Van Assche wrote:
> > > On Fri, 2019-03-08 at 10:40 -0700, Keith Busch wrote:
> > > > End the entered requests on a quiesced queue directly rather than flush
> > > > them through the low level driver's queue_rq().
> > > >
> > > > Signed-off-by: Keith Busch <keith.busch@intel.com>
> > > > ---
> > > >  drivers/nvme/host/core.c | 10 ++++++++--
> > > >  1 file changed, 8 insertions(+), 2 deletions(-)
> > > >
> > > > diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
> > > > index cc5d9a83d5af..7095406bb293 100644
> > > > --- a/drivers/nvme/host/core.c
> > > > +++ b/drivers/nvme/host/core.c
> > > > @@ -94,6 +94,13 @@ static void nvme_put_subsystem(struct nvme_subsystem *subsys);
> > > >  static void nvme_remove_invalid_namespaces(struct nvme_ctrl *ctrl,
> > > >  					unsigned nsid);
> > > >
> > > > +static bool nvme_fail_request(struct blk_mq_hw_ctx *hctx, struct request *req,
> > > > +			      void *data, bool reserved)
> > > > +{
> > > > +	blk_mq_end_request(req, BLK_STS_IOERR);
> > > > +	return true;
> > > > +}
> > >
> > > Calling blk_mq_end_request() from outside the .queue_rq() or .complete()
> > > callback functions is wrong. Did you perhaps want to call
> > > blk_mq_complete_request()?
> >
> > This callback can only see requests in MQ_RQ_IDLE state, and
> > blk_mq_end_request() is the correct way to end those that never entered
> > a driver's queue_rq().
>
> Hi Keith,
>
> What guarantees that nvme_fail_request() only sees requests in the idle
> state? From block/blk-mq-tag.c:
>
> /**
>  * blk_mq_queue_tag_busy_iter - iterate over all requests with a driver tag
>  * [ ... ]
>  */

It's the driver's responsibility to ensure the queue is quiesced before
requesting the iteration. When we call it through nvme_kill_queues(), the
queues were already quiesced before calling that.

The only other place it's called is on a frozen queue, so it's actually a
no-op there since there are no requests once frozen.
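[Editor's note: the ordering invariant Keith describes — quiesce first, then iterate and fail — can be modeled in a few lines of userspace C. This is an illustrative sketch only; the struct and helper names are invented, not kernel APIs.]

```c
/* Toy model of "quiesce, then fail pending requests". Quiescing stops
 * dispatch, so everything still pending at iteration time is idle. */
#include <assert.h>

struct toy_queue {
	int quiesced;		/* models blk_mq_quiesce_queue() having run */
	int nr_pending;		/* requests entered but not yet dispatched */
	int nr_failed;		/* requests ended with an I/O error */
};

/* Dispatch can only make progress while the queue is not quiesced. */
static int try_dispatch(struct toy_queue *q)
{
	if (q->quiesced || q->nr_pending == 0)
		return 0;
	q->nr_pending--;
	return 1;
}

/* Models the blk_mq_queue_tag_busy_iter() call with nvme_fail_request:
 * the driver must quiesce first, which is what guarantees every request
 * seen here is still idle and safe to end directly. */
static void fail_all_pending(struct toy_queue *q)
{
	assert(q->quiesced);	/* the driver's responsibility, per the thread */
	q->nr_failed += q->nr_pending;
	q->nr_pending = 0;
}
```

The frozen-queue case Keith mentions is the degenerate run of the same model: freezing drains the queue, so `nr_pending` is already zero and the iteration does nothing.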
Hi Keith,

How about introducing a per hctx queue_rq callback, then install a
separate .queue_rq callback for the dead hctx? Then we just need to
start and complete the request there.

Thanks,
Jianchao
On Sun, Mar 10, 2019 at 08:58:21PM -0700, jianchao.wang wrote:
> Hi Keith
>
> How about introducing a per hctx queue_rq callback, then install a
> separate .queue_rq callback for the dead hctx. Then we just need to
> start and complete the request there.

That sounds like it could work, though I think just returning
BLK_STS_IOERR like we currently do is better than starting and
completing it. But adding a new callback that can change at runtime is
more complicated, and would affect a lot more drivers that I am not
able to test.
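[Editor's note: Jianchao's suggestion amounts to per-hctx indirect dispatch — a sketch of the idea, in hypothetical userspace C (none of these names are real kernel APIs), might look like this.]

```c
/* Sketch of a per-hctx ->queue_rq pointer that can be swapped to a
 * failing variant when the hctx is dead. Illustration only. */
#include <assert.h>

#define BLK_STS_OK    0
#define BLK_STS_IOERR 10

struct toy_hctx;
typedef int (*queue_rq_fn)(struct toy_hctx *hctx);

struct toy_hctx {
	queue_rq_fn queue_rq;	/* per-hctx, not per-tagset */
};

static int normal_queue_rq(struct toy_hctx *hctx)
{
	(void)hctx;
	return BLK_STS_OK;	/* would hand the request to hardware */
}

static int dead_queue_rq(struct toy_hctx *hctx)
{
	(void)hctx;
	return BLK_STS_IOERR;	/* fail immediately, no hardware involved */
}

static void mark_hctx_dead(struct toy_hctx *hctx)
{
	hctx->queue_rq = dead_queue_rq;
}
```

The swap itself is the easy part; as Keith notes, making the pointer safely changeable at runtime (against concurrent dispatchers, in the real kernel) is where the complexity and the blast radius across other drivers comes in.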
diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
index cc5d9a83d5af..7095406bb293 100644
--- a/drivers/nvme/host/core.c
+++ b/drivers/nvme/host/core.c
@@ -94,6 +94,13 @@ static void nvme_put_subsystem(struct nvme_subsystem *subsys);
 static void nvme_remove_invalid_namespaces(struct nvme_ctrl *ctrl,
 					unsigned nsid);
 
+static bool nvme_fail_request(struct blk_mq_hw_ctx *hctx, struct request *req,
+			      void *data, bool reserved)
+{
+	blk_mq_end_request(req, BLK_STS_IOERR);
+	return true;
+}
+
 static void nvme_set_queue_dying(struct nvme_ns *ns)
 {
 	/*
@@ -104,8 +111,7 @@ static void nvme_set_queue_dying(struct nvme_ns *ns)
 		return;
 	revalidate_disk(ns->disk);
 	blk_set_queue_dying(ns->queue);
-	/* Forcibly unquiesce queues to avoid blocking dispatch */
-	blk_mq_unquiesce_queue(ns->queue);
+	blk_mq_queue_tag_busy_iter(ns->queue, nvme_fail_request, NULL);
 }
 
 static void nvme_queue_scan(struct nvme_ctrl *ctrl)
End the entered requests on a quiesced queue directly rather than flush
them through the low level driver's queue_rq().

Signed-off-by: Keith Busch <keith.busch@intel.com>
---
 drivers/nvme/host/core.c | 10 ++++++++--
 1 file changed, 8 insertions(+), 2 deletions(-)