Message ID | 20221003094344.242593-3-sagi@grimberg.me (mailing list archive)
---|---
State | New, archived
Series | nvme-mpath: Add IO stats support
On 10/3/22 11:43, Sagi Grimberg wrote:
> Our mpath stack device is just a shim that selects a bottom namespace
> and submits the bio to it without any fancy splitting. This also means
> that we don't clone the bio or have any context to the bio beyond
> submission. However it really sucks that we don't see the mpath device
> io stats.
>
> Given that the mpath device can't do that without adding some context
> to it, we let the bottom device do it on its behalf (somewhat similar
> to the approach taken in nvme_trace_bio_complete).
>
> When the IO starts, we account the request for multipath IO stats using
> REQ_NVME_MPATH_IO_STATS nvme_request flag to avoid queue io stats disable
> in the middle of the request.
>
> Signed-off-by: Sagi Grimberg <sagi@grimberg.me>
> ---
>  drivers/nvme/host/core.c      |  4 ++++
>  drivers/nvme/host/multipath.c | 25 +++++++++++++++++++++++++
>  drivers/nvme/host/nvme.h      | 12 ++++++++++++
>  3 files changed, 41 insertions(+)
>
> diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
> index 64fd772de817..d5a54ddf73f2 100644
> --- a/drivers/nvme/host/core.c
> +++ b/drivers/nvme/host/core.c
> @@ -384,6 +384,8 @@ static inline void nvme_end_req(struct request *req)
>  		nvme_log_error(req);
>  	nvme_end_req_zoned(req);
>  	nvme_trace_bio_complete(req);
> +	if (req->cmd_flags & REQ_NVME_MPATH)
> +		nvme_mpath_end_request(req);
>  	blk_mq_end_request(req, status);
>  }
>
> @@ -421,6 +423,8 @@ EXPORT_SYMBOL_GPL(nvme_complete_rq);
>
>  void nvme_start_request(struct request *rq)
>  {
> +	if (rq->cmd_flags & REQ_NVME_MPATH)
> +		nvme_mpath_start_request(rq);
>  	blk_mq_start_request(rq);
>  }
>  EXPORT_SYMBOL_GPL(nvme_start_request);

Why don't you move the check for REQ_NVME_MPATH into
nvme_mpath_{start,end}_request?

Cheers,
Hannes
On 10/4/22 09:11, Hannes Reinecke wrote:
> On 10/3/22 11:43, Sagi Grimberg wrote:
>> Our mpath stack device is just a shim that selects a bottom namespace
>> and submits the bio to it without any fancy splitting. This also means
>> that we don't clone the bio or have any context to the bio beyond
>> submission. However it really sucks that we don't see the mpath device
>> io stats.
>>
>> Given that the mpath device can't do that without adding some context
>> to it, we let the bottom device do it on its behalf (somewhat similar
>> to the approach taken in nvme_trace_bio_complete).
>>
>> When the IO starts, we account the request for multipath IO stats using
>> REQ_NVME_MPATH_IO_STATS nvme_request flag to avoid queue io stats disable
>> in the middle of the request.
>>
>> Signed-off-by: Sagi Grimberg <sagi@grimberg.me>
>> ---
>>  drivers/nvme/host/core.c      |  4 ++++
>>  drivers/nvme/host/multipath.c | 25 +++++++++++++++++++++++++
>>  drivers/nvme/host/nvme.h      | 12 ++++++++++++
>>  3 files changed, 41 insertions(+)
>>
>> diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
>> index 64fd772de817..d5a54ddf73f2 100644
>> --- a/drivers/nvme/host/core.c
>> +++ b/drivers/nvme/host/core.c
>> @@ -384,6 +384,8 @@ static inline void nvme_end_req(struct request *req)
>>  		nvme_log_error(req);
>>  	nvme_end_req_zoned(req);
>>  	nvme_trace_bio_complete(req);
>> +	if (req->cmd_flags & REQ_NVME_MPATH)
>> +		nvme_mpath_end_request(req);
>>  	blk_mq_end_request(req, status);
>>  }
>> @@ -421,6 +423,8 @@ EXPORT_SYMBOL_GPL(nvme_complete_rq);
>>  void nvme_start_request(struct request *rq)
>>  {
>> +	if (rq->cmd_flags & REQ_NVME_MPATH)
>> +		nvme_mpath_start_request(rq);
>>  	blk_mq_start_request(rq);
>>  }
>>  EXPORT_SYMBOL_GPL(nvme_start_request);
>
> Why don't you move the check for REQ_NVME_MPATH into
> nvme_mpath_{start,end}_request?

I'm less fond of calling a function that may or may not
do anything...

But it is a pattern that exists in the code, if people prefer
it I can change it.
On 10/4/22 2:19 AM, Sagi Grimberg wrote:
>
>
> On 10/4/22 09:11, Hannes Reinecke wrote:
>> On 10/3/22 11:43, Sagi Grimberg wrote:
>>> Our mpath stack device is just a shim that selects a bottom namespace
>>> and submits the bio to it without any fancy splitting. This also means
>>> that we don't clone the bio or have any context to the bio beyond
>>> submission. However it really sucks that we don't see the mpath device
>>> io stats.
>>>
>>> Given that the mpath device can't do that without adding some context
>>> to it, we let the bottom device do it on its behalf (somewhat similar
>>> to the approach taken in nvme_trace_bio_complete).
>>>
>>> When the IO starts, we account the request for multipath IO stats using
>>> REQ_NVME_MPATH_IO_STATS nvme_request flag to avoid queue io stats disable
>>> in the middle of the request.
>>>
>>> Signed-off-by: Sagi Grimberg <sagi@grimberg.me>
>>> ---
>>>  drivers/nvme/host/core.c      |  4 ++++
>>>  drivers/nvme/host/multipath.c | 25 +++++++++++++++++++++++++
>>>  drivers/nvme/host/nvme.h      | 12 ++++++++++++
>>>  3 files changed, 41 insertions(+)
>>>
>>> diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
>>> index 64fd772de817..d5a54ddf73f2 100644
>>> --- a/drivers/nvme/host/core.c
>>> +++ b/drivers/nvme/host/core.c
>>> @@ -384,6 +384,8 @@ static inline void nvme_end_req(struct request *req)
>>>  		nvme_log_error(req);
>>>  	nvme_end_req_zoned(req);
>>>  	nvme_trace_bio_complete(req);
>>> +	if (req->cmd_flags & REQ_NVME_MPATH)
>>> +		nvme_mpath_end_request(req);
>>>  	blk_mq_end_request(req, status);
>>>  }
>>> @@ -421,6 +423,8 @@ EXPORT_SYMBOL_GPL(nvme_complete_rq);
>>>  void nvme_start_request(struct request *rq)
>>>  {
>>> +	if (rq->cmd_flags & REQ_NVME_MPATH)
>>> +		nvme_mpath_start_request(rq);
>>>  	blk_mq_start_request(rq);
>>>  }
>>>  EXPORT_SYMBOL_GPL(nvme_start_request);
>>
>> Why don't you move the check for REQ_NVME_MPATH into nvme_mpath_{start,end}_request?
>
> I'm less fond of calling a function that may or may not
> do anything...
>
> But it is a pattern that exists in the code, if people prefer
> it I can change it.

I prefer it the way that you have it, avoids a function call for
the hot path of not being multipath.
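For reference, the variant Hannes is suggesting would push the REQ_NVME_MPATH test into the helper itself so the callers stay unconditional; a rough sketch only, assuming the helper body otherwise stays as posted:

```c
/* Sketch only: the caller no longer checks the flag ... */
void nvme_start_request(struct request *rq)
{
	nvme_mpath_start_request(rq);
	blk_mq_start_request(rq);
}

/* ... and the helper filters out non-multipath requests itself. */
void nvme_mpath_start_request(struct request *rq)
{
	struct nvme_ns *ns;
	struct gendisk *disk;

	/* Bail out early for requests that did not come through the mpath device. */
	if (!(rq->cmd_flags & REQ_NVME_MPATH))
		return;

	ns = rq->q->queuedata;
	disk = ns->head->disk;
	if (!blk_queue_io_stat(disk->queue) || blk_rq_is_passthrough(rq))
		return;

	nvme_req(rq)->flags |= NVME_MPATH_IO_STATS;
	nvme_req(rq)->start_time = bdev_start_io_acct(disk->part0,
					blk_rq_bytes(rq) >> SECTOR_SHIFT,
					req_op(rq), jiffies);
}
```

The posted patch keeps the flag check at the call sites in core.c instead, which is what Jens prefers: non-multipath requests never pay for the extra function call.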
On 10/3/22 3:43 AM, Sagi Grimberg wrote:
> Our mpath stack device is just a shim that selects a bottom namespace
> and submits the bio to it without any fancy splitting. This also means
> that we don't clone the bio or have any context to the bio beyond
> submission. However it really sucks that we don't see the mpath device
> io stats.
>
> Given that the mpath device can't do that without adding some context
> to it, we let the bottom device do it on its behalf (somewhat similar
> to the approach taken in nvme_trace_bio_complete).
>
> When the IO starts, we account the request for multipath IO stats using
> REQ_NVME_MPATH_IO_STATS nvme_request flag to avoid queue io stats disable
> in the middle of the request.

Reviewed-by: Jens Axboe <axboe@kernel.dk>
On Mon, Oct 03, 2022 at 12:43:44PM +0300, Sagi Grimberg wrote:
> Our mpath stack device is just a shim that selects a bottom namespace
> and submits the bio to it without any fancy splitting. This also means
> that we don't clone the bio or have any context to the bio beyond
> submission. However it really sucks that we don't see the mpath device
> io stats.
>
> Given that the mpath device can't do that without adding some context
> to it, we let the bottom device do it on its behalf (somewhat similar
> to the approach taken in nvme_trace_bio_complete).
>
> When the IO starts, we account the request for multipath IO stats using
> REQ_NVME_MPATH_IO_STATS nvme_request flag to avoid queue io stats disable
> in the middle of the request.

An unfortunate side effect is that a successful error failover will get
accounted for twice in the mpath device, but cloning to create a separate
context just to track iostats for that unusual condition is much worse.

Reviewed-by: Keith Busch <kbusch@kernel.org>

> +void nvme_mpath_start_request(struct request *rq)
> +{
> +	struct nvme_ns *ns = rq->q->queuedata;
> +	struct gendisk *disk = ns->head->disk;
> +
> +	if (!blk_queue_io_stat(disk->queue) || blk_rq_is_passthrough(rq))
> +		return;
> +
> +	nvme_req(rq)->flags |= NVME_MPATH_IO_STATS;
> +	nvme_req(rq)->start_time = bdev_start_io_acct(disk->part0,
> +					blk_rq_bytes(rq) >> SECTOR_SHIFT,
> +					req_op(rq), jiffies);
> +}
> +void nvme_mpath_end_request(struct request *rq)
> +{
> +	struct nvme_ns *ns = rq->q->queuedata;
> +
> +	if (!(nvme_req(rq)->flags & NVME_MPATH_IO_STATS))
> +		return;
> +	bdev_end_io_acct(ns->head->disk->part0, req_op(rq),
> +		nvme_req(rq)->start_time);
> +}

I think these also can be static inline.
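One reading of the static-inline suggestion, sketched under the assumption that the helpers would move from multipath.c into the CONFIG_NVME_MULTIPATH section of nvme.h with their bodies unchanged from the posted patch:

```c
/*
 * Sketch: same bodies as the patch, made static inline in nvme.h under
 * #ifdef CONFIG_NVME_MULTIPATH (assumes the block-layer accounting
 * helpers are visible from that header).
 */
static inline void nvme_mpath_start_request(struct request *rq)
{
	struct nvme_ns *ns = rq->q->queuedata;
	struct gendisk *disk = ns->head->disk;

	/* Skip accounting if io stats are off or this is a passthrough command. */
	if (!blk_queue_io_stat(disk->queue) || blk_rq_is_passthrough(rq))
		return;

	nvme_req(rq)->flags |= NVME_MPATH_IO_STATS;
	nvme_req(rq)->start_time = bdev_start_io_acct(disk->part0,
					blk_rq_bytes(rq) >> SECTOR_SHIFT,
					req_op(rq), jiffies);
}

static inline void nvme_mpath_end_request(struct request *rq)
{
	struct nvme_ns *ns = rq->q->queuedata;

	/* Only end accounting for requests that actually started it. */
	if (!(nvme_req(rq)->flags & NVME_MPATH_IO_STATS))
		return;
	bdev_end_io_acct(ns->head->disk->part0, req_op(rq),
			 nvme_req(rq)->start_time);
}
```

This would avoid the out-of-line calls at the cost of exposing the accounting details in the header.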
diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
index 64fd772de817..d5a54ddf73f2 100644
--- a/drivers/nvme/host/core.c
+++ b/drivers/nvme/host/core.c
@@ -384,6 +384,8 @@ static inline void nvme_end_req(struct request *req)
 		nvme_log_error(req);
 	nvme_end_req_zoned(req);
 	nvme_trace_bio_complete(req);
+	if (req->cmd_flags & REQ_NVME_MPATH)
+		nvme_mpath_end_request(req);
 	blk_mq_end_request(req, status);
 }
 
@@ -421,6 +423,8 @@ EXPORT_SYMBOL_GPL(nvme_complete_rq);
 
 void nvme_start_request(struct request *rq)
 {
+	if (rq->cmd_flags & REQ_NVME_MPATH)
+		nvme_mpath_start_request(rq);
 	blk_mq_start_request(rq);
 }
 EXPORT_SYMBOL_GPL(nvme_start_request);
diff --git a/drivers/nvme/host/multipath.c b/drivers/nvme/host/multipath.c
index b9cf17cbbbd5..5ef43f54aab6 100644
--- a/drivers/nvme/host/multipath.c
+++ b/drivers/nvme/host/multipath.c
@@ -114,6 +114,30 @@ void nvme_failover_req(struct request *req)
 	kblockd_schedule_work(&ns->head->requeue_work);
 }
 
+void nvme_mpath_start_request(struct request *rq)
+{
+	struct nvme_ns *ns = rq->q->queuedata;
+	struct gendisk *disk = ns->head->disk;
+
+	if (!blk_queue_io_stat(disk->queue) || blk_rq_is_passthrough(rq))
+		return;
+
+	nvme_req(rq)->flags |= NVME_MPATH_IO_STATS;
+	nvme_req(rq)->start_time = bdev_start_io_acct(disk->part0,
+					blk_rq_bytes(rq) >> SECTOR_SHIFT,
+					req_op(rq), jiffies);
+}
+
+void nvme_mpath_end_request(struct request *rq)
+{
+	struct nvme_ns *ns = rq->q->queuedata;
+
+	if (!(nvme_req(rq)->flags & NVME_MPATH_IO_STATS))
+		return;
+	bdev_end_io_acct(ns->head->disk->part0, req_op(rq),
+		nvme_req(rq)->start_time);
+}
+
 void nvme_kick_requeue_lists(struct nvme_ctrl *ctrl)
 {
 	struct nvme_ns *ns;
@@ -502,6 +526,7 @@ int nvme_mpath_alloc_disk(struct nvme_ctrl *ctrl, struct nvme_ns_head *head)
 
 	blk_queue_flag_set(QUEUE_FLAG_NONROT, head->disk->queue);
 	blk_queue_flag_set(QUEUE_FLAG_NOWAIT, head->disk->queue);
+	blk_queue_flag_set(QUEUE_FLAG_IO_STAT, head->disk->queue);
 	/*
 	 * This assumes all controllers that refer to a namespace either
 	 * support poll queues or not. That is not a strict guarantee,
diff --git a/drivers/nvme/host/nvme.h b/drivers/nvme/host/nvme.h
index c4d1a4e9b961..c4edc91b1358 100644
--- a/drivers/nvme/host/nvme.h
+++ b/drivers/nvme/host/nvme.h
@@ -162,6 +162,9 @@ struct nvme_request {
 	u8			retries;
 	u8			flags;
 	u16			status;
+#ifdef CONFIG_NVME_MULTIPATH
+	unsigned long		start_time;
+#endif
 	struct nvme_ctrl	*ctrl;
 };
 
@@ -173,6 +176,7 @@ struct nvme_request {
 enum {
 	NVME_REQ_CANCELLED		= (1 << 0),
 	NVME_REQ_USERCMD		= (1 << 1),
+	NVME_MPATH_IO_STATS		= (1 << 2),
 };
 
 static inline struct nvme_request *nvme_req(struct request *req)
@@ -862,6 +866,8 @@ bool nvme_mpath_clear_current_path(struct nvme_ns *ns);
 void nvme_mpath_revalidate_paths(struct nvme_ns *ns);
 void nvme_mpath_clear_ctrl_paths(struct nvme_ctrl *ctrl);
 void nvme_mpath_shutdown_disk(struct nvme_ns_head *head);
+void nvme_mpath_start_request(struct request *rq);
+void nvme_mpath_end_request(struct request *rq);
 
 static inline void nvme_trace_bio_complete(struct request *req)
 {
@@ -947,6 +953,12 @@ static inline void nvme_mpath_start_freeze(struct nvme_subsystem *subsys)
 static inline void nvme_mpath_default_iopolicy(struct nvme_subsystem *subsys)
 {
 }
+static inline void nvme_mpath_start_request(struct request *rq)
+{
+}
+static inline void nvme_mpath_end_request(struct request *rq)
+{
+}
 #endif /* CONFIG_NVME_MULTIPATH */
 
 int nvme_revalidate_zones(struct nvme_ns *ns);
Our mpath stack device is just a shim that selects a bottom namespace
and submits the bio to it without any fancy splitting. This also means
that we don't clone the bio or have any context to the bio beyond
submission. However it really sucks that we don't see the mpath device
io stats.

Given that the mpath device can't do that without adding some context
to it, we let the bottom device do it on its behalf (somewhat similar
to the approach taken in nvme_trace_bio_complete).

When the IO starts, we account the request for multipath IO stats using
REQ_NVME_MPATH_IO_STATS nvme_request flag to avoid queue io stats disable
in the middle of the request.

Signed-off-by: Sagi Grimberg <sagi@grimberg.me>
---
 drivers/nvme/host/core.c      |  4 ++++
 drivers/nvme/host/multipath.c | 25 +++++++++++++++++++++++++
 drivers/nvme/host/nvme.h      | 12 ++++++++++++
 3 files changed, 41 insertions(+)
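Because the accounting hooks sit in nvme_start_request() and nvme_end_req(), a transport's submission path picks up the multipath stats with no driver-specific code, as long as it calls nvme_start_request() in place of blk_mq_start_request(). A rough illustration of that submission side follows; example_queue_rq and the elided setup steps are placeholders, not code from this series:

```c
/* Illustrative transport queue_rq (placeholder, not from this patch). */
static blk_status_t example_queue_rq(struct blk_mq_hw_ctx *hctx,
				     const struct blk_mq_queue_data *bd)
{
	struct request *req = bd->rq;

	/* ... transport-specific command setup would go here ... */

	/*
	 * nvme_start_request() wraps blk_mq_start_request() and, for
	 * REQ_NVME_MPATH requests, also starts io accounting against
	 * the mpath head disk (ns->head->disk->part0).
	 */
	nvme_start_request(req);

	/* ... post the command to the controller's submission queue ... */
	return BLK_STS_OK;
}
```

On completion, nvme_complete_rq() reaches nvme_end_req(), which calls nvme_mpath_end_request() for REQ_NVME_MPATH requests and closes out the accounting started here.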