diff mbox series

blk-mq: avoid extending delays of active hctx from blk_mq_delay_run_hw_queues

Message ID 20220131203337.GA17666@redhat (mailing list archive)
State New, archived
Headers show
Series blk-mq: avoid extending delays of active hctx from blk_mq_delay_run_hw_queues

Commit Message

David Jeffery Jan. 31, 2022, 8:33 p.m. UTC
When blk_mq_delay_run_hw_queues sets an hctx to run in the future, it can
reset the delay length for an already pending delayed work run_work. This
creates a scenario where multiple hctx may have their queues set to run,
but if one runs first and finds nothing to do, it can reset the delay of
another hctx and stall the other hctx's ability to run requests.

To avoid this I/O stall when an hctx's run_work is already pending,
leave it untouched to run at its current designated time rather than
extending its delay. The work will still run, which keeps closed the race
that blk_mq_delay_run_hw_queues() is needed for, while also avoiding the
I/O stall.

Signed-off-by: David Jeffery <djeffery@redhat.com>
---
 block/blk-mq.c |    8 ++++++++
 1 file changed, 8 insertions(+)

Comments

Laurence Oberman Feb. 1, 2022, 1:39 p.m. UTC | #1
On Mon, 2022-01-31 at 15:33 -0500, David Jeffery wrote:
> When blk_mq_delay_run_hw_queues sets an hctx to run in the future, it can
> reset the delay length for an already pending delayed work run_work. This
> creates a scenario where multiple hctx may have their queues set to run,
> but if one runs first and finds nothing to do, it can reset the delay of
> another hctx and stall the other hctx's ability to run requests.
> 
> To avoid this I/O stall when an hctx's run_work is already pending,
> leave it untouched to run at its current designated time rather than
> extending its delay. The work will still run which keeps closed the race
> calling blk_mq_delay_run_hw_queues is needed for while also avoiding the
> I/O stall.
> 
> Signed-off-by: David Jeffery <djeffery@redhat.com>
> ---
>  block/blk-mq.c |    8 ++++++++
>  1 file changed, 8 insertions(+)
> 
> 
> diff --git a/block/blk-mq.c b/block/blk-mq.c
> index f3bf3358a3bb..ae46eb4bf547 100644
> --- a/block/blk-mq.c
> +++ b/block/blk-mq.c
> @@ -2177,6 +2177,14 @@ void blk_mq_delay_run_hw_queues(struct request_queue *q, unsigned long msecs)
>  	queue_for_each_hw_ctx(q, hctx, i) {
>  		if (blk_mq_hctx_stopped(hctx))
>  			continue;
> +		/*
> +		 * If there is already a run_work pending, leave the
> +		 * pending delay untouched. Otherwise, a hctx can stall
> +		 * if another hctx is re-delaying the other's work
> +		 * before the work executes.
> +		 */
> +		if (delayed_work_pending(&hctx->run_work))
> +			continue;
>  		/*
>  		 * Dispatch from this hctx either if there's no hctx preferred
>  		 * by IO scheduler or if it has requests that bypass the
> 

Ming is aware of this patch and had asked David to submit it.
David has already explained his reasoning internally.
It addresses an issue already reported by a customer.

Reviewed-by: Laurence Oberman <loberman@redhat.com>
Ming Lei Feb. 8, 2022, 2:45 a.m. UTC | #2
On Tue, Feb 1, 2022 at 4:34 AM David Jeffery <djeffery@redhat.com> wrote:
>
> When blk_mq_delay_run_hw_queues sets an hctx to run in the future, it can
> reset the delay length for an already pending delayed work run_work. This
> creates a scenario where multiple hctx may have their queues set to run,
> but if one runs first and finds nothing to do, it can reset the delay of
> another hctx and stall the other hctx's ability to run requests.
>
> To avoid this I/O stall when an hctx's run_work is already pending,
> leave it untouched to run at its current designated time rather than
> extending its delay. The work will still run which keeps closed the race
> calling blk_mq_delay_run_hw_queues is needed for while also avoiding the
> I/O stall.
>
> Signed-off-by: David Jeffery <djeffery@redhat.com>
> ---
>  block/blk-mq.c |    8 ++++++++
>  1 file changed, 8 insertions(+)
>
>
> diff --git a/block/blk-mq.c b/block/blk-mq.c
> index f3bf3358a3bb..ae46eb4bf547 100644
> --- a/block/blk-mq.c
> +++ b/block/blk-mq.c
> @@ -2177,6 +2177,14 @@ void blk_mq_delay_run_hw_queues(struct request_queue *q, unsigned long msecs)
>         queue_for_each_hw_ctx(q, hctx, i) {
>                 if (blk_mq_hctx_stopped(hctx))
>                         continue;
> +               /*
> +                * If there is already a run_work pending, leave the
> +                * pending delay untouched. Otherwise, a hctx can stall
> +                * if another hctx is re-delaying the other's work
> +                * before the work executes.
> +                */
> +               if (delayed_work_pending(&hctx->run_work))
> +                       continue;

The issue is triggered on BFQ, since BFQ's has_work() may return true
while its ->dispatch_request() returns NULL, in which case
blk_mq_delay_run_hw_queues() is run to schedule a delayed re-run.

With multiple hw queues, the described issue can be triggered and cause
an I/O stall for a long time. There are only 3 in-tree callers of
blk_mq_delay_run_hw_queues(), and David's fix works well for all 3, so
this patch looks fine:

Reviewed-by: Ming Lei <ming.lei@redhat.com>

Thanks,
John Pittman Feb. 14, 2022, 2:50 p.m. UTC | #3
This patch has now been tested in the customer environment and results
were good (fixed the hangs).

On Mon, Feb 7, 2022 at 9:45 PM Ming Lei <ming.lei@redhat.com> wrote:
>
> On Tue, Feb 1, 2022 at 4:34 AM David Jeffery <djeffery@redhat.com> wrote:
> >
> > When blk_mq_delay_run_hw_queues sets an hctx to run in the future, it can
> > reset the delay length for an already pending delayed work run_work. This
> > creates a scenario where multiple hctx may have their queues set to run,
> > but if one runs first and finds nothing to do, it can reset the delay of
> > another hctx and stall the other hctx's ability to run requests.
> >
> > To avoid this I/O stall when an hctx's run_work is already pending,
> > leave it untouched to run at its current designated time rather than
> > extending its delay. The work will still run which keeps closed the race
> > calling blk_mq_delay_run_hw_queues is needed for while also avoiding the
> > I/O stall.
> >
> > Signed-off-by: David Jeffery <djeffery@redhat.com>
> > ---
> >  block/blk-mq.c |    8 ++++++++
> >  1 file changed, 8 insertions(+)
> >
> >
> > diff --git a/block/blk-mq.c b/block/blk-mq.c
> > index f3bf3358a3bb..ae46eb4bf547 100644
> > --- a/block/blk-mq.c
> > +++ b/block/blk-mq.c
> > @@ -2177,6 +2177,14 @@ void blk_mq_delay_run_hw_queues(struct request_queue *q, unsigned long msecs)
> >         queue_for_each_hw_ctx(q, hctx, i) {
> >                 if (blk_mq_hctx_stopped(hctx))
> >                         continue;
> > +               /*
> > +                * If there is already a run_work pending, leave the
> > +                * pending delay untouched. Otherwise, a hctx can stall
> > +                * if another hctx is re-delaying the other's work
> > +                * before the work executes.
> > +                */
> > +               if (delayed_work_pending(&hctx->run_work))
> > +                       continue;
>
> The issue is triggered on BFQ, since BFQ's has_work() may return true,
> however its ->dispatch_request() may return NULL, so
> blk_mq_delay_run_hw_queues()
> is run for delay schedule.
>
> In case of multiple hw queue, the described issue may be triggered, and cause io
> stall for long time. And there are only 3 in-tree callers of
> blk_mq_delay_run_hw_queues(),
> David's fix works well for the 3 users, so this patch looks fine:
>
> Reviewed-by: Ming Lei <ming.lei@redhat.com>
>
> Thanks,
>
Jens Axboe Feb. 17, 2022, 2:48 a.m. UTC | #4
On Mon, 31 Jan 2022 15:33:37 -0500, David Jeffery wrote:
> When blk_mq_delay_run_hw_queues sets an hctx to run in the future, it can
> reset the delay length for an already pending delayed work run_work. This
> creates a scenario where multiple hctx may have their queues set to run,
> but if one runs first and finds nothing to do, it can reset the delay of
> another hctx and stall the other hctx's ability to run requests.
> 
> To avoid this I/O stall when an hctx's run_work is already pending,
> leave it untouched to run at its current designated time rather than
> extending its delay. The work will still run which keeps closed the race
> calling blk_mq_delay_run_hw_queues is needed for while also avoiding the
> I/O stall.
> 
> [...]

Applied, thanks!

[1/1] blk-mq: avoid extending delays of active hctx from blk_mq_delay_run_hw_queues
      commit: 8f5fea65b06de1cc51d4fc23fb4d378d1abd6ed7

Best regards,
Laurence Oberman Feb. 22, 2022, 2:31 p.m. UTC | #5
On Mon, 2022-02-14 at 09:50 -0500, John Pittman wrote:
> This patch has now been tested in the customer environment and results
> were good (fixed the hangs).
> 
> On Mon, Feb 7, 2022 at 9:45 PM Ming Lei <ming.lei@redhat.com> wrote:
> >
> > On Tue, Feb 1, 2022 at 4:34 AM David Jeffery <djeffery@redhat.com> wrote:
> > >
> > > When blk_mq_delay_run_hw_queues sets an hctx to run in the future, it can
> > > reset the delay length for an already pending delayed work run_work. This
> > > creates a scenario where multiple hctx may have their queues set to run,
> > > but if one runs first and finds nothing to do, it can reset the delay of
> > > another hctx and stall the other hctx's ability to run requests.
> > >
> > > To avoid this I/O stall when an hctx's run_work is already pending,
> > > leave it untouched to run at its current designated time rather than
> > > extending its delay. The work will still run which keeps closed the race
> > > calling blk_mq_delay_run_hw_queues is needed for while also avoiding the
> > > I/O stall.
> > >
> > > Signed-off-by: David Jeffery <djeffery@redhat.com>
> > > ---
> > >  block/blk-mq.c |    8 ++++++++
> > >  1 file changed, 8 insertions(+)
> > >
> > >
> > > diff --git a/block/blk-mq.c b/block/blk-mq.c
> > > index f3bf3358a3bb..ae46eb4bf547 100644
> > > --- a/block/blk-mq.c
> > > +++ b/block/blk-mq.c
> > > @@ -2177,6 +2177,14 @@ void blk_mq_delay_run_hw_queues(struct request_queue *q, unsigned long msecs)
> > >         queue_for_each_hw_ctx(q, hctx, i) {
> > >                 if (blk_mq_hctx_stopped(hctx))
> > >                         continue;
> > > +               /*
> > > +                * If there is already a run_work pending, leave the
> > > +                * pending delay untouched. Otherwise, a hctx can stall
> > > +                * if another hctx is re-delaying the other's work
> > > +                * before the work executes.
> > > +                */
> > > +               if (delayed_work_pending(&hctx->run_work))
> > > +                       continue;
> > 
> > The issue is triggered on BFQ, since BFQ's has_work() may return true,
> > however its ->dispatch_request() may return NULL, so
> > blk_mq_delay_run_hw_queues() is run for delay schedule.
> > 
> > In case of multiple hw queue, the described issue may be triggered, and
> > cause io stall for long time. And there are only 3 in-tree callers of
> > blk_mq_delay_run_hw_queues(), David's fix works well for the 3 users,
> > so this patch looks fine:
> > 
> > Reviewed-by: Ming Lei <ming.lei@redhat.com>
> > 
> > Thanks,
> 
Hello

Jens, gentle ping, can we get this in please?

Sincerely,
Laurence and the RH team

Patch

diff --git a/block/blk-mq.c b/block/blk-mq.c
index f3bf3358a3bb..ae46eb4bf547 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -2177,6 +2177,14 @@  void blk_mq_delay_run_hw_queues(struct request_queue *q, unsigned long msecs)
 	queue_for_each_hw_ctx(q, hctx, i) {
 		if (blk_mq_hctx_stopped(hctx))
 			continue;
+		/*
+		 * If there is already a run_work pending, leave the
+		 * pending delay untouched. Otherwise, a hctx can stall
+		 * if another hctx is re-delaying the other's work
+		 * before the work executes.
+		 */
+		if (delayed_work_pending(&hctx->run_work))
+			continue;
 		/*
 		 * Dispatch from this hctx either if there's no hctx preferred
 		 * by IO scheduler or if it has requests that bypass the