[V2] io_uring: fix IO hang in io_wq_put_and_exit from do_exit()

Message ID 20230901134916.2415386-1-ming.lei@redhat.com (mailing list archive)
State New, archived
Series [V2] io_uring: fix IO hang in io_wq_put_and_exit from do_exit()

Commit Message

Ming Lei Sept. 1, 2023, 1:49 p.m. UTC
io_wq_put_and_exit() is called from do_exit(), but FIXED_FILE requests
in io_wq aren't canceled by io_uring_cancel_generic(), which is also
called from do_exit(). Meanwhile the io_wq IO code path may share
resources with the normal iopoll code path.

So if any HIPRI request is submitted via io_wq, that request may not get
the resource it needs to make progress, given iopoll isn't possible in
io_wq_put_and_exit().

The issue can be triggered when terminating 't/io_uring -n4 /dev/nullb0'
with default null_blk parameters.
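
For reference, the retry loop in io_wq_submit_work() is roughly the
following (a simplified sketch of the mainline loop, not the exact
code), and it is where the hang shows up:

	do {
		ret = io_issue_sqe(req, issue_flags);
		if (ret != -EAGAIN)
			break;
		/*
		 * A HIPRI request keeps getting -EAGAIN here because the
		 * resource it waits for is only freed by iopoll completions,
		 * and nobody reaps them once the task has entered do_exit().
		 */
		if (req->flags & REQ_F_NOWAIT)
			break;
		cond_resched();
	} while (1);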

Fix it by adding a helper, io_uring_cancel_wq(), which always cancels
all requests in io_wq. This is reasonable because io_wq destruction
immediately follows the cancellation of its requests.

Reported-by: David Howells <dhowells@redhat.com>
Closes: https://lore.kernel.org/linux-block/3893581.1691785261@warthog.procyon.org.uk/
Cc: Chengming Zhou <zhouchengming@bytedance.com>
Signed-off-by: Ming Lei <ming.lei@redhat.com>
---
V2:
	- avoid messing up io_uring_cancel_generic() by adding a new
	  helper for canceling io_wq requests

 io_uring/io_uring.c | 32 ++++++++++++++++++++++++++++++++
 1 file changed, 32 insertions(+)

Comments

Jens Axboe Sept. 1, 2023, 2:47 p.m. UTC | #1
On 9/1/23 7:49 AM, Ming Lei wrote:
> io_wq_put_and_exit() is called from do_exit(), but FIXED_FILE requests
> in io_wq aren't canceled by io_uring_cancel_generic(), which is also
> called from do_exit(). Meanwhile the io_wq IO code path may share
> resources with the normal iopoll code path.
> 
> So if any HIPRI request is submitted via io_wq, that request may not get
> the resource it needs to make progress, given iopoll isn't possible in
> io_wq_put_and_exit().
> 
> The issue can be triggered when terminating 't/io_uring -n4 /dev/nullb0'
> with default null_blk parameters.
> 
> Fix it by adding a helper, io_uring_cancel_wq(), which always cancels
> all requests in io_wq. This is reasonable because io_wq destruction
> immediately follows the cancellation of its requests.

This does look much cleaner, but the unconditional cancel_all == true
makes me a bit nervous in case the ring is being shared.

Do we really need to cancel these bits? Can't we get by with something
trivial like just stopping retrying if the original task is exiting?

diff --git a/io_uring/io_uring.c b/io_uring/io_uring.c
index c6d9e4677073..95316c0c3830 100644
--- a/io_uring/io_uring.c
+++ b/io_uring/io_uring.c
@@ -1939,7 +1939,7 @@ void io_wq_submit_work(struct io_wq_work *work)
 		 * If REQ_F_NOWAIT is set, then don't wait or retry with
 		 * poll. -EAGAIN is final for that case.
 		 */
-		if (req->flags & REQ_F_NOWAIT)
+		if (req->flags & REQ_F_NOWAIT || req->task->flags & PF_EXITING)
 			break;
 
 		/*
Ming Lei Sept. 1, 2023, 3:12 p.m. UTC | #2
On Fri, Sep 01, 2023 at 08:47:28AM -0600, Jens Axboe wrote:
> On 9/1/23 7:49 AM, Ming Lei wrote:
> > io_wq_put_and_exit() is called from do_exit(), but FIXED_FILE requests
> > in io_wq aren't canceled by io_uring_cancel_generic(), which is also
> > called from do_exit(). Meanwhile the io_wq IO code path may share
> > resources with the normal iopoll code path.
> > 
> > So if any HIPRI request is submitted via io_wq, that request may not get
> > the resource it needs to make progress, given iopoll isn't possible in
> > io_wq_put_and_exit().
> > 
> > The issue can be triggered when terminating 't/io_uring -n4 /dev/nullb0'
> > with default null_blk parameters.
> > 
> > Fix it by adding a helper, io_uring_cancel_wq(), which always cancels
> > all requests in io_wq. This is reasonable because io_wq destruction
> > immediately follows the cancellation of its requests.
> 
> This does look much cleaner, but the unconditional cancel_all == true
> makes me a bit nervous in case the ring is being shared.

Here we just cancel requests in io_wq, which is actually per-task.

Yeah, ctx->iopoll_ctx could be shared, but if it is used that way,
reaping events from a remote context can't be avoided.
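
For reference, a rough sketch of the mainline layout (most fields
omitted): io_wq hangs off the per-task io_uring_task, so cancelling
everything in it only touches the exiting task's own requests:

	struct io_uring_task {
		struct io_wq	*io_wq;	/* this task's worker pool */
		struct xarray	xa;	/* one io_tctx_node per ctx used */
		/* many other fields omitted */
	};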

> 
> Do we really need to cancel these bits? Can't we get by with something
> trivial like just stopping retrying if the original task is exiting?
> 
> diff --git a/io_uring/io_uring.c b/io_uring/io_uring.c
> index c6d9e4677073..95316c0c3830 100644
> --- a/io_uring/io_uring.c
> +++ b/io_uring/io_uring.c
> @@ -1939,7 +1939,7 @@ void io_wq_submit_work(struct io_wq_work *work)
>  		 * If REQ_F_NOWAIT is set, then don't wait or retry with
>  		 * poll. -EAGAIN is final for that case.
>  		 */
> -		if (req->flags & REQ_F_NOWAIT)
> +		if (req->flags & REQ_F_NOWAIT || req->task->flags & PF_EXITING)
>  			break;

This way isn't enough: any request submitted to io_wq before do_exit()
needs to be reaped by io_iopoll_try_reap_events() explicitly.

Not to mention IO_URING_F_NONBLOCK isn't set, so io_issue_sqe() may hang
forever.
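
"Reaped" here means somebody has to walk the ctx iopoll list;
io_iopoll_try_reap_events() does roughly the following (a simplified
sketch of the mainline helper):

	mutex_lock(&ctx->uring_lock);
	while (!wq_list_empty(&ctx->iopoll_list)) {
		/* complete polled requests; nothing runs this after exit */
		io_do_iopoll(ctx, true);
		cond_resched();
	}
	mutex_unlock(&ctx->uring_lock);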


Thanks,
Ming
Jens Axboe Sept. 1, 2023, 3:13 p.m. UTC | #3
On 9/1/23 9:12 AM, Ming Lei wrote:
> On Fri, Sep 01, 2023 at 08:47:28AM -0600, Jens Axboe wrote:
>> On 9/1/23 7:49 AM, Ming Lei wrote:
>>> io_wq_put_and_exit() is called from do_exit(), but FIXED_FILE requests
>>> in io_wq aren't canceled by io_uring_cancel_generic(), which is also
>>> called from do_exit(). Meanwhile the io_wq IO code path may share
>>> resources with the normal iopoll code path.
>>>
>>> So if any HIPRI request is submitted via io_wq, that request may not get
>>> the resource it needs to make progress, given iopoll isn't possible in
>>> io_wq_put_and_exit().
>>>
>>> The issue can be triggered when terminating 't/io_uring -n4 /dev/nullb0'
>>> with default null_blk parameters.
>>>
>>> Fix it by adding a helper, io_uring_cancel_wq(), which always cancels
>>> all requests in io_wq. This is reasonable because io_wq destruction
>>> immediately follows the cancellation of its requests.
>>
>> This does look much cleaner, but the unconditional cancel_all == true
>> makes me a bit nervous in case the ring is being shared.
> 
> Here we just cancel requests in io_wq, which is actually per-task.

Ah yeah, good point, it's just the tctx-related bits.

> Yeah, ctx->iopoll_ctx could be shared, but if it is used that way,
> reaping events from a remote context can't be avoided.
> 
>>
>> Do we really need to cancel these bits? Can't we get by with something
>> trivial like just stopping retrying if the original task is exiting?
>>
>> diff --git a/io_uring/io_uring.c b/io_uring/io_uring.c
>> index c6d9e4677073..95316c0c3830 100644
>> --- a/io_uring/io_uring.c
>> +++ b/io_uring/io_uring.c
>> @@ -1939,7 +1939,7 @@ void io_wq_submit_work(struct io_wq_work *work)
>>  		 * If REQ_F_NOWAIT is set, then don't wait or retry with
>>  		 * poll. -EAGAIN is final for that case.
>>  		 */
>> -		if (req->flags & REQ_F_NOWAIT)
>> +		if (req->flags & REQ_F_NOWAIT || req->task->flags & PF_EXITING)
>>  			break;
> 
> This way isn't enough: any request submitted to io_wq before do_exit()
> needs to be reaped by io_iopoll_try_reap_events() explicitly.
> 
> Not to mention IO_URING_F_NONBLOCK isn't set, so io_issue_sqe() may hang
> forever.

Yep, it's not enough, and since we only cancel per-task, I think this
patch looks fine as-is and is probably the right way to go. Thanks Ming.
Jens Axboe Sept. 1, 2023, 3:13 p.m. UTC | #4
On Fri, 01 Sep 2023 21:49:16 +0800, Ming Lei wrote:
> io_wq_put_and_exit() is called from do_exit(), but FIXED_FILE requests
> in io_wq aren't canceled by io_uring_cancel_generic(), which is also
> called from do_exit(). Meanwhile the io_wq IO code path may share
> resources with the normal iopoll code path.
> 
> So if any HIPRI request is submitted via io_wq, that request may not get
> the resource it needs to make progress, given iopoll isn't possible in
> io_wq_put_and_exit().
> 
> [...]

Applied, thanks!

[1/1] io_uring: fix IO hang in io_wq_put_and_exit from do_exit()
      commit: b484a40dc1f16edb58e5430105a021e1916e6f27

Best regards,
Jens Axboe Sept. 7, 2023, 3:36 p.m. UTC | #5
On 9/1/23 9:13 AM, Jens Axboe wrote:
> 
> On Fri, 01 Sep 2023 21:49:16 +0800, Ming Lei wrote:
>> io_wq_put_and_exit() is called from do_exit(), but FIXED_FILE requests
>> in io_wq aren't canceled by io_uring_cancel_generic(), which is also
>> called from do_exit(). Meanwhile the io_wq IO code path may share
>> resources with the normal iopoll code path.
>>
>> So if any HIPRI request is submitted via io_wq, that request may not get
>> the resource it needs to make progress, given iopoll isn't possible in
>> io_wq_put_and_exit().
>>
>> [...]
> 
> Applied, thanks!
> 
> [1/1] io_uring: fix IO hang in io_wq_put_and_exit from do_exit()
>       commit: b484a40dc1f16edb58e5430105a021e1916e6f27

This causes a regression with the test/thread-exit.t test case, as it's
canceling requests from other tasks as well. I will drop this patch for
now.
Pavel Begunkov Sept. 8, 2023, 1:03 a.m. UTC | #6
On 9/7/23 16:36, Jens Axboe wrote:
> On 9/1/23 9:13 AM, Jens Axboe wrote:
>>
>> On Fri, 01 Sep 2023 21:49:16 +0800, Ming Lei wrote:
>>> io_wq_put_and_exit() is called from do_exit(), but FIXED_FILE requests
>>> in io_wq aren't canceled by io_uring_cancel_generic(), which is also
>>> called from do_exit(). Meanwhile the io_wq IO code path may share
>>> resources with the normal iopoll code path.
>>>
>>> So if any HIPRI request is submitted via io_wq, that request may not get
>>> the resource it needs to make progress, given iopoll isn't possible in
>>> io_wq_put_and_exit().
>>>>> [...]
>>
>> Applied, thanks!
>>
>> [1/1] io_uring: fix IO hang in io_wq_put_and_exit from do_exit()
>>        commit: b484a40dc1f16edb58e5430105a021e1916e6f27
> 
> This causes a regression with the test/thread-exit.t test case, as it's
> canceling requests from other tasks as well. I will drop this patch for
> now.

And this one never hit my mailbox... Anyway, I'm confused about
the issue:

1) request tracking is done only for io_uring polling io_uring, which
shouldn't be the case for t/io_uring, so it's probably unrelated?

2) In case of iopoll, io-wq only submits a request but doesn't wait/poll
for it. If io_issue_sqe() returned -EAGAIN or an error, the request is
considered not yet submitted to the block layer and can just be cancelled
normally without any dancing like io_iopoll_try_reap_events().


3) If we condense the code, it sounds like it will effectively be
like this:

void io_wq_exit_start(struct io_wq *wq)
{
	set_bit(IO_WQ_BIT_EXIT, &wq->state);
}

io_uring_cancel_generic()
{
	if (tctx->io_wq)
		io_wq_exit_start(tctx->io_wq);
	io_uring_clean_tctx(tctx);
	...
}

We set the flag, interrupt workers (TIF_NOTIFY_SIGNAL + wake up), and
wait for them. Workers are woken up (or just return), see
the flag and return. At least that's how it was intended to work.

What's missing? Racing for IO_WQ_BIT_EXIT? Not breaking on IO_WQ_BIT_EXIT
correctly? Not honouring / clearing TIF_NOTIFY_SIGNAL?

I'll try to repro later
Ming Lei Sept. 8, 2023, 6:22 a.m. UTC | #7
On Fri, Sep 08, 2023 at 02:03:11AM +0100, Pavel Begunkov wrote:
> On 9/7/23 16:36, Jens Axboe wrote:
> > On 9/1/23 9:13 AM, Jens Axboe wrote:
> > > 
> > > On Fri, 01 Sep 2023 21:49:16 +0800, Ming Lei wrote:
> > > > io_wq_put_and_exit() is called from do_exit(), but FIXED_FILE requests
> > > > in io_wq aren't canceled by io_uring_cancel_generic(), which is also
> > > > called from do_exit(). Meanwhile the io_wq IO code path may share
> > > > resources with the normal iopoll code path.
> > > > 
> > > > So if any HIPRI request is submitted via io_wq, that request may not get
> > > > the resource it needs to make progress, given iopoll isn't possible in
> > > > io_wq_put_and_exit().
> > > > > > [...]
> > > 
> > > Applied, thanks!
> > > 
> > > [1/1] io_uring: fix IO hang in io_wq_put_and_exit from do_exit()
> > >        commit: b484a40dc1f16edb58e5430105a021e1916e6f27
> > 
> > This causes a regression with the test/thread-exit.t test case, as it's
> > canceling requests from other tasks as well. I will drop this patch for
> > now.
> 
> And this one never hit my mailbox... Anyway, I'm confused about
> the issue:
> 
> 1) request tracking is done only for io_uring polling io_uring, which

request tracking isn't done on FIXED_FILE IO, which is used by t/io_uring.

> shouldn't be the case for t/io_uring, so it's probably unrelated?

So io_uring_try_cancel_requests() won't be called because
tctx_inflight() returns zero.
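
That is, the cancel loop in io_uring_cancel_generic() bails out before
doing any work; in rough sketch form (simplified from mainline):

	do {
		s64 inflight = tctx_inflight(tctx, !cancel_all);
		if (!inflight)
			break;	/* hit immediately: FIXED_FILE IO isn't tracked */
		/* ... try-cancel each ctx, then wait on tctx->wait ... */
	} while (1);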

> 
> 2) In case of iopoll, io-wq only submits a request but doesn't wait/poll
> for it. If io_issue_sqe() returned -EAGAIN or an error, the request is
> considered not yet submitted to the block layer and can just be cancelled
> normally without any dancing like io_iopoll_try_reap_events().

io_issue_sqe() may never return since IO_URING_F_NONBLOCK isn't set
for iopoll, and since commit 2bc057692599 ("block: don't make REQ_POLLED
imply REQ_NOWAIT") polled IO no longer implies nowait. That commit is
actually fragile, because it easily leads to an IO hang if iopoll and
submission are done in the same context.

This one should be easy to address, either with the following change
or by reverting 2bc057692599 ("block: don't make REQ_POLLED imply
REQ_NOWAIT").

diff --git a/io_uring/io_uring.c b/io_uring/io_uring.c
index ad636954abae..d8419689ad3e 100644
--- a/io_uring/io_uring.c
+++ b/io_uring/io_uring.c
@@ -1930,6 +1930,9 @@ void io_wq_submit_work(struct io_wq_work *work)
                }
        }

+       if ((req->ctx->flags & IORING_SETUP_IOPOLL) && def->iopoll_queue)
+               issue_flags |= IO_URING_F_NONBLOCK;
+
        do {
                ret = io_issue_sqe(req, issue_flags);
                if (ret != -EAGAIN)

> 
> 
> 3) If we condense the code, it sounds like it will effectively be
> like this:
> 
> void io_wq_exit_start(struct io_wq *wq)
> {
> 	set_bit(IO_WQ_BIT_EXIT, &wq->state);
> }
> 
> io_uring_cancel_generic()
> {
> 	if (tctx->io_wq)
> 		io_wq_exit_start(tctx->io_wq);
> 	io_uring_clean_tctx(tctx);
> 	...
> }
> 
> We set the flag, interrupt workers (TIF_NOTIFY_SIGNAL + wake up), and
> wait for them. Workers are woken up (or just return), see
> the flag and return. At least that's how it was intended to work.
> 
> What's missing? Racing for IO_WQ_BIT_EXIT? Not breaking on IO_WQ_BIT_EXIT
> correctly? Not honouring / clearing TIF_NOTIFY_SIGNAL?
> 
> I'll try to repro later

After applying your patch 9256b9371204 ("io_uring: break out of iowq
iopoll on teardown") and the above patch, the issue can still be
triggered: the worker keeps calling io_issue_sqe() forever, and
io_wq_worker_stopped() returns false. So do_exit() hasn't been called on
the t/io_uring pthread yet; meanwhile I guess iopoll is either terminated
or never gets a chance to run.


Thanks,
Ming
Pavel Begunkov Sept. 8, 2023, 4:01 p.m. UTC | #8
On 9/8/23 07:22, Ming Lei wrote:
> On Fri, Sep 08, 2023 at 02:03:11AM +0100, Pavel Begunkov wrote:
>> On 9/7/23 16:36, Jens Axboe wrote:
>>> On 9/1/23 9:13 AM, Jens Axboe wrote:
>>>>
>>>> On Fri, 01 Sep 2023 21:49:16 +0800, Ming Lei wrote:
>>>>> io_wq_put_and_exit() is called from do_exit(), but FIXED_FILE requests
>>>>> in io_wq aren't canceled by io_uring_cancel_generic(), which is also
>>>>> called from do_exit(). Meanwhile the io_wq IO code path may share
>>>>> resources with the normal iopoll code path.
>>>>>
>>>>> So if any HIPRI request is submitted via io_wq, that request may not get
>>>>> the resource it needs to make progress, given iopoll isn't possible in
>>>>> io_wq_put_and_exit().
>>>>>>> [...]
>>>>
>>>> Applied, thanks!
>>>>
>>>> [1/1] io_uring: fix IO hang in io_wq_put_and_exit from do_exit()
>>>>         commit: b484a40dc1f16edb58e5430105a021e1916e6f27
>>>
>>> This causes a regression with the test/thread-exit.t test case, as it's
>>> canceling requests from other tasks as well. I will drop this patch for
>>> now.
>>
>> And this one never hit my mailbox... Anyway, I'm confused about
>> the issue:
>>

Thanks for CC'ing me on the resend, I don't know what's up with lore

>> 1) request tracking is done only for io_uring polling io_uring, which
> 
> request tracking isn't done on FIXED_FILE IO, which is used by t/io_uring.
> 
>> shouldn't be the case for t/io_uring, so it's probably unrelated?
> 
> So io_uring_try_cancel_requests() won't be called because
> tctx_inflight() returns zero.

That's true for fixed files, but it also holds for non-fixed files,
apart from a narrow case t/io_uring doesn't exercise.

>> 2) In case of iopoll, io-wq only submits a request but doesn't wait/poll
>> for it. If io_issue_sqe() returned -EAGAIN or an error, the request is
>> considered not yet submitted to the block layer and can just be cancelled
>> normally without any dancing like io_iopoll_try_reap_events().
> 
> io_issue_sqe() may never return since IO_URING_F_NONBLOCK isn't set
> for iopoll, and since commit 2bc057692599 ("block: don't make REQ_POLLED
> imply REQ_NOWAIT") polled IO no longer implies nowait.

Missing nowait semantics should be fine: io_uring_clean_tctx()
sends a signal to the workers, which should break them out of waiting.
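
For reference, the interruption path is roughly the following (a sketch
of the mainline io-wq teardown; io_wq_exit_workers() runs this for every
worker):

	static bool io_wq_worker_wake(struct io_worker *worker, void *data)
	{
		/* fake signal so blocking calls return -ERESTARTSYS/-EINTR */
		set_notify_signal(worker->task);
		wake_up_process(worker->task);
		return false;
	}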

...
>> We set the flag, interrupt workers (TIF_NOTIFY_SIGNAL + wake up), and
>> wait for them. Workers are woken up (or just return), see
>> the flag and return. At least that's how it was intended to work.
>>
>> What's missing? Racing for IO_WQ_BIT_EXIT? Not breaking on IO_WQ_BIT_EXIT
>> correctly? Not honouring / clearing TIF_NOTIFY_SIGNAL?
>>
>> I'll try to repro later
> 
> After applying your patch 9256b9371204 ("io_uring: break out of iowq
> iopoll on teardown") and the above patch, the issue can still be triggered:

Yeah, it doesn't appear that it'd help; it was more something
randomly spotted than a fix targeting your issue.

> the worker keeps calling io_issue_sqe() forever, and
> io_wq_worker_stopped() returns false. So do_exit() hasn't been called on
> the t/io_uring pthread yet; meanwhile I guess iopoll is either terminated
> or never gets a chance to run.

That's prior to task exit then, right? I wonder if we're missing
signal checking in that loop, like below, or whether we just need to
limit the number of re-submission attempts.

diff --git a/io_uring/io_uring.c b/io_uring/io_uring.c
index 6cce8948bddf..c6f566200d04 100644
--- a/io_uring/io_uring.c
+++ b/io_uring/io_uring.c
@@ -1944,6 +1944,8 @@ void io_wq_submit_work(struct io_wq_work *work)
  				break;
  			if (io_wq_worker_stopped())
  				break;
+			if (signal_pending(current))
+				break;
  			cond_resched();
  			continue;
  		}

Patch

diff --git a/io_uring/io_uring.c b/io_uring/io_uring.c
index f4591b912ea8..7b3518f96c3b 100644
--- a/io_uring/io_uring.c
+++ b/io_uring/io_uring.c
@@ -3298,6 +3298,37 @@  static s64 tctx_inflight(struct io_uring_task *tctx, bool tracked)
 	return percpu_counter_sum(&tctx->inflight);
 }
 
+static void io_uring_cancel_wq(struct io_uring_task *tctx)
+{
+	int ret;
+
+	if (!tctx->io_wq)
+		return;
+
+	/*
+	 * FIXED_FILE requests aren't tracked in do_exit(), and these
+	 * requests may be submitted to our io_wq as iopoll, so they have
+	 * to be canceled before destroying io_wq to avoid an IO hang
+	 */
+	do {
+		struct io_tctx_node *node;
+		unsigned long index;
+
+		ret = 0;
+		xa_for_each(&tctx->xa, index, node) {
+			struct io_ring_ctx *ctx = node->ctx;
+			struct io_task_cancel cancel = { .task = current, .all = true, };
+			enum io_wq_cancel cret;
+
+			io_iopoll_try_reap_events(ctx);
+			cret = io_wq_cancel_cb(tctx->io_wq, io_cancel_task_cb,
+				       &cancel, true);
+			ret |= (cret != IO_WQ_CANCEL_NOTFOUND);
+			cond_resched();
+		}
+	} while (ret);
+}
+
 /*
  * Find any io_uring ctx that this task has registered or done IO on, and cancel
  * requests. @sqd should be not-null IFF it's an SQPOLL thread cancellation.
@@ -3369,6 +3400,7 @@  __cold void io_uring_cancel_generic(bool cancel_all, struct io_sq_data *sqd)
 		finish_wait(&tctx->wait, &wait);
 	} while (1);
 
+	io_uring_cancel_wq(tctx);
 	io_uring_clean_tctx(tctx);
 	if (cancel_all) {
 		/*