
Subject: io_uring: Fix bug in io_fallback_req_func that can cause deadlock

Message ID 20230512095655.8968-1-luhongfei@vivo.com (mailing list archive)

Commit Message

Lu Hongfei May 12, 2023, 9:56 a.m. UTC
There was a bug in io_fallback_req_func() that could cause a deadlock,
because uring_lock was not released on the early return path.
This patch releases uring_lock before returning.

Signed-off-by: luhongfei <luhongfei@vivo.com>
---
 io_uring/io_uring.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)
 mode change 100644 => 100755 io_uring/io_uring.c

Comments

Jens Axboe May 12, 2023, 1:58 p.m. UTC | #1
On 5/12/23 3:56 AM, luhongfei wrote:
> There was a bug in io_fallback_req_func() that could cause a deadlock,
> because uring_lock was not released on the early return path.
> This patch releases uring_lock before returning.
> 
> Signed-off-by: luhongfei <luhongfei@vivo.com>
> ---
>  io_uring/io_uring.c | 4 +++-
>  1 file changed, 3 insertions(+), 1 deletion(-)
>  mode change 100644 => 100755 io_uring/io_uring.c
> 
> diff --git a/io_uring/io_uring.c b/io_uring/io_uring.c
> index 3bca7a79efda..1af793c7b3da
> --- a/io_uring/io_uring.c
> +++ b/io_uring/io_uring.c
> @@ -252,8 +252,10 @@ static __cold void io_fallback_req_func(struct work_struct *work)
>  	mutex_lock(&ctx->uring_lock);
>  	llist_for_each_entry_safe(req, tmp, node, io_task_work.node)
>  		req->io_task_work.func(req, &ts);
> -	if (WARN_ON_ONCE(!ts.locked))
> +	if (WARN_ON_ONCE(!ts.locked)) {
> +		mutex_unlock(&ctx->uring_lock);
>  		return;
> +	}
>  	io_submit_flush_completions(ctx);
>  	mutex_unlock(&ctx->uring_lock);
>  }

I'm guessing you found this by reading the code, and didn't actually hit
it? Because it looks fine as-is. We lock the ctx->uring_lock, and set
ts.locked == true. If ts.locked is false, then someone unlocked the ring
further down, which is unexpected (hence the WARN_ON_ONCE()). But if
that did happen, then we definitely don't want to unlock it again.

Because of that, I don't think your patch is correct.