[1/1] io_uring: fix mutex_unlock with unreferenced ctx

Message ID 929d30ff7f0a27793e8b36f398ae12788cf04899.1701617803.git.asml.silence@gmail.com (mailing list archive)
State New
Series [1/1] io_uring: fix mutex_unlock with unreferenced ctx

Commit Message

Pavel Begunkov Dec. 3, 2023, 3:37 p.m. UTC
Callers of mutex_unlock() have to make sure that the mutex stays alive
for the whole duration of the function call. For io_uring that means
that the following pattern is not valid unless we ensure that the
context outlives the mutex_unlock() call.

mutex_lock(&ctx->uring_lock);
req_put(req); // typically via io_req_task_submit()
mutex_unlock(&ctx->uring_lock);

Most contexts are fine: io-wq pins requests, syscalls hold the file,
task works take ctx references, and so on. However, the task work
fallback path doesn't follow the rule: running the queued work there
can put the last ctx reference while uring_lock is still held, so the
final mutex_unlock() may operate on an already freed context.
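
A minimal sketch of the safe ordering the patch below applies
(illustrative only; the ctx pinning mirrors the io_fallback_req_func()
change rather than introducing any new API):

percpu_ref_get(&ctx->refs);	/* pin ctx across the locked section */
mutex_lock(&ctx->uring_lock);
req_put(req);			/* may drop the last request reference */
mutex_unlock(&ctx->uring_lock);	/* safe: ctx is still pinned */
percpu_ref_put(&ctx->refs);	/* may free ctx, but only after unlock */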

Cc: stable@vger.kernel.org
Fixes: 04fc6c802d ("io_uring: save ctx put/get for task_work submit")
Reported-by: Jann Horn <jannh@google.com>
Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
---
 io_uring/io_uring.c | 9 +++------
 1 file changed, 3 insertions(+), 6 deletions(-)

Comments

Jens Axboe Dec. 4, 2023, 2:10 a.m. UTC | #1
On Sun, 03 Dec 2023 15:37:53 +0000, Pavel Begunkov wrote:
> Callers of mutex_unlock() have to make sure that the mutex stays alive
> for the whole duration of the function call. For io_uring that means
> that the following pattern is not valid unless we ensure that the
> context outlives the mutex_unlock() call.
> 
> mutex_lock(&ctx->uring_lock);
> req_put(req); // typically via io_req_task_submit()
> mutex_unlock(&ctx->uring_lock);
> 
> [...]

Applied, thanks!

[1/1] io_uring: fix mutex_unlock with unreferenced ctx
      commit: f7b32e785042d2357c5abc23ca6db1b92c91a070

Best regards,
Jens Axboe

Patch

diff --git a/io_uring/io_uring.c b/io_uring/io_uring.c
index 6212f81ed887..c45951f95946 100644
--- a/io_uring/io_uring.c
+++ b/io_uring/io_uring.c
@@ -272,6 +272,7 @@  static __cold void io_fallback_req_func(struct work_struct *work)
 	struct io_kiocb *req, *tmp;
 	struct io_tw_state ts = { .locked = true, };
 
+	percpu_ref_get(&ctx->refs);
 	mutex_lock(&ctx->uring_lock);
 	llist_for_each_entry_safe(req, tmp, node, io_task_work.node)
 		req->io_task_work.func(req, &ts);
@@ -279,6 +280,7 @@  static __cold void io_fallback_req_func(struct work_struct *work)
 		return;
 	io_submit_flush_completions(ctx);
 	mutex_unlock(&ctx->uring_lock);
+	percpu_ref_put(&ctx->refs);
 }
 
 static int io_alloc_hash_table(struct io_hash_table *table, unsigned bits)
@@ -3145,12 +3147,7 @@  static __cold void io_ring_exit_work(struct work_struct *work)
 	init_completion(&exit.completion);
 	init_task_work(&exit.task_work, io_tctx_exit_cb);
 	exit.ctx = ctx;
-	/*
-	 * Some may use context even when all refs and requests have been put,
-	 * and they are free to do so while still holding uring_lock or
-	 * completion_lock, see io_req_task_submit(). Apart from other work,
-	 * this lock/unlock section also waits them to finish.
-	 */
+
 	mutex_lock(&ctx->uring_lock);
 	while (!list_empty(&ctx->tctx_list)) {
 		WARN_ON_ONCE(time_after(jiffies, timeout));