Message ID: 40c7404a-f4ce-4a7d-86f3-313a9e9ee113@kernel.dk (mailing list archive)
State: New
Series: io_uring/sqpoll: ensure that normal task_work is also run timely
With the move to private task_work, SQPOLL neglected to also run the normal task_work, if any is pending. This will eventually get run, but we should run it with the private task_work to ensure that things like a final fput() are processed in a timely fashion.

Cc: stable@vger.kernel.org
Link: https://lore.kernel.org/all/313824bc-799d-414f-96b7-e6de57c7e21d@gmail.com/
Reported-by: Andrew Udvare <audvare@gmail.com>
Fixes: af5d68f8892f ("io_uring/sqpoll: manage task_work privately")
Signed-off-by: Jens Axboe <axboe@kernel.dk>
---
diff --git a/io_uring/sqpoll.c b/io_uring/sqpoll.c
index 554c7212aa46..68a3e3290411 100644
--- a/io_uring/sqpoll.c
+++ b/io_uring/sqpoll.c
@@ -241,6 +241,8 @@ static unsigned int io_sq_tw(struct llist_node **retry_list, int max_entries)
 			return count;
 		max_entries -= count;
 	}
 
+	if (task_work_pending(current))
+		task_work_run();
 	*retry_list = tctx_task_work_run(tctx, max_entries, &count);
 	return count;