Message ID | 20230809194306.170979-4-axboe@kernel.dk (mailing list archive)
---|---
State | New
Series | io-wq locking improvements
diff --git a/io_uring/io-wq.c b/io_uring/io-wq.c
index 18a049fc53ef..2da0b1ba6a56 100644
--- a/io_uring/io-wq.c
+++ b/io_uring/io-wq.c
@@ -276,11 +276,14 @@ static bool io_wq_activate_free_worker(struct io_wq *wq,
 			io_worker_release(worker);
 			continue;
 		}
-		if (wake_up_process(worker->task)) {
-			io_worker_release(worker);
-			return true;
-		}
+		/*
+		 * If the worker is already running, it's either already
+		 * starting work or finishing work. In either case, if it does
+		 * go to sleep, we'll kick off a new task for this work anyway.
+		 */
+		wake_up_process(worker->task);
 		io_worker_release(worker);
+		return true;
 	}
 
 	return false;
All we really care about is finding a free worker. If said worker is already running, it's either starting new work already or it's just finishing up existing work. For the latter, we'll be finding this work item next anyway, and for the former, if the worker does go to sleep, it'll create a new worker anyway as we have pending items.

This reduces try_to_wake_up() overhead considerably:

23.16% -10.46%  [kernel.kallsyms]  [k] try_to_wake_up

Signed-off-by: Jens Axboe <axboe@kernel.dk>
---
 io_uring/io-wq.c | 11 +++++++----
 1 file changed, 7 insertions(+), 4 deletions(-)