| Message ID | 20230809194306.170979-2-axboe@kernel.dk (mailing list archive) |
|---|---|
| State | New |
| Series | io-wq locking improvements |
The worker free list is RCU protected, and checks for workers going
away when iterating it. There's no need to hold the wq->lock around
the lookup.

Signed-off-by: Jens Axboe <axboe@kernel.dk>
---
 io_uring/io-wq.c | 3 ---
 1 file changed, 3 deletions(-)

```diff
diff --git a/io_uring/io-wq.c b/io_uring/io-wq.c
index 399e9a15c38d..3e7025b9e0dd 100644
--- a/io_uring/io-wq.c
+++ b/io_uring/io-wq.c
@@ -909,13 +909,10 @@ void io_wq_enqueue(struct io_wq *wq, struct io_wq_work *work)
 	clear_bit(IO_ACCT_STALLED_BIT, &acct->flags);
 	raw_spin_unlock(&acct->lock);
 
-	raw_spin_lock(&wq->lock);
 	rcu_read_lock();
 	do_create = !io_wq_activate_free_worker(wq, acct);
 	rcu_read_unlock();
-	raw_spin_unlock(&wq->lock);
-
 	if (do_create && ((work_flags & IO_WQ_WORK_CONCURRENT) ||
 	    !atomic_read(&acct->nr_running))) {
 		bool did_create;
```
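For context, the safety argument the commit message makes is the usual RCU-plus-refcount pattern: readers may walk the free list with only rcu_read_lock() held, because entries are freed no earlier than a grace period after removal, and a worker that is tearing itself down is skipped when a tryget on its refcount fails. The sketch below illustrates that pattern in kernel-style C. It is loosely modeled on io-wq's io_wq_activate_free_worker(), but the type and helper names here (struct worker, worker_release(), activate_free_worker()) are simplified stand-ins, not the upstream definitions.

```c
/*
 * Illustrative sketch of an RCU-protected free-list lookup, loosely
 * modeled on io-wq's io_wq_activate_free_worker(). Not verbatim
 * upstream code; names are simplified.
 */
#include <linux/rcupdate.h>
#include <linux/rculist_nulls.h>
#include <linux/refcount.h>
#include <linux/sched.h>
#include <linux/slab.h>

struct worker {
	refcount_t ref;				/* drops to zero when exiting */
	struct task_struct *task;
	struct hlist_nulls_node nulls_node;	/* linkage on the free list */
	struct rcu_head rcu;
};

static void worker_release(struct worker *worker)
{
	if (refcount_dec_and_test(&worker->ref))
		kfree_rcu(worker, rcu);	/* freed only after a grace period */
}

static bool activate_free_worker(struct hlist_nulls_head *free_list)
{
	struct hlist_nulls_node *n;
	struct worker *worker;

	/*
	 * RCU alone keeps the nodes we traverse from being freed under
	 * us; no spinlock is needed around the walk itself.
	 */
	rcu_read_lock();
	hlist_nulls_for_each_entry_rcu(worker, n, free_list, nulls_node) {
		/*
		 * A worker that is going away has already dropped its
		 * final reference, so the tryget fails and we skip it.
		 */
		if (!refcount_inc_not_zero(&worker->ref))
			continue;
		if (wake_up_process(worker->task)) {
			worker_release(worker);
			rcu_read_unlock();
			return true;
		}
		worker_release(worker);
	}
	rcu_read_unlock();
	return false;
}
```

Given that invariant, the wq->lock acquire/release pair around the lookup in io_wq_enqueue() added nothing but overhead, which is exactly what the patch deletes.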