Message ID | cover.1634822969.git.asml.silence@gmail.com (mailing list archive)
---|---
Series | optimise blk_try_enter_queue()
On Thu, 21 Oct 2021 14:30:50 +0100, Pavel Begunkov wrote:
> Kill extra rcu_read_lock/unlock() pair in blk_try_enter_queue().
> Testing with io_uring (high batching) with nullblk:
>
> Before:
> 3.20%  io_uring  [kernel.vmlinux]  [k] __rcu_read_unlock
> 3.05%  io_uring  [kernel.vmlinux]  [k] __rcu_read_lock
>
> [...]

Applied, thanks!

[1/2] percpu_ref: percpu_ref_tryget_live() version holding RCU
      commit: 3b13c168186c115501ee7d194460ba2f8c825155
[2/2] block: kill extra rcu lock/unlock in queue enter
      commit: e94f68527a35271131cdf9d3fb4eb3c2513dc3d0

Best regards,
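
For readers outside the thread, the following is a minimal userspace sketch of the pattern the series applies, not the upstream kernel code: expose a tryget variant that assumes the caller already holds rcu_read_lock(), so a caller like blk_try_enter_queue(), which already runs inside an RCU read section, does not pay for a second nested lock/unlock pair. The names fake_ref, ref_tryget_live_rcu() and try_enter_queue() are hypothetical stand-ins for struct percpu_ref, percpu_ref_tryget_live_rcu() and blk_try_enter_queue(), and the RCU primitives are stubbed out.

```c
/*
 * Illustrative sketch only. The ref type is a simplified stand-in for
 * struct percpu_ref, and rcu_read_lock()/rcu_read_unlock() are no-op
 * stubs so the example builds in userspace.
 */
#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

/* Stand-ins for the real RCU primitives. */
#define rcu_read_lock()   do { } while (0)
#define rcu_read_unlock() do { } while (0)

struct fake_ref {
	atomic_long count;
	atomic_bool live;	/* cleared when the ref is being killed */
};

/* Caller must already be inside an RCU read-side critical section. */
static bool ref_tryget_live_rcu(struct fake_ref *ref)
{
	if (!atomic_load_explicit(&ref->live, memory_order_acquire))
		return false;
	atomic_fetch_add_explicit(&ref->count, 1, memory_order_relaxed);
	return true;
}

/* Convenience wrapper that takes RCU itself (the pre-series shape). */
static bool ref_tryget_live(struct fake_ref *ref)
{
	bool ret;

	rcu_read_lock();
	ret = ref_tryget_live_rcu(ref);
	rcu_read_unlock();
	return ret;
}

/*
 * Sketch of the caller: it already needs one RCU read section for its
 * own lookups, so it uses the _rcu variant instead of nesting another
 * rcu_read_lock()/rcu_read_unlock() pair inside it.
 */
static bool try_enter_queue(struct fake_ref *q_usage)
{
	bool ok;

	rcu_read_lock();
	ok = ref_tryget_live_rcu(q_usage);
	rcu_read_unlock();
	return ok;
}

int main(void)
{
	struct fake_ref q = { .count = 1, .live = true };

	printf("enter: %d (refs now %ld)\n",
	       try_enter_queue(&q), atomic_load(&q.count));
	printf("plain tryget_live: %d\n", ref_tryget_live(&q));
	return 0;
}
```

The perf numbers quoted in the cover letter show why this matters under high batching: __rcu_read_lock/__rcu_read_unlock were visible in the profile, and splitting the RCU-holding variant out lets the queue-enter path reuse its existing read section.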