Message ID | 159884e23f494a4b658c9cb66e36900c3c5800f5.1634759754.git.asml.silence@gmail.com
---|---
State | New, archived
Series | optimise blk_try_enter_queue()
diff --git a/block/blk-core.c b/block/blk-core.c
index 88752e51d2b6..20e76aeb50f5 100644
--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -389,7 +389,7 @@ EXPORT_SYMBOL(blk_cleanup_queue);
 static bool blk_try_enter_queue(struct request_queue *q, bool pm)
 {
 	rcu_read_lock();
-	if (!percpu_ref_tryget_live(&q->q_usage_counter))
+	if (!percpu_ref_tryget_live_rcu(&q->q_usage_counter))
 		goto fail;
 
 	/*
blk_try_enter_queue() already takes rcu_read_lock/unlock, so we can avoid
the second pair in percpu_ref_tryget_live() by using the newly added
percpu_ref_tryget_live_rcu().

As rcu_read_lock/unlock imply barrier()s, it's pretty noticeable,
especially for !CONFIG_PREEMPT_RCU (the default for some distributions),
where __rcu_read_lock/unlock() are not inlined.

Before:
   3.20%  io_uring  [kernel.vmlinux]  [k] __rcu_read_unlock
   3.05%  io_uring  [kernel.vmlinux]  [k] __rcu_read_lock

After:
   2.52%  io_uring  [kernel.vmlinux]  [k] __rcu_read_unlock
   2.28%  io_uring  [kernel.vmlinux]  [k] __rcu_read_lock

Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
---
 block/blk-core.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
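For context, percpu_ref_tryget_live_rcu() is introduced by the companion patch in this series (it lives in include/linux/percpu-refcount.h). The sketch below is a paraphrase of that helper and of how percpu_ref_tryget_live() ends up wrapping it, not a verbatim copy of the series; the percpu-ref internals it touches (__ref_is_percpu(), __PERCPU_REF_DEAD, ref->data->count) are shown as they exist in the upstream header around this time.

```c
/*
 * Paraphrased sketch: the tryget-live fast path with the caller
 * responsible for holding rcu_read_lock().
 */
static inline bool percpu_ref_tryget_live_rcu(struct percpu_ref *ref)
{
	unsigned long __percpu *percpu_count;
	bool ret = false;

	WARN_ON_ONCE(!rcu_read_lock_held());

	if (likely(__ref_is_percpu(ref, &percpu_count))) {
		/* Percpu mode: bump this CPU's counter, no atomics needed. */
		this_cpu_inc(*percpu_count);
		ret = true;
	} else if (!(ref->percpu_count_ptr & __PERCPU_REF_DEAD)) {
		/* Atomic mode, but not yet killed: try the shared counter. */
		ret = atomic_long_inc_not_zero(&ref->data->count);
	}
	return ret;
}

/*
 * percpu_ref_tryget_live() then becomes a thin wrapper that supplies the
 * RCU read-side section for callers that don't already hold one.
 */
static inline bool percpu_ref_tryget_live(struct percpu_ref *ref)
{
	bool ret;

	rcu_read_lock();
	ret = percpu_ref_tryget_live_rcu(ref);
	rcu_read_unlock();
	return ret;
}
```

blk_try_enter_queue() already wraps the whole enter attempt in its own rcu_read_lock()/rcu_read_unlock() pair (visible in the hunk context above), so calling the _rcu variant directly drops one redundant lock/unlock pair per queue enter, which is where the perf delta quoted in the commit message comes from.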