[v16,11/26] block: Optimize blk_mq_submit_bio() for the cache hit scenario

Message ID 20241119002815.600608-12-bvanassche@acm.org (mailing list archive)
State Not Applicable
Series Improve write performance for zoned UFS devices

Commit Message

Bart Van Assche Nov. 19, 2024, 12:28 a.m. UTC
Help the CPU branch predictor by handling the cache hit scenario, the
common case, first.

Signed-off-by: Bart Van Assche <bvanassche@acm.org>
---
 block/blk-mq.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)
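
For illustration, here is a minimal, self-contained sketch (not part of
the patch) of the pattern the patch applies: test the expected case
first so the hot path becomes straight-line fall-through code. The names
cache_lookup() and slow_alloc() are hypothetical stand-ins for the
plug-cached request lookup and blk_mq_get_new_requests(); the likely()
hint is an extra annotation the patch itself does not add, shown here
because it expands to the same __builtin_expect() the kernel macros use.

/* sketch.c - standalone illustration, not kernel code */
#include <stdio.h>
#include <stdlib.h>

/* Same hint the kernel's likely()/unlikely() macros expand to. */
#define likely(x)	__builtin_expect(!!(x), 1)

struct request { int id; };

static struct request cached = { .id = 1 };
static int have_cached = 1;	/* pretend the cache usually hits */

/* Hypothetical stand-in for the plug-cached request lookup. */
static struct request *cache_lookup(void)
{
	return have_cached ? &cached : NULL;
}

/* Hypothetical stand-in for blk_mq_get_new_requests(). */
static struct request *slow_alloc(void)
{
	struct request *rq = malloc(sizeof(*rq));

	if (rq)
		rq->id = 2;
	return rq;
}

static struct request *get_request(void)
{
	struct request *rq = cache_lookup();

	/*
	 * Handle the expected (cache hit) case first, mirroring the
	 * "if (rq) ... else ..." reordering in the patch.
	 */
	if (likely(rq))
		return rq;	/* cache hit: reuse the cached request */

	return slow_alloc();	/* cache miss: take the slower path */
}

int main(void)
{
	printf("got request %d\n", get_request()->id);
	return 0;
}

With the hit case tested first, the compiler is more likely to lay the
miss path out of line, so the common path avoids a taken branch.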

Comments

Damien Le Moal Nov. 19, 2024, 7:40 a.m. UTC | #1
On 11/19/24 09:28, Bart Van Assche wrote:
> Help the CPU branch predictor by handling the cache hit scenario, the
> common case, first.
> 
> Signed-off-by: Bart Van Assche <bvanassche@acm.org>

Not sure this really helps but looks OK to me.

Reviewed-by: Damien Le Moal <dlemoal@kernel.org>
Patch

diff --git a/block/blk-mq.c b/block/blk-mq.c
index ece567b1b6be..56a6b5bef39f 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -3123,12 +3123,12 @@ void blk_mq_submit_bio(struct bio *bio)
 		goto queue_exit;
 
 new_request:
-	if (!rq) {
+	if (rq) {
+		blk_mq_use_cached_rq(rq, plug, bio);
+	} else {
 		rq = blk_mq_get_new_requests(q, plug, bio, nr_segs);
 		if (unlikely(!rq))
 			goto queue_exit;
-	} else {
-		blk_mq_use_cached_rq(rq, plug, bio);
 	}
 
 	trace_block_getrq(bio);