[2/2] block: fix double bio queue when merging in cached request path

Message ID 20211202194741.810957-3-axboe@kernel.dk (mailing list archive)
State New, archived
Series Fix bio merge off cached request path

Commit Message

Jens Axboe Dec. 2, 2021, 7:47 p.m. UTC
When we attempt to merge off the cached request path, we return NULL
if successful. This makes the caller believe that it should allocate
a new request, and hence we end up with the bio both merged and
associated with a new request. This, predictably, leads to all sorts
of crashes.

Pass in a pointer to the bio pointer, and clear it for the merge case.
Then the caller knows that the bio is already queued, and no new requests
need to get allocated.

Fixes: 5b13bc8a3fd5 ("blk-mq: cleanup request allocation")
Signed-off-by: Jens Axboe <axboe@kernel.dk>
---
 block/blk-mq.c | 20 ++++++++++++--------
 1 file changed, 12 insertions(+), 8 deletions(-)
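
The fix turns blk_mq_get_cached_request() into a function that can
consume the bio: on a successful merge it clears the caller's bio
pointer, so a NULL request return no longer implies "allocate a new
request". To make that calling convention concrete, here is a minimal,
self-contained sketch of the pattern; the types and helpers
(attempt_merge(), get_cached_request(), submit()) are hypothetical
stand-ins, not the kernel's implementation:

#include <stdio.h>
#include <stdlib.h>

struct bio { int mergeable; };
struct request { int dummy; };

/* Stand-in for blk_mq_attempt_bio_merge(): nonzero means the bio was
 * merged into an existing request and is now owned by the queue. */
static int attempt_merge(struct bio *bio)
{
        return bio->mergeable;
}

/* Models blk_mq_get_cached_request() after the fix. Three outcomes:
 *   1. a cached request is returned      -> use it for this bio
 *   2. the bio was merged, *bio cleared  -> nothing left to do
 *   3. NULL returned, *bio still set     -> allocate a new request
 * Before the fix, outcomes 2 and 3 were indistinguishable. */
static struct request *get_cached_request(struct request *cached,
                                          struct bio **bio)
{
        if (!cached)
                return NULL;            /* outcome 3, *bio untouched */
        if (attempt_merge(*bio)) {
                *bio = NULL;            /* outcome 2, bio already queued */
                return NULL;
        }
        return cached;                  /* outcome 1 */
}

/* Models the fixed blk_mq_submit_bio() caller logic. */
static void submit(struct request *cached, struct bio *bio)
{
        struct request *rq = get_cached_request(cached, &bio);

        if (!rq) {
                if (!bio) {             /* merged: must not allocate */
                        puts("bio merged, nothing to allocate");
                        return;
                }
                rq = malloc(sizeof(*rq));       /* new request path */
                if (!rq)
                        return;
        }
        printf("bio queued on %s request\n",
               rq == cached ? "cached" : "new");
        if (rq != cached)
                free(rq);
}

int main(void)
{
        struct request cached = { 0 };
        struct bio merged = { 1 }, fresh = { 0 };

        submit(&cached, &merged);       /* merge path, no allocation */
        submit(&cached, &fresh);        /* cached request reused */
        submit(NULL, &fresh);           /* falls back to a new request */
        return 0;
}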

Comments

Ming Lei Dec. 3, 2021, 2:36 a.m. UTC | #1
On Thu, Dec 02, 2021 at 12:47:41PM -0700, Jens Axboe wrote:
> When we attempt to merge off the cached request path, we return NULL
> if successful. This makes the caller believe that it should allocate
> a new request, and hence we end up with the bio both merged and
> associated with a new request. This, predictably, leads to all sorts
> of crashes.
> 
> Pass in a pointer to the bio pointer, and clear it for the merge case.
> Then the caller knows that the bio is already queued, and no new requests
> need to get allocated.
> 
> Fixes: 5b13bc8a3fd5 ("blk-mq: cleanup request allocation")
> Signed-off-by: Jens Axboe <axboe@kernel.dk>

The IO hang in the io_uring workload can't be triggered any more with
this patch:

Reviewed-by: Ming Lei <ming.lei@redhat.com>
Christoph Hellwig Dec. 3, 2021, 6:54 a.m. UTC | #2
Looks fine:

Reviewed-by: Christoph Hellwig <hch@lst.de>

Patch

diff --git a/block/blk-mq.c b/block/blk-mq.c
index ca33cb755c5f..fc4520e992b1 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -2731,7 +2731,7 @@ static struct request *blk_mq_get_new_requests(struct request_queue *q,
 }
 
 static inline struct request *blk_mq_get_cached_request(struct request_queue *q,
-		struct blk_plug *plug, struct bio *bio, unsigned int nsegs)
+		struct blk_plug *plug, struct bio **bio, unsigned int nsegs)
 {
 	struct request *rq;
 
@@ -2741,19 +2741,21 @@ static inline struct request *blk_mq_get_cached_request(struct request_queue *q,
 	if (!rq || rq->q != q)
 		return NULL;
 
-	if (unlikely(!submit_bio_checks(bio)))
+	if (unlikely(!submit_bio_checks(*bio)))
 		return NULL;
-	if (blk_mq_attempt_bio_merge(q, bio, nsegs))
+	if (blk_mq_attempt_bio_merge(q, *bio, nsegs)) {
+		*bio = NULL;
 		return NULL;
-	if (blk_mq_get_hctx_type(bio->bi_opf) != rq->mq_hctx->type)
+	}
+	if (blk_mq_get_hctx_type((*bio)->bi_opf) != rq->mq_hctx->type)
 		return NULL;
-	if (op_is_flush(rq->cmd_flags) != op_is_flush(bio->bi_opf))
+	if (op_is_flush(rq->cmd_flags) != op_is_flush((*bio)->bi_opf))
 		return NULL;
 
-	rq->cmd_flags = bio->bi_opf;
+	rq->cmd_flags = (*bio)->bi_opf;
 	plug->cached_rq = rq_list_next(rq);
 	INIT_LIST_HEAD(&rq->queuelist);
-	rq_qos_throttle(q, bio);
+	rq_qos_throttle(q, *bio);
 	return rq;
 }
 
@@ -2789,8 +2791,10 @@ void blk_mq_submit_bio(struct bio *bio)
 	if (!bio_integrity_prep(bio))
 		return;
 
-	rq = blk_mq_get_cached_request(q, plug, bio, nr_segs);
+	rq = blk_mq_get_cached_request(q, plug, &bio, nr_segs);
 	if (!rq) {
+		if (!bio)
+			return;
 		rq = blk_mq_get_new_requests(q, plug, bio, nr_segs);
 		if (unlikely(!rq))
 			return;