| Message ID | 20211103183222.180268-5-axboe@kernel.dk (mailing list archive) |
|---|---|
| State | New, archived |
| Series | Alloc batch fixes |
On Wed, Nov 03, 2021 at 12:32:22PM -0600, Jens Axboe wrote:
> +	if (plug && !rq_list_empty(plug->cached_rq)) {
> +		rq = rq_list_peek(&plug->cached_rq);

No need for the empty check plus peek.  This could be simplified
down to:

	if (!plug)
		return NULL;
	rq = rq_list_peek(&plug->cached_rq);
	if (!rq || rq->q != q)
		return NULL;

	rq_qos_throttle(q, bio);
	plug->cached_rq = rq_list_next(rq);
	INIT_LIST_HEAD(&rq->queuelist);
	return rq;
On 11/4/21 3:17 AM, Christoph Hellwig wrote:
> On Wed, Nov 03, 2021 at 12:32:22PM -0600, Jens Axboe wrote:
>> +	if (plug && !rq_list_empty(plug->cached_rq)) {
>> +		rq = rq_list_peek(&plug->cached_rq);
>
> No need for the empty check plus peek.  This could be simplified
> down to:
>
> 	if (!plug)
> 		return NULL;
> 	rq = rq_list_peek(&plug->cached_rq);
> 	if (!rq || rq->q != q)
> 		return NULL;
>
> 	rq_qos_throttle(q, bio);
> 	plug->cached_rq = rq_list_next(rq);
> 	INIT_LIST_HEAD(&rq->queuelist);
> 	return rq;

I tend to prefer having logic flow from the expected conditions. And
since we need to check if the plug is valid anyway, I prefer the current
logic.
On Thu, Nov 04, 2021 at 05:35:10AM -0600, Jens Axboe wrote:
> I tend to prefer having logic flow from the expected conditions. And
> since we need to check if the plug is valid anyway, I prefer the current
> logic.

Well, for 99% of the I/O not using a cached request is the expected
condition.
On 11/4/21 11:33 AM, Christoph Hellwig wrote:
> On Thu, Nov 04, 2021 at 05:35:10AM -0600, Jens Axboe wrote:
>> I tend to prefer having logic flow from the expected conditions. And
>> since we need to check if the plug is valid anyway, I prefer the current
>> logic.
>
> Well, for 99% of the I/O not using a cached request is the expected
> condition.

That may very well be the case right now, but as more things plug into
propagating the expected number of requests, that's likely to change.
That said, I did reflow it for you in the updated version.
diff --git a/block/blk-mq.c b/block/blk-mq.c
index 4bc98c7264fa..e92c36f2326a 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -2485,6 +2485,24 @@ static inline bool blk_mq_queue_enter(struct request_queue *q, struct bio *bio)
 	return true;
 }
 
+static inline struct request *blk_get_plug_request(struct request_queue *q,
+						   struct blk_plug *plug,
+						   struct bio *bio)
+{
+	struct request *rq;
+
+	if (plug && !rq_list_empty(plug->cached_rq)) {
+		rq = rq_list_peek(&plug->cached_rq);
+		if (rq->q == q) {
+			rq_qos_throttle(q, bio);
+			plug->cached_rq = rq_list_next(rq);
+			INIT_LIST_HEAD(&rq->queuelist);
+			return rq;
+		}
+	}
+	return NULL;
+}
+
 /**
  * blk_mq_submit_bio - Create and send a request to block device.
  * @bio: Bio pointer.
@@ -2523,11 +2541,8 @@ void blk_mq_submit_bio(struct bio *bio)
 	}
 
 	plug = blk_mq_plug(q, bio);
-	if (plug && plug->cached_rq) {
-		rq = rq_list_pop(&plug->cached_rq);
-		INIT_LIST_HEAD(&rq->queuelist);
-		rq_qos_throttle(q, bio);
-	} else {
+	rq = blk_get_plug_request(q, plug, bio);
+	if (!rq) {
 		struct blk_mq_alloc_data data = {
 			.q		= q,
 			.nr_tags	= 1,
We need to improve the logic here a bit, most importantly ensuring that
the request matches the current queue. If it doesn't, we cannot use it
and must fall back to normal request alloc.

Fixes: 47c122e35d7e ("block: pre-allocate requests if plug is started and is a batch")
Signed-off-by: Jens Axboe <axboe@kernel.dk>
---
 block/blk-mq.c | 25 ++++++++++++++++++++-----
 1 file changed, 20 insertions(+), 5 deletions(-)