Message ID | 20211104182201.83906-2-axboe@kernel.dk (mailing list archive)
---|---
State | New, archived
Series | Alloc batch fixes
On Thu, Nov 04, 2021 at 12:21:57PM -0600, Jens Axboe wrote:
> Requests that were stored in the cache deliberately didn't hold an enter
> reference to the queue, instead we grabbed one every time we pulled a
> request out of there. That made for awkward logic on freeing the remainder
> of the cached list, if needed, where we had to artificially raise the
> queue usage count before each free.
>
> Grab references up front for cached plug requests. That's safer, and also
> more efficient.

I think it would be useful to add your explanation of why the cached
requests should now be flushed at schedule time here.

Otherwise this looks good:

Reviewed-by: Christoph Hellwig <hch@lst.de>
On 11/4/21 12:34 PM, Christoph Hellwig wrote:
> On Thu, Nov 04, 2021 at 12:21:57PM -0600, Jens Axboe wrote:
>> Requests that were stored in the cache deliberately didn't hold an enter
>> reference to the queue, instead we grabbed one every time we pulled a
>> request out of there. That made for awkward logic on freeing the remainder
>> of the cached list, if needed, where we had to artificially raise the
>> queue usage count before each free.
>>
>> Grab references up front for cached plug requests. That's safer, and also
>> more efficient.
>
> I think it would be useful to add your explanation of why the cached
> requests should now be flushed at schedule time here.
>
> Otherwise this looks good:
>
> Reviewed-by: Christoph Hellwig <hch@lst.de>

Thanks, I'll add a comment when amending with your reviewed-by.
```diff
diff --git a/block/blk-core.c b/block/blk-core.c
index fd389a16013c..c2d267b6f910 100644
--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -1643,7 +1643,7 @@ void blk_flush_plug(struct blk_plug *plug, bool from_schedule)
 	flush_plug_callbacks(plug, from_schedule);
 	if (!rq_list_empty(plug->mq_list))
 		blk_mq_flush_plug_list(plug, from_schedule);
-	if (unlikely(!from_schedule && plug->cached_rq))
+	if (unlikely(!rq_list_empty(plug->cached_rq)))
 		blk_mq_free_plug_rqs(plug);
 }
 
diff --git a/block/blk-mq.c b/block/blk-mq.c
index c68aa0a332e1..5498454c2164 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -410,7 +410,10 @@ __blk_mq_alloc_requests_batch(struct blk_mq_alloc_data *data,
 		tag_mask &= ~(1UL << i);
 		rq = blk_mq_rq_ctx_init(data, tags, tag, alloc_time_ns);
 		rq_list_add(data->cached_rq, rq);
+		nr++;
 	}
+	/* caller already holds a reference, add for remainder */
+	percpu_ref_get_many(&data->q->q_usage_counter, nr - 1);
 	data->nr_tags -= nr;
 
 	return rq_list_pop(data->cached_rq);
@@ -630,10 +633,8 @@ void blk_mq_free_plug_rqs(struct blk_plug *plug)
 {
 	struct request *rq;
 
-	while ((rq = rq_list_pop(&plug->cached_rq)) != NULL) {
-		percpu_ref_get(&rq->q->q_usage_counter);
+	while ((rq = rq_list_pop(&plug->cached_rq)) != NULL)
 		blk_mq_free_request(rq);
-	}
 }
 
 static void req_bio_endio(struct request *rq, struct bio *bio,
```
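The plug cache the diff operates on is the intrusive singly-linked list behind the rq_list_add()/rq_list_pop()/rq_list_empty() helpers. A rough userspace sketch of that pattern follows; the toy_rq type and toy_* helpers are made up for illustration and are not the kernel macros:

```c
#include <stdio.h>
#include <stdlib.h>

struct toy_rq {
	struct toy_rq *next;
	int tag;
};

/* push at the head, like rq_list_add() */
static void toy_list_add(struct toy_rq **list, struct toy_rq *rq)
{
	rq->next = *list;
	*list = rq;
}

/* unlink and return the head, like rq_list_pop(); NULL when empty */
static struct toy_rq *toy_list_pop(struct toy_rq **list)
{
	struct toy_rq *rq = *list;

	if (rq)
		*list = rq->next;
	return rq;
}

int main(void)
{
	struct toy_rq *cached = NULL;	/* plays the role of plug->cached_rq */
	struct toy_rq *rq;

	for (int tag = 0; tag < 3; tag++) {
		rq = malloc(sizeof(*rq));
		rq->tag = tag;
		toy_list_add(&cached, rq);
	}

	while ((rq = toy_list_pop(&cached)) != NULL) {
		printf("popped cached request, tag %d\n", rq->tag);
		free(rq);
	}
	return 0;
}
```

Head push and head pop are both O(1), which is all the per-task plug cache needs.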
Requests that were stored in the cache deliberately didn't hold an enter
reference to the queue, instead we grabbed one every time we pulled a
request out of there. That made for awkward logic on freeing the remainder
of the cached list, if needed, where we had to artificially raise the
queue usage count before each free.

Grab references up front for cached plug requests. That's safer, and also
more efficient.

Fixes: 47c122e35d7e ("block: pre-allocate requests if plug is started and is a batch")
Signed-off-by: Jens Axboe <axboe@kernel.dk>
---
 block/blk-core.c | 2 +-
 block/blk-mq.c   | 7 ++++---
 2 files changed, 5 insertions(+), 4 deletions(-)
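To see how the accounting described above balances out, here is a small userspace model; toy_queue, batch_alloc() and free_request() are hypothetical stand-ins, not kernel APIs, while the real code takes the extra references with percpu_ref_get_many() on q_usage_counter and drops one per blk_mq_free_request():

```c
#include <stdio.h>
#include <stdatomic.h>

struct toy_queue {
	atomic_int usage;		/* stands in for q->q_usage_counter */
};

/*
 * The caller entered the queue once for the request it gets back; take
 * references for the rest of the batch, mirroring
 * percpu_ref_get_many(&data->q->q_usage_counter, nr - 1) in the diff.
 */
static void batch_alloc(struct toy_queue *q, int nr)
{
	atomic_fetch_add(&q->usage, nr - 1);
}

/* each request owns a reference, so freeing it just drops one */
static void free_request(struct toy_queue *q)
{
	atomic_fetch_sub(&q->usage, 1);
}

int main(void)
{
	struct toy_queue q = { .usage = 1 };	/* one reference from queue enter */

	batch_alloc(&q, 4);	/* one request returned, three left in the cache */
	printf("usage after batch alloc: %d\n", atomic_load(&q.usage));

	free_request(&q);	/* the request handed back to the caller */
	for (int i = 0; i < 3; i++)
		free_request(&q);	/* flushing the leftover plug cache */

	printf("usage after freeing everything: %d\n", atomic_load(&q.usage));
	return 0;
}
```

Because every cached request already owns a reference, the free path can drop straight to a plain put per request, which is what lets the old percpu_ref_get() before each free disappear from blk_mq_free_plug_rqs().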