| Message ID | 1626275195-215652-5-git-send-email-john.garry@huawei.com (mailing list archive) |
|---|---|
| State | New, archived |
| Series | blk-mq: Reduce static requests memory footprint for shared sbitmap |
On Wed, Jul 14, 2021 at 11:06:30PM +0800, John Garry wrote:
> Put the functionality to resize the sched shared sbitmap in a common
> function.
>
> Since the same formula is always used to resize, and it can be got from
> the request queue argument, so just pass the request queue pointer.
>
> Signed-off-by: John Garry <john.garry@huawei.com>
> ---
>  block/blk-mq-sched.c |  3 +--
>  block/blk-mq-tag.c   | 10 ++++++++++
>  block/blk-mq-tag.h   |  1 +
>  block/blk-mq.c       |  3 +--
>  4 files changed, 13 insertions(+), 4 deletions(-)
>
> diff --git a/block/blk-mq-sched.c b/block/blk-mq-sched.c
> index f5cb2931c20d..1e028183557d 100644
> --- a/block/blk-mq-sched.c
> +++ b/block/blk-mq-sched.c
> @@ -584,8 +584,7 @@ static int blk_mq_init_sched_shared_sbitmap(struct request_queue *queue)
>  					&queue->sched_breserved_tags;
>  	}
>  
> -	sbitmap_queue_resize(&queue->sched_bitmap_tags,
> -			     queue->nr_requests - set->reserved_tags);
> +	blk_mq_tag_resize_sched_shared_sbitmap(queue);
>  
>  	return 0;
>  }
> diff --git a/block/blk-mq-tag.c b/block/blk-mq-tag.c
> index 86f87346232a..55c7f1bf41c7 100644
> --- a/block/blk-mq-tag.c
> +++ b/block/blk-mq-tag.c
> @@ -634,6 +634,16 @@ void blk_mq_tag_resize_shared_sbitmap(struct blk_mq_tag_set *set, unsigned int s
>  	sbitmap_queue_resize(&set->__bitmap_tags, size - set->reserved_tags);
>  }
>  
> +/*
> + * We always resize with q->nr_requests - q->tag_set->reserved_tags, so
> + * don't bother passing a size.
> + */
> +void blk_mq_tag_resize_sched_shared_sbitmap(struct request_queue *q)
> +{
> +	sbitmap_queue_resize(&q->sched_bitmap_tags,
> +			     q->nr_requests - q->tag_set->reserved_tags);
> +}

It is a bit hard to follow the resize part of the name, since no size
parameter passed in. Maybe update is better?
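[Editorial note: for illustration only, not part of the posted patch. A minimal sketch of the rename Ming is hinting at, reusing the body from the patch above but calling the helper "update", since the new depth is always derived from the queue rather than passed in.]

/*
 * Hypothetical name suggested in review: "update" rather than "resize",
 * because the depth is always computed from the queue itself
 * (q->nr_requests - q->tag_set->reserved_tags) and no size is passed in.
 */
void blk_mq_tag_update_sched_shared_sbitmap(struct request_queue *q)
{
	sbitmap_queue_resize(&q->sched_bitmap_tags,
			     q->nr_requests - q->tag_set->reserved_tags);
}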
On 20/07/2021 08:57, Ming Lei wrote:
>> diff --git a/block/blk-mq-sched.c b/block/blk-mq-sched.c
>> index f5cb2931c20d..1e028183557d 100644
>> --- a/block/blk-mq-sched.c
>> +++ b/block/blk-mq-sched.c
>> @@ -584,8 +584,7 @@ static int blk_mq_init_sched_shared_sbitmap(struct request_queue *queue)
>>  					&queue->sched_breserved_tags;
>>  	}
>>  
>> -	sbitmap_queue_resize(&queue->sched_bitmap_tags,
>> -			     queue->nr_requests - set->reserved_tags);
>> +	blk_mq_tag_resize_sched_shared_sbitmap(queue);
>>  
>>  	return 0;
>>  }
>> diff --git a/block/blk-mq-tag.c b/block/blk-mq-tag.c
>> index 86f87346232a..55c7f1bf41c7 100644
>> --- a/block/blk-mq-tag.c
>> +++ b/block/blk-mq-tag.c
>> @@ -634,6 +634,16 @@ void blk_mq_tag_resize_shared_sbitmap(struct blk_mq_tag_set *set, unsigned int s
>>  	sbitmap_queue_resize(&set->__bitmap_tags, size - set->reserved_tags);
>>  }
>>  
>> +/*
>> + * We always resize with q->nr_requests - q->tag_set->reserved_tags, so
>> + * don't bother passing a size.
>> + */
>> +void blk_mq_tag_resize_sched_shared_sbitmap(struct request_queue *q)
>> +{
>> +	sbitmap_queue_resize(&q->sched_bitmap_tags,
>> +			     q->nr_requests - q->tag_set->reserved_tags);
>> +}
> It is a bit hard to follow the resize part of the name, since no size
> parameter passed in. Maybe update is better?

Right, this function is a bit odd. Maybe I can just have a size argument
for consistency with blk_mq_tag_resize_shared_sbitmap().

Thanks,
John
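[Editorial note: a sketch of the other option John mentions, equally hypothetical and not what the posted patch does. It keeps the "resize" name but takes a size argument, mirroring blk_mq_tag_resize_shared_sbitmap(), with callers passing the queue depth explicitly.]

/*
 * Hypothetical variant for consistency with blk_mq_tag_resize_shared_sbitmap():
 * accept the new queue depth and subtract the reserved tags internally.
 */
void blk_mq_tag_resize_sched_shared_sbitmap(struct request_queue *q,
					    unsigned int size)
{
	sbitmap_queue_resize(&q->sched_bitmap_tags,
			     size - q->tag_set->reserved_tags);
}

A caller such as blk_mq_update_nr_requests() would then pass the depth
explicitly, e.g. blk_mq_tag_resize_sched_shared_sbitmap(q, nr).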
diff --git a/block/blk-mq-sched.c b/block/blk-mq-sched.c
index f5cb2931c20d..1e028183557d 100644
--- a/block/blk-mq-sched.c
+++ b/block/blk-mq-sched.c
@@ -584,8 +584,7 @@ static int blk_mq_init_sched_shared_sbitmap(struct request_queue *queue)
 					&queue->sched_breserved_tags;
 	}
 
-	sbitmap_queue_resize(&queue->sched_bitmap_tags,
-			     queue->nr_requests - set->reserved_tags);
+	blk_mq_tag_resize_sched_shared_sbitmap(queue);
 
 	return 0;
 }
diff --git a/block/blk-mq-tag.c b/block/blk-mq-tag.c
index 86f87346232a..55c7f1bf41c7 100644
--- a/block/blk-mq-tag.c
+++ b/block/blk-mq-tag.c
@@ -634,6 +634,16 @@ void blk_mq_tag_resize_shared_sbitmap(struct blk_mq_tag_set *set, unsigned int s
 	sbitmap_queue_resize(&set->__bitmap_tags, size - set->reserved_tags);
 }
 
+/*
+ * We always resize with q->nr_requests - q->tag_set->reserved_tags, so
+ * don't bother passing a size.
+ */
+void blk_mq_tag_resize_sched_shared_sbitmap(struct request_queue *q)
+{
+	sbitmap_queue_resize(&q->sched_bitmap_tags,
+			     q->nr_requests - q->tag_set->reserved_tags);
+}
+
 /**
  * blk_mq_unique_tag() - return a tag that is unique queue-wide
  * @rq: request for which to compute a unique tag
diff --git a/block/blk-mq-tag.h b/block/blk-mq-tag.h
index 8ed55af08427..3a7495e47fb4 100644
--- a/block/blk-mq-tag.h
+++ b/block/blk-mq-tag.h
@@ -48,6 +48,7 @@ extern int blk_mq_tag_update_depth(struct blk_mq_hw_ctx *hctx,
 					unsigned int depth, bool can_grow);
 extern void blk_mq_tag_resize_shared_sbitmap(struct blk_mq_tag_set *set,
 					     unsigned int size);
+extern void blk_mq_tag_resize_sched_shared_sbitmap(struct request_queue *q);
 
 extern void blk_mq_tag_wakeup_all(struct blk_mq_tags *tags, bool);
 void blk_mq_queue_tag_busy_iter(struct request_queue *q, busy_iter_fn *fn,
diff --git a/block/blk-mq.c b/block/blk-mq.c
index 56e3c6fdba60..b0d4197d36c7 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -3643,8 +3643,7 @@ int blk_mq_update_nr_requests(struct request_queue *q, unsigned int nr)
 	q->nr_requests = nr;
 	if (blk_mq_is_sbitmap_shared(set->flags)) {
 		if (q->elevator) {
-			sbitmap_queue_resize(&q->sched_bitmap_tags,
-					     nr - set->reserved_tags);
+			blk_mq_tag_resize_sched_shared_sbitmap(q);
 		} else {
 			blk_mq_tag_resize_shared_sbitmap(set, nr);
 		}
Put the functionality to resize the sched shared sbitmap in a common
function.

Since the same formula is always used to resize, and it can be got from
the request queue argument, so just pass the request queue pointer.

Signed-off-by: John Garry <john.garry@huawei.com>
---
 block/blk-mq-sched.c |  3 +--
 block/blk-mq-tag.c   | 10 ++++++++++
 block/blk-mq-tag.h   |  1 +
 block/blk-mq.c       |  3 +--
 4 files changed, 13 insertions(+), 4 deletions(-)