Message ID | 1626275195-215652-3-git-send-email-john.garry@huawei.com (mailing list archive) |
---|---|
State | New, archived |
Series | blk-mq: Reduce static requests memory footprint for shared sbitmap |
On Wed, Jul 14, 2021 at 11:06:28PM +0800, John Garry wrote:
> It is a bit confusing that there is BLKDEV_MAX_RQ and MAX_SCHED_RQ, as
> the name BLKDEV_MAX_RQ would imply the max requests always, which it is
> not.
>
> Rename to BLKDEV_MAX_RQ to BLKDEV_DEFAULT_RQ, matching its usage - that being
> the default number of requests assigned when allocating a request queue.
>
> Signed-off-by: John Garry <john.garry@huawei.com>
> ---
>  block/blk-core.c       | 2 +-
>  block/blk-mq-sched.c   | 2 +-
>  block/blk-mq-sched.h   | 2 +-
>  drivers/block/rbd.c    | 2 +-
>  include/linux/blkdev.h | 2 +-
>  5 files changed, 5 insertions(+), 5 deletions(-)
>
> diff --git a/block/blk-core.c b/block/blk-core.c
> index 04477697ee4b..5d71382b6131 100644
> --- a/block/blk-core.c
> +++ b/block/blk-core.c
> @@ -579,7 +579,7 @@ struct request_queue *blk_alloc_queue(int node_id)
>
>  	blk_queue_dma_alignment(q, 511);
>  	blk_set_default_limits(&q->limits);
> -	q->nr_requests = BLKDEV_MAX_RQ;
> +	q->nr_requests = BLKDEV_DEFAULT_RQ;

The above assignment isn't necessary since a bio-based queue doesn't use
->nr_requests. For a request-based queue, ->nr_requests will be re-set in
either blk_mq_init_sched() or blk_mq_init_allocated_queue(), but that may
not be related to this patch itself.

>
>  	return q;
>
> diff --git a/block/blk-mq-sched.c b/block/blk-mq-sched.c
> index c838d81ac058..f5cb2931c20d 100644
> --- a/block/blk-mq-sched.c
> +++ b/block/blk-mq-sched.c
> @@ -615,7 +615,7 @@ int blk_mq_init_sched(struct request_queue *q, struct elevator_type *e)
>  	 * Additionally, this is a per-hw queue depth.
>  	 */
>  	q->nr_requests = 2 * min_t(unsigned int, q->tag_set->queue_depth,
> -				   BLKDEV_MAX_RQ);
> +				   BLKDEV_DEFAULT_RQ);
>
>  	queue_for_each_hw_ctx(q, hctx, i) {
>  		ret = blk_mq_sched_alloc_tags(q, hctx, i);
> diff --git a/block/blk-mq-sched.h b/block/blk-mq-sched.h
> index 5246ae040704..1e46be6c5178 100644
> --- a/block/blk-mq-sched.h
> +++ b/block/blk-mq-sched.h
> @@ -5,7 +5,7 @@
>  #include "blk-mq.h"
>  #include "blk-mq-tag.h"
>
> -#define MAX_SCHED_RQ (16 * BLKDEV_MAX_RQ)
> +#define MAX_SCHED_RQ (16 * BLKDEV_DEFAULT_RQ)
>
>  void blk_mq_sched_assign_ioc(struct request *rq);
>
> diff --git a/drivers/block/rbd.c b/drivers/block/rbd.c
> index 531d390902dd..d3f329749173 100644
> --- a/drivers/block/rbd.c
> +++ b/drivers/block/rbd.c
> @@ -836,7 +836,7 @@ struct rbd_options {
>  	u32 alloc_hint_flags;  /* CEPH_OSD_OP_ALLOC_HINT_FLAG_* */
>  };
>
> -#define RBD_QUEUE_DEPTH_DEFAULT BLKDEV_MAX_RQ
> +#define RBD_QUEUE_DEPTH_DEFAULT BLKDEV_DEFAULT_RQ
>  #define RBD_ALLOC_SIZE_DEFAULT	(64 * 1024)
>  #define RBD_LOCK_TIMEOUT_DEFAULT 0  /* no timeout */
>  #define RBD_READ_ONLY_DEFAULT	false
> diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
> index 3177181c4326..6a64ea23f552 100644
> --- a/include/linux/blkdev.h
> +++ b/include/linux/blkdev.h
> @@ -45,7 +45,7 @@ struct blk_stat_callback;
>  struct blk_keyslot_manager;
>
>  #define BLKDEV_MIN_RQ	4
> -#define BLKDEV_MAX_RQ	128	/* Default maximum */
> +#define BLKDEV_DEFAULT_RQ	128
>
>  /* Must be consistent with blk_mq_poll_stats_bkt() */
>  #define BLK_MQ_POLL_STATS_BKTS	16
> --
> 2.26.2

Looks fine,

Reviewed-by: Ming Lei <ming.lei@redhat.com>
It is a bit confusing that there is BLKDEV_MAX_RQ and MAX_SCHED_RQ, as the
name BLKDEV_MAX_RQ would imply the max requests always, which it is not.

Rename BLKDEV_MAX_RQ to BLKDEV_DEFAULT_RQ, matching its usage - that being
the default number of requests assigned when allocating a request queue.

Signed-off-by: John Garry <john.garry@huawei.com>
---
 block/blk-core.c       | 2 +-
 block/blk-mq-sched.c   | 2 +-
 block/blk-mq-sched.h   | 2 +-
 drivers/block/rbd.c    | 2 +-
 include/linux/blkdev.h | 2 +-
 5 files changed, 5 insertions(+), 5 deletions(-)