
[V2,01/20] block: move blk_mq_add_queue_tag_set() after blk_mq_map_swqueue()

Message ID 20250418163708.442085-2-ming.lei@redhat.com (mailing list archive)
State New
Series block: unify elevator changing and fix lockdep warning

Commit Message

Ming Lei April 18, 2025, 4:36 p.m. UTC
Move blk_mq_add_queue_tag_set() after blk_mq_map_swqueue(), and publish
this request queue to the tagset only after everything is set up.

This is safe because BLK_MQ_F_TAG_QUEUE_SHARED isn't used by
blk_mq_map_swqueue(); the flag is mainly checked in the fast IO code
path.

Prepare for removing ->elevator_lock from blk_mq_map_swqueue(), which
is supposed to be called when an elevator switch isn't possible.

Reported-by: Nilay Shroff <nilay@linux.ibm.com>
Closes: https://lore.kernel.org/linux-block/567cb7ab-23d6-4cee-a915-c8cdac903ddd@linux.ibm.com/
Signed-off-by: Ming Lei <ming.lei@redhat.com>
---
 block/blk-mq.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
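
For context, here is a hedged sketch (simplified, not the verbatim kernel
source; error paths and the surrounding setup are omitted) of how the tail of
blk_mq_init_allocated_queue() reads after this patch, annotated with why the
new ordering is safe:

	/* simplified sketch of the call order after this patch */
	blk_mq_init_cpu_queues(q, set->nr_hw_queues);

	/*
	 * Build the software -> hardware queue mapping first.  This step
	 * does not look at BLK_MQ_F_TAG_QUEUE_SHARED, so it does not care
	 * whether the queue is already visible on the tag set.
	 */
	blk_mq_map_swqueue(q);

	/*
	 * Only now publish the queue on the tag set's list.  If the tag set
	 * becomes shared at this point, the QUEUE_SHARED flag is observed by
	 * the fast IO path, not by the mapping step above.
	 */
	blk_mq_add_queue_tag_set(set, q);
	return 0;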

Comments

Yu Kuai April 19, 2025, 8:57 a.m. UTC | #1
Hi,

On 2025/04/19 0:36, Ming Lei wrote:
> Move blk_mq_add_queue_tag_set() after blk_mq_map_swqueue(), and publish
> this request queue to the tagset only after everything is set up.
> 
> This is safe because BLK_MQ_F_TAG_QUEUE_SHARED isn't used by
> blk_mq_map_swqueue(); the flag is mainly checked in the fast IO code
> path.
> 
> Prepare for removing ->elevator_lock from blk_mq_map_swqueue(), which
> is supposed to be called when an elevator switch isn't possible.

I think you mean *is* possible? Or is it to protect against switching
the elevator concurrently?
> 
> Reported-by: Nilay Shroff <nilay@linux.ibm.com>
> Closes: https://lore.kernel.org/linux-block/567cb7ab-23d6-4cee-a915-c8cdac903ddd@linux.ibm.com/
> Signed-off-by: Ming Lei <ming.lei@redhat.com>
> ---
>   block/blk-mq.c | 2 +-
>   1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/block/blk-mq.c b/block/blk-mq.c
> index e0fe12f1320f..7cda919fafba 100644
> --- a/block/blk-mq.c
> +++ b/block/blk-mq.c
> @@ -4561,8 +4561,8 @@ int blk_mq_init_allocated_queue(struct blk_mq_tag_set *set,
>   	q->nr_requests = set->queue_depth;
>   
>   	blk_mq_init_cpu_queues(q, set->nr_hw_queues);
> -	blk_mq_add_queue_tag_set(set, q);
>   	blk_mq_map_swqueue(q);
> +	blk_mq_add_queue_tag_set(set, q);
>   	return 0;
>   
>   err_hctxs:
>
Nilay Shroff April 19, 2025, 10:25 a.m. UTC | #2
On 4/18/25 10:06 PM, Ming Lei wrote:
> Move blk_mq_add_queue_tag_set() after blk_mq_map_swqueue(), and publish
> this request queue to the tagset only after everything is set up.
> 
> This is safe because BLK_MQ_F_TAG_QUEUE_SHARED isn't used by
> blk_mq_map_swqueue(); the flag is mainly checked in the fast IO code
> path.
> 
> Prepare for removing ->elevator_lock from blk_mq_map_swqueue(), which
> is supposed to be called when an elevator switch isn't possible.
> 
> Reported-by: Nilay Shroff <nilay@linux.ibm.com>
> Closes: https://lore.kernel.org/linux-block/567cb7ab-23d6-4cee-a915-c8cdac903ddd@linux.ibm.com/
> Signed-off-by: Ming Lei <ming.lei@redhat.com>
> ---
>  block/blk-mq.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)

Looks good to me:
Reviewed-by: Nilay Shroff <nilay@linux.ibm.com>

Patch

diff --git a/block/blk-mq.c b/block/blk-mq.c
index e0fe12f1320f..7cda919fafba 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -4561,8 +4561,8 @@  int blk_mq_init_allocated_queue(struct blk_mq_tag_set *set,
 	q->nr_requests = set->queue_depth;
 
 	blk_mq_init_cpu_queues(q, set->nr_hw_queues);
-	blk_mq_add_queue_tag_set(set, q);
 	blk_mq_map_swqueue(q);
+	blk_mq_add_queue_tag_set(set, q);
 	return 0;
 
 err_hctxs:
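
The safety argument hinges on where BLK_MQ_F_TAG_QUEUE_SHARED is actually
consumed. Below is a minimal illustration of that kind of fast-path check,
with hypothetical helper and field names (an assumption-laden sketch, not the
kernel's real tag-allocation code): when several request queues share one tag
set, each queue is limited to a fair share of the tags at tag-allocation time
on the submission path, while queue mapping never runs this logic.

	/*
	 * Hypothetical sketch: fair-share check on the IO submission path.
	 * Helper name, nr_active_queues field, and the exact limit formula
	 * are assumptions for illustration only.
	 */
	static bool queue_may_take_tag(struct blk_mq_hw_ctx *hctx,
				       unsigned int tags_in_use)
	{
		unsigned int users, depth;

		if (!(hctx->flags & BLK_MQ_F_TAG_QUEUE_SHARED))
			return true;	/* tags private to this queue, no limit */

		/* hypothetical accounting of queues sharing this tag set */
		users = READ_ONCE(hctx->nr_active_queues);
		if (!users)
			return true;

		/* cap this queue at roughly its fair share of the tag space */
		depth = max_t(unsigned int,
			      hctx->queue->nr_requests / users, 4U);
		return tags_in_use < depth;
	}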