Message ID | 20230606011438.3743440-1-yukuai1@huaweicloud.com (mailing list archive)
State      | New, archived
Series     | [-next] blk-ioc: fix recursive spin_lock/unlock_irq() in ioc_clear_queue()
Looks good:
Reviewed-by: Christoph Hellwig <hch@lst.de>
On Tue, 06 Jun 2023 09:14:38 +0800, Yu Kuai wrote:
> Recursive spin_lock/unlock_irq() is not safe, because spin_unlock_irq()
> will enable irq unconditionally:
>
> spin_lock_irq queue_lock	-> disable irq
> spin_lock_irq ioc->lock
> spin_unlock_irq ioc->lock	-> enable irq
> /*
>  * AA dead lock will be triggered if current context is preempted by irq,
>  * and irq try to hold queue_lock again.
>  */
> spin_unlock_irq queue_lock
>
> [...]

Applied, thanks!

[1/1] blk-ioc: fix recursive spin_lock/unlock_irq() in ioc_clear_queue()
      commit: a7cfa0af0c88353b4eb59db5a2a0fbe35329b3f9

Best regards,
diff --git a/block/blk-ioc.c b/block/blk-ioc.c
index d5db92e62c43..25dd4db11121 100644
--- a/block/blk-ioc.c
+++ b/block/blk-ioc.c
@@ -179,9 +179,9 @@ void ioc_clear_queue(struct request_queue *q)
 		 * Other context won't hold ioc lock to wait for queue_lock, see
 		 * details in ioc_release_fn().
 		 */
-		spin_lock_irq(&icq->ioc->lock);
+		spin_lock(&icq->ioc->lock);
 		ioc_destroy_icq(icq);
-		spin_unlock_irq(&icq->ioc->lock);
+		spin_unlock(&icq->ioc->lock);
 	}
 	spin_unlock_irq(&q->queue_lock);
 }