Message ID | 20210808031752.579882-2-yukuai3@huawei.com (mailing list archive)
---|---
State | New, archived
Series | fix request uaf in nbd_read_stat()
On 8/7/21 8:17 PM, Yu Kuai wrote:
> +void blk_mq_tags_lock(struct blk_mq_tags *tags, unsigned long *flags)
> +{
> +	spin_lock_irqsave(&tags->lock, *flags);
> +}
> +EXPORT_SYMBOL(blk_mq_tags_lock);
> +
> +void blk_mq_tags_unlock(struct blk_mq_tags *tags, unsigned long *flags)
> +{
> +	spin_unlock_irqrestore(&tags->lock, *flags);
> +}
> +EXPORT_SYMBOL(blk_mq_tags_unlock);

The tag map lock is an implementation detail and hence this lock must
not be used directly by block drivers. I propose to introduce and export
a new function to block drivers that does the following:
* Lock tags->lock.
* Call blk_mq_tag_to_rq().
* Check whether the request is in the started state. If so, increment
  its reference count.
* Unlock tags->lock.

Thanks,

Bart.
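[Editor's note: Bart's four bullets could be sketched roughly as below. This is a hedged illustration, not code from the patch series: the function name blk_mq_find_and_get_req() and the use of blk_mq_request_started() and refcount_inc_not_zero(&rq->ref) as the "started" check and reference increment are assumptions.]

```c
/*
 * Hypothetical helper along the lines Bart proposes: translate a tag
 * into a request while holding tags->lock so the request cannot be
 * freed concurrently, and only return it (with a reference held) if
 * it has actually been started.
 */
struct request *blk_mq_find_and_get_req(struct blk_mq_tags *tags,
					unsigned int tag)
{
	struct request *rq;
	unsigned long flags;

	spin_lock_irqsave(&tags->lock, flags);
	rq = blk_mq_tag_to_rq(tags, tag);
	/* Reject tags that map to no request, an unstarted request,
	 * or a request whose reference count already dropped to zero. */
	if (rq && (!blk_mq_request_started(rq) ||
		   !refcount_inc_not_zero(&rq->ref)))
		rq = NULL;
	spin_unlock_irqrestore(&tags->lock, flags);

	return rq;
}
```

The point of the shape above is that the lookup, the state check, and the reference increment all happen atomically under tags->lock, so a driver never sees a request that the block layer is concurrently freeing.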
On 2021/08/09 0:44, Bart Van Assche wrote:
> On 8/7/21 8:17 PM, Yu Kuai wrote:
>> +void blk_mq_tags_lock(struct blk_mq_tags *tags, unsigned long *flags)
>> +{
>> +	spin_lock_irqsave(&tags->lock, *flags);
>> +}
>> +EXPORT_SYMBOL(blk_mq_tags_lock);
>> +
>> +void blk_mq_tags_unlock(struct blk_mq_tags *tags, unsigned long *flags)
>> +{
>> +	spin_unlock_irqrestore(&tags->lock, *flags);
>> +}
>> +EXPORT_SYMBOL(blk_mq_tags_unlock);
>
> The tag map lock is an implementation detail and hence this lock must
> not be used directly by block drivers. I propose to introduce and export
> a new function to block drivers that does the following:
> * Lock tags->lock.
> * Call blk_mq_tag_to_rq().
> * Check whether the request is in the started state. If so, increment
>   its reference count.
> * Unlock tags->lock.

Hi, Bart

Thanks for your advice, will do this in the next iteration.

Best regards
Kuai
diff --git a/block/blk-mq-tag.c b/block/blk-mq-tag.c
index 86f87346232a..388d447c993a 100644
--- a/block/blk-mq-tag.c
+++ b/block/blk-mq-tag.c
@@ -652,3 +652,15 @@ u32 blk_mq_unique_tag(struct request *rq)
 		(rq->tag & BLK_MQ_UNIQUE_TAG_MASK);
 }
 EXPORT_SYMBOL(blk_mq_unique_tag);
+
+void blk_mq_tags_lock(struct blk_mq_tags *tags, unsigned long *flags)
+{
+	spin_lock_irqsave(&tags->lock, *flags);
+}
+EXPORT_SYMBOL(blk_mq_tags_lock);
+
+void blk_mq_tags_unlock(struct blk_mq_tags *tags, unsigned long *flags)
+{
+	spin_unlock_irqrestore(&tags->lock, *flags);
+}
+EXPORT_SYMBOL(blk_mq_tags_unlock);
diff --git a/include/linux/blk-mq.h b/include/linux/blk-mq.h
index 1d18447ebebc..b4bad4d6a3a8 100644
--- a/include/linux/blk-mq.h
+++ b/include/linux/blk-mq.h
@@ -635,4 +635,6 @@ blk_qc_t blk_mq_submit_bio(struct bio *bio);
 void blk_mq_hctx_set_fq_lock_class(struct blk_mq_hw_ctx *hctx,
 		struct lock_class_key *key);
 
+void blk_mq_tags_lock(struct blk_mq_tags *tags, unsigned long *flags);
+void blk_mq_tags_unlock(struct blk_mq_tags *tags, unsigned long *flags);
 #endif
Ming Lei fixed the request use-after-free that could occur while
iterating tags. However, some drivers call blk_mq_tag_to_rq() directly
to look up a request by tag, so the problem may still exist for them,
since blk_mq_tags->lock should be held across such a lookup.

Add blk_mq_tags_lock() and blk_mq_tags_unlock() so that drivers can
take and release blk_mq_tags->lock when they cannot guarantee that the
request is still valid.

Signed-off-by: Yu Kuai <yukuai3@huawei.com>
---
 block/blk-mq-tag.c     | 12 ++++++++++++
 include/linux/blk-mq.h |  2 ++
 2 files changed, 14 insertions(+)
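[Editor's note: with the two exported helpers from this patch, a driver-side tag lookup could be guarded as in the sketch below. This is an illustration, not code from nbd or from the series; the function name and the blk_mq_request_started() check are assumptions about how a caller would validate the request.]

```c
/*
 * Hypothetical driver helper: take tags->lock around the tag-to-request
 * translation so the lookup cannot race with the request being freed,
 * then reject anything that is not a started (in-flight) request.
 */
static struct request *demo_tag_to_rq(struct blk_mq_tags *tags, u32 tag)
{
	struct request *req;
	unsigned long flags;

	blk_mq_tags_lock(tags, &flags);
	req = blk_mq_tag_to_rq(tags, tag);
	if (req && !blk_mq_request_started(req))
		req = NULL;	/* stale tag: no live request behind it */
	blk_mq_tags_unlock(tags, &flags);

	return req;
}
```

Note that, as Bart points out in the review above, any reference the driver needs on the returned request would also have to be taken while the lock is still held, otherwise the window the lock closes simply reopens after the unlock.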