Message ID | 20230222185224.2484590-1-ushankar@purestorage.com (mailing list archive)
---|---
State | New, archived
Series | [v2] blk-mq: enforce op-specific segment limits in blk_insert_cloned_request
On Wed, Feb 22, 2023 at 11:52:25AM -0700, Uday Shankar wrote:
>  static inline unsigned int blk_rq_get_max_segments(struct request *rq)
>  {
> -	if (req_op(rq) == REQ_OP_DISCARD)
> -		return queue_max_discard_segments(rq->q);
> -	return queue_max_segments(rq->q);
> +	return blk_queue_get_max_segments(rq->q, req_op(rq));
>  }

I think you should just move this function to blk.h instead of
introducing a new one.
On Wed, Feb 22, 2023 at 12:16:02PM -0700, Keith Busch wrote:
> On Wed, Feb 22, 2023 at 11:52:25AM -0700, Uday Shankar wrote:
> >  static inline unsigned int blk_rq_get_max_segments(struct request *rq)
> >  {
> > -	if (req_op(rq) == REQ_OP_DISCARD)
> > -		return queue_max_discard_segments(rq->q);
> > -	return queue_max_segments(rq->q);
> > +	return blk_queue_get_max_segments(rq->q, req_op(rq));
> >  }
>
> I think you should just move this function to blk.h instead of
> introducing a new one.

I chose to add blk_queue_get_max_segments as a public function because
it parallels blk_queue_get_max_sectors. If you don't want two functions,
I could manually inline the (2) uses of blk_rq_get_max_segments(rq),
converting them to blk_queue_get_max_segments(rq->q, req_op(rq)).
On Thu, Feb 23, 2023 at 12:34:46PM -0700, Uday Shankar wrote:
> I chose to add blk_queue_get_max_segments as a public function because
> it parallels blk_queue_get_max_sectors. If you don't want two functions,
> I could manually inline the (2) uses of blk_rq_get_max_segments(rq),
> converting them to blk_queue_get_max_segments(rq->q, req_op(rq)).

I'd be much happier with a single function that takes a request instead
of two decoded arguments. This should not be a public API in any form.
diff --git a/block/blk-merge.c b/block/blk-merge.c
index b7c193d67..7f663c2d3 100644
--- a/block/blk-merge.c
+++ b/block/blk-merge.c
@@ -588,9 +588,7 @@ EXPORT_SYMBOL(__blk_rq_map_sg);
 
 static inline unsigned int blk_rq_get_max_segments(struct request *rq)
 {
-	if (req_op(rq) == REQ_OP_DISCARD)
-		return queue_max_discard_segments(rq->q);
-	return queue_max_segments(rq->q);
+	return blk_queue_get_max_segments(rq->q, req_op(rq));
 }
 
 static inline unsigned int blk_rq_get_max_sectors(struct request *rq,
diff --git a/block/blk-mq.c b/block/blk-mq.c
index d3494a796..b053a9255 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -3000,6 +3000,7 @@ blk_status_t blk_insert_cloned_request(struct request *rq)
 {
 	struct request_queue *q = rq->q;
 	unsigned int max_sectors = blk_queue_get_max_sectors(q, req_op(rq));
+	unsigned int max_segments = blk_queue_get_max_segments(q, req_op(rq));
 	blk_status_t ret;
 
 	if (blk_rq_sectors(rq) > max_sectors) {
@@ -3026,9 +3027,9 @@ blk_status_t blk_insert_cloned_request(struct request *rq)
 	 * original queue.
 	 */
 	rq->nr_phys_segments = blk_recalc_rq_segments(rq);
-	if (rq->nr_phys_segments > queue_max_segments(q)) {
-		printk(KERN_ERR "%s: over max segments limit. (%hu > %hu)\n",
-			__func__, rq->nr_phys_segments, queue_max_segments(q));
+	if (rq->nr_phys_segments > max_segments) {
+		printk(KERN_ERR "%s: over max segments limit. (%u > %u)\n",
+			__func__, rq->nr_phys_segments, max_segments);
 		return BLK_STS_IOERR;
 	}
 
diff --git a/block/blk.h b/block/blk.h
index f02381405..8d705c13a 100644
--- a/block/blk.h
+++ b/block/blk.h
@@ -169,6 +169,14 @@ static inline unsigned int blk_queue_get_max_sectors(struct request_queue *q,
 	return q->limits.max_sectors;
 }
 
+static inline unsigned int blk_queue_get_max_segments(struct request_queue *q,
+						      enum req_op op)
+{
+	if (op == REQ_OP_DISCARD)
+		return queue_max_discard_segments(q);
+	return queue_max_segments(q);
+}
+
 #ifdef CONFIG_BLK_DEV_INTEGRITY
 void blk_flush_integrity(void);
 bool __bio_integrity_endio(struct bio *);
The block layer might merge together discard requests up until the
max_discard_segments limit is hit, but blk_insert_cloned_request checks
the segment count against max_segments regardless of the req op. This
can result in errors like the following when discards are issued
through a DM device and max_discard_segments exceeds max_segments for
the queue of the chosen underlying device.

blk_insert_cloned_request: over max segments limit. (256 > 129)

Fix this by looking at the req_op and enforcing the appropriate segment
limit - max_discard_segments for REQ_OP_DISCARDs and max_segments for
everything else.

Signed-off-by: Uday Shankar <ushankar@purestorage.com>
---
Changes since v1:
- Fixed format specifier type mismatch
  Reported-by: kernel test robot <lkp@intel.com>
  Link: https://lore.kernel.org/oe-kbuild-all/202302162040.FaI25ul2-lkp@intel.com/

 block/blk-merge.c | 4 +---
 block/blk-mq.c    | 7 ++++---
 block/blk.h       | 8 ++++++++
 3 files changed, 13 insertions(+), 6 deletions(-)

base-commit: 6bea9ac7c6481c09eb2b61d7cd844fc64a526e3e