Message ID: 20230706201433.3987617-1-bvanassche@acm.org
State: New, archived
Series: block: Do not merge if merging is disabled
On 7/7/23 05:14, Bart Van Assche wrote:
> While testing the performance impact of zoned write pipelining, I
> noticed that merging happens even if merging has been disabled via
> sysfs. Fix this.
>
> Cc: Christoph Hellwig <hch@lst.de>
> Cc: Ming Lei <ming.lei@redhat.com>
> Cc: Damien Le Moal <dlemoal@kernel.org>
> Signed-off-by: Bart Van Assche <bvanassche@acm.org>

Looks good.

Reviewed-by: Damien Le Moal <dlemoal@kernel.org>

> ---
>  block/blk-mq-sched.c | 3 ++-
>  1 file changed, 2 insertions(+), 1 deletion(-)
>
> diff --git a/block/blk-mq-sched.c b/block/blk-mq-sched.c
> index 67c95f31b15b..8883721f419a 100644
> --- a/block/blk-mq-sched.c
> +++ b/block/blk-mq-sched.c
> @@ -375,7 +375,8 @@ bool blk_mq_sched_bio_merge(struct request_queue *q, struct bio *bio,
>  bool blk_mq_sched_try_insert_merge(struct request_queue *q, struct request *rq,
>  				   struct list_head *free)
>  {
> -	return rq_mergeable(rq) && elv_attempt_insert_merge(q, rq, free);
> +	return !blk_queue_nomerges(q) && rq_mergeable(rq) &&
> +		elv_attempt_insert_merge(q, rq, free);
>  }
>  EXPORT_SYMBOL_GPL(blk_mq_sched_try_insert_merge);
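For context, rq_mergeable() only looks at properties of the request itself,
not at the queue's nomerges setting. A sketch of its checks, as found in the
block layer headers around this kernel version (abbreviated; the exact set
of checks may differ between versions):

static inline bool rq_mergeable(struct request *rq)
{
	/* Passthrough, flush, write-zeroes and zone-append requests
	 * are never merge candidates.
	 */
	if (blk_rq_is_passthrough(rq))
		return false;

	if (req_op(rq) == REQ_OP_FLUSH)
		return false;

	if (req_op(rq) == REQ_OP_WRITE_ZEROES)
		return false;

	if (req_op(rq) == REQ_OP_ZONE_APPEND)
		return false;

	/* Per-request no-merge flags also rule out merging. */
	if (rq->cmd_flags & REQ_NOMERGE_FLAGS)
		return false;
	if (rq->rq_flags & RQF_NOMERGE_FLAGS)
		return false;

	return true;
}

Note that nothing here consults blk_queue_nomerges(q), which is why the
patch adds that test explicitly in the caller.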
On Thu, Jul 06, 2023 at 01:14:33PM -0700, Bart Van Assche wrote:
> While testing the performance impact of zoned write pipelining, I
> noticed that merging happens even if merging has been disabled via
> sysfs. Fix this.
>
> Cc: Christoph Hellwig <hch@lst.de>
> Cc: Ming Lei <ming.lei@redhat.com>
> Cc: Damien Le Moal <dlemoal@kernel.org>
> Signed-off-by: Bart Van Assche <bvanassche@acm.org>
> ---
>  block/blk-mq-sched.c | 3 ++-
>  1 file changed, 2 insertions(+), 1 deletion(-)
>
> diff --git a/block/blk-mq-sched.c b/block/blk-mq-sched.c
> index 67c95f31b15b..8883721f419a 100644
> --- a/block/blk-mq-sched.c
> +++ b/block/blk-mq-sched.c
> @@ -375,7 +375,8 @@ bool blk_mq_sched_bio_merge(struct request_queue *q, struct bio *bio,
>  bool blk_mq_sched_try_insert_merge(struct request_queue *q, struct request *rq,
>  				   struct list_head *free)
>  {
> -	return rq_mergeable(rq) && elv_attempt_insert_merge(q, rq, free);
> +	return !blk_queue_nomerges(q) && rq_mergeable(rq) &&
> +		elv_attempt_insert_merge(q, rq, free);
>  }
>  EXPORT_SYMBOL_GPL(blk_mq_sched_try_insert_merge);

elv_attempt_insert_merge() does check blk_queue_nomerges() at its entry,
so this patch fixes nothing.

Given that blk_mq_sched_try_insert_merge() is only called from bfq and
deadline, it may not matter much whether this optimization is applied.

Thanks,
Ming
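For reference, the check Ming refers to sits at the top of
elv_attempt_insert_merge() in block/elevator.c; an abbreviated sketch
(exact body may differ by kernel version):

bool elv_attempt_insert_merge(struct request_queue *q, struct request *rq,
			      struct list_head *free)
{
	/* Bail out early when merging has been disabled via sysfs. */
	if (blk_queue_nomerges(q))
		return false;

	/* ... back-merge attempt and elevator merge lookups follow ... */
}

Because of this early return, the extra blk_queue_nomerges() test in the
caller can at most save the cost of the function call itself.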
On 7/6/23 17:38, Ming Lei wrote:
> Given that blk_mq_sched_try_insert_merge() is only called from bfq and
> deadline, it may not matter much whether this optimization is applied.

Without this patch, the documentation of the "nomerges" sysfs attribute
is incorrect. I need this patch because I want the ability to disable
merging even if an I/O scheduler has been selected. As mentioned in the
patch description, I discovered this while writing a shell script that
submits various I/O workloads to a block device.

Bart.
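For background, writing to /sys/block/<dev>/queue/nomerges toggles queue
flags that the merge paths test. The relevant helpers look roughly like
this (illustrative sketch based on include/linux/blkdev.h; the exact
definitions vary by version):

/* nomerges == 2 sets QUEUE_FLAG_NOMERGES (no merging at all);
 * nomerges == 1 sets only QUEUE_FLAG_NOXMERGES (no extended,
 * i.e. lookup-based, merges).
 */
#define blk_queue_nomerges(q)	test_bit(QUEUE_FLAG_NOMERGES, &(q)->queue_flags)
#define blk_queue_noxmerges(q)	test_bit(QUEUE_FLAG_NOXMERGES, &(q)->queue_flags)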
On 7/7/23 10:50, Bart Van Assche wrote:
> On 7/6/23 17:38, Ming Lei wrote:
>> Given that blk_mq_sched_try_insert_merge() is only called from bfq and
>> deadline, it may not matter much whether this optimization is applied.
>
> Without this patch, the documentation of the "nomerges" sysfs attribute
> is incorrect. I need this patch because I want the ability to disable
> merging even if an I/O scheduler has been selected. As mentioned in the
> patch description, I discovered this while writing a shell script that
> submits various I/O workloads to a block device.

Ming's point still stands, I think: blk_queue_nomerges(q) is the first
thing checked in elv_attempt_insert_merge(), so your patch should be a
no-op and disabling merging through sysfs should already be effective.
Why does your patch change anything?

Moving the rq_mergeable(rq) check inside elv_attempt_insert_merge()
would also make a lot of sense, I think. With that,
blk_mq_sched_try_insert_merge() would be reduced to calling only
elv_attempt_insert_merge(), which means that
blk_mq_sched_try_insert_merge() could go away.
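A hypothetical sketch of that refactor (not part of the posted patch;
names as in mainline at the time):

/* elv_attempt_insert_merge() absorbs the rq_mergeable() check, so the
 * blk_mq_sched_try_insert_merge() wrapper no longer adds anything.
 */
bool elv_attempt_insert_merge(struct request_queue *q, struct request *rq,
			      struct list_head *free)
{
	if (blk_queue_nomerges(q) || !rq_mergeable(rq))
		return false;

	/* ... existing back-merge and elevator merge attempts ... */
}

Callers such as bfq and mq-deadline could then invoke
elv_attempt_insert_merge() directly, and blk_mq_sched_try_insert_merge()
could be deleted.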
On 7/6/23 20:34, Damien Le Moal wrote:
> Ming's point still stands, I think: blk_queue_nomerges(q) is the first
> thing checked in elv_attempt_insert_merge(), so your patch should be a
> no-op and disabling merging through sysfs should already be effective.
> Why does your patch change anything?
>
> Moving the rq_mergeable(rq) check inside elv_attempt_insert_merge()
> would also make a lot of sense, I think. With that,
> blk_mq_sched_try_insert_merge() would be reduced to calling only
> elv_attempt_insert_merge(), which means that
> blk_mq_sched_try_insert_merge() could go away.

Let's drop this patch. Since it was developed against an older kernel,
let me check whether it is perhaps only needed for the kernel version
it was developed against.

Thanks,

Bart.
While testing the performance impact of zoned write pipelining, I
noticed that merging happens even if merging has been disabled via
sysfs. Fix this.

Cc: Christoph Hellwig <hch@lst.de>
Cc: Ming Lei <ming.lei@redhat.com>
Cc: Damien Le Moal <dlemoal@kernel.org>
Signed-off-by: Bart Van Assche <bvanassche@acm.org>
---
 block/blk-mq-sched.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/block/blk-mq-sched.c b/block/blk-mq-sched.c
index 67c95f31b15b..8883721f419a 100644
--- a/block/blk-mq-sched.c
+++ b/block/blk-mq-sched.c
@@ -375,7 +375,8 @@ bool blk_mq_sched_bio_merge(struct request_queue *q, struct bio *bio,
 bool blk_mq_sched_try_insert_merge(struct request_queue *q, struct request *rq,
 				   struct list_head *free)
 {
-	return rq_mergeable(rq) && elv_attempt_insert_merge(q, rq, free);
+	return !blk_queue_nomerges(q) && rq_mergeable(rq) &&
+		elv_attempt_insert_merge(q, rq, free);
 }
 EXPORT_SYMBOL_GPL(blk_mq_sched_try_insert_merge);