[7/8] block/bfq: skip expensive merge lookups if contended

Message ID 20240123174021.1967461-8-axboe@kernel.dk (mailing list archive)
State New, archived
Series [1/8] block/mq-deadline: pass in queue directly to dd_insert_request()

Commit Message

Jens Axboe Jan. 23, 2024, 5:34 p.m. UTC
We do several stages of merging in the block layer - the most likely one
to work is also the cheapest one: merging directly in the per-task plug
when IO is submitted. Getting merges outside of that is a lot less
likely, but IO schedulers may still maintain internal data structures to
facilitate merge lookups outside of the plug.
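
As a point of reference, the plug stage is cheap because the plug list is
per-task and hence needs no locking at all. A minimal userspace sketch of
that idea (types and names here are made up for illustration, not the
kernel API):

#include <stdbool.h>

/* Toy model of a plugged request, for illustration only. */
struct toy_req {
	unsigned long long sector;
	unsigned int nr_sectors;
	struct toy_req *next;
};

/* Per-task plug list: thread-local, so walking it needs no lock. */
static __thread struct toy_req *plug_list;

/* The cheap stage: try to back-merge a new bio into a plugged request. */
static bool plug_back_merge(unsigned long long sector, unsigned int nr_sectors)
{
	struct toy_req *req;

	for (req = plug_list; req; req = req->next) {
		if (req->sector + req->nr_sectors == sector) {
			req->nr_sectors += nr_sectors;
			return true;
		}
	}
	return false;
}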

Make BFQ skip expensive merge lookups if the queue lock or bfqd lock is
already contended. The likelihood of getting a merge here is not very
high, so it should not be a problem to skip the attempt in the equally
unlikely event that either lock is already contended.
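
The core of the change is an opportunistic trylock: if the lock cannot be
taken immediately, give up on the merge attempt rather than spin. A rough
userspace analogue using a pthread spinlock (the patch itself uses
spin_trylock_irq() on q->queue_lock and bfqd->lock, as in the diff below):

#include <pthread.h>
#include <stdbool.h>

struct toy_sched {
	pthread_spinlock_t lock;
	/* ... internal merge lookup structures ... */
};

/* Stand-in for the costly lookup the scheduler would normally do. */
static bool expensive_merge_lookup(struct toy_sched *sd)
{
	return false;
}

/*
 * Opportunistic merge attempt: if the lock is already held, skip the
 * lookup entirely instead of waiting for it - the plug already had the
 * best shot at merging this bio.
 */
static bool try_bio_merge(struct toy_sched *sd)
{
	bool merged;

	if (pthread_spin_trylock(&sd->lock) != 0)
		return false;

	merged = expensive_merge_lookup(sd);
	pthread_spin_unlock(&sd->lock);
	return merged;
}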

Perf diff shows the difference for a random read/write workload with 4
threads doing IO, with expensive merges turned on versus off:

    31.70%    +54.80%  [kernel.kallsyms]  [k] queued_spin_lock_slowpath

where we almost triple the lock contention (~32% -> ~87%) by attempting
these expensive merges, and performance drops from 1630K to 1050K IOPS.
At the same time, sys time drops from 37% to 14%.

Signed-off-by: Jens Axboe <axboe@kernel.dk>
---
 block/bfq-iosched.c | 30 ++++++++++++++++++++++++++++--
 1 file changed, 28 insertions(+), 2 deletions(-)

Comments

Bart Van Assche Jan. 23, 2024, 6:44 p.m. UTC | #1
On 1/23/24 09:34, Jens Axboe wrote:
> where we almost tripple the lock contention (~32% -> ~87%) by attempting
                   ^^^^^^^
                   triple?

> +	/*
> +	 * bio merging is called for every bio queued, and it's very easy
> +	 * to run into contention because of that. If we fail getting
> +	 * the dd lock, just skip this merge attempt. For related IO, the
                ^^
               bfqd?
> +	 * plug will be the successful merging point. If we get here, we
> +	 * already failed doing the obvious merge. Chances of actually
> +	 * getting a merge off this path is a lot slimmer, so skipping an
> +	 * occassional lookup that will most likely not succeed anyway should
> +	 * not be a problem.
> +	 */

Otherwise this patch looks good to me.

Thanks,

Bart.
Jens Axboe Jan. 23, 2024, 7:14 p.m. UTC | #2
On 1/23/24 11:44 AM, Bart Van Assche wrote:
> On 1/23/24 09:34, Jens Axboe wrote:
>> where we almost tripple the lock contention (~32% -> ~87%) by attempting
>                   ^^^^^^^
>                   triple?
> 
>> +    /*
>> +     * bio merging is called for every bio queued, and it's very easy
>> +     * to run into contention because of that. If we fail getting
>> +     * the dd lock, just skip this merge attempt. For related IO, the
>                ^^
>               bfqd?
>> +     * plug will be the successful merging point. If we get here, we
>> +     * already failed doing the obvious merge. Chances of actually
>> +     * getting a merge off this path is a lot slimmer, so skipping an
>> +     * occassional lookup that will most likely not succeed anyway should
>> +     * not be a problem.
>> +     */
> 
> Otherwise this patch looks good to me.

Thanks, will update both.

Patch

diff --git a/block/bfq-iosched.c b/block/bfq-iosched.c
index 5ef4a4eba572..ea16a0c53082 100644
--- a/block/bfq-iosched.c
+++ b/block/bfq-iosched.c
@@ -467,6 +467,21 @@  static struct bfq_io_cq *bfq_bic_lookup(struct request_queue *q)
 	return icq;
 }
 
+static struct bfq_io_cq *bfq_bic_try_lookup(struct request_queue *q)
+{
+	if (!current->io_context)
+		return NULL;
+	if (spin_trylock_irq(&q->queue_lock)) {
+		struct bfq_io_cq *icq;
+
+		icq = icq_to_bic(ioc_lookup_icq(q));
+		spin_unlock_irq(&q->queue_lock);
+		return icq;
+	}
+
+	return NULL;
+}
+
 /*
  * Scheduler run of queue, if there are requests pending and no one in the
  * driver that will restart queueing.
@@ -2454,10 +2469,21 @@  static bool bfq_bio_merge(struct request_queue *q, struct bio *bio,
 	 * returned by bfq_bic_lookup does not go away before
 	 * bfqd->lock is taken.
 	 */
-	struct bfq_io_cq *bic = bfq_bic_lookup(q);
+	struct bfq_io_cq *bic = bfq_bic_try_lookup(q);
 	bool ret;
 
-	spin_lock_irq(&bfqd->lock);
+	/*
+	 * bio merging is called for every bio queued, and it's very easy
+	 * to run into contention because of that. If we fail getting
+	 * the bfqd lock, just skip this merge attempt. For related IO, the
+	 * plug will be the successful merging point. If we get here, we
+	 * already failed doing the obvious merge. Chances of actually
+	 * getting a merge off this path are a lot slimmer, so skipping an
+	 * occasional lookup that will most likely not succeed anyway should
+	 * not be a problem.
+	 */
+	if (!spin_trylock_irq(&bfqd->lock))
+		return false;
 
 	if (bic) {
 		/*