From patchwork Thu Nov 25 13:36:34 2021
X-Patchwork-Id: 12639121
From: Jan Kara
To: Jens Axboe
Cc: Paolo Valente, Jan Kara
Subject: [PATCH 1/8] block: Provide blk_mq_sched_get_icq()
Date: Thu, 25 Nov 2021 14:36:34 +0100
Message-Id: <20211125133645.27483-1-jack@suse.cz>
In-Reply-To: <20211125133131.14018-1-jack@suse.cz>
References: <20211125133131.14018-1-jack@suse.cz>

Currently we look up the ICQ only after the request is allocated. However,
BFQ will want to decide how many scheduler tags it allows a given bfq queue
(effectively a process) to consume based on cgroup weight. So provide a
function blk_mq_sched_get_icq() so that BFQ can look up the ICQ earlier.

Acked-by: Paolo Valente
Signed-off-by: Jan Kara
---
 block/blk-mq-sched.c | 26 +++++++++++++++-----------
 block/blk-mq-sched.h |  1 +
 2 files changed, 16 insertions(+), 11 deletions(-)

diff --git a/block/blk-mq-sched.c b/block/blk-mq-sched.c
index b942b38000e5..98c6a97729f2 100644
--- a/block/blk-mq-sched.c
+++ b/block/blk-mq-sched.c
@@ -18,9 +18,8 @@
 #include "blk-mq-tag.h"
 #include "blk-wbt.h"
 
-void blk_mq_sched_assign_ioc(struct request *rq)
+struct io_cq *blk_mq_sched_get_icq(struct request_queue *q)
 {
-        struct request_queue *q = rq->q;
         struct io_context *ioc;
         struct io_cq *icq;
 
@@ -28,22 +27,27 @@ void blk_mq_sched_assign_ioc(struct request *rq)
         if (unlikely(!current->io_context))
                 create_task_io_context(current, GFP_ATOMIC, q->node);
 
-        /*
-         * May not have an IO context if it's a passthrough request
-         */
+        /* May not have an IO context if context creation failed */
         ioc = current->io_context;
         if (!ioc)
-                return;
+                return NULL;
 
         spin_lock_irq(&q->queue_lock);
         icq = ioc_lookup_icq(ioc, q);
         spin_unlock_irq(&q->queue_lock);
+        if (icq)
+                return icq;
+        return ioc_create_icq(ioc, q, GFP_ATOMIC);
+}
+EXPORT_SYMBOL(blk_mq_sched_get_icq);
 
-        if (!icq) {
-                icq = ioc_create_icq(ioc, q, GFP_ATOMIC);
-                if (!icq)
-                        return;
-        }
+void blk_mq_sched_assign_ioc(struct request *rq)
+{
+        struct io_cq *icq;
+
+        icq = blk_mq_sched_get_icq(rq->q);
+        if (!icq)
+                return;
         get_io_context(icq->ioc);
         rq->elv.icq = icq;
 }
diff --git a/block/blk-mq-sched.h b/block/blk-mq-sched.h
index 25d1034952b6..add651ec06da 100644
--- a/block/blk-mq-sched.h
+++ b/block/blk-mq-sched.h
@@ -8,6 +8,7 @@
 
 #define MAX_SCHED_RQ (16 * BLKDEV_DEFAULT_RQ)
 
+struct io_cq *blk_mq_sched_get_icq(struct request_queue *q);
 void blk_mq_sched_assign_ioc(struct request *rq);
 
 bool blk_mq_sched_try_merge(struct request_queue *q, struct bio *bio,
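
For orientation, a minimal usage sketch (not part of this patch): scheduler code
that has no struct request yet can now look up (or create) the io_cq directly.
The icq_to_bic()/bic_to_bfqq() conversions are the BFQ-specific ones a later
patch in this series uses in bfq_limit_depth(); example_lookup_bfqq() is a
hypothetical name used only for illustration.

/* Hypothetical illustration only, not part of this series' diffs. */
static struct bfq_queue *example_lookup_bfqq(struct request_queue *q,
                                             unsigned int op)
{
        /* May create the io_context/icq; returns NULL if that fails. */
        struct io_cq *icq = blk_mq_sched_get_icq(q);

        if (!icq)
                return NULL;
        /* BFQ-specific conversion, as done later in bfq_limit_depth(). */
        return bic_to_bfqq(icq_to_bic(icq), op_is_sync(op));
}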
From patchwork Thu Nov 25 13:36:35 2021
X-Patchwork-Id: 12639127
From: Jan Kara
To: Jens Axboe
Cc: Paolo Valente, Jan Kara
Subject: [PATCH 2/8] bfq: Track number of allocated requests in bfq_entity
Date: Thu, 25 Nov 2021 14:36:35 +0100
Message-Id: <20211125133645.27483-2-jack@suse.cz>
In-Reply-To: <20211125133131.14018-1-jack@suse.cz>
References: <20211125133131.14018-1-jack@suse.cz>

When we want to limit the number of requests used by each bfqq and also by
each cgroup, we need to track the number of requests used by each cgroup as
well. So track the number of allocated requests for each bfq_entity.

Acked-by: Paolo Valente
Signed-off-by: Jan Kara
---
 block/bfq-iosched.c | 28 ++++++++++++++++++++++------
 block/bfq-iosched.h |  5 +++--
 2 files changed, 25 insertions(+), 8 deletions(-)

diff --git a/block/bfq-iosched.c b/block/bfq-iosched.c
index 1ce1a99a7160..1d564499614e 100644
--- a/block/bfq-iosched.c
+++ b/block/bfq-iosched.c
@@ -1113,7 +1113,8 @@ bfq_bfqq_resume_state(struct bfq_queue *bfqq, struct bfq_data *bfqd,
 static int bfqq_process_refs(struct bfq_queue *bfqq)
 {
-        return bfqq->ref - bfqq->allocated - bfqq->entity.on_st_or_in_serv -
+        return bfqq->ref - bfqq->entity.allocated -
+                bfqq->entity.on_st_or_in_serv -
                 (bfqq->weight_counter != NULL) - bfqq->stable_ref;
 }
 
@@ -5878,6 +5879,22 @@ static void bfq_rq_enqueued(struct bfq_data *bfqd, struct bfq_queue *bfqq,
         }
 }
 
+static void bfqq_request_allocated(struct bfq_queue *bfqq)
+{
+        struct bfq_entity *entity = &bfqq->entity;
+
+        for_each_entity(entity)
+                entity->allocated++;
+}
+
+static void bfqq_request_freed(struct bfq_queue *bfqq)
+{
+        struct bfq_entity *entity = &bfqq->entity;
+
+        for_each_entity(entity)
+                entity->allocated--;
+}
+
 /* returns true if it causes the idle timer to be disabled */
 static bool __bfq_insert_request(struct bfq_data *bfqd, struct request *rq)
 {
@@ -5891,8 +5908,8 @@ static bool __bfq_insert_request(struct bfq_data *bfqd, struct request *rq)
                  * Release the request's reference to the old bfqq
                  * and make sure one is taken to the shared queue.
                  */
-                new_bfqq->allocated++;
-                bfqq->allocated--;
+                bfqq_request_allocated(new_bfqq);
+                bfqq_request_freed(bfqq);
                 new_bfqq->ref++;
                 /*
                  * If the bic associated with the process
@@ -6251,8 +6268,7 @@ static void bfq_completed_request(struct bfq_queue *bfqq, struct bfq_data *bfqd)
 
 static void bfq_finish_requeue_request_body(struct bfq_queue *bfqq)
 {
-        bfqq->allocated--;
-
+        bfqq_request_freed(bfqq);
         bfq_put_queue(bfqq);
 }
 
@@ -6674,7 +6690,7 @@ static struct bfq_queue *bfq_init_rq(struct request *rq)
                 }
         }
 
-        bfqq->allocated++;
+        bfqq_request_allocated(bfqq);
         bfqq->ref++;
         bfq_log_bfqq(bfqd, bfqq, "get_request %p: bfqq %p, %d",
                      rq, bfqq, bfqq->ref);
diff --git a/block/bfq-iosched.h b/block/bfq-iosched.h
index a73488eec8a4..3787cfb0febb 100644
--- a/block/bfq-iosched.h
+++ b/block/bfq-iosched.h
@@ -170,6 +170,9 @@ struct bfq_entity {
         /* budget, used also to calculate F_i: F_i = S_i + @budget / @weight */
         int budget;
 
+        /* Number of requests allocated in the subtree of this entity */
+        int allocated;
+
         /* device weight, if non-zero, it overrides the default weight of
          * bfq_group_data */
         int dev_weight;
@@ -266,8 +269,6 @@ struct bfq_queue {
         struct request *next_rq;
         /* number of sync and async requests queued */
         int queued[2];
-        /* number of requests currently allocated */
-        int allocated;
         /* number of pending metadata requests */
         int meta_pending;
         /* fifo list of requests in sort_list */
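
As a rough illustration (hypothetical numbers, not from the patch): with
CONFIG_BFQ_GROUP_IOSCHED, for_each_entity() walks from the bfqq's entity up
through its parent group entities, so if a bfqq in cgroup "fast" has 5
allocated requests and a sibling bfqq in the same cgroup has 4, the two bfqq
entities report allocated == 5 and 4 respectively while the group's entity
reports allocated == 9. Without group scheduling the walk visits only the
bfqq's own entity, which then behaves like the old per-queue counter.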
From patchwork Thu Nov 25 13:36:36 2021
X-Patchwork-Id: 12639119
From: Jan Kara
To: Jens Axboe
Cc: Paolo Valente, Jan Kara
Subject: [PATCH 3/8] bfq: Store full bitmap depth in bfq_data
Date: Thu, 25 Nov 2021 14:36:36 +0100
Message-Id: <20211125133645.27483-3-jack@suse.cz>
In-Reply-To: <20211125133131.14018-1-jack@suse.cz>
References: <20211125133131.14018-1-jack@suse.cz>

Store the bitmap depth shift inside bfq_data so that we can use it in
bfq_limit_depth() for proportioning when limiting the number of available
request tags for a cgroup.

Acked-by: Paolo Valente
Signed-off-by: Jan Kara
---
 block/bfq-iosched.c | 10 ++++++----
 block/bfq-iosched.h |  1 +
 2 files changed, 7 insertions(+), 4 deletions(-)

diff --git a/block/bfq-iosched.c b/block/bfq-iosched.c
index 1d564499614e..cf9247301e3c 100644
--- a/block/bfq-iosched.c
+++ b/block/bfq-iosched.c
@@ -6857,7 +6857,9 @@ static unsigned int bfq_update_depths(struct bfq_data *bfqd,
                                       struct sbitmap_queue *bt)
 {
         unsigned int i, j, min_shallow = UINT_MAX;
+        unsigned int depth = 1U << bt->sb.shift;
 
+        bfqd->full_depth_shift = bt->sb.shift;
         /*
          * In-word depths if no bfq_queue is being weight-raised:
          * leaving 25% of tags only for sync reads.
@@ -6869,13 +6871,13 @@ static unsigned int bfq_update_depths(struct bfq_data *bfqd,
          * limit 'something'.
          */
         /* no more than 50% of tags for async I/O */
-        bfqd->word_depths[0][0] = max((1U << bt->sb.shift) >> 1, 1U);
+        bfqd->word_depths[0][0] = max(depth >> 1, 1U);
         /*
          * no more than 75% of tags for sync writes (25% extra tags
          * w.r.t. async I/O, to prevent async I/O from starving sync
          * writes)
          */
-        bfqd->word_depths[0][1] = max(((1U << bt->sb.shift) * 3) >> 2, 1U);
+        bfqd->word_depths[0][1] = max((depth * 3) >> 2, 1U);
 
         /*
          * In-word depths in case some bfq_queue is being weight-
@@ -6885,9 +6887,9 @@ static unsigned int bfq_update_depths(struct bfq_data *bfqd,
          * shortage.
          */
         /* no more than ~18% of tags for async I/O */
-        bfqd->word_depths[1][0] = max(((1U << bt->sb.shift) * 3) >> 4, 1U);
+        bfqd->word_depths[1][0] = max((depth * 3) >> 4, 1U);
         /* no more than ~37% of tags for sync writes (~20% extra tags) */
-        bfqd->word_depths[1][1] = max(((1U << bt->sb.shift) * 6) >> 4, 1U);
+        bfqd->word_depths[1][1] = max((depth * 6) >> 4, 1U);
 
         for (i = 0; i < 2; i++)
                 for (j = 0; j < 2; j++)
diff --git a/block/bfq-iosched.h b/block/bfq-iosched.h
index 3787cfb0febb..820cb8c2d1fe 100644
--- a/block/bfq-iosched.h
+++ b/block/bfq-iosched.h
@@ -769,6 +769,7 @@ struct bfq_data {
          * function)
          */
         unsigned int word_depths[2][2];
+        unsigned int full_depth_shift;
 };
 
 enum bfqq_state_flags {
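
To make the depth numbers concrete (illustrative values, assuming a scheduler
tag bitmap with bt->sb.shift == 6, i.e. a word depth of 64): word_depths[0][0]
= 64 >> 1 = 32, word_depths[0][1] = (64 * 3) >> 2 = 48, word_depths[1][0] =
(64 * 3) >> 4 = 12 and word_depths[1][1] = (64 * 6) >> 4 = 24, while
full_depth_shift stores 6 so that later patches can convert these word depths
back into a fraction of the queue's total number of requests.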
From patchwork Thu Nov 25 13:36:37 2021
X-Patchwork-Id: 12639125
From: Jan Kara
To: Jens Axboe
Cc: Paolo Valente, Jan Kara, Michal Koutný
Subject: [PATCH 4/8] bfq: Limit number of requests consumed by each cgroup
Date: Thu, 25 Nov 2021 14:36:37 +0100
Message-Id: <20211125133645.27483-4-jack@suse.cz>
In-Reply-To: <20211125133131.14018-1-jack@suse.cz>
References: <20211125133131.14018-1-jack@suse.cz>

When cgroup IO scheduling is used with BFQ, it does not really provide
service differentiation if the cgroup drives a big IO depth. That happens,
for example, with writeback, which asynchronously submits lots of IO, but it
can happen with AIO as well. The problem is that if we have two cgroups that
submit IO with different weights, the cgroup with the higher weight properly
gets more IO time and is able to dispatch more IO. However, this causes the
lower weight cgroup to accumulate more requests inside BFQ, and eventually
the lower weight cgroup consumes most of the IO scheduler tags. At that point
the higher weight cgroup stops getting better service, as it is mostly
blocked waiting for a scheduler tag while its queues inside BFQ are empty and
thus the lower weight cgroup gets served.

Check in bfq_limit_depth() how many requests the submitting cgroup has
allocated and, if it consumes more requests than what would correspond to its
weight, limit the available depth to 1 so that the cgroup cannot consume many
more requests. With this limitation the higher weight cgroup gets proper
service even with writeback.

Reviewed-by: Michal Koutný
Acked-by: Paolo Valente
Signed-off-by: Jan Kara
---
 block/bfq-iosched.c | 137 ++++++++++++++++++++++++++++++++++++++------
 1 file changed, 118 insertions(+), 19 deletions(-)

diff --git a/block/bfq-iosched.c b/block/bfq-iosched.c
index cf9247301e3c..95a19d1fbedf 100644
--- a/block/bfq-iosched.c
+++ b/block/bfq-iosched.c
@@ -565,26 +565,134 @@ static struct request *bfq_choose_req(struct bfq_data *bfqd,
         }
 }
 
+#define BFQ_LIMIT_INLINE_DEPTH 16
+
+#ifdef CONFIG_BFQ_GROUP_IOSCHED
+static bool bfqq_request_over_limit(struct bfq_queue *bfqq, int limit)
+{
+        struct bfq_data *bfqd = bfqq->bfqd;
+        struct bfq_entity *entity = &bfqq->entity;
+        struct bfq_entity *inline_entities[BFQ_LIMIT_INLINE_DEPTH];
+        struct bfq_entity **entities = inline_entities;
+        int depth, level;
+        int class_idx = bfqq->ioprio_class - 1;
+        struct bfq_sched_data *sched_data;
+        unsigned long wsum;
+        bool ret = false;
+
+        if (!entity->on_st_or_in_serv)
+                return false;
+
+        /* +1 for bfqq entity, root cgroup not included */
+        depth = bfqg_to_blkg(bfqq_group(bfqq))->blkcg->css.cgroup->level + 1;
+        if (depth > BFQ_LIMIT_INLINE_DEPTH) {
+                entities = kmalloc_array(depth, sizeof(*entities), GFP_NOIO);
+                if (!entities)
+                        return false;
+        }
+
+        spin_lock_irq(&bfqd->lock);
+        sched_data = entity->sched_data;
+        /* Gather our ancestors as we need to traverse them in reverse order */
+        level = 0;
+        for_each_entity(entity) {
+                /*
+                 * If at some level entity is not even active, allow request
+                 * queueing so that BFQ knows there's work to do and activate
+                 * entities.
+                 */
+                if (!entity->on_st_or_in_serv)
+                        goto out;
+                /* Uh, more parents than cgroup subsystem thinks? */
+                if (WARN_ON_ONCE(level >= depth))
+                        break;
+                entities[level++] = entity;
+        }
+        WARN_ON_ONCE(level != depth);
+        for (level--; level >= 0; level--) {
+                entity = entities[level];
+                if (level > 0) {
+                        wsum = bfq_entity_service_tree(entity)->wsum;
+                } else {
+                        int i;
+                        /*
+                         * For bfqq itself we take into account service trees
+                         * of all higher priority classes and multiply their
+                         * weights so that low prio queue from higher class
+                         * gets more requests than high prio queue from lower
+                         * class.
+                         */
+                        wsum = 0;
+                        for (i = 0; i <= class_idx; i++) {
+                                wsum = wsum * IOPRIO_BE_NR +
+                                        sched_data->service_tree[i].wsum;
+                        }
+                }
+                limit = DIV_ROUND_CLOSEST(limit * entity->weight, wsum);
+                if (entity->allocated >= limit) {
+                        bfq_log_bfqq(bfqq->bfqd, bfqq,
+                                "too many requests: allocated %d limit %d level %d",
+                                entity->allocated, limit, level);
+                        ret = true;
+                        break;
+                }
+        }
+out:
+        spin_unlock_irq(&bfqd->lock);
+        if (entities != inline_entities)
+                kfree(entities);
+        return ret;
+}
+#else
+static bool bfqq_request_over_limit(struct bfq_queue *bfqq, int limit)
+{
+        return false;
+}
+#endif
+
 /*
  * Async I/O can easily starve sync I/O (both sync reads and sync
  * writes), by consuming all tags. Similarly, storms of sync writes,
  * such as those that sync(2) may trigger, can starve sync reads.
  * Limit depths of async I/O and sync writes so as to counter both
  * problems.
+ *
+ * Also if a bfq queue or its parent cgroup consume more tags than would be
+ * appropriate for their weight, we trim the available tag depth to 1. This
+ * avoids a situation where one cgroup can starve another cgroup from tags and
+ * thus block service differentiation among cgroups. Note that because the
+ * queue / cgroup already has many requests allocated and queued, this does not
+ * significantly affect service guarantees coming from the BFQ scheduling
+ * algorithm.
  */
 static void bfq_limit_depth(unsigned int op, struct blk_mq_alloc_data *data)
 {
         struct bfq_data *bfqd = data->q->elevator->elevator_data;
+        struct bfq_io_cq *bic = icq_to_bic(blk_mq_sched_get_icq(data->q));
+        struct bfq_queue *bfqq = bic ? bic_to_bfqq(bic, op_is_sync(op)) : NULL;
+        int depth;
+        unsigned limit = data->q->nr_requests;
+
+        /* Sync reads have full depth available */
+        if (op_is_sync(op) && !op_is_write(op)) {
+                depth = 0;
+        } else {
+                depth = bfqd->word_depths[!!bfqd->wr_busy_queues][op_is_sync(op)];
+                limit = (limit * depth) >> bfqd->full_depth_shift;
+        }
 
-        if (op_is_sync(op) && !op_is_write(op))
-                return;
-
-        data->shallow_depth =
-                bfqd->word_depths[!!bfqd->wr_busy_queues][op_is_sync(op)];
+        /*
+         * Does queue (or any parent entity) exceed number of requests that
+         * should be available to it? Heavily limit depth so that it cannot
+         * consume more available requests and thus starve other entities.
+         */
+        if (bfqq && bfqq_request_over_limit(bfqq, limit))
+                depth = 1;
 
         bfq_log(bfqd, "[%s] wr_busy %d sync %d depth %u",
-                        __func__, bfqd->wr_busy_queues, op_is_sync(op),
-                        data->shallow_depth);
+                __func__, bfqd->wr_busy_queues, op_is_sync(op), depth);
+        if (depth)
+                data->shallow_depth = depth;
 }
 
 static struct bfq_queue *
@@ -6853,10 +6961,8 @@ void bfq_put_async_queues(struct bfq_data *bfqd, struct bfq_group *bfqg)
  * See the comments on bfq_limit_depth for the purpose of
  * the depths set in the function. Return minimum shallow depth we'll use.
  */
-static unsigned int bfq_update_depths(struct bfq_data *bfqd,
-                                      struct sbitmap_queue *bt)
+static void bfq_update_depths(struct bfq_data *bfqd, struct sbitmap_queue *bt)
 {
-        unsigned int i, j, min_shallow = UINT_MAX;
         unsigned int depth = 1U << bt->sb.shift;
 
         bfqd->full_depth_shift = bt->sb.shift;
@@ -6890,22 +6996,15 @@ static unsigned int bfq_update_depths(struct bfq_data *bfqd,
         bfqd->word_depths[1][0] = max((depth * 3) >> 4, 1U);
         /* no more than ~37% of tags for sync writes (~20% extra tags) */
         bfqd->word_depths[1][1] = max((depth * 6) >> 4, 1U);
-
-        for (i = 0; i < 2; i++)
-                for (j = 0; j < 2; j++)
-                        min_shallow = min(min_shallow, bfqd->word_depths[i][j]);
-
-        return min_shallow;
 }
 
 static void bfq_depth_updated(struct blk_mq_hw_ctx *hctx)
 {
         struct bfq_data *bfqd = hctx->queue->elevator->elevator_data;
         struct blk_mq_tags *tags = hctx->sched_tags;
-        unsigned int min_shallow;
 
-        min_shallow = bfq_update_depths(bfqd, &tags->bitmap_tags);
-        sbitmap_queue_min_shallow_depth(&tags->bitmap_tags, min_shallow);
+        bfq_update_depths(bfqd, &tags->bitmap_tags);
+        sbitmap_queue_min_shallow_depth(&tags->bitmap_tags, 1);
 }
 
 static int bfq_init_hctx(struct blk_mq_hw_ctx *hctx, unsigned int index)
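
A worked example of the scaling done above (hypothetical numbers): for an
async write with nr_requests == 64, word_depths[0][0] == 32 and
full_depth_shift == 6, the whole hierarchy gets limit = (64 * 32) >> 6 = 32
tags. If the submitting cgroup's entity has weight 100 while the sum of
weights at its level (wsum) is 300, its share becomes
DIV_ROUND_CLOSEST(32 * 100, 300) = 11, so once that entity already has 11 or
more requests allocated, bfqq_request_over_limit() returns true and the
shallow depth is clamped to 1.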
From patchwork Thu Nov 25 13:36:38 2021
X-Patchwork-Id: 12639113
From: Jan Kara
To: Jens Axboe
Cc: Paolo Valente, Jan Kara
Subject: [PATCH 5/8] bfq: Limit waker detection in time
Date: Thu, 25 Nov 2021 14:36:38 +0100
Message-Id: <20211125133645.27483-5-jack@suse.cz>
In-Reply-To: <20211125133131.14018-1-jack@suse.cz>
References: <20211125133131.14018-1-jack@suse.cz>

Currently, when process A starts issuing requests shortly after process B has
completed some IO three times in a row, we decide that B is a "waker" of A,
meaning that completing B's IO is needed for A to make progress, and we
generally stop separating A's and B's IO much. This logic is useful to avoid
unnecessary idling and thus throughput loss for cases where the workload
needs to switch, e.g., between the process and the journaling thread doing
IO. However, the detection heuristic tends to give frequent false positives
when A and B are fighting for IO bandwidth and other processes aren't doing
much IO, as we are basically doomed to eventually accumulate three
occurrences of a situation where one process starts issuing requests after
the other has completed some IO.

To reduce these false positives, cancel the waker detection also if we didn't
accumulate three detected wakeups within a given timeout. The rationale is
that if wakeups are really rare, the pointless idling doesn't hurt throughput
that much anyway.

This significantly reduces false waker detection for workloads like:

[global]
directory=/mnt/repro/
rw=write
size=8g
time_based
runtime=30
ramp_time=10
blocksize=1m
direct=0
ioengine=sync

[slowwriter]
numjobs=1
fsync=200

[fastwriter]
numjobs=1
fsync=200

Acked-by: Paolo Valente
Signed-off-by: Jan Kara
---
 block/bfq-iosched.c | 38 +++++++++++++++++++++-----------------
 block/bfq-iosched.h |  2 ++
 2 files changed, 25 insertions(+), 15 deletions(-)

diff --git a/block/bfq-iosched.c b/block/bfq-iosched.c
index 95a19d1fbedf..83a2225e407b 100644
--- a/block/bfq-iosched.c
+++ b/block/bfq-iosched.c
@@ -2091,20 +2091,19 @@ static void bfq_update_io_intensity(struct bfq_queue *bfqq, u64 now_ns)
  * aspect, see the comments on the choice of the queue for injection
  * in bfq_select_queue().
  *
- * Turning back to the detection of a waker queue, a queue Q is deemed
- * as a waker queue for bfqq if, for three consecutive times, bfqq
- * happens to become non empty right after a request of Q has been
- * completed. In this respect, even if bfqq is empty, we do not check
- * for a waker if it still has some in-flight I/O. In fact, in this
- * case bfqq is actually still being served by the drive, and may
- * receive new I/O on the completion of some of the in-flight
- * requests. In particular, on the first time, Q is tentatively set as
- * a candidate waker queue, while on the third consecutive time that Q
- * is detected, the field waker_bfqq is set to Q, to confirm that Q is
- * a waker queue for bfqq. These detection steps are performed only if
- * bfqq has a long think time, so as to make it more likely that
- * bfqq's I/O is actually being blocked by a synchronization. This
- * last filter, plus the above three-times requirement, make false
+ * Turning back to the detection of a waker queue, a queue Q is deemed as a
+ * waker queue for bfqq if, for three consecutive times, bfqq happens to become
+ * non empty right after a request of Q has been completed within given
+ * timeout. In this respect, even if bfqq is empty, we do not check for a waker
+ * if it still has some in-flight I/O. In fact, in this case bfqq is actually
+ * still being served by the drive, and may receive new I/O on the completion
+ * of some of the in-flight requests. In particular, on the first time, Q is
+ * tentatively set as a candidate waker queue, while on the third consecutive
+ * time that Q is detected, the field waker_bfqq is set to Q, to confirm that Q
+ * is a waker queue for bfqq. These detection steps are performed only if bfqq
+ * has a long think time, so as to make it more likely that bfqq's I/O is
+ * actually being blocked by a synchronization. This last filter, plus the
+ * above three-times requirement and time limit for detection, make false
  * positives less likely.
  *
  * NOTE
@@ -2136,8 +2135,16 @@ static void bfq_check_waker(struct bfq_data *bfqd, struct bfq_queue *bfqq,
             bfqd->last_completed_rq_bfqq == bfqq->waker_bfqq)
                 return;
 
+        /*
+         * We reset waker detection logic also if too much time has passed
+         * since the first detection. If wakeups are rare, pointless idling
+         * doesn't hurt throughput that much. The condition below makes sure
+         * we do not uselessly idle blocking waker in more than 1/64 cases.
+         */
         if (bfqd->last_completed_rq_bfqq !=
-            bfqq->tentative_waker_bfqq) {
+            bfqq->tentative_waker_bfqq ||
+            now_ns > bfqq->waker_detection_started +
+                                        128 * (u64)bfqd->bfq_slice_idle) {
                 /*
                  * First synchronization detected with a
                  * candidate waker queue, or with a different
@@ -2146,6 +2153,7 @@ static void bfq_check_waker(struct bfq_data *bfqd, struct bfq_queue *bfqq,
                 bfqq->tentative_waker_bfqq =
                         bfqd->last_completed_rq_bfqq;
                 bfqq->num_waker_detections = 1;
+                bfqq->waker_detection_started = now_ns;
         } else /* Same tentative waker queue detected again */
                 bfqq->num_waker_detections++;
 
diff --git a/block/bfq-iosched.h b/block/bfq-iosched.h
index 820cb8c2d1fe..bb8180c52a31 100644
--- a/block/bfq-iosched.h
+++ b/block/bfq-iosched.h
@@ -388,6 +388,8 @@ struct bfq_queue {
         struct bfq_queue *tentative_waker_bfqq;
         /* number of times the same tentative waker has been detected */
         unsigned int num_waker_detections;
+        /* time when we started considering this waker */
+        u64 waker_detection_started;
 
         /* node for woken_list, see below */
         struct hlist_node woken_list_node;
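
For scale (assuming BFQ's default bfq_slice_idle of about 8 ms; other tunings
scale proportionally): the detection window above is 128 * 8 ms, roughly one
second, so a tentative waker that does not produce its three detections within
about a second of the first one is forgotten and detection starts over.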
From patchwork Thu Nov 25 13:36:39 2021
X-Patchwork-Id: 12639123
From: Jan Kara
To: Jens Axboe
Cc: Paolo Valente, Jan Kara
Subject: [PATCH 6/8] bfq: Provide helper to generate bfqq name
Date: Thu, 25 Nov 2021 14:36:39 +0100
Message-Id: <20211125133645.27483-6-jack@suse.cz>
In-Reply-To: <20211125133131.14018-1-jack@suse.cz>
References: <20211125133131.14018-1-jack@suse.cz>

Instead of having a helper that formats the bfqq pid, provide a helper to
generate the full bfqq name as used in the traces. It saves some code
duplication and will save more in the coming tracepoints.

Acked-by: Paolo Valente
Signed-off-by: Jan Kara
---
 block/bfq-iosched.h | 27 +++++++++++++--------------
 1 file changed, 13 insertions(+), 14 deletions(-)

diff --git a/block/bfq-iosched.h b/block/bfq-iosched.h
index bb8180c52a31..07288b9da389 100644
--- a/block/bfq-iosched.h
+++ b/block/bfq-iosched.h
@@ -25,7 +25,7 @@
 #define BFQ_DEFAULT_GRP_IOPRIO 0
 #define BFQ_DEFAULT_GRP_CLASS IOPRIO_CLASS_BE
 
-#define MAX_PID_STR_LENGTH 12
+#define MAX_BFQQ_NAME_LENGTH 16
 
 /*
  * Soft real-time applications are extremely more latency sensitive
@@ -1083,26 +1083,27 @@ void bfq_add_bfqq_busy(struct bfq_data *bfqd, struct bfq_queue *bfqq);
 /* --------------- end of interface of B-WF2Q+ ---------------- */
 
 /* Logging facilities. */
-static inline void bfq_pid_to_str(int pid, char *str, int len)
+static inline void bfq_bfqq_name(struct bfq_queue *bfqq, char *str, int len)
 {
-        if (pid != -1)
-                snprintf(str, len, "%d", pid);
+        char type = bfq_bfqq_sync(bfqq) ? 'S' : 'A';
+
+        if (bfqq->pid != -1)
+                snprintf(str, len, "bfq%d%c", bfqq->pid, type);
         else
-                snprintf(str, len, "SHARED-");
+                snprintf(str, len, "bfqSHARED-%c", type);
 }
 
 #ifdef CONFIG_BFQ_GROUP_IOSCHED
 struct bfq_group *bfqq_group(struct bfq_queue *bfqq);
 
 #define bfq_log_bfqq(bfqd, bfqq, fmt, args...) do { \
-        char pid_str[MAX_PID_STR_LENGTH]; \
+        char pid_str[MAX_BFQQ_NAME_LENGTH]; \
         if (likely(!blk_trace_note_message_enabled((bfqd)->queue))) \
                 break; \
-        bfq_pid_to_str((bfqq)->pid, pid_str, MAX_PID_STR_LENGTH); \
+        bfq_bfqq_name((bfqq), pid_str, MAX_BFQQ_NAME_LENGTH); \
         blk_add_cgroup_trace_msg((bfqd)->queue, \
                         bfqg_to_blkg(bfqq_group(bfqq))->blkcg, \
-                        "bfq%s%c " fmt, pid_str, \
-                        bfq_bfqq_sync((bfqq)) ? 'S' : 'A', ##args); \
+                        "%s " fmt, pid_str, ##args); \
 } while (0)
 
 #define bfq_log_bfqg(bfqd, bfqg, fmt, args...) do { \
@@ -1113,13 +1114,11 @@ struct bfq_group *bfqq_group(struct bfq_queue *bfqq);
 #else /* CONFIG_BFQ_GROUP_IOSCHED */
 
 #define bfq_log_bfqq(bfqd, bfqq, fmt, args...) do { \
-        char pid_str[MAX_PID_STR_LENGTH]; \
+        char pid_str[MAX_BFQQ_NAME_LENGTH]; \
         if (likely(!blk_trace_note_message_enabled((bfqd)->queue))) \
                 break; \
-        bfq_pid_to_str((bfqq)->pid, pid_str, MAX_PID_STR_LENGTH); \
-        blk_add_trace_msg((bfqd)->queue, "bfq%s%c " fmt, pid_str, \
-                        bfq_bfqq_sync((bfqq)) ? 'S' : 'A', \
-                                ##args); \
+        bfq_bfqq_name((bfqq), pid_str, MAX_BFQQ_NAME_LENGTH); \
+        blk_add_trace_msg((bfqd)->queue, "%s " fmt, pid_str, ##args); \
 } while (0)
 
 #define bfq_log_bfqg(bfqd, bfqg, fmt, args...) do {} while (0)
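
Per the format strings above, a sync queue of pid 1234 is rendered as
"bfq1234S" and a shared queue carrying async IO as "bfqSHARED-A", so trace
messages built by bfq_log_bfqq() now receive the whole name through a single
"%s" argument instead of assembling "bfq" + pid + type inside each macro.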
From patchwork Thu Nov 25 13:36:40 2021
X-Patchwork-Id: 12639115
From: Jan Kara
To: Jens Axboe
Cc: Paolo Valente, Jan Kara
Subject: [PATCH 7/8] bfq: Log waker detections
Date: Thu, 25 Nov 2021 14:36:40 +0100
Message-Id: <20211125133645.27483-7-jack@suse.cz>
In-Reply-To: <20211125133131.14018-1-jack@suse.cz>
References: <20211125133131.14018-1-jack@suse.cz>

Waker-wakee relationships are important in deciding whether one queue can
preempt the other. Print information about detected waker-wakee relationships
so that scheduling decisions can be better understood from block traces.

Acked-by: Paolo Valente
Signed-off-by: Jan Kara
---
 block/bfq-iosched.c | 8 ++++++++
 1 file changed, 8 insertions(+)

diff --git a/block/bfq-iosched.c b/block/bfq-iosched.c
index 83a2225e407b..69144003a694 100644
--- a/block/bfq-iosched.c
+++ b/block/bfq-iosched.c
@@ -2127,6 +2127,8 @@ static void bfq_update_io_intensity(struct bfq_queue *bfqq, u64 now_ns)
 static void bfq_check_waker(struct bfq_data *bfqd, struct bfq_queue *bfqq,
                             u64 now_ns)
 {
+        char waker_name[MAX_BFQQ_NAME_LENGTH];
+
         if (!bfqd->last_completed_rq_bfqq ||
             bfqd->last_completed_rq_bfqq == bfqq ||
             bfq_bfqq_has_short_ttime(bfqq) ||
@@ -2154,12 +2156,18 @@ static void bfq_check_waker(struct bfq_data *bfqd, struct bfq_queue *bfqq,
                         bfqd->last_completed_rq_bfqq;
                 bfqq->num_waker_detections = 1;
                 bfqq->waker_detection_started = now_ns;
+                bfq_bfqq_name(bfqq->tentative_waker_bfqq, waker_name,
+                              MAX_BFQQ_NAME_LENGTH);
+                bfq_log_bfqq(bfqd, bfqq, "set tenative waker %s", waker_name);
         } else /* Same tentative waker queue detected again */
                 bfqq->num_waker_detections++;
 
         if (bfqq->num_waker_detections == 3) {
                 bfqq->waker_bfqq = bfqd->last_completed_rq_bfqq;
                 bfqq->tentative_waker_bfqq = NULL;
+                bfq_bfqq_name(bfqq->waker_bfqq, waker_name,
+                              MAX_BFQQ_NAME_LENGTH);
+                bfq_log_bfqq(bfqd, bfqq, "set waker %s", waker_name);
 
                 /*
                  * If the waker queue disappears, then
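
With blktrace message logging enabled, the two new bfq_log_bfqq() calls should
thus emit messages along the lines of "bfq1234S set tenative waker bfq5678S"
and, after the third detection, "bfq1234S set waker bfq5678S" (the pids here
are made up and the exact surrounding blktrace formatting depends on the
tracer; the "tenative" spelling comes from the patch itself).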
From patchwork Thu Nov 25 13:36:41 2021
X-Patchwork-Id: 12639117
From: Jan Kara
To: Jens Axboe
Cc: Paolo Valente, Jan Kara
Subject: [PATCH 8/8] bfq: Do not let waker requests skip proper accounting
Date: Thu, 25 Nov 2021 14:36:41 +0100
Message-Id: <20211125133645.27483-8-jack@suse.cz>
In-Reply-To: <20211125133131.14018-1-jack@suse.cz>
References: <20211125133131.14018-1-jack@suse.cz>

Commit 7cc4ffc55564 ("block, bfq: put reqs of waker and woken in dispatch
list") added a condition to bfq_insert_request() which adds the waker's
requests directly to the dispatch list. The rationale was that completing the
waker's IO is needed to get more IO for the current queue. Although this
rationale is valid, there is a hole in it. The waker does not necessarily
serve IO only for the current queue, and maybe its current IO is not needed
for the current queue to make progress. Furthermore, injecting IO like this
completely bypasses any service accounting within bfq, so we do not properly
track how much service the waker's queue is getting, or even that the waker
is doing any IO. Depending on the conditions, this can result in the waker
getting too much or too little service.

Consider for example the following job file:

[global]
directory=/mnt/repro/
rw=write
size=8g
time_based
runtime=30
ramp_time=10
blocksize=1m
direct=0
ioengine=sync

[slowwriter]
numjobs=1
prioclass=2
prio=7
fsync=200

[fastwriter]
numjobs=1
prioclass=2
prio=0
fsync=200

Despite the processes having very different IO priorities, they get the same
amount of service. The reason is that bfq identifies these processes as having
a waker-wakee relationship, and once that happens, IO from fastwriter gets
injected during slowwriter's time slice. As a result bfq is not aware that
fastwriter has any IO to do and constantly schedules only slowwriter's queue.
Thus fastwriter is forced to compete with slowwriter's IO all the time instead
of getting its share of time based on IO priority.

Drop the special injection condition from bfq_insert_request(). As a result,
requests will be tracked and queued in a normal way, and on the next dispatch
bfq_select_queue() can decide whether the waker's inserted requests should be
injected during the current queue's timeslice or not.

Fixes: 7cc4ffc55564 ("block, bfq: put reqs of waker and woken in dispatch list")
Acked-by: Paolo Valente
Signed-off-by: Jan Kara
---
 block/bfq-iosched.c | 44 +-------------------------------------------
 1 file changed, 1 insertion(+), 43 deletions(-)

diff --git a/block/bfq-iosched.c b/block/bfq-iosched.c
index 69144003a694..85554b800970 100644
--- a/block/bfq-iosched.c
+++ b/block/bfq-iosched.c
@@ -6132,48 +6132,7 @@ static void bfq_insert_request(struct blk_mq_hw_ctx *hctx, struct request *rq,
 
         spin_lock_irq(&bfqd->lock);
         bfqq = bfq_init_rq(rq);
-
-        /*
-         * Reqs with at_head or passthrough flags set are to be put
-         * directly into dispatch list. Additional case for putting rq
-         * directly into the dispatch queue: the only active
-         * bfq_queues are bfqq and either its waker bfq_queue or one
-         * of its woken bfq_queues. The rationale behind this
-         * additional condition is as follows:
-         * - consider a bfq_queue, say Q1, detected as a waker of
-         *   another bfq_queue, say Q2
-         * - by definition of a waker, Q1 blocks the I/O of Q2, i.e.,
-         *   some I/O of Q1 needs to be completed for new I/O of Q2
-         *   to arrive.  A notable example of waker is journald
-         * - so, Q1 and Q2 are in any respect the queues of two
-         *   cooperating processes (or of two cooperating sets of
-         *   processes): the goal of Q1's I/O is doing what needs to
-         *   be done so that new Q2's I/O can finally be
-         *   issued. Therefore, if the service of Q1's I/O is delayed,
-         *   then Q2's I/O is delayed too.  Conversely, if Q2's I/O is
-         *   delayed, the goal of Q1's I/O is hindered.
-         * - as a consequence, if some I/O of Q1/Q2 arrives while
-         *   Q2/Q1 is the only queue in service, there is absolutely
-         *   no point in delaying the service of such an I/O. The
-         *   only possible result is a throughput loss
-         * - so, when the above condition holds, the best option is to
-         *   have the new I/O dispatched as soon as possible
-         * - the most effective and efficient way to attain the above
-         *   goal is to put the new I/O directly in the dispatch
-         *   list
-         * - as an additional restriction, Q1 and Q2 must be the only
-         *   busy queues for this commit to put the I/O of Q2/Q1 in
-         *   the dispatch list.  This is necessary, because, if also
-         *   other queues are waiting for service, then putting new
-         *   I/O directly in the dispatch list may evidently cause a
-         *   violation of service guarantees for the other queues
-         */
-        if (!bfqq ||
-            (bfqq != bfqd->in_service_queue &&
-             bfqd->in_service_queue != NULL &&
-             bfq_tot_busy_queues(bfqd) == 1 + bfq_bfqq_busy(bfqq) &&
-             (bfqq->waker_bfqq == bfqd->in_service_queue ||
-              bfqd->in_service_queue->waker_bfqq == bfqq)) || at_head) {
+        if (!bfqq || at_head) {
                 if (at_head)
                         list_add(&rq->queuelist, &bfqd->dispatch);
                 else
@@ -6200,7 +6159,6 @@ static void bfq_insert_request(struct blk_mq_hw_ctx *hctx, struct request *rq,
          * merge).
          */
         cmd_flags = rq->cmd_flags;
-
         spin_unlock_irq(&bfqd->lock);
 
         bfq_update_insert_stats(q, bfqq, idle_timer_disabled,