From patchwork Mon Jan 22 01:17:58 2018
From: Joseph Qi <joseph.qi@linux.alibaba.com>
Subject: [PATCH RESEND 3/3] blk-throttle: do downgrade/upgrade check when issuing io to lower layer
To: Jens Axboe, Shaohua Li
Cc: jiufei.xue@linux.alibaba.com, caspar@linux.alibaba.com, linux-block
Date: Mon, 22 Jan 2018 09:17:58 +0800
Message-ID: <65fa564f-e6c2-6a9d-a626-8aa3d9716ee6@linux.alibaba.com>

Currently the downgrade/upgrade check is done when an io first enters the
block throttle layer. In the writeback case, a large number of ios are
first throttled in the throttle queue and then dispatched when the timer
is kicked; those dispatched ios skip the check entirely because
BIO_THROTTLED is already set. As a result, the low limit is not guaranteed
most of the time. Fix this by moving the check logic down to the point
where we are ready to issue the io to the lower layer.
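To make the failure mode concrete, here is a minimal userspace sketch of
the two placements (hypothetical names, not the kernel code; the real
check is throtl_downgrade_check()/throtl_upgrade_check() and the real flag
is BIO_THROTTLED). A check placed behind the early-out never runs for bios
resubmitted from the throttle queue, while a check placed at the final
issue point runs for every bio:

#include <stdbool.h>
#include <stdio.h>

struct tg {
	unsigned int checks;	/* downgrade/upgrade checks performed */
};

/* old flow: the check sits behind the BIO_THROTTLED early-out */
static void issue_old(struct tg *tg, bool bio_throttled)
{
	if (bio_throttled)
		return;		/* resubmitted bio: check is skipped */
	tg->checks++;		/* stands in for the downgrade/upgrade check */
	/* ... dispatch or queue the bio ... */
}

/* new flow: the check runs at out_unlock, for every bio issued */
static void issue_new(struct tg *tg, bool bio_throttled)
{
	if (!bio_throttled) {
		/* ... dispatch or queue the bio ... */
	}
	tg->checks++;		/* check now runs unconditionally */
}

int main(void)
{
	struct tg old_tg = { 0 }, new_tg = { 0 };
	int i;

	/* writeback burst: bios queued earlier come back with the flag set */
	for (i = 0; i < 8; i++) {
		issue_old(&old_tg, true);
		issue_new(&new_tg, true);
	}
	printf("old placement: %u checks, new placement: %u checks\n",
	       old_tg.checks, new_tg.checks);
	return 0;
}

Built standalone, this prints 0 checks for the old placement against 8 for
the new one, which is exactly the visibility the low limit logic needs for
writeback io.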
Signed-off-by: Joseph Qi <joseph.qi@linux.alibaba.com>
Reviewed-by: Jiufei Xue <jiufei.xue@linux.alibaba.com>
---
 block/blk-throttle.c | 23 +++++++++++------------
 1 file changed, 11 insertions(+), 12 deletions(-)

diff --git a/block/blk-throttle.c b/block/blk-throttle.c
index 9c0b5ff..6207554 100644
--- a/block/blk-throttle.c
+++ b/block/blk-throttle.c
@@ -1065,8 +1065,6 @@ static void throtl_charge_bio(struct throtl_grp *tg, struct bio *bio)
 	/* Charge the bio to the group */
 	tg->bytes_disp[rw] += bio_size;
 	tg->io_disp[rw]++;
-	tg->last_bytes_disp[rw] += bio_size;
-	tg->last_io_disp[rw]++;
 
 	/*
 	 * BIO_THROTTLED is used to prevent the same bio to be throttled
@@ -2166,7 +2164,8 @@ bool blk_throtl_bio(struct request_queue *q, struct blkcg_gq *blkg,
 		    struct bio *bio)
 {
 	struct throtl_qnode *qn = NULL;
-	struct throtl_grp *tg = blkg_to_tg(blkg ?: q->root_blkg);
+	struct throtl_grp *orig_tg = blkg_to_tg(blkg ?: q->root_blkg);
+	struct throtl_grp *tg = orig_tg;
 	struct throtl_service_queue *sq;
 	bool rw = bio_data_dir(bio);
 	bool throttled = false;
@@ -2174,11 +2173,11 @@ bool blk_throtl_bio(struct request_queue *q, struct blkcg_gq *blkg,
 
 	WARN_ON_ONCE(!rcu_read_lock_held());
 
+	spin_lock_irq(q->queue_lock);
+
 	/* see throtl_charge_bio() */
 	if (bio_flagged(bio, BIO_THROTTLED) || !tg->has_rules[rw])
-		goto out;
-
-	spin_lock_irq(q->queue_lock);
+		goto out_unlock;
 
 	throtl_update_latency_buckets(td);
 
@@ -2194,15 +2193,12 @@ bool blk_throtl_bio(struct request_queue *q, struct blkcg_gq *blkg,
 	while (true) {
 		if (tg->last_low_overflow_time[rw] == 0)
 			tg->last_low_overflow_time[rw] = jiffies;
-		throtl_downgrade_check(tg);
-		throtl_upgrade_check(tg);
 		/* throtl is FIFO - if bios are already queued, should queue */
 		if (sq->nr_queued[rw])
 			break;
 
 		/* if above limits, break to queue */
 		if (!tg_may_dispatch(tg, bio, NULL)) {
-			tg->last_low_overflow_time[rw] = jiffies;
 			if (throtl_can_upgrade(td, tg)) {
 				throtl_upgrade_state(td);
 				goto again;
@@ -2246,8 +2242,6 @@ bool blk_throtl_bio(struct request_queue *q, struct blkcg_gq *blkg,
 		   tg->io_disp[rw], tg_iops_limit(tg, rw),
 		   sq->nr_queued[READ], sq->nr_queued[WRITE]);
 
-	tg->last_low_overflow_time[rw] = jiffies;
-
 	td->nr_queued[rw]++;
 	throtl_add_bio_tg(bio, qn, tg);
 	throttled = true;
@@ -2264,8 +2258,13 @@ bool blk_throtl_bio(struct request_queue *q, struct blkcg_gq *blkg,
 	}
 
 out_unlock:
+	throtl_downgrade_check(orig_tg);
+	throtl_upgrade_check(orig_tg);
+	if (!throttled) {
+		orig_tg->last_bytes_disp[rw] += throtl_bio_data_size(bio);
+		orig_tg->last_io_disp[rw]++;
+	}
 	spin_unlock_irq(q->queue_lock);
-out:
 	bio_set_flag(bio, BIO_THROTTLED);
 
 #ifdef CONFIG_BLK_DEV_THROTTLING_LOW