From patchwork Fri Sep 28 17:45:39 2018
From: Josef Bacik
To: axboe@kernel.dk, linux-block@vger.kernel.org, kernel-team@fb.com
Subject: [PATCH 1/5] blk-iolatency: use q->nr_requests directly
Date: Fri, 28 Sep 2018 13:45:39 -0400
Message-Id: <20180928174543.28486-2-josef@toxicpanda.com>
In-Reply-To: <20180928174543.28486-1-josef@toxicpanda.com>
References: <20180928174543.28486-1-josef@toxicpanda.com>
List-ID: linux-block@vger.kernel.org

We were using blk_queue_depth() assuming that it would return
nr_requests, but we hit a case in production on drives
that had to have NCQ turned off to keep them from falling over, which
resulted in a qd of 1 even though nr_requests was much larger.
iolatency really only cares about the requests we are allowed to queue
up: any io that gets onto the request list is going to be serviced
soonish, so we want to be throttling before the bio gets onto the
request list.  To make iolatency work as expected, simply use
q->nr_requests instead of blk_queue_depth(), as that is what we
actually care about.

Signed-off-by: Josef Bacik
---
 block/blk-iolatency.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/block/blk-iolatency.c b/block/blk-iolatency.c
index 27c14f8d2576..c2e38bc12f27 100644
--- a/block/blk-iolatency.c
+++ b/block/blk-iolatency.c
@@ -255,7 +255,7 @@ static void scale_cookie_change(struct blk_iolatency *blkiolat,
 				struct child_latency_info *lat_info,
 				bool up)
 {
-	unsigned long qd = blk_queue_depth(blkiolat->rqos.q);
+	unsigned long qd = blkiolat->rqos.q->nr_requests;
 	unsigned long scale = scale_amount(qd, up);
 	unsigned long old = atomic_read(&lat_info->scale_cookie);
 	unsigned long max_scale = qd << 1;
@@ -295,7 +295,7 @@ static void scale_cookie_change(struct blk_iolatency *blkiolat,
  */
 static void scale_change(struct iolatency_grp *iolat, bool up)
 {
-	unsigned long qd = blk_queue_depth(iolat->blkiolat->rqos.q);
+	unsigned long qd = iolat->blkiolat->rqos.q->nr_requests;
 	unsigned long scale = scale_amount(qd, up);
 	unsigned long old = iolat->rq_depth.max_depth;
@@ -857,7 +857,7 @@ static void iolatency_pd_init(struct blkg_policy_data *pd)
 	rq_wait_init(&iolat->rq_wait);
 	spin_lock_init(&iolat->child_lat.lock);
-	iolat->rq_depth.queue_depth = blk_queue_depth(blkg->q);
+	iolat->rq_depth.queue_depth = blkg->q->nr_requests;
 	iolat->rq_depth.max_depth = UINT_MAX;
 	iolat->rq_depth.default_depth = iolat->rq_depth.queue_depth;
 	iolat->blkiolat = blkiolat;