From patchwork Mon Sep 10 20:49:27 2018
From: Josef Bacik
To: axboe@kernel.dk, kernel-team@fb.com, linux-block@vger.kernel.org
Subject: [PATCH 1/6] blk-iolatency: use q->nr_requests directly
Date: Mon, 10 Sep 2018 16:49:27 -0400
Message-Id: <20180910204932.14323-2-josef@toxicpanda.com>
In-Reply-To: <20180910204932.14323-1-josef@toxicpanda.com>
References: <20180910204932.14323-1-josef@toxicpanda.com>
X-Mailing-List: linux-block@vger.kernel.org

We were using blk_queue_depth() assuming that it would return
nr_requests, but we hit a case in production on drives
that had to have NCQ turned off to keep them from misbehaving, which
resulted in a queue depth of 1 even though nr_requests was much larger.
iolatency really only cares about the requests we are allowed to queue
up, as any IO that gets onto the request list is going to be serviced
soonish, so we want to be throttling before the bio gets onto the
request list. To make iolatency work as expected, simply use
q->nr_requests instead of blk_queue_depth(), as that is what we
actually care about.

Signed-off-by: Josef Bacik
---
 block/blk-iolatency.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/block/blk-iolatency.c b/block/blk-iolatency.c
index 19923f8a029d..9b8d2012ea88 100644
--- a/block/blk-iolatency.c
+++ b/block/blk-iolatency.c
@@ -255,7 +255,7 @@ static void scale_cookie_change(struct blk_iolatency *blkiolat,
 				struct child_latency_info *lat_info,
 				bool up)
 {
-	unsigned long qd = blk_queue_depth(blkiolat->rqos.q);
+	unsigned long qd = blkiolat->rqos.q->nr_requests;
 	unsigned long scale = scale_amount(qd, up);
 	unsigned long old = atomic_read(&lat_info->scale_cookie);
 	unsigned long max_scale = qd << 1;
@@ -295,7 +295,7 @@ static void scale_cookie_change(struct blk_iolatency *blkiolat,
  */
 static void scale_change(struct iolatency_grp *iolat, bool up)
 {
-	unsigned long qd = blk_queue_depth(iolat->blkiolat->rqos.q);
+	unsigned long qd = iolat->blkiolat->rqos.q->nr_requests;
 	unsigned long scale = scale_amount(qd, up);
 	unsigned long old = iolat->rq_depth.max_depth;
 	bool changed = false;
@@ -884,7 +884,7 @@ static void iolatency_pd_init(struct blkg_policy_data *pd)
 	rq_wait_init(&iolat->rq_wait);
 	spin_lock_init(&iolat->child_lat.lock);
-	iolat->rq_depth.queue_depth = blk_queue_depth(blkg->q);
+	iolat->rq_depth.queue_depth = blkg->q->nr_requests;
 	iolat->rq_depth.max_depth = UINT_MAX;
 	iolat->rq_depth.default_depth = iolat->rq_depth.queue_depth;
 	iolat->blkiolat = blkiolat;