From patchwork Fri Mar 31 12:47:35 2017
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Paolo Valente
X-Patchwork-Id: 9656273
From: Paolo Valente
To: Jens Axboe, Tejun Heo
Cc: Fabio Checconi, Arianna Avanzini, linux-block@vger.kernel.org,
    linux-kernel@vger.kernel.org, ulf.hansson@linaro.org,
    linus.walleij@linaro.org, broonie@kernel.org, Paolo Valente
Subject: [PATCH V2 08/16] block, bfq: preserve a low latency also with NCQ-capable drives
Date: Fri, 31 Mar 2017 14:47:35 +0200
Message-Id: <20170331124743.3530-9-paolo.valente@linaro.org>
X-Mailer: git-send-email 2.10.0
In-Reply-To: <20170331124743.3530-1-paolo.valente@linaro.org>
References: <20170331124743.3530-1-paolo.valente@linaro.org>
List-ID: linux-block@vger.kernel.org

I/O schedulers typically allow NCQ-capable drives to prefetch I/O
requests, as NCQ boosts throughput exactly by prefetching and
internally reordering requests.

Unfortunately, as discussed in detail and shown experimentally in [1],
this may cause fairness and latency guarantees to be violated. The
main problem is that the internal scheduler of an NCQ-capable drive
may postpone the service of some unlucky (prefetched) requests for as
long as it deems serving other requests more appropriate for boosting
throughput.

This patch addresses the issue by not disabling device idling for
weight-raised queues, even if the device supports NCQ. This allows BFQ
to start serving a new queue, and therefore allows the drive to
prefetch new requests, only after the idling timeout expires. At that
time, all the outstanding requests of the expired queue have most
certainly been served.

[1] P. Valente and M. Andreolini, "Improving Application
    Responsiveness with the BFQ Disk I/O Scheduler", Proceedings of
    the 5th Annual International Systems and Storage Conference
    (SYSTOR '12), June 2012.
    Slightly extended version:
    http://algogroup.unimore.it/people/paolo/disk_sched/bfq-v1-suite-results.pdf

Signed-off-by: Paolo Valente
Signed-off-by: Arianna Avanzini
---
 block/bfq-iosched.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/block/bfq-iosched.c b/block/bfq-iosched.c
index 9994962..c43a737 100644
--- a/block/bfq-iosched.c
+++ b/block/bfq-iosched.c
@@ -6233,7 +6233,8 @@ static void bfq_update_idle_window(struct bfq_data *bfqd,
 	if (atomic_read(&bic->icq.ioc->active_ref) == 0 ||
 	    bfqd->bfq_slice_idle == 0 ||
-		(bfqd->hw_tag && BFQQ_SEEKY(bfqq)))
+		(bfqd->hw_tag && BFQQ_SEEKY(bfqq) &&
+		 bfqq->wr_coeff == 1))
 		enable_idle = 0;
 	else if (bfq_sample_valid(bfqq->ttime.ttime_samples)) {
 		if (bfqq->ttime.ttime_mean > bfqd->bfq_slice_idle &&
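
For reference, the decision the hunk above changes can be sketched in
isolation as follows. This is only an illustrative, simplified rendering
of the idling condition after the patch; the standalone function and its
scalar parameters are assumptions made for the sketch and are not code
from the kernel tree.

/*
 * Illustrative sketch (not kernel code): the idle-window decision
 * after this patch.  Idling is now disabled for a seeky queue on an
 * NCQ-capable drive only if the queue is NOT weight-raised
 * (wr_coeff == 1).  Weight-raised queues keep idling, so the drive
 * is not allowed to prefetch requests from other queues and postpone
 * service of the weight-raised queue's requests.
 */
static int bfq_keep_idling_sketch(int active_refs, int slice_idle,
				  int hw_tag, int seeky, int wr_coeff)
{
	if (active_refs == 0 ||		/* no process uses this queue */
	    slice_idle == 0 ||		/* idling disabled globally */
	    (hw_tag && seeky &&		/* NCQ drive, seeky queue ... */
	     wr_coeff == 1))		/* ... and NOT weight-raised */
		return 0;		/* disable idling */
	return 1;			/* keep idling enabled */
}

With wr_coeff > 1 (a weight-raised, latency-sensitive queue) the third
clause is false, so idling stays enabled and a new queue is served only
after the idle timeout expires, as described in the commit message.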