From patchwork Wed Feb 22 18:58:30 2017
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Omar Sandoval
X-Patchwork-Id: 9587379
From: Omar Sandoval
To: Jens Axboe, linux-block@vger.kernel.org
Cc: kernel-team@fb.com
Subject: [PATCH v3 2/2] blk-mq-sched: separate mark hctx and queue restart operations
Date: Wed, 22 Feb 2017 10:58:30 -0800
Message-Id: <2ef9f8450d7b15ac3af143583f1e6ae61938f5e0.1487789478.git.osandov@fb.com>
X-Mailer: git-send-email 2.11.1
From: Omar Sandoval

In blk_mq_sched_dispatch_requests(), we call blk_mq_sched_mark_restart()
after we dispatch requests left over on our hardware queue dispatch
list. This is so we'll go back and dispatch requests from the scheduler.
In this case, it's only necessary to restart the hardware queue that we
are running; there's no reason to run other hardware queues just because
we are using shared tags.

So, split out blk_mq_sched_mark_restart() into two operations, one for
just the hardware queue and one for the whole request queue. The core
code only needs the hctx variant, but I/O schedulers will want to use
both.

This also requires adjusting blk_mq_sched_restart_queues() to always
check the queue restart flag, not just when using shared tags.

Signed-off-by: Omar Sandoval
Signed-off-by: Jens Axboe
---
 block/blk-mq-sched.c | 20 ++++++++------------
 block/blk-mq-sched.h | 26 ++++++++++++++++++--------
 2 files changed, 26 insertions(+), 20 deletions(-)

diff --git a/block/blk-mq-sched.c b/block/blk-mq-sched.c
index 9e8d6795a8c1..16df0a5e7046 100644
--- a/block/blk-mq-sched.c
+++ b/block/blk-mq-sched.c
@@ -205,7 +205,7 @@ void blk_mq_sched_dispatch_requests(struct blk_mq_hw_ctx *hctx)
 	 * needing a restart in that case.
 	 */
 	if (!list_empty(&rq_list)) {
-		blk_mq_sched_mark_restart(hctx);
+		blk_mq_sched_mark_restart_hctx(hctx);
 		did_work = blk_mq_dispatch_rq_list(hctx, &rq_list);
 	} else if (!has_sched_dispatch) {
 		blk_mq_flush_busy_ctxs(hctx, &rq_list);
@@ -331,20 +331,16 @@ static void blk_mq_sched_restart_hctx(struct blk_mq_hw_ctx *hctx)
 
 void blk_mq_sched_restart_queues(struct blk_mq_hw_ctx *hctx)
 {
+	struct request_queue *q = hctx->queue;
 	unsigned int i;
 
-	if (!(hctx->flags & BLK_MQ_F_TAG_SHARED))
+	if (test_bit(QUEUE_FLAG_RESTART, &q->queue_flags)) {
+		if (test_and_clear_bit(QUEUE_FLAG_RESTART, &q->queue_flags)) {
+			queue_for_each_hw_ctx(q, hctx, i)
+				blk_mq_sched_restart_hctx(hctx);
+		}
+	} else {
 		blk_mq_sched_restart_hctx(hctx);
-	else {
-		struct request_queue *q = hctx->queue;
-
-		if (!test_bit(QUEUE_FLAG_RESTART, &q->queue_flags))
-			return;
-
-		clear_bit(QUEUE_FLAG_RESTART, &q->queue_flags);
-
-		queue_for_each_hw_ctx(q, hctx, i)
-			blk_mq_sched_restart_hctx(hctx);
 	}
 }
 
diff --git a/block/blk-mq-sched.h b/block/blk-mq-sched.h
index 7b5f3b95c78e..a75b16b123f7 100644
--- a/block/blk-mq-sched.h
+++ b/block/blk-mq-sched.h
@@ -122,17 +122,27 @@ static inline bool blk_mq_sched_has_work(struct blk_mq_hw_ctx *hctx)
 	return false;
 }
 
-static inline void blk_mq_sched_mark_restart(struct blk_mq_hw_ctx *hctx)
+/*
+ * Mark a hardware queue as needing a restart.
+ */
+static inline void blk_mq_sched_mark_restart_hctx(struct blk_mq_hw_ctx *hctx)
 {
-	if (!test_bit(BLK_MQ_S_SCHED_RESTART, &hctx->state)) {
+	if (!test_bit(BLK_MQ_S_SCHED_RESTART, &hctx->state))
 		set_bit(BLK_MQ_S_SCHED_RESTART, &hctx->state);
-		if (hctx->flags & BLK_MQ_F_TAG_SHARED) {
-			struct request_queue *q = hctx->queue;
+}
+
+/*
+ * Mark a hardware queue and the request queue it belongs to as needing a
+ * restart.
+ */
+static inline void blk_mq_sched_mark_restart_queue(struct blk_mq_hw_ctx *hctx)
+{
+	struct request_queue *q = hctx->queue;
 
-			if (!test_bit(QUEUE_FLAG_RESTART, &q->queue_flags))
-				set_bit(QUEUE_FLAG_RESTART, &q->queue_flags);
-		}
-	}
+	if (!test_bit(BLK_MQ_S_SCHED_RESTART, &hctx->state))
+		set_bit(BLK_MQ_S_SCHED_RESTART, &hctx->state);
+	if (!test_bit(QUEUE_FLAG_RESTART, &q->queue_flags))
+		set_bit(QUEUE_FLAG_RESTART, &q->queue_flags);
 }
 
 static inline bool blk_mq_sched_needs_restart(struct blk_mq_hw_ctx *hctx)
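
For readers following the restart logic without the tree handy, below is a
small user-space model of the handoff the two helpers implement. It is only a
sketch: the struct layouts, flag values, and function names are simplified
stand-ins (not the kernel's definitions), and C11 atomics replace the kernel's
set_bit()/test_bit()/test_and_clear_bit(). It illustrates why the restart side
wants an atomic fetch-and-clear, as in the patched
blk_mq_sched_restart_queues(): of several concurrent completions, exactly one
observes the queue-wide flag set and fans out to every hardware queue.

#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

#define SCHED_RESTART 0x1u	/* stands in for BLK_MQ_S_SCHED_RESTART */
#define QUEUE_RESTART 0x1u	/* stands in for QUEUE_FLAG_RESTART */

struct request_queue {
	atomic_uint queue_flags;
};

struct hw_ctx {
	atomic_uint state;
	struct request_queue *queue;
};

/* Models mark_restart_hctx: flag only this hardware queue. */
static void mark_restart_hctx(struct hw_ctx *hctx)
{
	atomic_fetch_or(&hctx->state, SCHED_RESTART);
}

/* Models mark_restart_queue: flag the hctx and its request queue. */
static void mark_restart_queue(struct hw_ctx *hctx)
{
	atomic_fetch_or(&hctx->state, SCHED_RESTART);
	atomic_fetch_or(&hctx->queue->queue_flags, QUEUE_RESTART);
}

/*
 * Restart side: atomically fetch and clear the queue flag, so only one
 * of several concurrent callers wins and does the queue-wide restart.
 */
static bool take_queue_restart(struct request_queue *q)
{
	return atomic_fetch_and(&q->queue_flags, ~QUEUE_RESTART) & QUEUE_RESTART;
}

int main(void)
{
	struct request_queue q = { .queue_flags = 0 };
	struct hw_ctx hctx = { .state = 0, .queue = &q };

	mark_restart_hctx(&hctx);	/* queue-wide flag stays clear */
	printf("after hctx mark:  taken? %d\n", take_queue_restart(&q));

	mark_restart_queue(&hctx);	/* sets both flags */
	printf("after queue mark: taken? %d\n", take_queue_restart(&q));
	printf("second taker:     taken? %d\n", take_queue_restart(&q));
	return 0;
}

Built with any C11 compiler, this prints 0, 1, 0: the hctx-only mark never
touches the queue-wide flag, and of two takers of the queue flag only the
first wins, mirroring the test_and_clear_bit() path in the patch.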