From patchwork Sat Feb 18 01:05:07 2017
X-Patchwork-Submitter: Omar Sandoval
X-Patchwork-Id: 9580923
From: Omar Sandoval
To: Jens Axboe, linux-block@vger.kernel.org
Cc: kernel-team@fb.com
Subject: [PATCH 2/2] blk-mq-sched: separate mark hctx and queue restart operations
Date: Fri, 17 Feb 2017 17:05:07 -0800
Message-Id: <2729e47a81dc34eb838d7d854bb0e3f8a82ec7f0.1487379857.git.osandov@fb.com>
X-Mailer: git-send-email 2.11.1
X-Mailing-List: linux-block@vger.kernel.org

From: Omar Sandoval

In blk_mq_sched_dispatch_requests(), we call blk_mq_sched_mark_restart()
after we dispatch requests left over on our hardware queue dispatch
list.
This is so we'll go back and dispatch requests from the scheduler. In
this case, it's only necessary to restart the hardware queue that we are
running; there's no reason to run other hardware queues just because we
are using shared tags.

So, split out blk_mq_sched_mark_restart() into two operations, one for
just the hardware queue and one for the whole request queue. The core
code only needs the hctx variant, but I/O schedulers will want to use
both. This also requires adjusting blk_mq_sched_restart_queues() to
always check the queue restart flag, not just when using shared tags.

Signed-off-by: Omar Sandoval
---
 block/blk-mq-sched.c | 20 ++++++++------------
 block/blk-mq-sched.h | 26 ++++++++++++++++++--------
 2 files changed, 26 insertions(+), 20 deletions(-)

diff --git a/block/blk-mq-sched.c b/block/blk-mq-sched.c
index 97fe904f0a04..aa27ecab0d3f 100644
--- a/block/blk-mq-sched.c
+++ b/block/blk-mq-sched.c
@@ -203,7 +203,7 @@ void blk_mq_sched_dispatch_requests(struct blk_mq_hw_ctx *hctx)
 	 * needing a restart in that case.
 	 */
 	if (!list_empty(&rq_list)) {
-		blk_mq_sched_mark_restart(hctx);
+		blk_mq_sched_mark_restart_hctx(hctx);
 		blk_mq_dispatch_rq_list(hctx, &rq_list);
 	} else if (!e || !e->type->ops.mq.dispatch_request) {
 		blk_mq_flush_busy_ctxs(hctx, &rq_list);
@@ -322,20 +322,16 @@ static void blk_mq_sched_restart_hctx(struct blk_mq_hw_ctx *hctx)
 
 void blk_mq_sched_restart_queues(struct blk_mq_hw_ctx *hctx)
 {
+	struct request_queue *q = hctx->queue;
 	unsigned int i;
 
-	if (!(hctx->flags & BLK_MQ_F_TAG_SHARED))
+	if (test_bit(QUEUE_FLAG_RESTART, &q->queue_flags)) {
+		if (test_and_clear_bit(QUEUE_FLAG_RESTART, &q->queue_flags)) {
+			queue_for_each_hw_ctx(q, hctx, i)
+				blk_mq_sched_restart_hctx(hctx);
+		}
+	} else {
 		blk_mq_sched_restart_hctx(hctx);
-	else {
-		struct request_queue *q = hctx->queue;
-
-		if (!test_bit(QUEUE_FLAG_RESTART, &q->queue_flags))
-			return;
-
-		clear_bit(QUEUE_FLAG_RESTART, &q->queue_flags);
-
-		queue_for_each_hw_ctx(q, hctx, i)
-			blk_mq_sched_restart_hctx(hctx);
 	}
 }

diff --git a/block/blk-mq-sched.h b/block/blk-mq-sched.h
index 7b5f3b95c78e..a75b16b123f7 100644
--- a/block/blk-mq-sched.h
+++ b/block/blk-mq-sched.h
@@ -122,17 +122,27 @@ static inline bool blk_mq_sched_has_work(struct blk_mq_hw_ctx *hctx)
 	return false;
 }
 
-static inline void blk_mq_sched_mark_restart(struct blk_mq_hw_ctx *hctx)
+/*
+ * Mark a hardware queue as needing a restart.
+ */
+static inline void blk_mq_sched_mark_restart_hctx(struct blk_mq_hw_ctx *hctx)
 {
-	if (!test_bit(BLK_MQ_S_SCHED_RESTART, &hctx->state)) {
+	if (!test_bit(BLK_MQ_S_SCHED_RESTART, &hctx->state))
 		set_bit(BLK_MQ_S_SCHED_RESTART, &hctx->state);
-		if (hctx->flags & BLK_MQ_F_TAG_SHARED) {
-			struct request_queue *q = hctx->queue;
+}
+
+/*
+ * Mark a hardware queue and the request queue it belongs to as needing a
+ * restart.
+ */
+static inline void blk_mq_sched_mark_restart_queue(struct blk_mq_hw_ctx *hctx)
+{
+	struct request_queue *q = hctx->queue;
 
-		if (!test_bit(QUEUE_FLAG_RESTART, &q->queue_flags))
-			set_bit(QUEUE_FLAG_RESTART, &q->queue_flags);
-		}
-	}
+	if (!test_bit(BLK_MQ_S_SCHED_RESTART, &hctx->state))
+		set_bit(BLK_MQ_S_SCHED_RESTART, &hctx->state);
+	if (!test_bit(QUEUE_FLAG_RESTART, &q->queue_flags))
+		set_bit(QUEUE_FLAG_RESTART, &q->queue_flags);
 }
 
 static inline bool blk_mq_sched_needs_restart(struct blk_mq_hw_ctx *hctx)