From patchwork Sat Aug 4 00:03:24 2018
X-Patchwork-Submitter: Bart Van Assche
X-Patchwork-Id: 10555573
From: Bart Van Assche
To: Jens Axboe
Cc: linux-block@vger.kernel.org, Christoph Hellwig, Bart Van Assche,
    Jianchao Wang, Ming Lei, Alan Stern, Johannes Thumshirn
Subject: [PATCH v4 09/10] blk-mq: Insert blk_pm_{add,put}_request() calls
Date: Fri, 3 Aug 2018 17:03:24 -0700
Message-Id: <20180804000325.3610-10-bart.vanassche@wdc.com>
In-Reply-To: <20180804000325.3610-1-bart.vanassche@wdc.com>
References: <20180804000325.3610-1-bart.vanassche@wdc.com>
X-Mailing-List: linux-block@vger.kernel.org

Make sure that blk_pm_add_request() is called exactly once before a
request is added to a software queue, before it is passed to the
scheduler, and before .queue_rq() is called directly. Call
blk_pm_put_request() after a request has finished.
Signed-off-by: Bart Van Assche
Cc: Christoph Hellwig
Cc: Jianchao Wang
Cc: Ming Lei
Cc: Alan Stern
Cc: Johannes Thumshirn
---
 block/blk-mq-sched.c | 13 +++++++++++--
 block/blk-mq.c       |  8 ++++++++
 2 files changed, 19 insertions(+), 2 deletions(-)

diff --git a/block/blk-mq-sched.c b/block/blk-mq-sched.c
index cf9c66c6d35a..d87839b31d56 100644
--- a/block/blk-mq-sched.c
+++ b/block/blk-mq-sched.c
@@ -14,6 +14,7 @@
 #include "blk-mq-debugfs.h"
 #include "blk-mq-sched.h"
 #include "blk-mq-tag.h"
+#include "blk-pm.h"
 #include "blk-wbt.h"
 
 void blk_mq_sched_free_hctx_data(struct request_queue *q,
@@ -349,6 +350,8 @@ static bool blk_mq_sched_bypass_insert(struct blk_mq_hw_ctx *hctx,
 {
 	/* dispatch flush rq directly */
 	if (rq->rq_flags & RQF_FLUSH_SEQ) {
+		blk_pm_add_request(rq->q, rq);
+
 		spin_lock(&hctx->lock);
 		list_add(&rq->queuelist, &hctx->dispatch);
 		spin_unlock(&hctx->lock);
@@ -380,6 +383,8 @@ void blk_mq_sched_insert_request(struct request *rq, bool at_head,
 	if (blk_mq_sched_bypass_insert(hctx, !!e, rq))
 		goto run;
 
+	blk_pm_add_request(q, rq);
+
 	if (e && e->type->ops.mq.insert_requests) {
 		LIST_HEAD(list);
 
@@ -402,10 +407,14 @@ void blk_mq_sched_insert_requests(struct request_queue *q,
 {
 	struct blk_mq_hw_ctx *hctx = blk_mq_map_queue(q, ctx->cpu);
 	struct elevator_queue *e = hctx->queue->elevator;
+	struct request *rq;
+
+	if (e && e->type->ops.mq.insert_requests) {
+		list_for_each_entry(rq, list, queuelist)
+			blk_pm_add_request(q, rq);
 
-	if (e && e->type->ops.mq.insert_requests)
 		e->type->ops.mq.insert_requests(hctx, list, false);
-	else {
+	} else {
 		/*
 		 * try to issue requests directly if the hw queue isn't
 		 * busy in case of 'none' scheduler, and this way may save
diff --git a/block/blk-mq.c b/block/blk-mq.c
index b1882a3a5216..74a575a32dda 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -36,6 +36,7 @@
 #include "blk-mq-tag.h"
 #include "blk-stat.h"
 #include "blk-mq-sched.h"
+#include "blk-pm.h"
 #include "blk-rq-qos.h"
 
 static bool blk_mq_poll(struct request_queue *q, blk_qc_t cookie);
@@ -474,6 +475,8 @@ static void __blk_mq_free_request(struct request *rq)
 	struct blk_mq_hw_ctx *hctx = blk_mq_map_queue(q, ctx->cpu);
 	const int sched_tag = rq->internal_tag;
 
+	blk_pm_put_request(rq);
+
 	if (rq->tag != -1)
 		blk_mq_put_tag(hctx, hctx->tags, ctx, rq->tag);
 	if (sched_tag != -1)
@@ -1563,6 +1566,8 @@ void blk_mq_request_bypass_insert(struct request *rq, bool run_queue)
 	struct blk_mq_ctx *ctx = rq->mq_ctx;
 	struct blk_mq_hw_ctx *hctx = blk_mq_map_queue(rq->q, ctx->cpu);
 
+	blk_pm_add_request(rq->q, rq);
+
 	spin_lock(&hctx->lock);
 	list_add_tail(&rq->queuelist, &hctx->dispatch);
 	spin_unlock(&hctx->lock);
@@ -1584,6 +1589,7 @@ void blk_mq_insert_requests(struct blk_mq_hw_ctx *hctx, struct blk_mq_ctx *ctx,
 	list_for_each_entry(rq, list, queuelist) {
 		BUG_ON(rq->mq_ctx != ctx);
 		trace_block_rq_insert(hctx->queue, rq);
+		blk_pm_add_request(rq->q, rq);
 	}
 
 	spin_lock(&ctx->lock);
@@ -1680,6 +1686,8 @@ static blk_status_t __blk_mq_issue_directly(struct blk_mq_hw_ctx *hctx,
 	blk_qc_t new_cookie;
 	blk_status_t ret;
 
+	blk_pm_add_request(q, rq);
+
 	new_cookie = request_to_qc_t(hctx, rq);
 
 	/*
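
A note for readers following along: blk_pm_add_request() and
blk_pm_put_request() come from the "blk-pm.h" header that this patch
includes and that is introduced earlier in the series; their bodies are
not shown here. As a rough sketch only -- modeled on the legacy
(single-queue) runtime-PM helpers that live in block/blk-core.c, not on
the actual helpers from this series, and assuming the request_queue
fields dev, rpm_status and nr_pending -- they behave along these lines:

#include <linux/blkdev.h>
#include <linux/pm_runtime.h>

static inline void blk_pm_add_request(struct request_queue *q,
				      struct request *rq)
{
	/*
	 * Skip queues without a runtime-PM device and skip PM requests
	 * themselves. When the first pending request shows up while the
	 * device is suspended or suspending, ask the PM core to resume it.
	 */
	if (q->dev && !(rq->rq_flags & RQF_PM) && q->nr_pending++ == 0 &&
	    (q->rpm_status == RPM_SUSPENDED || q->rpm_status == RPM_SUSPENDING))
		pm_runtime_request_resume(q->dev);
}

static inline void blk_pm_put_request(struct request *rq)
{
	/*
	 * When the last pending request goes away, record the device as
	 * recently busy so the autosuspend timer restarts from now.
	 */
	if (rq->q->dev && !(rq->rq_flags & RQF_PM) && !--rq->q->nr_pending)
		pm_runtime_mark_last_busy(rq->q->dev);
}

With that picture in mind, the placement above keeps the accounting
balanced: each insertion path in the diff (blk_mq_sched_bypass_insert(),
blk_mq_sched_insert_request(), blk_mq_sched_insert_requests(),
blk_mq_request_bypass_insert(), blk_mq_insert_requests() and
__blk_mq_issue_directly()) calls blk_pm_add_request() exactly once per
request, and __blk_mq_free_request() is the single teardown point that
pairs it with blk_pm_put_request().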