From patchwork Fri Sep  8 23:52:26 2017
X-Patchwork-Submitter: Bart Van Assche
X-Patchwork-Id: 9945143
From: Bart Van Assche
To: Jens Axboe
Cc: linux-block@vger.kernel.org, Christoph Hellwig, Bart Van Assche,
    Hannes Reinecke, Johannes Thumshirn, "Rafael J. Wysocki", Ming Lei
Subject: [PATCH 5/5] blk-mq: Implement power management support
Date: Fri, 8 Sep 2017 16:52:26 -0700
Message-Id: <20170908235226.26622-6-bart.vanassche@wdc.com>
X-Mailer: git-send-email 2.14.1
In-Reply-To: <20170908235226.26622-1-bart.vanassche@wdc.com>
References: <20170908235226.26622-1-bart.vanassche@wdc.com>
X-Mailing-List: linux-block@vger.kernel.org

Implement the following approach for blk-mq:
- Either make blk_get_request() wait or make it fail when a request
  queue is not in status RPM_ACTIVE.
- While suspending, suspended or resuming, only process power
  management requests (REQ_PM).

Reported-by: Oleksandr Natalenko
References: "I/O hangs after resuming from suspend-to-ram"
    (https://marc.info/?l=linux-block&m=150340235201348)
Signed-off-by: Bart Van Assche
Cc: Christoph Hellwig
Cc: Hannes Reinecke
Cc: Johannes Thumshirn
Cc: Rafael J. Wysocki
Cc: Ming Lei
---
 block/blk-core.c | 20 ++++++++++++++++----
 block/blk-mq.c   | 34 ++++++++++++++++++++++++++++++++++
 2 files changed, 50 insertions(+), 4 deletions(-)

diff --git a/block/blk-core.c b/block/blk-core.c
index cd2700c763ed..49a4cd5b255e 100644
--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -3438,10 +3438,6 @@ EXPORT_SYMBOL(blk_finish_plug);
  */
 void blk_pm_runtime_init(struct request_queue *q, struct device *dev)
 {
-	/* not support for RQF_PM and ->rpm_status in blk-mq yet */
-	if (q->mq_ops)
-		return;
-
 	q->dev = dev;
 	q->rpm_status = RPM_ACTIVE;
 	init_waitqueue_head(&q->rpm_active_wq);
@@ -3478,6 +3474,19 @@ int blk_pre_runtime_suspend(struct request_queue *q)
 	if (!q->dev)
 		return ret;
 
+	if (q->mq_ops) {
+		percpu_ref_switch_to_atomic_nowait(&q->q_usage_counter);
+		if (!percpu_ref_is_zero(&q->q_usage_counter)) {
+			ret = -EBUSY;
+			pm_runtime_mark_last_busy(q->dev);
+		} else {
+			spin_lock_irq(q->queue_lock);
+			q->rpm_status = RPM_SUSPENDING;
+			spin_unlock_irq(q->queue_lock);
+		}
+		return ret;
+	}
+
 	spin_lock_irq(q->queue_lock);
 	if (q->nr_pending) {
 		ret = -EBUSY;
@@ -3561,6 +3570,9 @@ void blk_post_runtime_resume(struct request_queue *q, int err)
 	if (!q->dev)
 		return;
 
+	if (q->mq_ops)
+		percpu_ref_switch_to_percpu(&q->q_usage_counter);
+
 	spin_lock_irq(q->queue_lock);
 	if (!err) {
 		q->rpm_status = RPM_ACTIVE;
diff --git a/block/blk-mq.c b/block/blk-mq.c
index 3f18cff80050..cbd680dc194a 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -383,6 +383,29 @@ static struct request *blk_mq_get_request(struct request_queue *q,
 	return rq;
 }
 
+#ifdef CONFIG_PM
+static bool blk_mq_wait_until_active(struct request_queue *q, bool wait)
+{
+	if (!wait)
+		return false;
+	/*
+	 * Note: the q->rpm_status check below races against the changes of
+	 * that variable by the blk_{pre,post}_runtime_{suspend,resume}()
+	 * functions. The worst possible consequence of these races is that a
+	 * small number of requests gets passed to the block driver associated
+	 * with the request queue after rpm_status has been changed into
+	 * RPM_SUSPENDING and before it is changed into RPM_SUSPENDED.
+	 */
+	wait_event(q->rpm_active_wq, q->rpm_status == RPM_ACTIVE);
+	return true;
+}
+#else
+static bool blk_mq_wait_until_active(struct request_queue *q, bool nowait)
+{
+	return true;
+}
+#endif
+
 struct request *blk_mq_alloc_request(struct request_queue *q, unsigned int op,
		unsigned int flags)
 {
@@ -390,6 +413,17 @@ struct request *blk_mq_alloc_request(struct request_queue *q, unsigned int op,
 	struct request *rq;
 	int ret;
 
+	WARN_ON_ONCE((op & REQ_PM) && blk_pm_suspended(q));
+
+	/*
+	 * Wait if the request queue is suspended or in the process of
+	 * suspending/resuming and the request being allocated will not be
+	 * used for power management purposes.
+	 */
+	if (!(op & REQ_PM) &&
+	    !blk_mq_wait_until_active(q, !(op & REQ_NOWAIT)))
+		return ERR_PTR(-EAGAIN);
+
 	ret = blk_queue_enter(q, flags & BLK_MQ_REQ_NOWAIT);
 	if (ret)
 		return ERR_PTR(ret);