From patchwork Sat Aug 4 00:03:20 2018
X-Patchwork-Submitter: Bart Van Assche
X-Patchwork-Id: 10555563
From: Bart Van Assche
To: Jens Axboe
Cc: linux-block@vger.kernel.org, Christoph Hellwig, Bart Van Assche,
 Jianchao Wang, Ming Lei, Johannes Thumshirn, Alan Stern
Subject: [PATCH v4 05/10] block: Serialize queue freezing and blk_pre_runtime_suspend()
Date: Fri, 3 Aug 2018 17:03:20 -0700
Message-Id: <20180804000325.3610-6-bart.vanassche@wdc.com>
In-Reply-To: <20180804000325.3610-1-bart.vanassche@wdc.com>
References: <20180804000325.3610-1-bart.vanassche@wdc.com>

Serialize these operations because a later patch will add code into
blk_pre_runtime_suspend() that should not run concurrently with queue
freezing or unfreezing.
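
For illustration, the pairing that this patch establishes looks as
follows (a sketch of the intended call sequence, not code taken from
the diff below):

	blk_freeze_queue_start(q);	/* acquires q's PM lock */
	...				/* blk_pre_runtime_suspend()
					   cannot run in this window */
	blk_mq_unfreeze_queue(q);	/* releases q's PM lock */

	blk_pre_runtime_suspend(q);	/* takes and drops the PM lock
					   around the nr_pending check */

Because the lock is recursive for its owner, the task that froze a
queue can reenter these functions without deadlocking against itself.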
Signed-off-by: Bart Van Assche
Cc: Christoph Hellwig
Cc: Jianchao Wang
Cc: Ming Lei
Cc: Johannes Thumshirn
Cc: Alan Stern
---
 block/blk-core.c       |  5 +++++
 block/blk-mq.c         |  3 +++
 block/blk-pm.c         | 44 ++++++++++++++++++++++++++++++++++++++++++
 include/linux/blk-pm.h |  6 ++++++
 include/linux/blkdev.h |  5 +++++
 5 files changed, 63 insertions(+)

diff --git a/block/blk-core.c b/block/blk-core.c
index 03cff7445dee..59382c758155 100644
--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -17,6 +17,7 @@
 #include
 #include
 #include
+#include <linux/blk-pm.h>
 #include
 #include
 #include
@@ -696,6 +697,7 @@ void blk_set_queue_dying(struct request_queue *q)
	 * prevent I/O from crossing blk_queue_enter().
	 */
	blk_freeze_queue_start(q);
+	blk_pm_runtime_unlock(q);
 
	if (q->mq_ops)
		blk_mq_wake_waiters(q);
@@ -756,6 +758,7 @@ void blk_cleanup_queue(struct request_queue *q)
	 * prevent that q->request_fn() gets invoked after draining finished.
	 */
	blk_freeze_queue(q);
+	blk_pm_runtime_unlock(q);
	spin_lock_irq(lock);
	queue_flag_set(QUEUE_FLAG_DEAD, q);
	spin_unlock_irq(lock);
@@ -1045,6 +1048,8 @@ struct request_queue *blk_alloc_queue_node(gfp_t gfp_mask, int node_id,
 #ifdef CONFIG_BLK_DEV_IO_TRACE
	mutex_init(&q->blk_trace_mutex);
 #endif
+	blk_pm_init(q);
+
	mutex_init(&q->sysfs_lock);
	spin_lock_init(&q->__queue_lock);
 
diff --git a/block/blk-mq.c b/block/blk-mq.c
index 8b23ae34d949..b1882a3a5216 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -9,6 +9,7 @@
 #include
 #include
 #include
+#include <linux/blk-pm.h>
 #include
 #include
 #include
@@ -138,6 +139,7 @@ void blk_freeze_queue_start(struct request_queue *q)
 {
	int freeze_depth;
 
+	blk_pm_runtime_lock(q);
	freeze_depth = atomic_inc_return(&q->mq_freeze_depth);
	if (freeze_depth == 1) {
		percpu_ref_kill(&q->q_usage_counter);
@@ -201,6 +203,7 @@ void blk_mq_unfreeze_queue(struct request_queue *q)
		percpu_ref_reinit(&q->q_usage_counter);
		wake_up_all(&q->mq_freeze_wq);
	}
+	blk_pm_runtime_unlock(q);
 }
 EXPORT_SYMBOL_GPL(blk_mq_unfreeze_queue);
 
diff --git a/block/blk-pm.c b/block/blk-pm.c
index 9b636960d285..2a4632d0be4b 100644
--- a/block/blk-pm.c
+++ b/block/blk-pm.c
@@ -3,6 +3,45 @@
 #include
 #include
 #include
+#include
+#include
+
+/*
+ * Initialize the request queue members used by blk_pm_runtime_lock() and
+ * blk_pm_runtime_unlock().
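+ *
+ * The lock implemented by blk_pm_runtime_lock() and
+ * blk_pm_runtime_unlock() is recursive for its owner: the task that
+ * holds it may take it again, with rpm_nesting_level tracking the
+ * nesting depth, so a task that froze a queue can also call
+ * blk_pre_runtime_suspend() without deadlocking against itself.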
+ */
+void blk_pm_init(struct request_queue *q)
+{
+	spin_lock_init(&q->rpm_lock);
+	init_waitqueue_head(&q->rpm_wq);
+	q->rpm_owner = NULL;
+	q->rpm_nesting_level = 0;
+}
+
+void blk_pm_runtime_lock(struct request_queue *q)
+{
+	might_sleep();
+
+	spin_lock(&q->rpm_lock);
+	wait_event_exclusive_cmd(q->rpm_wq,
+		q->rpm_owner == NULL || q->rpm_owner == current,
+		spin_unlock(&q->rpm_lock), spin_lock(&q->rpm_lock));
+	if (q->rpm_owner == NULL)
+		q->rpm_owner = current;
+	q->rpm_nesting_level++;
+	spin_unlock(&q->rpm_lock);
+}
+
+void blk_pm_runtime_unlock(struct request_queue *q)
+{
+	spin_lock(&q->rpm_lock);
+	WARN_ON_ONCE(q->rpm_nesting_level <= 0);
+	if (--q->rpm_nesting_level == 0) {
+		q->rpm_owner = NULL;
+		wake_up(&q->rpm_wq);
+	}
+	spin_unlock(&q->rpm_lock);
+}
 
 /**
  * blk_pm_runtime_init - Block layer runtime PM initialization routine
@@ -68,6 +107,8 @@ int blk_pre_runtime_suspend(struct request_queue *q)
	if (!q->dev)
		return ret;
 
+	blk_pm_runtime_lock(q);
+
	spin_lock_irq(q->queue_lock);
	if (q->nr_pending) {
		ret = -EBUSY;
@@ -76,6 +117,9 @@ int blk_pre_runtime_suspend(struct request_queue *q)
		q->rpm_status = RPM_SUSPENDING;
	}
	spin_unlock_irq(q->queue_lock);
+
+	blk_pm_runtime_unlock(q);
+
	return ret;
 }
 EXPORT_SYMBOL(blk_pre_runtime_suspend);
diff --git a/include/linux/blk-pm.h b/include/linux/blk-pm.h
index b80c65aba249..aafcc7877e53 100644
--- a/include/linux/blk-pm.h
+++ b/include/linux/blk-pm.h
@@ -10,6 +10,9 @@ struct request_queue;
  * block layer runtime pm functions
  */
 #ifdef CONFIG_PM
+extern void blk_pm_init(struct request_queue *q);
+extern void blk_pm_runtime_lock(struct request_queue *q);
+extern void blk_pm_runtime_unlock(struct request_queue *q);
 extern void blk_pm_runtime_init(struct request_queue *q, struct device *dev);
 extern int blk_pre_runtime_suspend(struct request_queue *q);
 extern void blk_post_runtime_suspend(struct request_queue *q, int err);
@@ -17,6 +20,9 @@ extern void blk_pre_runtime_resume(struct request_queue *q);
 extern void blk_post_runtime_resume(struct request_queue *q, int err);
 extern void blk_set_runtime_active(struct request_queue *q);
 #else
+static inline void blk_pm_init(struct request_queue *q) {}
+static inline void blk_pm_runtime_lock(struct request_queue *q) {}
+static inline void blk_pm_runtime_unlock(struct request_queue *q) {}
 static inline void blk_pm_runtime_init(struct request_queue *q,
				       struct device *dev) {}
 #endif
diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
index 2ef38739d645..72d569218231 100644
--- a/include/linux/blkdev.h
+++ b/include/linux/blkdev.h
@@ -548,6 +548,11 @@ struct request_queue {
	struct device		*dev;
	int			rpm_status;
	unsigned int		nr_pending;
+	wait_queue_head_t	rpm_wq;
+	/* rpm_lock protects rpm_owner and rpm_nesting_level */
+	spinlock_t		rpm_lock;
+	struct task_struct	*rpm_owner;
+	int			rpm_nesting_level;
 #endif
 
	/*
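
For anyone who wants to experiment with this locking scheme outside the
kernel, below is a minimal userspace analogue of blk_pm_runtime_lock()
and blk_pm_runtime_unlock(). It is a sketch, not part of the patch:
pthread_cond_wait() stands in for wait_event_exclusive_cmd(),
pthread_self() for current, and pthread_cond_signal() for waking a
single exclusive waiter. The rpm_lock_* names are made up for the
example.

	#include <pthread.h>
	#include <stdbool.h>

	struct rpm_lock {
		pthread_mutex_t mutex;	/* protects the fields below */
		pthread_cond_t wq;	/* stands in for q->rpm_wq */
		pthread_t owner;	/* only meaningful while owned */
		bool owned;		/* stands in for rpm_owner != NULL */
		int nesting_level;	/* stands in for rpm_nesting_level */
	};

	static void rpm_lock_init(struct rpm_lock *l)
	{
		pthread_mutex_init(&l->mutex, NULL);
		pthread_cond_init(&l->wq, NULL);
		l->owned = false;
		l->nesting_level = 0;
	}

	static void rpm_lock_acquire(struct rpm_lock *l)
	{
		pthread_mutex_lock(&l->mutex);
		/* Wait until the lock is free or already held by this
		 * thread; the owner may thus nest acquisitions. */
		while (l->owned && !pthread_equal(l->owner, pthread_self()))
			pthread_cond_wait(&l->wq, &l->mutex);
		l->owner = pthread_self();
		l->owned = true;
		l->nesting_level++;
		pthread_mutex_unlock(&l->mutex);
	}

	static void rpm_lock_release(struct rpm_lock *l)
	{
		pthread_mutex_lock(&l->mutex);
		/* Releasing the outermost nesting level gives up
		 * ownership and wakes one waiter. */
		if (--l->nesting_level == 0) {
			l->owned = false;
			pthread_cond_signal(&l->wq);
		}
		pthread_mutex_unlock(&l->mutex);
	}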