From patchwork Sun Nov 8 19:28:17 2015
Subject: [PATCH v4 10/14] block: notify queue death confirmation
From: Dan Williams
To: axboe@fb.com
Cc: Jens Axboe, jack@suse.cz, linux-nvdimm@lists.01.org, david@fromorbit.com, linux-block@vger.kernel.org, hch@lst.de
Date: Sun, 08 Nov 2015 14:28:17 -0500
Message-ID: <20151108192817.9104.15236.stgit@dwillia2-desk3.amr.corp.intel.com>
In-Reply-To: <20151108192722.9104.86664.stgit@dwillia2-desk3.amr.corp.intel.com>
References: <20151108192722.9104.86664.stgit@dwillia2-desk3.amr.corp.intel.com>
User-Agent: StGit/0.17.1-9-g687f

The pmem driver arranges for references to be taken against the queue
while pages it allocated via devm_memremap_pages() are in use. At
shutdown time, before those pages can be deallocated, they need to be
unmapped and guaranteed to be idle. That unmap scan can only start once
we are certain no new page references will be taken. Once the block
queue's percpu_ref is confirmed dead, the dax core will stop handing
out new references and these "device" pages can be freed.
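For illustration only (not part of this patch): with the new helper in
place, a driver handing out pages from devm_memremap_pages() under
q_usage_counter references could order its shutdown roughly as in the
sketch below. foo_shutdown() and foo_unmap_dax_pages() are invented
names, and the exact pmem-side wiring lives elsewhere in this series;
the blk_* calls are the ones touched or declared here.

/* Sketch only: the foo_* names are hypothetical. */
#include <linux/blkdev.h>
#include <linux/blk-mq.h>

/* Placeholder for the driver-specific page teardown. */
static void foo_unmap_dax_pages(struct request_queue *q)
{
	/* unmap and wait for pages handed out via devm_memremap_pages() */
}

static void foo_shutdown(struct request_queue *q)
{
	/*
	 * Kill q->q_usage_counter.  With this patch the kill is confirmed
	 * via blk_confirm_queue_death(), which sets q->q_usage_dead and
	 * wakes q->q_freeze_wq.
	 */
	blk_mq_freeze_queue_start(q);

	/* Sleep until the percpu_ref kill has been confirmed. */
	blk_wait_queue_dead(q);

	/*
	 * No new page references can be taken from this point on, so the
	 * unmap scan described above can run before the pages are
	 * released by devm_memremap_pages() teardown.
	 */
	foo_unmap_dax_pages(q);

	/* Finish the drain and tear down the queue as usual. */
	blk_cleanup_queue(q);
}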
Cc: Jens Axboe
Cc: Christoph Hellwig
Cc: Ross Zwisler
Signed-off-by: Dan Williams
---
 block/blk-core.c       |   12 +++++++++---
 block/blk-mq.c         |   19 +++++++++++++++----
 include/linux/blkdev.h |    4 +++-
 3 files changed, 27 insertions(+), 8 deletions(-)

diff --git a/block/blk-core.c b/block/blk-core.c
index 6ebe33ed5154..5159946a2b41 100644
--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -516,6 +516,12 @@ void blk_set_queue_dying(struct request_queue *q)
 }
 EXPORT_SYMBOL_GPL(blk_set_queue_dying);
 
+void blk_wait_queue_dead(struct request_queue *q)
+{
+	wait_event(q->q_freeze_wq, q->q_usage_dead);
+}
+EXPORT_SYMBOL(blk_wait_queue_dead);
+
 /**
  * blk_cleanup_queue - shutdown a request queue
  * @q: request queue to shutdown
@@ -641,7 +647,7 @@ int blk_queue_enter(struct request_queue *q, gfp_t gfp)
 		if (!(gfp & __GFP_WAIT))
 			return -EBUSY;
 
-		ret = wait_event_interruptible(q->mq_freeze_wq,
+		ret = wait_event_interruptible(q->q_freeze_wq,
 				!atomic_read(&q->mq_freeze_depth) ||
 				blk_queue_dying(q));
 		if (blk_queue_dying(q))
@@ -661,7 +667,7 @@ static void blk_queue_usage_counter_release(struct percpu_ref *ref)
 	struct request_queue *q =
 		container_of(ref, struct request_queue, q_usage_counter);
 
-	wake_up_all(&q->mq_freeze_wq);
+	wake_up_all(&q->q_freeze_wq);
 }
 
 struct request_queue *blk_alloc_queue_node(gfp_t gfp_mask, int node_id)
@@ -723,7 +729,7 @@ struct request_queue *blk_alloc_queue_node(gfp_t gfp_mask, int node_id)
 	q->bypass_depth = 1;
 	__set_bit(QUEUE_FLAG_BYPASS, &q->queue_flags);
 
-	init_waitqueue_head(&q->mq_freeze_wq);
+	init_waitqueue_head(&q->q_freeze_wq);
 
 	/*
 	 * Init percpu_ref in atomic mode so that it's faster to shutdown.
diff --git a/block/blk-mq.c b/block/blk-mq.c
index 6c240712553a..e0417febbcd4 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -78,13 +78,23 @@ static void blk_mq_hctx_clear_pending(struct blk_mq_hw_ctx *hctx,
 	clear_bit(CTX_TO_BIT(hctx, ctx), &bm->word);
 }
 
+static void blk_confirm_queue_death(struct percpu_ref *ref)
+{
+	struct request_queue *q = container_of(ref, typeof(*q),
+			q_usage_counter);
+
+	q->q_usage_dead = 1;
+	wake_up_all(&q->q_freeze_wq);
+}
+
 void blk_mq_freeze_queue_start(struct request_queue *q)
 {
 	int freeze_depth;
 
 	freeze_depth = atomic_inc_return(&q->mq_freeze_depth);
 	if (freeze_depth == 1) {
-		percpu_ref_kill(&q->q_usage_counter);
+		percpu_ref_kill_and_confirm(&q->q_usage_counter,
+				blk_confirm_queue_death);
 		blk_mq_run_hw_queues(q, false);
 	}
 }
@@ -92,7 +102,7 @@ EXPORT_SYMBOL_GPL(blk_mq_freeze_queue_start);
 
 static void blk_mq_freeze_queue_wait(struct request_queue *q)
 {
-	wait_event(q->mq_freeze_wq, percpu_ref_is_zero(&q->q_usage_counter));
+	wait_event(q->q_freeze_wq, percpu_ref_is_zero(&q->q_usage_counter));
 }
 
 /*
@@ -130,7 +140,8 @@ void blk_mq_unfreeze_queue(struct request_queue *q)
 	WARN_ON_ONCE(freeze_depth < 0);
 	if (!freeze_depth) {
 		percpu_ref_reinit(&q->q_usage_counter);
-		wake_up_all(&q->mq_freeze_wq);
+		q->q_usage_dead = 0;
+		wake_up_all(&q->q_freeze_wq);
 	}
 }
 EXPORT_SYMBOL_GPL(blk_mq_unfreeze_queue);
@@ -149,7 +160,7 @@ void blk_mq_wake_waiters(struct request_queue *q)
 	 * dying, we need to ensure that processes currently waiting on
 	 * the queue are notified as well.
 	 */
-	wake_up_all(&q->mq_freeze_wq);
+	wake_up_all(&q->q_freeze_wq);
 }
 
 bool blk_mq_can_queue(struct blk_mq_hw_ctx *hctx)
diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
index b78e01542e9e..e121e5e0c6ac 100644
--- a/include/linux/blkdev.h
+++ b/include/linux/blkdev.h
@@ -431,6 +431,7 @@ struct request_queue {
 	 */
 	unsigned int		flush_flags;
 	unsigned int		flush_not_queueable:1;
+	unsigned int		q_usage_dead:1;
 	struct blk_flush_queue	*fq;
 
 	struct list_head	requeue_list;
@@ -453,7 +454,7 @@ struct request_queue {
 	struct throtl_data *td;
 #endif
 	struct rcu_head		rcu_head;
-	wait_queue_head_t	mq_freeze_wq;
+	wait_queue_head_t	q_freeze_wq;
 	struct percpu_ref	q_usage_counter;
 	struct list_head	all_q_node;
 
@@ -953,6 +954,7 @@ extern struct request_queue *blk_init_queue_node(request_fn_proc *rfn,
 extern struct request_queue *blk_init_queue(request_fn_proc *, spinlock_t *);
 extern struct request_queue *blk_init_allocated_queue(struct request_queue *,
 						      request_fn_proc *, spinlock_t *);
+extern void blk_wait_queue_dead(struct request_queue *q);
 extern void blk_cleanup_queue(struct request_queue *);
 extern void blk_queue_make_request(struct request_queue *, make_request_fn *);
 extern void blk_queue_bounce_limit(struct request_queue *, u64);