From patchwork Mon May 21 18:11:27 2018
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Bart Van Assche
X-Patchwork-Id: 10416167
From: Bart Van Assche
To: Jens Axboe
Cc: linux-block@vger.kernel.org, Christoph Hellwig, Bart Van Assche, Tejun Heo
Subject: [PATCH] block: Verify whether blk_queue_enter() is used when necessary
Date: Mon, 21 May 2018 11:11:27 -0700
Message-Id: <20180521181127.11919-1-bart.vanassche@wdc.com>
X-Mailer: git-send-email 2.16.3
X-Mailing-List: linux-block@vger.kernel.org

It is required to protect blkg_lookup() calls with a blk_queue_enter() /
blk_queue_exit() pair. Since it is nontrivial to verify whether this is the
case, verify it at runtime. Only perform this verification if
CONFIG_PROVE_LOCKING=y, to avoid adding unnecessary runtime overhead.

Note: using lockdep to verify whether blkg_lookup() is protected correctly
is not possible because lock_acquire() and lock_release() must be called
from the same task, whereas blk_queue_enter() and blk_queue_exit() can be
called from different tasks.
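For illustration, the calling convention that the new blk_entered_queue()
check expects looks roughly as follows. This is only a sketch;
example_blkg_access() is a hypothetical caller and is not part of this patch:

    static void example_blkg_access(struct blkcg *blkcg,
                                    struct request_queue *q)
    {
            struct blkcg_gq *blkg;

            /* Hold a queue reference across the cgroup lookup. */
            if (blk_queue_enter(q, 0))
                    return;
            /* blkg_lookup() must also be called under an RCU read lock. */
            rcu_read_lock();
            blkg = blkg_lookup(blkcg, q);
            if (blkg) {
                    /* ... access cgroup information ... */
            }
            rcu_read_unlock();
            blk_queue_exit(q);
    }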
Suggested-by: Tejun Heo
Signed-off-by: Bart Van Assche
Cc: Tejun Heo
---
 block/blk-cgroup.c              |  2 ++
 block/blk-core.c                | 24 ++++++++++++++++++++++++
 include/linux/blk-cgroup.h      |  2 ++
 include/linux/blkdev.h          | 11 +++++++++++
 include/linux/percpu-refcount.h |  2 ++
 lib/percpu-refcount.c           | 25 +++++++++++++++++++++++++
 6 files changed, 66 insertions(+)

diff --git a/block/blk-cgroup.c b/block/blk-cgroup.c
index eb85cb87c40f..78822dcfa0da 100644
--- a/block/blk-cgroup.c
+++ b/block/blk-cgroup.c
@@ -145,6 +145,8 @@ struct blkcg_gq *blkg_lookup_slowpath(struct blkcg *blkcg,
 {
 	struct blkcg_gq *blkg;
 
+	WARN_ON_ONCE(!blk_entered_queue(q));
+
 	/*
 	 * Hint didn't match. Look up from the radix tree. Note that the
 	 * hint can only be updated under queue_lock as otherwise @blkg
diff --git a/block/blk-core.c b/block/blk-core.c
index 8b9e5dc882f4..b6fa6a9f7daa 100644
--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -687,6 +687,9 @@ EXPORT_SYMBOL_GPL(blk_queue_bypass_end);
 
 void blk_set_queue_dying(struct request_queue *q)
 {
+#ifdef CONFIG_PROVE_LOCKING
+	q->cleanup_queue_task = current;
+#endif
 	blk_queue_flag_set(QUEUE_FLAG_DYING, q);
 
 	/*
@@ -907,6 +910,25 @@ struct request_queue *blk_alloc_queue(gfp_t gfp_mask)
 }
 EXPORT_SYMBOL(blk_alloc_queue);
 
+#ifdef CONFIG_PROVE_LOCKING
+/**
+ * blk_entered_queue() - whether or not it is safe to access cgroup information
+ * @q: request queue pointer
+ *
+ * In order to avoid races between accessing cgroup information and the cgroup
+ * information removal from inside blk_cleanup_queue(), any code that accesses
+ * cgroup information must either be protected by blk_queue_enter() and/or
+ * blk_queue_enter_live() or must be called after the queue has been marked
+ * dying from the same task that called blk_cleanup_queue().
+ */
+bool blk_entered_queue(struct request_queue *q)
+{
+	return (blk_queue_dying(q) && current == q->cleanup_queue_task) ||
+		percpu_ref_read(&q->q_usage_counter) > 0;
+}
+EXPORT_SYMBOL(blk_entered_queue);
+#endif
+
 /**
  * blk_queue_enter() - try to increase q->q_usage_counter
  * @q: request queue pointer
@@ -2254,6 +2276,8 @@ generic_make_request_checks(struct bio *bio)
 		goto end_io;
 	}
 
+	WARN_ON_ONCE(!blk_entered_queue(q));
+
 	/*
 	 * For a REQ_NOWAIT based request, return -EOPNOTSUPP
 	 * if queue is not a request based queue.
diff --git a/include/linux/blk-cgroup.h b/include/linux/blk-cgroup.h
index 6c666fd7de3c..3b8512c259aa 100644
--- a/include/linux/blk-cgroup.h
+++ b/include/linux/blk-cgroup.h
@@ -266,6 +266,8 @@ static inline struct blkcg_gq *__blkg_lookup(struct blkcg *blkcg,
 {
 	struct blkcg_gq *blkg;
 
+	WARN_ON_ONCE(!blk_entered_queue(q));
+
 	if (blkcg == &blkcg_root)
 		return q->root_blkg;
 
diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
index 780e4ea80d4d..0ed23677c36f 100644
--- a/include/linux/blkdev.h
+++ b/include/linux/blkdev.h
@@ -649,6 +649,9 @@ struct request_queue {
 
 	int			bypass_depth;
 	atomic_t		mq_freeze_depth;
+#ifdef CONFIG_PROVE_LOCKING
+	struct task_struct	*cleanup_queue_task;
+#endif
 
 #if defined(CONFIG_BLK_DEV_BSG)
 	bsg_job_fn		*bsg_job_fn;
@@ -1000,6 +1003,14 @@ extern int sg_scsi_ioctl(struct request_queue *, struct gendisk *, fmode_t,
 
 extern int blk_queue_enter(struct request_queue *q, blk_mq_req_flags_t flags);
 extern void blk_queue_exit(struct request_queue *q);
+#ifdef CONFIG_PROVE_LOCKING
+extern bool blk_entered_queue(struct request_queue *q);
+#else
+static inline bool blk_entered_queue(struct request_queue *q)
+{
+	return true;
+}
+#endif
 extern void blk_start_queue(struct request_queue *q);
 extern void blk_start_queue_async(struct request_queue *q);
 extern void blk_stop_queue(struct request_queue *q);
diff --git a/include/linux/percpu-refcount.h b/include/linux/percpu-refcount.h
index 009cdf3d65b6..5707289ba828 100644
--- a/include/linux/percpu-refcount.h
+++ b/include/linux/percpu-refcount.h
@@ -331,4 +331,6 @@ static inline bool percpu_ref_is_zero(struct percpu_ref *ref)
 	return !atomic_long_read(&ref->count);
 }
 
+unsigned long percpu_ref_read(struct percpu_ref *ref);
+
 #endif
diff --git a/lib/percpu-refcount.c b/lib/percpu-refcount.c
index 9f96fa7bc000..094c6c0b446e 100644
--- a/lib/percpu-refcount.c
+++ b/lib/percpu-refcount.c
@@ -369,3 +369,28 @@ void percpu_ref_reinit(struct percpu_ref *ref)
 	spin_unlock_irqrestore(&percpu_ref_switch_lock, flags);
 }
 EXPORT_SYMBOL_GPL(percpu_ref_reinit);
+
+/**
+ * percpu_ref_read - read a percpu refcount
+ * @ref: percpu_ref to test
+ *
+ * This function is safe to call as long as @ref is between init and exit.
+ */
+unsigned long percpu_ref_read(struct percpu_ref *ref)
+{
+	unsigned long __percpu *percpu_count;
+	unsigned long sum = 0;
+	int cpu;
+
+	rcu_read_lock_sched();
+	if (__ref_is_percpu(ref, &percpu_count)) {
+		for_each_possible_cpu(cpu)
+			sum += *per_cpu_ptr(percpu_count, cpu);
+	}
+	rcu_read_unlock_sched();
+	sum += atomic_long_read(&ref->count);
+	sum &= ~PERCPU_COUNT_BIAS;
+
+	return sum;
+}
+EXPORT_SYMBOL_GPL(percpu_ref_read);
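
For reference, a minimal usage sketch of the new percpu_ref_read() helper
outside the block layer. The names example_ref, example_release() and
example_read_check() are made up for illustration only:

    #include <linux/percpu-refcount.h>

    static struct percpu_ref example_ref;

    /* Invoked once the last reference to example_ref has been dropped. */
    static void example_release(struct percpu_ref *ref)
    {
    }

    static int example_read_check(void)
    {
            int ret;

            ret = percpu_ref_init(&example_ref, example_release, 0,
                                  GFP_KERNEL);
            if (ret)
                    return ret;

            percpu_ref_get(&example_ref);
            /*
             * percpu_ref_read() returns the (approximate) number of
             * references currently held, so it should be > 0 here.
             */
            WARN_ON_ONCE(percpu_ref_read(&example_ref) == 0);
            percpu_ref_put(&example_ref);

            return 0;
    }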