From patchwork Wed Feb 28 19:28:23 2018
X-Patchwork-Submitter: Bart Van Assche
X-Patchwork-Id: 10249461
From: Bart Van Assche
To: Jens Axboe
Cc: linux-block@vger.kernel.org, Christoph Hellwig, Bart Van Assche,
 Hannes Reinecke, Johannes Thumshirn, Ming Lei
Subject: [PATCH 11/11] block: Move the queue_flag_*() functions from a public
 into a private header file
Date: Wed, 28 Feb 2018 11:28:23 -0800
Message-Id: <20180228192823.5191-12-bart.vanassche@wdc.com>
X-Mailer: git-send-email 2.16.2
In-Reply-To: <20180228192823.5191-1-bart.vanassche@wdc.com>
References: <20180228192823.5191-1-bart.vanassche@wdc.com>
X-Mailing-List: linux-block@vger.kernel.org

This patch helps prevent the introduction of new code in block drivers
that manipulates queue flags without holding the queue lock when that
lock should be held.
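As an illustration of the usage this change enforces (a minimal sketch,
not part of the patch: the function name and the flag choices are made
up for the example), a block driver now has to go through the locked
blk_queue_flag_*() wrappers that remain declared in <linux/blkdev.h>
instead of calling queue_flag_set() directly:

#include <linux/blkdev.h>

/* Hypothetical driver code; example_setup_queue() is not a real function. */
static void example_setup_queue(struct request_queue *q)
{
	/*
	 * queue_flag_set(QUEUE_FLAG_NONROT, q) no longer compiles outside
	 * block/; these exported wrappers are expected to take
	 * q->queue_lock internally.
	 */
	blk_queue_flag_set(QUEUE_FLAG_NONROT, q);
	blk_queue_flag_clear(QUEUE_FLAG_ADD_RANDOM, q);
}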
Signed-off-by: Bart Van Assche
Cc: Christoph Hellwig
Cc: Hannes Reinecke
Cc: Johannes Thumshirn
Cc: Ming Lei
Reviewed-by: Johannes Thumshirn
Reviewed-by: Martin K. Petersen
---
 block/blk.h            | 69 ++++++++++++++++++++++++++++++++++++++++++++++++++
 include/linux/blkdev.h | 69 --------------------------------------------------
 2 files changed, 69 insertions(+), 69 deletions(-)

diff --git a/block/blk.h b/block/blk.h
index 46db5dc83dcb..b034fd2460c4 100644
--- a/block/blk.h
+++ b/block/blk.h
@@ -41,6 +41,75 @@ extern struct kmem_cache *request_cachep;
 extern struct kobj_type blk_queue_ktype;
 extern struct ida blk_queue_ida;
 
+/*
+ * @q->queue_lock is set while a queue is being initialized. Since we know
+ * that no other threads access the queue object before @q->queue_lock has
+ * been set, it is safe to manipulate queue flags without holding the
+ * queue_lock if @q->queue_lock == NULL. See also blk_alloc_queue_node() and
+ * blk_init_allocated_queue().
+ */
+static inline void queue_lockdep_assert_held(struct request_queue *q)
+{
+	if (q->queue_lock)
+		lockdep_assert_held(q->queue_lock);
+}
+
+static inline void queue_flag_set_unlocked(unsigned int flag,
+					   struct request_queue *q)
+{
+	if (test_bit(QUEUE_FLAG_INIT_DONE, &q->queue_flags) &&
+	    kref_read(&q->kobj.kref))
+		lockdep_assert_held(q->queue_lock);
+	__set_bit(flag, &q->queue_flags);
+}
+
+static inline void queue_flag_clear_unlocked(unsigned int flag,
+					     struct request_queue *q)
+{
+	if (test_bit(QUEUE_FLAG_INIT_DONE, &q->queue_flags) &&
+	    kref_read(&q->kobj.kref))
+		lockdep_assert_held(q->queue_lock);
+	__clear_bit(flag, &q->queue_flags);
+}
+
+static inline int queue_flag_test_and_clear(unsigned int flag,
+					    struct request_queue *q)
+{
+	queue_lockdep_assert_held(q);
+
+	if (test_bit(flag, &q->queue_flags)) {
+		__clear_bit(flag, &q->queue_flags);
+		return 1;
+	}
+
+	return 0;
+}
+
+static inline int queue_flag_test_and_set(unsigned int flag,
+					  struct request_queue *q)
+{
+	queue_lockdep_assert_held(q);
+
+	if (!test_bit(flag, &q->queue_flags)) {
+		__set_bit(flag, &q->queue_flags);
+		return 0;
+	}
+
+	return 1;
+}
+
+static inline void queue_flag_set(unsigned int flag, struct request_queue *q)
+{
+	queue_lockdep_assert_held(q);
+	__set_bit(flag, &q->queue_flags);
+}
+
+static inline void queue_flag_clear(unsigned int flag, struct request_queue *q)
+{
+	queue_lockdep_assert_held(q);
+	__clear_bit(flag, &q->queue_flags);
+}
+
 static inline struct blk_flush_queue *blk_get_flush_queue(
 		struct request_queue *q, struct blk_mq_ctx *ctx)
 {
diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
index 1f3ec9a7fbc7..478c1845c179 100644
--- a/include/linux/blkdev.h
+++ b/include/linux/blkdev.h
@@ -710,75 +710,6 @@ void blk_queue_flag_clear(unsigned int flag, struct request_queue *q);
 bool blk_queue_flag_test_and_set(unsigned int flag, struct request_queue *q);
 bool blk_queue_flag_test_and_clear(unsigned int flag, struct request_queue *q);
 
-/*
- * @q->queue_lock is set while a queue is being initialized. Since we know
- * that no other threads access the queue object before @q->queue_lock has
- * been set, it is safe to manipulate queue flags without holding the
- * queue_lock if @q->queue_lock == NULL. See also blk_alloc_queue_node() and
- * blk_init_allocated_queue().
- */
-static inline void queue_lockdep_assert_held(struct request_queue *q)
-{
-	if (q->queue_lock)
-		lockdep_assert_held(q->queue_lock);
-}
-
-static inline void queue_flag_set_unlocked(unsigned int flag,
-					   struct request_queue *q)
-{
-	if (test_bit(QUEUE_FLAG_INIT_DONE, &q->queue_flags) &&
-	    kref_read(&q->kobj.kref))
-		lockdep_assert_held(q->queue_lock);
-	__set_bit(flag, &q->queue_flags);
-}
-
-static inline void queue_flag_clear_unlocked(unsigned int flag,
-					     struct request_queue *q)
-{
-	if (test_bit(QUEUE_FLAG_INIT_DONE, &q->queue_flags) &&
-	    kref_read(&q->kobj.kref))
-		lockdep_assert_held(q->queue_lock);
-	__clear_bit(flag, &q->queue_flags);
-}
-
-static inline int queue_flag_test_and_clear(unsigned int flag,
-					    struct request_queue *q)
-{
-	queue_lockdep_assert_held(q);
-
-	if (test_bit(flag, &q->queue_flags)) {
-		__clear_bit(flag, &q->queue_flags);
-		return 1;
-	}
-
-	return 0;
-}
-
-static inline int queue_flag_test_and_set(unsigned int flag,
-					  struct request_queue *q)
-{
-	queue_lockdep_assert_held(q);
-
-	if (!test_bit(flag, &q->queue_flags)) {
-		__set_bit(flag, &q->queue_flags);
-		return 0;
-	}
-
-	return 1;
-}
-
-static inline void queue_flag_set(unsigned int flag, struct request_queue *q)
-{
-	queue_lockdep_assert_held(q);
-	__set_bit(flag, &q->queue_flags);
-}
-
-static inline void queue_flag_clear(unsigned int flag, struct request_queue *q)
-{
-	queue_lockdep_assert_held(q);
-	__clear_bit(flag, &q->queue_flags);
-}
-
 #define blk_queue_tagged(q)	test_bit(QUEUE_FLAG_QUEUED, &(q)->queue_flags)
 #define blk_queue_stopped(q)	test_bit(QUEUE_FLAG_STOPPED, &(q)->queue_flags)
 #define blk_queue_dying(q)	test_bit(QUEUE_FLAG_DYING, &(q)->queue_flags)
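
Usage note (again a sketch under assumptions, not code from this series):
inside block/, callers keep the existing convention of taking
q->queue_lock around the now-private helpers. Because
queue_lockdep_assert_held() only asserts once q->queue_lock is non-NULL,
initialization code that runs before the lock pointer is assigned can
still manipulate flags lock-free. The function name below is
hypothetical:

#include <linux/blkdev.h>
#include "blk.h"	/* private header, reachable only from inside block/ */

/* Hypothetical block-core code; example_stop_queue() is made up. */
static void example_stop_queue(struct request_queue *q)
{
	spin_lock_irq(q->queue_lock);
	queue_flag_set(QUEUE_FLAG_STOPPED, q);	/* lockdep assert is satisfied */
	spin_unlock_irq(q->queue_lock);
}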