From patchwork Wed Feb 28 19:28:14 2018
X-Patchwork-Submitter: Bart Van Assche
X-Patchwork-Id: 10249445
From: Bart Van Assche
To: Jens Axboe
Cc: linux-block@vger.kernel.org, Christoph Hellwig, Bart Van Assche,
    Hannes Reinecke, Johannes Thumshirn, Ming Lei
Subject: [PATCH 02/11] block: Use the queue_flag_*() functions instead of open-coding these
Date: Wed, 28 Feb 2018 11:28:14 -0800
Message-Id: <20180228192823.5191-3-bart.vanassche@wdc.com>
In-Reply-To: <20180228192823.5191-1-bart.vanassche@wdc.com>
References: <20180228192823.5191-1-bart.vanassche@wdc.com>
List-ID: linux-block@vger.kernel.org

Except for changing the atomic queue flag manipulations that are
protected by the queue lock into non-atomic manipulations, this patch
does not change any functionality.
Signed-off-by: Bart Van Assche
Cc: Christoph Hellwig
Cc: Hannes Reinecke
Cc: Johannes Thumshirn
Cc: Ming Lei
Reviewed-by: Johannes Thumshirn
Reviewed-by: Martin K. Petersen
---
 block/blk-core.c     | 2 +-
 block/blk-mq.c       | 2 +-
 block/blk-settings.c | 4 ++--
 block/blk-stat.c     | 6 +++---
 4 files changed, 7 insertions(+), 7 deletions(-)

diff --git a/block/blk-core.c b/block/blk-core.c
index 9d95b2e7c289..1d6b4b0545aa 100644
--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -995,7 +995,7 @@ struct request_queue *blk_alloc_queue_node(gfp_t gfp_mask, int node_id,
 	 * registered by blk_register_queue().
 	 */
 	q->bypass_depth = 1;
-	__set_bit(QUEUE_FLAG_BYPASS, &q->queue_flags);
+	queue_flag_set_unlocked(QUEUE_FLAG_BYPASS, q);
 
 	init_waitqueue_head(&q->mq_freeze_wq);
 
diff --git a/block/blk-mq.c b/block/blk-mq.c
index c60408138be6..96baa6511c1e 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -2696,7 +2696,7 @@ struct request_queue *blk_mq_init_allocated_queue(struct blk_mq_tag_set *set,
 	q->queue_flags |= QUEUE_FLAG_MQ_DEFAULT;
 
 	if (!(set->flags & BLK_MQ_F_SG_MERGE))
-		q->queue_flags |= 1 << QUEUE_FLAG_NO_SG_MERGE;
+		queue_flag_set_unlocked(QUEUE_FLAG_NO_SG_MERGE, q);
 
 	q->sg_reserved_size = INT_MAX;
 
diff --git a/block/blk-settings.c b/block/blk-settings.c
index 48ebe6be07b7..7f719da0eadd 100644
--- a/block/blk-settings.c
+++ b/block/blk-settings.c
@@ -861,9 +861,9 @@ void blk_queue_flush_queueable(struct request_queue *q, bool queueable)
 {
 	spin_lock_irq(q->queue_lock);
 	if (queueable)
-		clear_bit(QUEUE_FLAG_FLUSH_NQ, &q->queue_flags);
+		queue_flag_clear(QUEUE_FLAG_FLUSH_NQ, q);
 	else
-		set_bit(QUEUE_FLAG_FLUSH_NQ, &q->queue_flags);
+		queue_flag_set(QUEUE_FLAG_FLUSH_NQ, q);
 	spin_unlock_irq(q->queue_lock);
 }
 EXPORT_SYMBOL_GPL(blk_queue_flush_queueable);
diff --git a/block/blk-stat.c b/block/blk-stat.c
index 28003bf9941c..b664aa6df725 100644
--- a/block/blk-stat.c
+++ b/block/blk-stat.c
@@ -152,7 +152,7 @@ void blk_stat_add_callback(struct request_queue *q,
 
 	spin_lock(&q->stats->lock);
 	list_add_tail_rcu(&cb->list, &q->stats->callbacks);
-	set_bit(QUEUE_FLAG_STATS, &q->queue_flags);
+	queue_flag_set(QUEUE_FLAG_STATS, q);
 	spin_unlock(&q->stats->lock);
 }
 EXPORT_SYMBOL_GPL(blk_stat_add_callback);
@@ -163,7 +163,7 @@ void blk_stat_remove_callback(struct request_queue *q,
 	spin_lock(&q->stats->lock);
 	list_del_rcu(&cb->list);
 	if (list_empty(&q->stats->callbacks) && !q->stats->enable_accounting)
-		clear_bit(QUEUE_FLAG_STATS, &q->queue_flags);
+		queue_flag_clear(QUEUE_FLAG_STATS, q);
 	spin_unlock(&q->stats->lock);
 
 	del_timer_sync(&cb->timer);
@@ -191,7 +191,7 @@ void blk_stat_enable_accounting(struct request_queue *q)
 {
 	spin_lock(&q->stats->lock);
 	q->stats->enable_accounting = true;
-	set_bit(QUEUE_FLAG_STATS, &q->queue_flags);
+	queue_flag_set(QUEUE_FLAG_STATS, q);
 	spin_unlock(&q->stats->lock);
 }
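
[Editor's note] For readers without include/linux/blkdev.h at hand, the helpers used above have roughly the following shape. This is a minimal sketch consistent with the commit message, not the verbatim kernel header: queue_flag_set() and queue_flag_clear() assume the caller already serializes queue flag updates (normally by holding q->queue_lock, which the real helpers check via lockdep) and can therefore use the non-atomic __set_bit()/__clear_bit(), whereas the open-coded set_bit()/clear_bit() calls being replaced were atomic. queue_flag_set_unlocked() covers queues that are not yet visible to other contexts, e.g. during allocation in blk_alloc_queue_node().

/*
 * Sketch of the queue_flag_*() helpers, modeled on include/linux/blkdev.h
 * of this era; simplified, not the verbatim kernel source.
 */
static inline void queue_flag_set_unlocked(unsigned int flag,
					   struct request_queue *q)
{
	/* Queue not yet published to other contexts: no locking needed. */
	__set_bit(flag, &q->queue_flags);
}

static inline void queue_flag_set(unsigned int flag, struct request_queue *q)
{
	/* Caller serializes flag updates; the tree asserts q->queue_lock. */
	lockdep_assert_held(q->queue_lock);
	__set_bit(flag, &q->queue_flags);
}

static inline void queue_flag_clear(unsigned int flag, struct request_queue *q)
{
	lockdep_assert_held(q->queue_lock);
	__clear_bit(flag, &q->queue_flags);
}

Funnelling every queue flag update through these helpers gives a single audited place to attach such locking assertions, which is the point of converting the remaining open-coded set_bit()/clear_bit() sites in this series.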