| Message ID | 20180228192823.5191-5-bart.vanassche@wdc.com (mailing list archive) |
|---|---|
| State | New, archived |
On Wed, Feb 28, 2018 at 11:28:16AM -0800, Bart Van Assche wrote:

>  static bool blk_poll_stats_enable(struct request_queue *q)
>  {
> -	if (test_bit(QUEUE_FLAG_POLL_STATS, &q->queue_flags) ||
> -	    test_and_set_bit(QUEUE_FLAG_POLL_STATS, &q->queue_flags))
> +	if (blk_queue_flag_test_and_set(QUEUE_FLAG_POLL_STATS, q))

Is this one really needed, or is it just for symmetry? Even if something
changed the queue_flags after the first test_bit() call, the
test_and_set_bit() would still do the right thing, wouldn't it?
On Thu, 2018-03-01 at 09:51 +0100, Johannes Thumshirn wrote:

> On Wed, Feb 28, 2018 at 11:28:16AM -0800, Bart Van Assche wrote:
> >  static bool blk_poll_stats_enable(struct request_queue *q)
> >  {
> > -	if (test_bit(QUEUE_FLAG_POLL_STATS, &q->queue_flags) ||
> > -	    test_and_set_bit(QUEUE_FLAG_POLL_STATS, &q->queue_flags))
> > +	if (blk_queue_flag_test_and_set(QUEUE_FLAG_POLL_STATS, q))
>
> Is this one really needed, or is it just for symmetry? Even if something
> changed the queue_flags after the first test_bit() call, the
> test_and_set_bit() would still do the right thing, wouldn't it?

Hello Johannes,

Since blk_poll_stats_enable() is called from the hot path (polling code), I
think we need the optimization of calling test_bit() before calling
test_and_set_bit(). I will restore the test_bit() call.

Bart.
On Thu, Mar 01, 2018 at 03:19:08PM +0000, Bart Van Assche wrote:

> Hello Johannes,
>
> Since blk_poll_stats_enable() is called from the hot path (polling code), I
> think we need the optimization of calling test_bit() before calling
> test_and_set_bit(). I will restore the test_bit() call.

Thanks,
Johannes
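[Editor's note: the optimization discussed above is the common double-checked bit pattern: test_bit() is a plain load, while test_and_set_bit() is a locked read-modify-write that dirties the cache line even when the flag is already set. A hypothetical sketch of what the restored fast path might look like in a later revision, combining the open-coded test_bit() with the new helper from this series (not the final code, which is not shown in this thread):]

```c
/* Enable polling stats and return whether they were already enabled. */
static bool blk_poll_stats_enable(struct request_queue *q)
{
	/*
	 * Fast path: a plain test_bit() read suffices once the flag has
	 * been set, avoiding an atomic read-modify-write on every poll.
	 */
	if (test_bit(QUEUE_FLAG_POLL_STATS, &q->queue_flags) ||
	    blk_queue_flag_test_and_set(QUEUE_FLAG_POLL_STATS, q))
		return true;
	blk_stat_add_callback(q, q->poll_cb);
	return false;
}
```

[The unlocked read is safe here because a stale "not set" result merely falls through to the atomic helper, which decides the race authoritatively.]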
Bart,

> Since the queue flags may be changed concurrently from multiple
> contexts after a queue becomes visible in sysfs, make these changes
> safe by protecting them with the queue lock.

Reviewed-by: Martin K. Petersen <martin.petersen@oracle.com>
```diff
diff --git a/block/blk-mq.c b/block/blk-mq.c
index 67026494119b..899f357962e8 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -3022,8 +3022,7 @@ EXPORT_SYMBOL_GPL(blk_mq_update_nr_hw_queues);
 /* Enable polling stats and return whether they were already enabled. */
 static bool blk_poll_stats_enable(struct request_queue *q)
 {
-	if (test_bit(QUEUE_FLAG_POLL_STATS, &q->queue_flags) ||
-	    test_and_set_bit(QUEUE_FLAG_POLL_STATS, &q->queue_flags))
+	if (blk_queue_flag_test_and_set(QUEUE_FLAG_POLL_STATS, q))
 		return true;
 	blk_stat_add_callback(q, q->poll_cb);
 	return false;
diff --git a/block/blk-stat.c b/block/blk-stat.c
index b664aa6df725..bd365a95fcf8 100644
--- a/block/blk-stat.c
+++ b/block/blk-stat.c
@@ -152,7 +152,7 @@ void blk_stat_add_callback(struct request_queue *q,
 
 	spin_lock(&q->stats->lock);
 	list_add_tail_rcu(&cb->list, &q->stats->callbacks);
-	queue_flag_set(QUEUE_FLAG_STATS, q);
+	blk_queue_flag_set(QUEUE_FLAG_STATS, q);
 	spin_unlock(&q->stats->lock);
 }
 EXPORT_SYMBOL_GPL(blk_stat_add_callback);
@@ -163,7 +163,7 @@ void blk_stat_remove_callback(struct request_queue *q,
 
 	spin_lock(&q->stats->lock);
 	list_del_rcu(&cb->list);
 	if (list_empty(&q->stats->callbacks) && !q->stats->enable_accounting)
-		queue_flag_clear(QUEUE_FLAG_STATS, q);
+		blk_queue_flag_clear(QUEUE_FLAG_STATS, q);
 	spin_unlock(&q->stats->lock);
 
 	del_timer_sync(&cb->timer);
@@ -191,7 +191,7 @@ void blk_stat_enable_accounting(struct request_queue *q)
 {
 	spin_lock(&q->stats->lock);
 	q->stats->enable_accounting = true;
-	queue_flag_set(QUEUE_FLAG_STATS, q);
+	blk_queue_flag_set(QUEUE_FLAG_STATS, q);
 	spin_unlock(&q->stats->lock);
 }
```
Since the queue flags may be changed concurrently from multiple contexts
after a queue becomes visible in sysfs, make these changes safe by
protecting them with the queue lock.

Signed-off-by: Bart Van Assche <bart.vanassche@wdc.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Hannes Reinecke <hare@suse.de>
Cc: Johannes Thumshirn <jthumshirn@suse.de>
Cc: Ming Lei <ming.lei@redhat.com>
---
 block/blk-mq.c   | 3 +--
 block/blk-stat.c | 6 +++---
 2 files changed, 4 insertions(+), 5 deletions(-)
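[Editor's note: the blk_queue_flag_set/clear/test_and_set() helpers used by this patch are introduced earlier in the series and are not part of this diff. As a rough sketch only, assuming they serialize queue_flags updates under q->queue_lock with interrupts disabled, as the commit message implies, blk_queue_flag_test_and_set() could look something like this:]

```c
/*
 * Sketch only: the real definition lives in an earlier patch of this
 * series. Assumes q->queue_lock protects all queue_flags updates.
 */
bool blk_queue_flag_test_and_set(unsigned int flag, struct request_queue *q)
{
	unsigned long flags;
	bool res;

	/* Serialize against other queue_flags updates under the queue lock. */
	spin_lock_irqsave(q->queue_lock, flags);
	res = test_and_set_bit(flag, &q->queue_flags);
	spin_unlock_irqrestore(q->queue_lock, flags);

	/* Return whether the flag was already set before this call. */
	return res;
}
```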