Message ID: 20250225133110.1441035-8-nilay@linux.ibm.com (mailing list archive)
State:      New
Series:     block: fix lock order and remove redundant locking
On Tue, Feb 25, 2025 at 07:00:42PM +0530, Nilay Shroff wrote:
> The bdi->ra_pages could be updated under q->limits_lock because it's
> usually calculated from the queue limits by queue_limits_commit_update.
> So protect reading/writing the sysfs attribute read_ahead_kb using
> q->limits_lock instead of q->sysfs_lock.

Please add a comment near limits_lock that mentions that the lock
protects the limits and read_ahead_kb field. I should have probably
done the former myself when adding it, but now that it also protects
a non-obvious field we really need it.
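[Editor's note: the invariant being documented here can be illustrated with a
minimal userspace sketch. This is not kernel code: pthread_mutex_t stands in
for the queue's limits_lock, and fake_queue, fake_limits, commit_limits and
read_ra are hypothetical names invented for the example. The point it models
is that a single lock covers both the limits and the read-ahead value derived
from them, so a reader can never observe the two out of sync.]

#include <pthread.h>
#include <stdio.h>

struct fake_limits {
	unsigned long max_sectors;	/* stand-in for struct queue_limits */
};

struct fake_queue {
	pthread_mutex_t limits_lock;	/* protects limits *and* ra_pages */
	struct fake_limits limits;
	unsigned long ra_pages;		/* derived from limits, like bdi->ra_pages */
};

/*
 * Rough analogue of queue_limits_commit_update(): the new limits and the
 * read-ahead value derived from them are published under one lock.
 */
static void commit_limits(struct fake_queue *q, const struct fake_limits *new)
{
	pthread_mutex_lock(&q->limits_lock);
	q->limits = *new;
	q->ra_pages = new->max_sectors / 2;	/* made-up derivation */
	pthread_mutex_unlock(&q->limits_lock);
}

/* Rough analogue of queue_ra_show(): read ra_pages under the same lock. */
static unsigned long read_ra(struct fake_queue *q)
{
	unsigned long ra;

	pthread_mutex_lock(&q->limits_lock);
	ra = q->ra_pages;
	pthread_mutex_unlock(&q->limits_lock);
	return ra;
}

int main(void)
{
	struct fake_queue q = {
		.limits_lock = PTHREAD_MUTEX_INITIALIZER,
		.limits = { .max_sectors = 256 },
		.ra_pages = 128,
	};
	struct fake_limits new_limits = { .max_sectors = 512 };

	commit_limits(&q, &new_limits);
	printf("ra_pages = %lu\n", read_ra(&q));	/* prints 256 */
	return 0;
}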
On 2/25/25 8:44 PM, Christoph Hellwig wrote:
> On Tue, Feb 25, 2025 at 07:00:42PM +0530, Nilay Shroff wrote:
>> The bdi->ra_pages could be updated under q->limits_lock because it's
>> usually calculated from the queue limits by queue_limits_commit_update.
>> So protect reading/writing the sysfs attribute read_ahead_kb using
>> q->limits_lock instead of q->sysfs_lock.
>
> Please add a comment near limits_lock that mentions that the lock
> protects the limits and read_ahead_kb field. I should have probably
> done the former myself when adding it, but now that it also protects
> a non-obvious field we really need it.
>
Okay this will be addressed in the next patchset.

Thanks,
--Nilay
diff --git a/block/blk-sysfs.c b/block/blk-sysfs.c
index 31bfd6c92b2c..8d078c7f0347 100644
--- a/block/blk-sysfs.c
+++ b/block/blk-sysfs.c
@@ -93,9 +93,9 @@ static ssize_t queue_ra_show(struct gendisk *disk, char *page)
 {
 	ssize_t ret;
 
-	mutex_lock(&disk->queue->sysfs_lock);
+	mutex_lock(&disk->queue->limits_lock);
 	ret = queue_var_show(disk->bdi->ra_pages << (PAGE_SHIFT - 10), page);
-	mutex_unlock(&disk->queue->sysfs_lock);
+	mutex_unlock(&disk->queue->limits_lock);
 
 	return ret;
 }
@@ -111,12 +111,15 @@ queue_ra_store(struct gendisk *disk, const char *page, size_t count)
 	ret = queue_var_store(&ra_kb, page, count);
 	if (ret < 0)
 		return ret;
-
-	mutex_lock(&q->sysfs_lock);
+	/*
+	 * ->ra_pages is protected by ->limits_lock because it is usually
+	 * calculated from the queue limits by queue_limits_commit_update.
+	 */
+	mutex_lock(&q->limits_lock);
 	memflags = blk_mq_freeze_queue(q);
 	disk->bdi->ra_pages = ra_kb >> (PAGE_SHIFT - 10);
+	mutex_unlock(&q->limits_lock);
 	blk_mq_unfreeze_queue(q, memflags);
-	mutex_unlock(&q->sysfs_lock);
 
 	return ret;
 }
@@ -670,7 +673,8 @@ static struct attribute *queue_attrs[] = {
 	&queue_dma_alignment_entry.attr,
 
 	/*
-	 * Attributes which are protected with q->sysfs_lock.
+	 * Attributes which require some form of locking
+	 * other than q->sysfs_lock.
 	 */
 	&queue_ra_entry.attr,
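[Editor's note: with this applied, reading or writing
/sys/block/<disk>/queue/read_ahead_kb from userspace serializes against
queue_limits_commit_update() on q->limits_lock instead of q->sysfs_lock. Note
also the ordering in queue_ra_store(): the lock is acquired before
blk_mq_freeze_queue() and dropped before blk_mq_unfreeze_queue(), consistent
with the limits-lock-before-freeze order this series establishes in its other
patches.]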