Message ID | 20210617045618.1179079-1-naohiro.aota@wdc.com (mailing list archive)
---|---
State | New, archived
Series | btrfs: fix negative space_info->bytes_readonly
Looks good,
Reviewed-by: Johannes Thumshirn <johannes.thumshirn@wdc.com>
On Thu, Jun 17, 2021 at 01:56:18PM +0900, Naohiro Aota wrote:
> Consider we have a using block group on zoned btrfs.
>
> |<- ZU ->|<- used ->|<---free--->|
> `- Alloc offset
> ZU: Zone unusable
>
> Marking the block group read-only will migrate the zone unusable bytes
> to the read-only bytes. So, we will have this.
>
> |<- RO ->|<- used ->|<--- RO --->|
> RO: Read only
>
> When marking it back to read-write, btrfs_dec_block_group_ro()
> subtracts the above "RO" bytes from the
> space_info->bytes_readonly. And, it moves the zone unusable bytes back
> and again subtracts those bytes from the space_info->bytes_readonly,
> leading to negative bytes_readonly.
>
> This commit fixes the issue by reordering the operations.
>
> Link: https://github.com/naota/linux/issues/37

I've copied the 'fi df' output to changelog.

> Fixes: 169e0da91a21 ("btrfs: zoned: track unusable bytes for zones")
> Cc: stable@vger.kernel.org # 5.12+
> Signed-off-by: Naohiro Aota <naohiro.aota@wdc.com>

Added to misc-next, thanks.
diff --git a/fs/btrfs/block-group.c b/fs/btrfs/block-group.c
index 38885b29e6e5..c42b6528552f 100644
--- a/fs/btrfs/block-group.c
+++ b/fs/btrfs/block-group.c
@@ -2442,16 +2442,16 @@ void btrfs_dec_block_group_ro(struct btrfs_block_group *cache)
 	spin_lock(&sinfo->lock);
 	spin_lock(&cache->lock);
 	if (!--cache->ro) {
-		num_bytes = cache->length - cache->reserved -
-			    cache->pinned - cache->bytes_super -
-			    cache->zone_unusable - cache->used;
-		sinfo->bytes_readonly -= num_bytes;
 		if (btrfs_is_zoned(cache->fs_info)) {
 			/* Migrate zone_unusable bytes back */
 			cache->zone_unusable = cache->alloc_offset -
 					       cache->used;
 			sinfo->bytes_zone_unusable += cache->zone_unusable;
 			sinfo->bytes_readonly -= cache->zone_unusable;
 		}
+		num_bytes = cache->length - cache->reserved -
+			    cache->pinned - cache->bytes_super -
+			    cache->zone_unusable - cache->used;
+		sinfo->bytes_readonly -= num_bytes;
 		list_del_init(&cache->ro_list);
 	}
 	spin_unlock(&cache->lock);
Consider we have a using block group on zoned btrfs.

|<- ZU ->|<- used ->|<---free--->|
`- Alloc offset
ZU: Zone unusable

Marking the block group read-only will migrate the zone unusable bytes
to the read-only bytes. So, we will have this.

|<- RO ->|<- used ->|<--- RO --->|
RO: Read only

When marking it back to read-write, btrfs_dec_block_group_ro()
subtracts the above "RO" bytes from the space_info->bytes_readonly.
And, it moves the zone unusable bytes back and again subtracts those
bytes from the space_info->bytes_readonly, leading to negative
bytes_readonly.

This commit fixes the issue by reordering the operations.

Link: https://github.com/naota/linux/issues/37
Fixes: 169e0da91a21 ("btrfs: zoned: track unusable bytes for zones")
Cc: stable@vger.kernel.org # 5.12+
Signed-off-by: Naohiro Aota <naohiro.aota@wdc.com>
---
 fs/btrfs/block-group.c | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)