
[stable-5.4.y] btrfs: free device in btrfs_close_devices for a single device filesystem

Message ID 03596be514e296d87240c2b044b7088962ad9f1c.1676435839.git.anand.jain@oracle.com (mailing list archive)
State New, archived
Series [stable-5.4.y] btrfs: free device in btrfs_close_devices for a single device filesystem

Commit Message

Anand Jain Feb. 15, 2023, 4:53 a.m. UTC
commit 5f58d783fd7823b2c2d5954d1126e702f94bfc4c upstream

We have this check to make sure we don't accidentally add devices that
may have disappeared and re-appeared with an older generation (such as a
replace source device) to an fs_devices. This makes sense; we don't want
stale disks in our file system. However, for single disks this doesn't
really make sense.

I've seen this in testing, but I was provided a reproducer from a
project that builds btrfs images on loopback devices. The loopback
device gets cached with the new generation, and then if it is re-used to
generate a new file system we'll fail to mount it because the new fs is
"older" than what we have in cache.

Fix this by freeing the cached fs_devices when closing the device for a
single device filesystem. This ensures that the device path passed to
the mount command is scanned successfully during the next mount.
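
To see why this is sufficient, here is a condensed sketch of the
next-mount path (based on the 5.4-era btrfs_mount_root(); arguments and
error handling are approximate and omitted):

	mutex_lock(&uuid_mutex);
	/* Re-reads the superblock from the path given to mount(8). */
	device = btrfs_scan_one_device(device_name, mode, fs_type);
	/*
	 * With the stale single-device fs_devices freed at close time,
	 * device_list_add() sets up a fresh fs_devices from the on-disk
	 * generation instead of rejecting the rebuilt image as "older"
	 * with -EEXIST.
	 */
	fs_devices = device->fs_devices;
	error = btrfs_open_devices(fs_devices, mode, fs_type);
	mutex_unlock(&uuid_mutex);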

CC: stable@vger.kernel.org # 5.10+
Reported-by: Daan De Meyer <daandemeyer@fb.com>
Signed-off-by: Josef Bacik <josef@toxicpanda.com>
Signed-off-by: Anand Jain <anand.jain@oracle.com>
Reviewed-by: David Sterba <dsterba@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>
Signed-off-by: Anand Jain <anand.jain@oracle.com>
---
This patch has already been submitted for the LTS stable 5.10 and above.

 fs/btrfs/volumes.c | 12 ++++++++++++
 1 file changed, 12 insertions(+)

Comments

Greg KH Feb. 17, 2023, 2:04 p.m. UTC | #1
On Wed, Feb 15, 2023 at 12:53:03PM +0800, Anand Jain wrote:
> commit 5f58d783fd7823b2c2d5954d1126e702f94bfc4c upstream
> 
> We have this check to make sure we don't accidentally add devices that
> may have disappeared and re-appeared with an older generation (such as a
> replace source device) to an fs_devices. This makes sense; we don't want
> stale disks in our file system. However, for single disks this doesn't
> really make sense.
> 
> I've seen this in testing, but I was provided a reproducer from a
> project that builds btrfs images on loopback devices. The loopback
> device gets cached with the new generation, and then if it is re-used to
> generate a new file system we'll fail to mount it because the new fs is
> "older" than what we have in cache.
> 
> Fix this by freeing the cached fs_devices when closing the device for a
> single device filesystem. This ensures that the device path passed to
> the mount command is scanned successfully during the next mount.
> 
> CC: stable@vger.kernel.org # 5.10+
> Reported-by: Daan De Meyer <daandemeyer@fb.com>
> Signed-off-by: Josef Bacik <josef@toxicpanda.com>
> Signed-off-by: Anand Jain <anand.jain@oracle.com>
> Reviewed-by: David Sterba <dsterba@suse.com>
> Signed-off-by: David Sterba <dsterba@suse.com>
> Signed-off-by: Anand Jain <anand.jain@oracle.com>
> ---
> This patch has already been submitted for the LTS stable 5.10 and above.

Now queued up, thanks.

greg k-h

Patch

diff --git a/fs/btrfs/volumes.c b/fs/btrfs/volumes.c
index 548de841cee5..dacaea61c2f7 100644
--- a/fs/btrfs/volumes.c
+++ b/fs/btrfs/volumes.c
@@ -354,6 +354,7 @@  void btrfs_free_device(struct btrfs_device *device)
 static void free_fs_devices(struct btrfs_fs_devices *fs_devices)
 {
 	struct btrfs_device *device;
+
 	WARN_ON(fs_devices->opened);
 	while (!list_empty(&fs_devices->devices)) {
 		device = list_entry(fs_devices->devices.next,
@@ -1401,6 +1402,17 @@  int btrfs_close_devices(struct btrfs_fs_devices *fs_devices)
 	if (!fs_devices->opened) {
 		seed_devices = fs_devices->seed;
 		fs_devices->seed = NULL;
+
+		/*
+		 * If the struct btrfs_fs_devices is not assembled with any
+		 * other device, it can be re-initialized during the next mount
+		 * without needing the device-scan step. Therefore, it can be
+		 * fully freed.
+		 */
+		if (fs_devices->num_devices == 1) {
+			list_del(&fs_devices->fs_list);
+			free_fs_devices(fs_devices);
+		}
 	}
 	mutex_unlock(&uuid_mutex);