Message ID | 20200825120554.13070-3-jack@suse.cz (mailing list archive)
---|---
State | New, archived
Series | Block and buffer invalidation under a filesystem
On Tue, Aug 25, 2020 at 02:05:54PM +0200, Jan Kara wrote:
> Discarding blocks and buffers under a mounted filesystem is hardly
> anything admin wants to do. Usually it will confuse the filesystem and
> sometimes the loss of buffer_head state (including b_private field) can
> even cause crashes like:

Doesn't work if the file system uses multiple devices.  I think we
just really need to split the fs buffer_head address space from the
block device one.  Everything else is just going to cause a huge mess.
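For concreteness, here is a minimal sketch of what "splitting the fs buffer_head address space from the block device one" could look like: a filesystem hanging its metadata cache off a private in-core inode rather than bdev->bd_inode. The fs_meta_cache structure and the helper name are invented for illustration, not proposed kernel API:

```c
#include <linux/fs.h>
#include <linux/gfp.h>

/*
 * Illustrative sketch only: keep metadata pages in a private
 * address_space instead of bdev->bd_inode->i_mapping, so invalidating
 * the block device's page cache cannot rip buffers out from under the
 * filesystem.  fs_meta_cache and fs_alloc_private_mapping are
 * hypothetical names.
 */
struct fs_meta_cache {
	struct inode *meta_inode;	/* in-core only, never written as an inode */
};

static int fs_alloc_private_mapping(struct super_block *sb,
				    struct fs_meta_cache *mc)
{
	/* new_inode() hands back an inode with its own i_mapping */
	mc->meta_inode = new_inode(sb);
	if (!mc->meta_inode)
		return -ENOMEM;
	/*
	 * Metadata I/O would now go through this mapping, which is
	 * deliberately separate from (and incoherent with) the bdev
	 * page cache - exactly the coherency question Jan raises below.
	 */
	mapping_set_gfp_mask(mc->meta_inode->i_mapping, GFP_NOFS);
	return 0;
}
```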
(Adding the OCFS2 maintainers, since my possibly insane idea proposed
below would definitely impact them!)

On Tue, Aug 25, 2020 at 01:16:16PM +0100, Christoph Hellwig wrote:
> On Tue, Aug 25, 2020 at 02:05:54PM +0200, Jan Kara wrote:
> > Discarding blocks and buffers under a mounted filesystem is hardly
> > anything admin wants to do. Usually it will confuse the filesystem and
> > sometimes the loss of buffer_head state (including b_private field) can
> > even cause crashes like:
>
> Doesn't work if the file system uses multiple devices.  I think we
> just really need to split the fs buffer_head address space from the
> block device one.  Everything else is just going to cause a huge mess.

I wonder if we should go a step further, and stop using struct
buffer_head altogether in jbd2 and ext4 (as well as ocfs2).

This would involve moving whatever structure elements from the
buffer_head struct into journal_head, and managing writeback and read
requests directly in jbd2.  This would allow us to get detailed write
errors back, which is currently not possible from the buffer_head
infrastructure.

The downside is this would be a pretty massive change in terms of LOC,
since we use struct buffer_head in a *huge* number of places.  If
we're careful, most of it could be handled by a Coccinelle script to
rename "struct buffer_head" to "struct journal_head".  Fortunately, we
don't actually use that many of the fs/buffer_head functions in
fs/{ext4,ocfs2}/*.c.

One potentially tricky bit is that ocfs2 hasn't been converted to
using iomap, so it's still using __blockdev_direct_IO.  So its data
blocks for DIO would still have to use struct buffer_head (which means
the Coccinelle script won't really work for fs/ocfs2 without a lot of
manual rework) --- or ocfs2 would have to be switched to use iomap at
least for DIO support.

What do folks think?

					- Ted
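As a rough illustration of the journal_head idea, a hypothetical layout is sketched below. The checkpoint-list pointers echo the real struct in include/linux/jbd2.h (which today holds a b_bh back-pointer instead); the fields after the marker are invented here to show which buffer_head state would need a new home:

```c
#include <linux/types.h>

/*
 * Hypothetical sketch only.  The real struct journal_head points at its
 * buffer_head (b_bh); in Ted's scheme the state jbd2 needs would
 * migrate in, so jbd2 could build and submit its own bios and see
 * per-write errors directly.
 */
struct journal_head {
	struct journal_head	*b_cpnext, *b_cpprev;	/* checkpoint list */
	/* --- fields that would move over from buffer_head --- */
	sector_t		 b_blocknr;	/* block number on the device */
	size_t			 b_size;	/* block size */
	struct page		*b_page;	/* backing page cache page */
	unsigned long		 b_state;	/* uptodate/dirty/jbd state bits */
	char			*b_data;	/* mapped address of the block */
};
```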
On Tue 25-08-20 13:16:16, Christoph Hellwig wrote:
> On Tue, Aug 25, 2020 at 02:05:54PM +0200, Jan Kara wrote:
> > Discarding blocks and buffers under a mounted filesystem is hardly
> > anything admin wants to do. Usually it will confuse the filesystem and
> > sometimes the loss of buffer_head state (including b_private field) can
> > even cause crashes like:
>
> Doesn't work if the file system uses multiple devices.

Hum, right.

> I think we just really need to split the fs buffer_head address space
> from the block device one. Everything else is just going to cause a huge
> mess.

Do you mean that address_space filesystem uses to access its metadata on
/dev/sda will be different from the address_space you will see when reading
say /dev/sda? Thus these will be completely separate (and incoherent)
caches? Although this would be simple it will break userspace I'm afraid.
There are situations where tools read e.g. superblock of a mounted
filesystem from the block device and rely on the data to be reasonably
recent. Even worse e.g. tune2fs or e2fsck can *modify* superblock of a
mounted filesystem through the block device (e.g. to set 'fsck after X
mounts' fields and similar). So we would need to somehow maintain at least
vague coherence between these caches which would be ugly.

								Honza
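The userspace pattern Jan describes looks roughly like this minimal sketch (not actual e2fsprogs code; the single-byte increment of the __le16 mount counter is illustrative and ignores endianness and wraparound):

```c
#include <fcntl.h>
#include <unistd.h>

/*
 * Simplified sketch of the e2fsprogs access pattern: buffered I/O on
 * the block device of a *mounted* filesystem, relying on the bdev page
 * cache staying coherent with what the kernel fs driver sees.
 */
int bump_mount_count(const char *blkdev)
{
	unsigned char sb[1024];
	int fd = open(blkdev, O_RDWR);	/* buffered: no O_DIRECT */

	if (fd < 0)
		return -1;
	/* the ext4 primary superblock sits at byte offset 1024 */
	if (pread(fd, sb, sizeof(sb), 1024) != sizeof(sb))
		goto fail;
	sb[0x34]++;	/* low byte of __le16 s_mnt_count (illustrative) */
	if (pwrite(fd, sb, sizeof(sb), 1024) != sizeof(sb))
		goto fail;
	return close(fd);
fail:
	close(fd);
	return -1;
}
```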
On Tue 25-08-20 10:10:20, Theodore Y. Ts'o wrote:
> (Adding the OCFS2 maintainers, since my possibly insane idea proposed
> below would definitely impact them!)
>
> On Tue, Aug 25, 2020 at 01:16:16PM +0100, Christoph Hellwig wrote:
> > On Tue, Aug 25, 2020 at 02:05:54PM +0200, Jan Kara wrote:
> > > Discarding blocks and buffers under a mounted filesystem is hardly
> > > anything admin wants to do. Usually it will confuse the filesystem and
> > > sometimes the loss of buffer_head state (including b_private field) can
> > > even cause crashes like:
> >
> > Doesn't work if the file system uses multiple devices.  I think we
> > just really need to split the fs buffer_head address space from the
> > block device one.  Everything else is just going to cause a huge mess.
>
> I wonder if we should go a step further, and stop using struct
> buffer_head altogether in jbd2 and ext4 (as well as ocfs2).

What about the cache coherency issues I've pointed out in my reply to
Christoph?

> This would involve moving whatever structure elements from the
> buffer_head struct into journal_head, and managing writeback and read
> requests directly in jbd2.  This would allow us to get detailed write
> errors back, which is currently not possible from the buffer_head
> infrastructure.
>
> The downside is this would be a pretty massive change in terms of LOC,
> since we use struct buffer_head in a *huge* number of places.  If
> we're careful, most of it could be handled by a Coccinelle script to
> rename "struct buffer_head" to "struct journal_head".  Fortunately, we
> don't actually use that many of the fs/buffer_head functions in
> fs/{ext4,ocfs2}/*.c.
>
> One potentially tricky bit is that ocfs2 hasn't been converted to
> using iomap, so it's still using __blockdev_direct_IO.  So its data
> blocks for DIO would still have to use struct buffer_head (which means
> the Coccinelle script won't really work for fs/ocfs2 without a lot of
> manual rework) --- or ocfs2 would have to be switched to use iomap at
> least for DIO support.
>
> What do folks think?

Otherwise yes, this would be doable although pretty invasive as you
mention.

								Honza
On Tue, Aug 25, 2020 at 05:12:56PM +0200, Jan Kara wrote:
> On Tue 25-08-20 10:10:20, Theodore Y. Ts'o wrote:
> > (Adding the OCFS2 maintainers, since my possibly insane idea proposed
> > below would definitely impact them!)
> >
> > On Tue, Aug 25, 2020 at 01:16:16PM +0100, Christoph Hellwig wrote:
> > > On Tue, Aug 25, 2020 at 02:05:54PM +0200, Jan Kara wrote:
> > > > Discarding blocks and buffers under a mounted filesystem is hardly
> > > > anything admin wants to do. Usually it will confuse the filesystem and
> > > > sometimes the loss of buffer_head state (including b_private field) can
> > > > even cause crashes like:
> > >
> > > Doesn't work if the file system uses multiple devices.  I think we
> > > just really need to split the fs buffer_head address space from the
> > > block device one.  Everything else is just going to cause a huge mess.
> >
> > I wonder if we should go a step further, and stop using struct
> > buffer_head altogether in jbd2 and ext4 (as well as ocfs2).
>
> What about the cache coherency issues I've pointed out in my reply to
> Christoph?

If journal_heads pointed into the page cache as well, then you'd get
coherency.  These new journal heads would have to be able to cope with
the page cache being modified underneath them, of course.
On Tue 25-08-20 19:41:07, Matthew Wilcox wrote:
> On Tue, Aug 25, 2020 at 05:12:56PM +0200, Jan Kara wrote:
> > On Tue 25-08-20 10:10:20, Theodore Y. Ts'o wrote:
> > > (Adding the OCFS2 maintainers, since my possibly insane idea proposed
> > > below would definitely impact them!)
> > >
> > > On Tue, Aug 25, 2020 at 01:16:16PM +0100, Christoph Hellwig wrote:
> > > > On Tue, Aug 25, 2020 at 02:05:54PM +0200, Jan Kara wrote:
> > > > > Discarding blocks and buffers under a mounted filesystem is hardly
> > > > > anything admin wants to do. Usually it will confuse the filesystem and
> > > > > sometimes the loss of buffer_head state (including b_private field) can
> > > > > even cause crashes like:
> > > >
> > > > Doesn't work if the file system uses multiple devices.  I think we
> > > > just really need to split the fs buffer_head address space from the
> > > > block device one.  Everything else is just going to cause a huge mess.
> > >
> > > I wonder if we should go a step further, and stop using struct
> > > buffer_head altogether in jbd2 and ext4 (as well as ocfs2).
> >
> > What about the cache coherency issues I've pointed out in my reply to
> > Christoph?
>
> If journal_heads pointed into the page cache as well, then you'd get
> coherency.  These new journal heads would have to be able to cope with
> the page cache being modified underneath them, of course.

I was thinking about this as well but IMHO it would be quite complex -
e.g. we could then have both buffer_heads (for the block dev) and journal
heads (for the fs) with different opinions on block state (uptodate,
dirty, ...) and who then determines what the final page state is (e.g. for
reclaim purposes)? Sure, we can sort all this out with various callbacks
etc. but at that point we are not really simplifying things anymore...

								Honza
On Tue, Aug 25, 2020 at 04:50:56PM +0200, Jan Kara wrote:
> Do you mean that address_space filesystem uses to access its metadata on
> /dev/sda will be different from the address_space you will see when reading
> say /dev/sda? Thus these will be completely separate (and incoherent)
> caches?

Yes.

> Although this would be simple it will break userspace I'm afraid.
> There are situations where tools read e.g. superblock of a mounted
> filesystem from the block device and rely on the data to be reasonably
> recent. Even worse e.g. tune2fs or e2fsck can *modify* superblock of a
> mounted filesystem through the block device (e.g. to set 'fsck after X
> mounts' fields and similar).

We've not had any problems when XFS stopped using the block device
address space 9.5 years ago.
On Thu, Aug 27, 2020 at 08:16:03AM +0100, Christoph Hellwig wrote:
> On Tue, Aug 25, 2020 at 04:50:56PM +0200, Jan Kara wrote:
> > Do you mean that address_space filesystem uses to access its metadata on
> > /dev/sda will be different from the address_space you will see when reading
> > say /dev/sda? Thus these will be completely separate (and incoherent)
> > caches?
>
> Yes.
>
> > Although this would be simple it will break userspace I'm afraid.
> > There are situations where tools read e.g. superblock of a mounted
> > filesystem from the block device and rely on the data to be reasonably
> > recent. Even worse e.g. tune2fs or e2fsck can *modify* superblock of a
> > mounted filesystem through the block device (e.g. to set 'fsck after X
> > mounts' fields and similar).
>
> We've not had any problems when XFS stopped using the block device
> address space 9.5 years ago.

How many writes from fsck use does xfs see, again?
On Thu, Aug 27, 2020 at 10:39:00PM +0100, Al Viro wrote:
> On Thu, Aug 27, 2020 at 08:16:03AM +0100, Christoph Hellwig wrote:
> > On Tue, Aug 25, 2020 at 04:50:56PM +0200, Jan Kara wrote:
> > > Do you mean that address_space filesystem uses to access its metadata on
> > > /dev/sda will be different from the address_space you will see when reading
> > > say /dev/sda? Thus these will be completely separate (and incoherent)
> > > caches?
> >
> > Yes.
> >
> > > Although this would be simple it will break userspace I'm afraid.
> > > There are situations where tools read e.g. superblock of a mounted
> > > filesystem from the block device and rely on the data to be reasonably
> > > recent. Even worse e.g. tune2fs or e2fsck can *modify* superblock of a
> > > mounted filesystem through the block device (e.g. to set 'fsck after X
> > > mounts' fields and similar).
> >
> > We've not had any problems when XFS stopped using the block device
> > address space 9.5 years ago.
>
> How many writes from fsck use does xfs see, again?

All of them, because xfs_repair uses direct IO and caches what it
needs in userspace.

Cheers,

Dave.
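For contrast with the buffered-IO sketch earlier in the thread, the xfs_repair-style access Dave describes would look roughly like this (illustrative only; the 4096-byte alignment stands in for the device's real logical block size):

```c
#define _GNU_SOURCE	/* for O_DIRECT */
#include <fcntl.h>
#include <stdlib.h>
#include <unistd.h>

/*
 * Sketch of the xfs_repair-style pattern: O_DIRECT reads bypass the
 * kernel page cache entirely, so coherency with the bdev page cache
 * is a non-issue - the tool does its own caching in userspace.
 */
int read_block_direct(const char *blkdev, off_t offset, void **out)
{
	void *buf;
	int fd = open(blkdev, O_RDONLY | O_DIRECT);

	if (fd < 0)
		return -1;
	/* O_DIRECT typically requires logical-block-size alignment */
	if (posix_memalign(&buf, 4096, 4096)) {
		close(fd);
		return -1;
	}
	if (pread(fd, buf, 4096, offset) != 4096) {
		free(buf);
		close(fd);
		return -1;
	}
	close(fd);
	*out = buf;
	return 0;
}
```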
On Fri 28-08-20 10:07:05, Dave Chinner wrote:
> On Thu, Aug 27, 2020 at 10:39:00PM +0100, Al Viro wrote:
> > On Thu, Aug 27, 2020 at 08:16:03AM +0100, Christoph Hellwig wrote:
> > > On Tue, Aug 25, 2020 at 04:50:56PM +0200, Jan Kara wrote:
> > > > Do you mean that address_space filesystem uses to access its metadata on
> > > > /dev/sda will be different from the address_space you will see when reading
> > > > say /dev/sda? Thus these will be completely separate (and incoherent)
> > > > caches?
> > >
> > > Yes.
> > >
> > > > Although this would be simple it will break userspace I'm afraid.
> > > > There are situations where tools read e.g. superblock of a mounted
> > > > filesystem from the block device and rely on the data to be reasonably
> > > > recent. Even worse e.g. tune2fs or e2fsck can *modify* superblock of a
> > > > mounted filesystem through the block device (e.g. to set 'fsck after X
> > > > mounts' fields and similar).
> > >
> > > We've not had any problems when XFS stopped using the block device
> > > address space 9.5 years ago.
> >
> > How many writes from fsck use does xfs see, again?
>
> All of them, because xfs_repair uses direct IO and caches what it
> needs in userspace.

But that's the difference. e2fsprogs (which means e2fsck, tune2fs, or
generally libext2fs, which is linked against quite a few external programs
as well) use buffered IO, so they rely on cache coherency between what the
kernel filesystem driver sees and what userspace sees when opening the
block device. That's why I'm concerned that losing this coherency is going
to break some ext4 user... I'll talk to Ted about what he thinks of this,
but so far I don't see how to separate the kernel's view of the bdev from
the userspace view without breakage.

								Honza
On Aug 25, 2020, at 6:16 AM, Christoph Hellwig <hch@infradead.org> wrote:
>
> On Tue, Aug 25, 2020 at 02:05:54PM +0200, Jan Kara wrote:
>> Discarding blocks and buffers under a mounted filesystem is hardly
>> anything admin wants to do. Usually it will confuse the filesystem and
>> sometimes the loss of buffer_head state (including b_private field) can
>> even cause crashes like:
>
> Doesn't work if the file system uses multiple devices.

It's not _worse_ than the current situation of allowing the complete
destruction of the mounted filesystem.  It doesn't fix the problem
for XFS with realtime devices, or ext4 with a separate journal device,
but it fixes the problem for the majority of users with a single-device
filesystem.

While BLKFLSBUF causing a crash is annoying, BLKDISCARD/BLKSECDISCARD
under a mounted filesystem is definitely dangerous and wrong.

What about checking for O_EXCL on the device, indicating that it is
currently in use by some higher level?

Cheers, Andreas
On Fri, Aug 28, 2020 at 02:21:29AM -0600, Andreas Dilger wrote:
> On Aug 25, 2020, at 6:16 AM, Christoph Hellwig <hch@infradead.org> wrote:
> >
> > On Tue, Aug 25, 2020 at 02:05:54PM +0200, Jan Kara wrote:
> > > Discarding blocks and buffers under a mounted filesystem is hardly
> > > anything admin wants to do. Usually it will confuse the filesystem and
> > > sometimes the loss of buffer_head state (including b_private field) can
> > > even cause crashes like:
> >
> > Doesn't work if the file system uses multiple devices.
>
> It's not _worse_ than the current situation of allowing the complete
> destruction of the mounted filesystem.  It doesn't fix the problem
> for XFS with realtime devices, or ext4 with a separate journal device,
> but it fixes the problem for the majority of users with a single-device
> filesystem.
>
> While BLKFLSBUF causing a crash is annoying, BLKDISCARD/BLKSECDISCARD
> under a mounted filesystem is definitely dangerous and wrong.
>
> What about checking for O_EXCL on the device, indicating that it is
> currently in use by some higher level?

That actually seems like a much better idea.
On Sat 29-08-20 07:40:41, Christoph Hellwig wrote:
> On Fri, Aug 28, 2020 at 02:21:29AM -0600, Andreas Dilger wrote:
> > On Aug 25, 2020, at 6:16 AM, Christoph Hellwig <hch@infradead.org> wrote:
> > >
> > > On Tue, Aug 25, 2020 at 02:05:54PM +0200, Jan Kara wrote:
> > > > Discarding blocks and buffers under a mounted filesystem is hardly
> > > > anything admin wants to do. Usually it will confuse the filesystem and
> > > > sometimes the loss of buffer_head state (including b_private field) can
> > > > even cause crashes like:
> > >
> > > Doesn't work if the file system uses multiple devices.
> >
> > It's not _worse_ than the current situation of allowing the complete
> > destruction of the mounted filesystem.  It doesn't fix the problem
> > for XFS with realtime devices, or ext4 with a separate journal device,
> > but it fixes the problem for the majority of users with a single-device
> > filesystem.
> >
> > While BLKFLSBUF causing a crash is annoying, BLKDISCARD/BLKSECDISCARD
> > under a mounted filesystem is definitely dangerous and wrong.

Actually BLKFLSBUF won't cause a crash. That's using invalidate_bdev() -
i.e., page-reclaim style of eviction - and that's fine with the
filesystems.

> > What about checking for O_EXCL on the device, indicating that it is
> > currently in use by some higher level?
>
> That actually seems like a much better idea.

OK, I'll rework the patch.

								Honza
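A possible shape for the reworked check, sketched against the block_device internals of this era (bd_holders/bd_mutex); this is an illustration of the O_EXCL idea, not the patch Jan eventually posted:

```c
#include <linux/fs.h>

/*
 * Hedged sketch only: let the destructive ioctl through when the caller
 * itself opened the device exclusively (O_EXCL -> FMODE_EXCL), and
 * refuse it while some other exclusive holder - such as a mounted
 * filesystem - has claimed the device.  A real fix would need to hold
 * the claim across the operation; this check is racy once the lock is
 * dropped.
 */
static int blk_refuse_if_claimed(struct block_device *bdev, fmode_t mode)
{
	int ret = 0;

	if (mode & FMODE_EXCL)
		return 0;	/* caller holds the exclusive claim itself */

	mutex_lock(&bdev->bd_mutex);
	if (bdev->bd_holders)	/* someone else claimed it exclusively */
		ret = -EBUSY;
	mutex_unlock(&bdev->bd_mutex);
	return ret;
}
```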
diff --git a/block/ioctl.c b/block/ioctl.c
index bdb3bbb253d9..0e3a46b0ffc8 100644
--- a/block/ioctl.c
+++ b/block/ioctl.c
@@ -113,7 +113,7 @@ static int blk_ioctl_discard(struct block_device *bdev, fmode_t mode,
 	uint64_t start, len;
 	struct request_queue *q = bdev_get_queue(bdev);
 	struct address_space *mapping = bdev->bd_inode->i_mapping;
-
+	struct super_block *sb;
 
 	if (!(mode & FMODE_WRITE))
 		return -EBADF;
@@ -134,6 +134,14 @@ static int blk_ioctl_discard(struct block_device *bdev, fmode_t mode,
 	if (start + len > i_size_read(bdev->bd_inode))
 		return -EINVAL;
 
+	/*
+	 * Don't mess with device with mounted filesystem.
+	 */
+	sb = get_super(bdev);
+	if (sb) {
+		drop_super(sb);
+		return -EBUSY;
+	}
 	truncate_inode_pages_range(mapping, start, start + len - 1);
 	return blkdev_issue_discard(bdev, start >> 9, len >> 9,
 				    GFP_KERNEL, flags);
@@ -145,6 +153,7 @@ static int blk_ioctl_zeroout(struct block_device *bdev, fmode_t mode,
 	uint64_t range[2];
 	struct address_space *mapping;
 	uint64_t start, end, len;
+	struct super_block *sb;
 
 	if (!(mode & FMODE_WRITE))
 		return -EBADF;
@@ -165,6 +174,14 @@ static int blk_ioctl_zeroout(struct block_device *bdev, fmode_t mode,
 	if (end < start)
 		return -EINVAL;
 
+	/*
+	 * Don't mess with device with mounted filesystem.
+	 */
+	sb = get_super(bdev);
+	if (sb) {
+		drop_super(sb);
+		return -EBUSY;
+	}
 	/* Invalidate the page cache, including dirty pages */
 	mapping = bdev->bd_inode->i_mapping;
 	truncate_inode_pages_range(mapping, start, end);
diff --git a/fs/block_dev.c b/fs/block_dev.c
index 8ae833e00443..5b398eb7c34c 100644
--- a/fs/block_dev.c
+++ b/fs/block_dev.c
@@ -1973,6 +1973,7 @@ static long blkdev_fallocate(struct file *file, int mode, loff_t start,
 	loff_t end = start + len - 1;
 	loff_t isize;
 	int error;
+	struct super_block *sb;
 
 	/* Fail if we don't recognize the flags. */
 	if (mode & ~BLKDEV_FALLOC_FL_SUPPORTED)
@@ -1996,6 +1997,14 @@ static long blkdev_fallocate(struct file *file, int mode, loff_t start,
 	if ((start | len) & (bdev_logical_block_size(bdev) - 1))
 		return -EINVAL;
 
+	/*
+	 * Don't mess with device with mounted filesystem.
+	 */
+	sb = get_super(bdev);
+	if (sb) {
+		drop_super(sb);
+		return -EBUSY;
+	}
 	/* Invalidate the page cache, including dirty pages. */
 	mapping = bdev->bd_inode->i_mapping;
 	truncate_inode_pages_range(mapping, start, end);
Discarding blocks and buffers under a mounted filesystem is hardly
anything admin wants to do. Usually it will confuse the filesystem and
sometimes the loss of buffer_head state (including b_private field) can
even cause crashes like:

BUG: unable to handle kernel NULL pointer dereference at 0000000000000008
PGD 0 P4D 0
Oops: 0002 [#1] SMP PTI
CPU: 4 PID: 203778 Comm: jbd2/dm-3-8 Kdump: loaded Tainted: G O --------- - - 4.18.0-147.5.0.5.h126.eulerosv2r9.x86_64 #1
Hardware name: Huawei RH2288H V3/BC11HGSA0, BIOS 1.57 08/11/2015
RIP: 0010:jbd2_journal_grab_journal_head+0x1b/0x40 [jbd2]
...
Call Trace:
 __jbd2_journal_insert_checkpoint+0x23/0x70 [jbd2]
 jbd2_journal_commit_transaction+0x155f/0x1b60 [jbd2]
 kjournald2+0xbd/0x270 [jbd2]

So refuse fallocate(2) and the BLKZEROOUT, BLKDISCARD, and BLKSECDISCARD
ioctls for a block device having a filesystem mounted.

Reported-by: Ye Bin <yebin10@huawei.com>
Signed-off-by: Jan Kara <jack@suse.cz>
---
 block/ioctl.c  | 19 ++++++++++++++++++-
 fs/block_dev.c |  9 +++++++++
 2 files changed, 27 insertions(+), 1 deletion(-)
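A quick userspace check of the intended behaviour (illustrative): with the patch applied, BLKDISCARD against a device that has a mounted filesystem should fail with EBUSY instead of silently destroying fs state:

```c
#include <errno.h>
#include <fcntl.h>
#include <linux/fs.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <unistd.h>

int main(int argc, char **argv)
{
	uint64_t range[2] = { 0, 1 << 20 };	/* discard the first 1 MiB */
	int fd;

	if (argc != 2) {
		fprintf(stderr, "usage: %s <block device>\n", argv[0]);
		return 1;
	}
	fd = open(argv[1], O_RDWR);	/* note: no O_EXCL taken here */
	if (fd < 0) {
		perror("open");
		return 1;
	}
	/* expected with the patch and a mounted fs: EBUSY */
	if (ioctl(fd, BLKDISCARD, &range) < 0)
		fprintf(stderr, "BLKDISCARD failed: %s\n", strerror(errno));
	else
		puts("BLKDISCARD succeeded");
	close(fd);
	return 0;
}
```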