mm: migrate: buffer_migrate_folio_norefs() fallback migrate not uptodate pages

Message ID 20220825080146.2021641-1-chengzhihao1@huawei.com (mailing list archive)
State New
Series mm: migrate: buffer_migrate_folio_norefs() fallback migrate not uptodate pages

Commit Message

Zhihao Cheng Aug. 25, 2022, 8:01 a.m. UTC
From: Zhang Yi <yi.zhang@huawei.com>

Recently we noticed that the ext4 filesystem occasionally fails to read
metadata from disk and reports an error message, although the disk and
block layer look fine. After analysis, we traced the problem to commit
88dbcbb3a484 ("blkdev: avoid migration stalls for blkdev pages"). That
commit provides a migration method for the bdev, so we can now move a
page that has buffers without extra users, but it locks the buffers on
the page, which breaks many filesystems' fragile metadata read
operations, such as ll_rw_block() in common code and ext4_read_bh_lock()
in ext4. These helpers only trylock the buffer and skip submitting IO
when the trylock fails, and many callers just wait_on_buffer() and
conclude an IO error if the buffer is not uptodate after it is unlocked.
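
The fragile pattern looks roughly like this (a sketch modeled on the
ll_rw_block() logic, not verbatim kernel code; the submit_bh() arguments
vary between kernel versions):

  /*
   * Reader side (roughly what ll_rw_block() and ext4_read_bh_lock() do):
   * if someone else, e.g. page migration, already holds the buffer lock,
   * the trylock fails and no read IO is ever submitted.
   */
  if (trylock_buffer(bh)) {
          if (!buffer_uptodate(bh)) {
                  bh->b_end_io = end_buffer_read_sync;
                  get_bh(bh);
                  submit_bh(REQ_OP_READ, bh);
          } else {
                  unlock_buffer(bh);
          }
  }

  /*
   * Caller side: when no IO was submitted, the buffer is still not
   * uptodate after the lock holder drops the lock, and the caller
   * wrongly reports an IO error.
   */
  wait_on_buffer(bh);
  if (!buffer_uptodate(bh))
          return -EIO;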

This issue can be easily reproduced by adding some delay just after
buffer_migrate_lock_buffers() in __buffer_migrate_folio() and running
fsstress on an ext4 filesystem.

  EXT4-fs error (device pmem1): __ext4_find_entry:1658: inode #73193:
  comm fsstress: reading directory lblock 0
  EXT4-fs error (device pmem1): __ext4_find_entry:1658: inode #75334:
  comm fsstress: reading directory lblock 0
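
The delay is nothing more than an artificial widening of the race
window, for example (illustrative only, not part of this patch):

  /* In __buffer_migrate_folio(), right after the buffers are locked: */
  if (!buffer_migrate_lock_buffers(head, mode))
          return -EAGAIN;
  mdelay(100); /* hold the buffer locks while a reader trylocks and gives up */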

Helpers like ll_rw_block() should be used carefully, and it seems they
can only be used safely for the readahead case. So the best long-term
fix is to repair the read operations in the filesystems, but let us
avoid this issue first. This patch does so by falling back to migrating
pages that are not uptodate, as fallback_migrate_folio() does, since
pages that have buffers are probably about to be read.

Fixes: 88dbcbb3a484 ("blkdev: avoid migration stalls for blkdev pages")
Signed-off-by: Zhang Yi <yi.zhang@huawei.com>
Signed-off-by: Zhihao Cheng <chengzhihao1@huawei.com>
---
 mm/migrate.c | 32 ++++++++++++++++++++++++++++++++
 1 file changed, 32 insertions(+)

Comments

Jan Kara Aug. 25, 2022, 10:57 a.m. UTC | #1
On Thu 25-08-22 16:01:46, Zhihao Cheng wrote:
> From: Zhang Yi <yi.zhang@huawei.com>
> 
> [...]
> 
> Fixes: 88dbcbb3a484 ("blkdev: avoid migration stalls for blkdev pages")
> Signed-off-by: Zhang Yi <yi.zhang@huawei.com>
> Signed-off-by: Zhihao Cheng <chengzhihao1@huawei.com>

Thanks for the analysis and the fix! As you noted above, this is actually a
bug in the filesystems: they assume that a locked buffer means it is under
IO. Usually that is the case, but there are other places that lock the
buffer without doing IO. Page migration is one of them, the jbd2 machinery
is another, and there may be others.

So I think this really ought to be fixed in the filesystems instead of
papering over the bug in the migration code. I agree this is more work, but
we will reduce the technical debt, not increase it :). Honestly,
ll_rw_block() should just die. It is actively dangerous to use. Instead we
should have one call for readahead of bhs, and the rest should be converted
to submit_bh() or similar calls. There are like 25 callers remaining, so it
won't even be that hard.
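
A conversion along those lines might look roughly like this (a sketch
only, assuming the current submit_bh() signature; blocking on the buffer
lock means a concurrent lock holder can no longer cause a phantom EIO):

  lock_buffer(bh); /* sleep on the lock instead of trylocking */
  if (buffer_uptodate(bh)) {
          unlock_buffer(bh);
          return 0;
  }
  bh->b_end_io = end_buffer_read_sync; /* unlocks bh when the read completes */
  get_bh(bh);
  submit_bh(REQ_OP_READ, bh);
  wait_on_buffer(bh);
  if (!buffer_uptodate(bh))
          return -EIO; /* now a genuine IO error */
  return 0;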

And then we have the same buggy code as in ll_rw_block() copied to
ext4_bread_batch() (ext4_read_bh_lock() in particular) so that needs to be
fixed as well...

								Honza

Zhihao Cheng Aug. 25, 2022, 11:32 a.m. UTC | #2
On 2022/8/25 18:57, Jan Kara wrote:
> On Thu 25-08-22 16:01:46, Zhihao Cheng wrote:
>> [...]
> 
> Thanks for the analysis and the fix! As you noted above, this is actually a
> bug in the filesystems: they assume that a locked buffer means it is under
> IO. Usually that is the case, but there are other places that lock the
> buffer without doing IO. Page migration is one of them, the jbd2 machinery
> is another, and there may be others.
> 
> So I think this really ought to be fixed in the filesystems instead of
> papering over the bug in the migration code. I agree this is more work, but
> we will reduce the technical debt, not increase it :). Honestly,
> ll_rw_block() should just die. It is actively dangerous to use. Instead we
> should have one call for readahead of bhs, and the rest should be converted
> to submit_bh() or similar calls. There are like 25 callers remaining, so it
> won't even be that hard.
> 
> And then we have the same buggy code as in ll_rw_block() copied to
> ext4_bread_batch() (ext4_read_bh_lock() in particular) so that needs to be
> fixed as well...
> 
> 								Honza

You are right, Jan. I totally agree with you. It seems that I shouldn't 
have been lazy.
Jan Kara Aug. 25, 2022, 2:11 p.m. UTC | #3
On Thu 25-08-22 19:32:09, Zhihao Cheng wrote:
> [...]
> 
> You are right, Jan. I totally agree with you. It seems that I shouldn't have
> been lazy.

If you face any issues with this, feel free to email me. I'll be happy to
help :).

								Honza

Patch

diff --git a/mm/migrate.c b/mm/migrate.c
index 6a1597c92261..bded69867619 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -691,6 +691,38 @@ static int __buffer_migrate_folio(struct address_space *mapping,
 	if (!head)
 		return migrate_folio(mapping, dst, src, mode);
 
+	/*
+	 * If any mapped buffers on the page are not uptodate and have a
+	 * refcount, someone else will probably try to lock the buffer and
+	 * submit read IO through ll_rw_block(), which does not submit the
+	 * IO if it fails to lock the buffer. So fall back to migrate_folio()
+	 * to prevent a false positive EIO.
+	 */
+	if (check_refs) {
+		bool uptodate = true;
+		bool invalidate = false;
+
+		bh = head;
+		do {
+			if (buffer_mapped(bh) && !buffer_uptodate(bh)) {
+				uptodate = false;
+				if (atomic_read(&bh->b_count)) {
+					invalidate = true;
+					break;
+				}
+			}
+			bh = bh->b_this_page;
+		} while (bh != head);
+
+		if (!uptodate) {
+			if (invalidate)
+				invalidate_bh_lrus();
+			if (filemap_release_folio(src, GFP_KERNEL))
+				return migrate_folio(mapping, dst, src, mode);
+			return -EAGAIN;
+		}
+	}
+
 	/* Check whether page does not have extra refs before we do more work */
 	expected_count = folio_expected_refs(mapping, src);
 	if (folio_ref_count(src) != expected_count)