Message ID | 235ae192f8d1f9b525aa00a27fd476e2979ada1b.1743493507.git.wqu@suse.com (mailing list archive) |
---|---|
State | New |
Series | btrfs: two small and safe fixes for large folios |
On Tue, Apr 1, 2025 at 8:51 AM Qu Wenruo <wqu@suse.com> wrote:
>
> [BUG WITH EXPERIMENTAL LARGE FOLIOS]
> When testing the experimental large data folio support with compression,
> there are several ASSERT()s triggered from btrfs_decompress_buf2page()
> when running fsstress with compress=zstd mount option:
>
> - ASSERT(copy_len) from btrfs_decompress_buf2page()
> - VM_BUG_ON(offset + len > PAGE_SIZE) from memcpy_to_page()
>
> [CAUSE]
> Inside btrfs_decompress_buf2page(), we need to grab the file offset from
> the current bvec.bv_page, to check if we even need to copy data into the
> bio.
>
> And since we're using single page bvec, and no large folio, every page
> inside the folio should have its index properly setup.
>
> But when large folios are involved, only the first page (aka, the head
> page) of a large folio has its index properly initialized.
>
> The other pages inside the large folio will not have their indexes
> properly initialized.
>
> Thus the page_offset() call inside btrfs_decompress_buf2page() will
> result garbage, and completely screw up the @copy_len calculation.
>
> [FIX]
> Instead of using page->index directly, go with page_pgoff(), which can
> handle non-head pages correctly.
>
> So introduce a helper, file_offset_from_bvec(), to get the file offset
> from a single page bio_vec, so the copy_len calculation can be done
> correctly.
>
> Signed-off-by: Qu Wenruo <wqu@suse.com>
> ---
>  fs/btrfs/compression.c | 18 +++++++++++++++++-
>  1 file changed, 17 insertions(+), 1 deletion(-)
>
> diff --git a/fs/btrfs/compression.c b/fs/btrfs/compression.c
> index e7f8ee5d48a4..cb954f9bc332 100644
> --- a/fs/btrfs/compression.c
> +++ b/fs/btrfs/compression.c
> @@ -1137,6 +1137,22 @@ void __cold btrfs_exit_compress(void)
>         bioset_exit(&btrfs_compressed_bioset);
>  }
>
> +/*
> + * The bvec is a single page bvec from a bio that contains folios from a filemap.
> + *
> + * Since the folios may be large one, and if the bv_page is not a head page of

folios -> folio

large -> a large

Otherwise it looks good, thanks.

Reviewed-by: Filipe Manana <fdmanana@suse.com>

> + * a large folio, then page->index is unreliable.
> + *
> + * Thus we need this helper to grab the proper file offset.
> + */
> +static u64 file_offset_from_bvec(const struct bio_vec *bvec)
> +{
> +       const struct page *page = bvec->bv_page;
> +       const struct folio *folio = page_folio(page);
> +
> +       return (page_pgoff(folio, page) << PAGE_SHIFT) + bvec->bv_offset;
> +}
> +
>  /*
>   * Copy decompressed data from working buffer to pages.
>   *
> @@ -1188,7 +1204,7 @@ int btrfs_decompress_buf2page(const char *buf, u32 buf_len,
>          * cb->start may underflow, but subtracting that value can still
>          * give us correct offset inside the full decompressed extent.
>          */
> -       bvec_offset = page_offset(bvec.bv_page) + bvec.bv_offset - cb->start;
> +       bvec_offset = file_offset_from_bvec(&bvec) - cb->start;
>
>         /* Haven't reached the bvec range, exit */
>         if (decompressed + buf_len <= bvec_offset)
> --
> 2.49.0
>
>
diff --git a/fs/btrfs/compression.c b/fs/btrfs/compression.c
index e7f8ee5d48a4..cb954f9bc332 100644
--- a/fs/btrfs/compression.c
+++ b/fs/btrfs/compression.c
@@ -1137,6 +1137,22 @@ void __cold btrfs_exit_compress(void)
 	bioset_exit(&btrfs_compressed_bioset);
 }
 
+/*
+ * The bvec is a single page bvec from a bio that contains folios from a filemap.
+ *
+ * Since the folios may be large one, and if the bv_page is not a head page of
+ * a large folio, then page->index is unreliable.
+ *
+ * Thus we need this helper to grab the proper file offset.
+ */
+static u64 file_offset_from_bvec(const struct bio_vec *bvec)
+{
+	const struct page *page = bvec->bv_page;
+	const struct folio *folio = page_folio(page);
+
+	return (page_pgoff(folio, page) << PAGE_SHIFT) + bvec->bv_offset;
+}
+
 /*
  * Copy decompressed data from working buffer to pages.
  *
@@ -1188,7 +1204,7 @@ int btrfs_decompress_buf2page(const char *buf, u32 buf_len,
 	 * cb->start may underflow, but subtracting that value can still
 	 * give us correct offset inside the full decompressed extent.
 	 */
-	bvec_offset = page_offset(bvec.bv_page) + bvec.bv_offset - cb->start;
+	bvec_offset = file_offset_from_bvec(&bvec) - cb->start;
 
 	/* Haven't reached the bvec range, exit */
 	if (decompressed + buf_len <= bvec_offset)
[BUG WITH EXPERIMENTAL LARGE FOLIOS]
When testing the experimental large data folio support with compression,
there are several ASSERT()s triggered from btrfs_decompress_buf2page()
when running fsstress with the compress=zstd mount option:

- ASSERT(copy_len) from btrfs_decompress_buf2page()
- VM_BUG_ON(offset + len > PAGE_SIZE) from memcpy_to_page()

[CAUSE]
Inside btrfs_decompress_buf2page(), we need to grab the file offset from
the current bvec.bv_page to check whether we even need to copy data into
the bio.

With single page bvecs and no large folios, every page inside a folio
has its index properly set up.

But when large folios are involved, only the first page (aka, the head
page) of a large folio has its index properly initialized.

The other pages inside the large folio do not have their indexes
properly initialized.

Thus the page_offset() call inside btrfs_decompress_buf2page() returns
garbage and completely breaks the @copy_len calculation.

[FIX]
Instead of using page->index directly, go with page_pgoff(), which can
handle non-head pages correctly.

Introduce a helper, file_offset_from_bvec(), to get the file offset
from a single page bio_vec, so that the copy_len calculation can be done
correctly.

Signed-off-by: Qu Wenruo <wqu@suse.com>
---
 fs/btrfs/compression.c | 18 +++++++++++++++++-
 1 file changed, 17 insertions(+), 1 deletion(-)
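For readers less familiar with the folio/page index relationship the fix relies on, below is a minimal userspace sketch (not kernel code, and not part of the patch) of the arithmetic the new helper performs: the file offset of any page in a large folio is derived from the folio's index plus the page's position within the folio, never from the per-page index, which is only valid for the head page. The names model_folio and model_page_pgoff are made up for illustration.

```c
/*
 * Hypothetical userspace model of the offset math in file_offset_from_bvec().
 * A 16-page (64 KiB) large folio starts at file offset 128 KiB; only the
 * folio-level index is trusted, mirroring the situation the patch describes
 * where tail pages have no valid page->index.
 */
#include <stdio.h>
#include <stdint.h>

#define PAGE_SHIFT	12
#define PAGE_SIZE	(1UL << PAGE_SHIFT)

struct model_folio {
	uint64_t index;		/* file offset of the folio, in pages */
	unsigned int nr_pages;	/* folio size, in pages */
};

/* Folio index plus the page's position inside the folio. */
static uint64_t model_page_pgoff(const struct model_folio *folio,
				 unsigned int page_idx_in_folio)
{
	return folio->index + page_idx_in_folio;
}

int main(void)
{
	struct model_folio folio = { .index = 32, .nr_pages = 16 }; /* 128 KiB */
	unsigned int page_idx = 5;	/* some tail page inside the folio */
	unsigned int bv_offset = 0x200;	/* data offset inside that page */

	/* File offset the fixed helper would compute for this bvec. */
	uint64_t file_offset =
		(model_page_pgoff(&folio, page_idx) << PAGE_SHIFT) + bv_offset;

	printf("file offset = %llu (expected %llu)\n",
	       (unsigned long long)file_offset,
	       (unsigned long long)(128 * 1024 + 5 * PAGE_SIZE + 0x200));
	return 0;
}
```

In the kernel, page_pgoff() together with page_folio() performs this derivation from the folio itself, which is why the helper stays correct for tail pages whose page->index was never initialized.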