Message ID | 8fb7820d18bef8f661f807b3d96be2591aee6494.1743487686.git.wqu@suse.com (mailing list archive)
---|---
State | New
Series | btrfs: two small and safe fixes for large folios
On 4/1/25 2:12 AM, Qu Wenruo wrote:
> +static u64 file_offset_from_bvec(const struct bio_vec *bvec)
> +{
> +	const struct page *page = bvec->bv_page;
> +	const struct folio *folio = page_folio(page);
> +
> +	return page_pgoff(folio, page) + bvec->bv_offset;
> +}

I think this needs to be page_pgoff() << PAGE_SHIFT + bvec->bv_offset:
page_pgoff() returns in units of PAGE_SIZE, while bv_offset is in bytes?
On 2025/4/1 17:03, Sweet Tea Dorminy wrote:
>
> On 4/1/25 2:12 AM, Qu Wenruo wrote:
>> +static u64 file_offset_from_bvec(const struct bio_vec *bvec)
>> +{
>> +	const struct page *page = bvec->bv_page;
>> +	const struct folio *folio = page_folio(page);
>> +
>> +	return page_pgoff(folio, page) + bvec->bv_offset;
>> +}
>
> I think this needs to be page_pgoff() << PAGE_SHIFT + bvec->bv_offset:
> page_pgoff() returns in units of PAGE_SIZE, while bv_offset is in bytes?

Oh no, this must be some local change not committed, thanks for catching it.

Thanks,
Qu
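For reference, a minimal sketch of what the corrected helper would presumably look like once the missing shift is applied — the posted patch below still lacks it, and the explicit parentheses and u64 cast here are editorial, not the committed version:

static u64 file_offset_from_bvec(const struct bio_vec *bvec)
{
	const struct page *page = bvec->bv_page;
	const struct folio *folio = page_folio(page);

	/*
	 * page_pgoff() returns a page index (units of PAGE_SIZE), so it
	 * must be shifted into bytes before adding the byte-granular
	 * bv_offset.
	 */
	return ((u64)page_pgoff(folio, page) << PAGE_SHIFT) + bvec->bv_offset;
}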
diff --git a/fs/btrfs/compression.c b/fs/btrfs/compression.c
index e7f8ee5d48a4..ee70f086c884 100644
--- a/fs/btrfs/compression.c
+++ b/fs/btrfs/compression.c
@@ -1137,6 +1137,22 @@ void __cold btrfs_exit_compress(void)
 	bioset_exit(&btrfs_compressed_bioset);
 }
 
+/*
+ * The bvec is a single page bvec from a bio that contains folios from a filemap.
+ *
+ * Since the folios may be large one, and if the bv_page is not a head page of
+ * a large folio, then page->index is unreliable.
+ *
+ * Thus we need this helper to grab the proper file offset.
+ */
+static u64 file_offset_from_bvec(const struct bio_vec *bvec)
+{
+	const struct page *page = bvec->bv_page;
+	const struct folio *folio = page_folio(page);
+
+	return page_pgoff(folio, page) + bvec->bv_offset;
+}
+
 /*
  * Copy decompressed data from working buffer to pages.
  *
@@ -1188,7 +1204,7 @@ int btrfs_decompress_buf2page(const char *buf, u32 buf_len,
 		 * cb->start may underflow, but subtracting that value can still
 		 * give us correct offset inside the full decompressed extent.
 		 */
-		bvec_offset = page_offset(bvec.bv_page) + bvec.bv_offset - cb->start;
+		bvec_offset = file_offset_from_bvec(&bvec) - cb->start;
 
 		/* Haven't reached the bvec range, exit */
 		if (decompressed + buf_len <= bvec_offset)
[BUG WITH EXPERIMENTAL LARGE FOLIOS]
When testing the experimental large data folio support with compression,
several ASSERT()s are triggered from btrfs_decompress_buf2page() when
running fsstress with the compress=zstd mount option:

- ASSERT(copy_len) from btrfs_decompress_buf2page()
- VM_BUG_ON(offset + len > PAGE_SIZE) from memcpy_to_page()

[CAUSE]
Inside btrfs_decompress_buf2page(), we need to grab the file offset of
the current bvec.bv_page, to check whether we even need to copy data
into the bio.

Since we're using single page bvecs, and without large folios, every
page has its index properly set up, so page_offset() is reliable.

But when large folios are involved, only the first page (aka, the head
page) of a large folio has its index properly initialized; the other
pages inside the large folio do not.

Thus the page_offset() call inside btrfs_decompress_buf2page() returns
garbage and completely screws up the @copy_len calculation.

[FIX]
Instead of using page->index directly, go with page_pgoff(), which
handles non-head pages correctly.

Introduce a helper, file_offset_from_bvec(), to get the file offset
from a single page bio_vec, so the copy_len calculation can be done
correctly.

Signed-off-by: Qu Wenruo <wqu@suse.com>
---
 fs/btrfs/compression.c | 18 +++++++++++++++++-
 1 file changed, 17 insertions(+), 1 deletion(-)
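To make the [CAUSE]/[FIX] reasoning concrete, here is a rough annotation of the old and new offset calculations. The comments paraphrase the behaviour of the mm helpers rather than quoting their exact in-tree definitions, and they fold in the byte shift discussed in the review above:

	/* Old calculation in btrfs_decompress_buf2page(): */
	bvec_offset = page_offset(bvec.bv_page) + bvec.bv_offset - cb->start;
	/*
	 * page_offset() is roughly page->index << PAGE_SHIFT. For a large
	 * folio only the head page has ->index set up, so tail pages feed
	 * garbage into @copy_len.
	 */

	/* New calculation, via the helper added by this patch: */
	bvec_offset = file_offset_from_bvec(&bvec) - cb->start;
	/*
	 * page_pgoff(folio, page) is roughly folio->index +
	 * folio_page_idx(folio, page), i.e. the head page's index plus the
	 * page's position inside the folio, so it stays correct for tail
	 * pages as well (once shifted into bytes).
	 */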