Message ID | 20230314165110.372858-3-hch@lst.de (mailing list archive)
---|---
State | New, archived
Series | [1/2] btrfs: move the bi_sector assignment out of btrfs_add_compressed_bio_pages
On 3/15/23 00:51, Christoph Hellwig wrote:
> btrfs_add_compressed_bio_pages is neededlyless complicated. Instead
> of iterating over the logic disk offset just to add pages to the bio
> use a simple offset starting at 0, which also removes most of the
> claiming. Additionally __bio_add_pages already takes care of the
               ^__bio_add_page
> assert that the bio is always properly sized, and btrfs_submit_bio

WARN_ON_ONCE() will be triggered if the bio is a CLONED bio or is full
when passed to __bio_add_page().

And, in our case, we do not require the functionality of
__bio_try_merge_page()?

> called right after asserts that the bio size is non-zero.
>
> Signed-off-by: Christoph Hellwig <hch@lst.de>

Other than the nitpick above, looks good.

Thanks, Anand

> ---
>  fs/btrfs/compression.c | 34 +++++++---------------------------
>  1 file changed, 7 insertions(+), 27 deletions(-)
>
> diff --git a/fs/btrfs/compression.c b/fs/btrfs/compression.c
> index 1487c9413e6942..44c4276741ceda 100644
> --- a/fs/btrfs/compression.c
> +++ b/fs/btrfs/compression.c
> @@ -258,37 +258,17 @@ static void end_compressed_bio_write(struct btrfs_bio *bbio)
>
>  static void btrfs_add_compressed_bio_pages(struct compressed_bio *cb)
>  {
> -	struct btrfs_fs_info *fs_info = cb->bbio.inode->root->fs_info;
>  	struct bio *bio = &cb->bbio.bio;
> -	u64 disk_bytenr = bio->bi_iter.bi_sector << SECTOR_SHIFT;
> -	u64 cur_disk_byte = disk_bytenr;
> +	u32 offset = 0;
>
> -	while (cur_disk_byte < disk_bytenr + cb->compressed_len) {
> -		u64 offset = cur_disk_byte - disk_bytenr;
> -		unsigned int index = offset >> PAGE_SHIFT;
> -		unsigned int real_size;
> -		unsigned int added;
> -		struct page *page = cb->compressed_pages[index];
> +	while (offset < cb->compressed_len) {
> +		u32 len = min_t(u32, cb->compressed_len - offset, PAGE_SIZE);
>
> -		/*
> -		 * We have various limit on the real read size:
> -		 * - page boundary
> -		 * - compressed length boundary
> -		 */
> -		real_size = min_t(u64, U32_MAX, PAGE_SIZE - offset_in_page(offset));
> -		real_size = min_t(u64, real_size, cb->compressed_len - offset);
> -		ASSERT(IS_ALIGNED(real_size, fs_info->sectorsize));
> -
> -		added = bio_add_page(bio, page, real_size, offset_in_page(offset));
> -		/*
> -		 * Maximum compressed extent is smaller than bio size limit,
> -		 * thus bio_add_page() should always success.
> -		 */
> -		ASSERT(added == real_size);
> -		cur_disk_byte += added;
> +		/* Maximum compressed extent is smaller than bio size limit. */
> +		__bio_add_page(bio, cb->compressed_pages[offset >> PAGE_SHIFT],
> +			       len, 0);
> +		offset += len;
> 	}
> -
> -	ASSERT(bio->bi_iter.bi_size);
>  }
>
>  /*
On Thu, Mar 16, 2023 at 04:57:39PM +0800, Anand Jain wrote:
> And, in our case, we do not require the functionality of
> __bio_try_merge_page()?

Yes.  These are separately allocated pages, and we have enough space
for them.  In the unlikely case that they are contiguous, they will
still be coalesced when merging into the scatterlist.
diff --git a/fs/btrfs/compression.c b/fs/btrfs/compression.c
index 1487c9413e6942..44c4276741ceda 100644
--- a/fs/btrfs/compression.c
+++ b/fs/btrfs/compression.c
@@ -258,37 +258,17 @@ static void end_compressed_bio_write(struct btrfs_bio *bbio)
 
 static void btrfs_add_compressed_bio_pages(struct compressed_bio *cb)
 {
-	struct btrfs_fs_info *fs_info = cb->bbio.inode->root->fs_info;
 	struct bio *bio = &cb->bbio.bio;
-	u64 disk_bytenr = bio->bi_iter.bi_sector << SECTOR_SHIFT;
-	u64 cur_disk_byte = disk_bytenr;
+	u32 offset = 0;
 
-	while (cur_disk_byte < disk_bytenr + cb->compressed_len) {
-		u64 offset = cur_disk_byte - disk_bytenr;
-		unsigned int index = offset >> PAGE_SHIFT;
-		unsigned int real_size;
-		unsigned int added;
-		struct page *page = cb->compressed_pages[index];
+	while (offset < cb->compressed_len) {
+		u32 len = min_t(u32, cb->compressed_len - offset, PAGE_SIZE);
 
-		/*
-		 * We have various limit on the real read size:
-		 * - page boundary
-		 * - compressed length boundary
-		 */
-		real_size = min_t(u64, U32_MAX, PAGE_SIZE - offset_in_page(offset));
-		real_size = min_t(u64, real_size, cb->compressed_len - offset);
-		ASSERT(IS_ALIGNED(real_size, fs_info->sectorsize));
-
-		added = bio_add_page(bio, page, real_size, offset_in_page(offset));
-		/*
-		 * Maximum compressed extent is smaller than bio size limit,
-		 * thus bio_add_page() should always success.
-		 */
-		ASSERT(added == real_size);
-		cur_disk_byte += added;
+		/* Maximum compressed extent is smaller than bio size limit. */
+		__bio_add_page(bio, cb->compressed_pages[offset >> PAGE_SHIFT],
+			       len, 0);
+		offset += len;
 	}
-
-	ASSERT(bio->bi_iter.bi_size);
 }
 
 /*
btrfs_add_compressed_bio_pages is needlessly complicated. Instead of
iterating over the logical disk offset just to add pages to the bio,
use a simple offset starting at 0, which also removes most of the
clamping. Additionally, __bio_add_page already takes care of asserting
that the bio is always properly sized, and btrfs_submit_bio, called
right after, asserts that the bio size is non-zero.

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 fs/btrfs/compression.c | 34 +++++++---------------------------
 1 file changed, 7 insertions(+), 27 deletions(-)