Message ID: 6de954aed27f8e5ebccd780bbc40ce37a6ddf4f1.1655391633.git.naohiro.aota@wdc.com (mailing list archive)
State: New, archived
Series: btrfs: fix error handling of cow_file_range(unlock = 0)
On Fri, Jun 17, 2022 at 12:45:40AM +0900, Naohiro Aota wrote:
> btrfs_cleanup_ordered_extents() assumes locked_page to be non-NULL, so it
> is not usable for submit_uncompressed_range(), which can have a NULL
> locked_page.
>
> This commit supports the locked_page == NULL case. It also rewrites the
> redundant "page_offset(locked_page)".

The patch looks fine, but I don't see any patch in the patchset that
actually makes submit_uncompressed_range() use
btrfs_cleanup_ordered_extents(). Did you forget that, or am I missing
something?

Thanks.

> Signed-off-by: Naohiro Aota <naohiro.aota@wdc.com>
> ---
>  fs/btrfs/inode.c | 36 +++++++++++++++++++++---------------
>  1 file changed, 21 insertions(+), 15 deletions(-)
>
> [ ... ]
>
> --
> 2.35.1
On Fri, Jun 17, 2022 at 11:15:15AM +0100, Filipe Manana wrote:
> On Fri, Jun 17, 2022 at 12:45:40AM +0900, Naohiro Aota wrote:
> > btrfs_cleanup_ordered_extents() assumes locked_page to be non-NULL, so it
> > is not usable for submit_uncompressed_range(), which can have a NULL
> > locked_page.
> >
> > This commit supports the locked_page == NULL case. It also rewrites the
> > redundant "page_offset(locked_page)".
>
> The patch looks fine, but I don't see any patch in the patchset that
> actually makes submit_uncompressed_range() use
> btrfs_cleanup_ordered_extents(). Did you forget that, or am I missing
> something?

As commented in the reply, it looks like you missed the first line of the
added code. :-)

> Thanks.
>
> [ ... ]
diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c
index 0c3d9998470f..4e1100f84a88 100644
--- a/fs/btrfs/inode.c
+++ b/fs/btrfs/inode.c
@@ -195,11 +195,14 @@ static inline void btrfs_cleanup_ordered_extents(struct btrfs_inode *inode,
 {
 	unsigned long index = offset >> PAGE_SHIFT;
 	unsigned long end_index = (offset + bytes - 1) >> PAGE_SHIFT;
-	u64 page_start = page_offset(locked_page);
-	u64 page_end = page_start + PAGE_SIZE - 1;
-
+	u64 page_start, page_end;
 	struct page *page;
 
+	if (locked_page) {
+		page_start = page_offset(locked_page);
+		page_end = page_start + PAGE_SIZE - 1;
+	}
+
 	while (index <= end_index) {
 		/*
 		 * For locked page, we will call end_extent_writepage() on it
@@ -212,7 +215,7 @@ static inline void btrfs_cleanup_ordered_extents(struct btrfs_inode *inode,
 		 * btrfs_mark_ordered_io_finished() would skip the accounting
 		 * for the page range, and the ordered extent will never finish.
 		 */
-		if (index == (page_offset(locked_page) >> PAGE_SHIFT)) {
+		if (locked_page && index == (page_start >> PAGE_SHIFT)) {
 			index++;
 			continue;
 		}
@@ -231,17 +234,20 @@ static inline void btrfs_cleanup_ordered_extents(struct btrfs_inode *inode,
 		put_page(page);
 	}
 
-	/* The locked page covers the full range, nothing needs to be done */
-	if (bytes + offset <= page_offset(locked_page) + PAGE_SIZE)
-		return;
-	/*
-	 * In case this page belongs to the delalloc range being instantiated
-	 * then skip it, since the first page of a range is going to be
-	 * properly cleaned up by the caller of run_delalloc_range
-	 */
-	if (page_start >= offset && page_end <= (offset + bytes - 1)) {
-		bytes = offset + bytes - page_offset(locked_page) - PAGE_SIZE;
-		offset = page_offset(locked_page) + PAGE_SIZE;
+	if (locked_page) {
+		/* The locked page covers the full range, nothing needs to be done */
+		if (bytes + offset <= page_start + PAGE_SIZE)
+			return;
+		/*
+		 * In case this page belongs to the delalloc range being
+		 * instantiated then skip it, since the first page of a range is
+		 * going to be properly cleaned up by the caller of
+		 * run_delalloc_range
+		 */
+		if (page_start >= offset && page_end <= (offset + bytes - 1)) {
+			bytes = offset + bytes - page_offset(locked_page) - PAGE_SIZE;
+			offset = page_offset(locked_page) + PAGE_SIZE;
+		}
 	}
 
 	return __endio_write_update_ordered(inode, offset, bytes, false);
btrfs_cleanup_ordered_extents() assumes locked_page to be non-NULL, so it
is not usable for submit_uncompressed_range(), which can have a NULL
locked_page.

This commit supports the locked_page == NULL case. It also rewrites the
redundant "page_offset(locked_page)".

Signed-off-by: Naohiro Aota <naohiro.aota@wdc.com>
---
 fs/btrfs/inode.c | 36 +++++++++++++++++++++---------------
 1 file changed, 21 insertions(+), 15 deletions(-)