Message ID | 0343bb9a59f8c3d647004df471445153bf1f8a3d.1687140389.git.ritesh.list@gmail.com (mailing list archive)
---|---
State | New, archived
Series | iomap: Add support for per-block dirty state to improve write performance
diff --git a/fs/iomap/buffered-io.c b/fs/iomap/buffered-io.c
index 790b8413b44c..8206f3628586 100644
--- a/fs/iomap/buffered-io.c
+++ b/fs/iomap/buffered-io.c
@@ -933,7 +933,7 @@ static int iomap_write_delalloc_scan(struct inode *inode,
 			 * the end of this data range, not the end of the folio.
 			 */
 			*punch_start_byte = min_t(loff_t, end_byte,
-					folio_next_index(folio) << PAGE_SHIFT);
+					folio_pos(folio) + folio_size(folio));
 		}
 
 		/* move offset to start of next folio in range */
folio_next_index() returns an unsigned long, which could overflow when
left shifted by PAGE_SHIFT. Use folio_pos(folio) + folio_size(folio)
instead, which computes the end offset correctly.

Suggested-by: Matthew Wilcox <willy@infradead.org>
Signed-off-by: Ritesh Harjani (IBM) <ritesh.list@gmail.com>
---
 fs/iomap/buffered-io.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
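For context (not part of the patch), below is a minimal userspace C sketch of the overflow the commit message describes: when the page index is a 32-bit unsigned long, the left shift by PAGE_SHIFT wraps for byte offsets at or beyond 4 GiB, whereas doing the arithmetic in 64 bits (as folio_pos(folio) + folio_size(folio) does via loff_t) gives the correct byte offset. The PAGE_SHIFT value and the sample index are assumptions chosen purely for illustration.

```c
/*
 * Illustration only -- not kernel code. Shows how a 32-bit page index
 * shifted by PAGE_SHIFT wraps, while a 64-bit byte computation does not.
 */
#include <stdio.h>
#include <stdint.h>

#define PAGE_SHIFT 12	/* assumed 4 KiB pages */

int main(void)
{
	/* Page index just past the 4 GiB boundary; stands in for a
	 * 32-bit unsigned long on a 32-bit build. */
	uint32_t next_index = (1u << 20) + 1;

	/* Shift happens in 32-bit arithmetic and wraps around. */
	uint64_t wrapped = (uint64_t)(next_index << PAGE_SHIFT);

	/* Widening to 64 bits before the shift yields the real byte
	 * offset, which is what a loff_t-based computation provides. */
	uint64_t correct = (uint64_t)next_index << PAGE_SHIFT;

	printf("32-bit shift: 0x%llx\n", (unsigned long long)wrapped);
	printf("64-bit shift: 0x%llx\n", (unsigned long long)correct);
	return 0;
}
```

Running this sketch prints 0x1000 for the wrapped value and 0x100001000 for the 64-bit result, which is the kind of discrepancy the patch avoids.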