iomap: iomap_write_failed fix

Message ID 20220503213645.3273828-1-agruenba@redhat.com (mailing list archive)
State New, archived
Series iomap: iomap_write_failed fix

Commit Message

Andreas Gruenbacher May 3, 2022, 9:36 p.m. UTC
The @lend parameter of truncate_pagecache_range() should be the offset
of the last byte of the hole, not the first byte beyond it.

Fixes: ae259a9c8593 ("fs: introduce iomap infrastructure")
Signed-off-by: Andreas Gruenbacher <agruenba@redhat.com>
---
 fs/iomap/buffered-io.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)
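
[Editorial aside, not part of the patch or the thread: truncate_pagecache_range() treats @lend as the offset of the last byte to remove, so punching exactly the byte range [pos, pos + len) requires @lend == pos + len - 1. A minimal standalone sketch of that arithmetic, using hypothetical values:]

#include <stdio.h>

/*
 * Standalone illustration (not kernel code): with an inclusive end offset,
 * removing a range of `len` bytes starting at `start` needs
 * lend = start + len - 1, not start + len.
 */
int main(void)
{
	long long start = 0, len = 4096;             /* one page-sized hole */

	long long lend_exclusive = start + len;      /* 4096: one byte too far  */
	long long lend_inclusive = start + len - 1;  /* 4095: last byte of hole */

	printf("punch [%lld, %lld], not [%lld, %lld]\n",
	       start, lend_inclusive, start, lend_exclusive);
	return 0;
}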

Comments

Matthew Wilcox May 3, 2022, 9:44 p.m. UTC | #1
On Tue, May 03, 2022 at 11:36:45PM +0200, Andreas Gruenbacher wrote:
> The @lend parameter of truncate_pagecache_range() should be the offset
> of the last byte of the hole, not the first byte beyond it.
> 
> Fixes: ae259a9c8593 ("fs: introduce iomap infrastructure")

Hm, yes, this is _true_, but it's a fix without importance (except maybe
for an overflow case?).  Look at the condition this is called in.  We
aren't punching out an extra byte in the page cache because we're
punching beyond the end of the file.

It should be fixed because people copy-and-paste code.  But it's not
urgent, and doesn't need to be backported.
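
[Editorial aside, making the point above concrete with hypothetical numbers that do not appear in the thread: the call is guarded by `pos + len > i_size` and starts at `max(pos, i_size)`, so the punched range always begins at or after the old EOF; the one extra byte the old code removed was therefore already past the end of the file. A small standalone sketch:]

#include <stdio.h>

/* Hypothetical values, chosen only to illustrate the reasoning above. */
int main(void)
{
	long long i_size = 4096;   /* old file size                  */
	long long pos    = 4000;   /* failed write starts inside EOF */
	long long len    = 200;    /* failed write extends past EOF  */

	if (pos + len > i_size) {                                /* 4200 > 4096 */
		long long lstart   = pos > i_size ? pos : i_size;    /* 4096 */
		long long lend_old = pos + len;                      /* 4200 */
		long long lend_new = pos + len - 1;                  /* 4199 */

		/* Both ranges start at the old EOF, so the extra byte at
		 * offset 4200 that the old code punched was already beyond
		 * the file: a correctness fix, but not an urgent one. */
		printf("old: [%lld, %lld]  new: [%lld, %lld]\n",
		       lstart, lend_old, lstart, lend_new);
	}
	return 0;
}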

Reviewed-by: Matthew Wilcox (Oracle) <willy@infradead.org>

> Signed-off-by: Andreas Gruenbacher <agruenba@redhat.com>
> ---
>  fs/iomap/buffered-io.c | 3 ++-
>  1 file changed, 2 insertions(+), 1 deletion(-)
> 
> diff --git a/fs/iomap/buffered-io.c b/fs/iomap/buffered-io.c
> index 8ce8720093b9..358ee1fb6f0d 100644
> --- a/fs/iomap/buffered-io.c
> +++ b/fs/iomap/buffered-io.c
> @@ -531,7 +531,8 @@ iomap_write_failed(struct inode *inode, loff_t pos, unsigned len)
>  	 * write started inside the existing inode size.
>  	 */
>  	if (pos + len > i_size)
> -		truncate_pagecache_range(inode, max(pos, i_size), pos + len);
> +		truncate_pagecache_range(inode, max(pos, i_size),
> +					 pos + len - 1);
>  }
>  
>  static int iomap_read_folio_sync(loff_t block_start, struct folio *folio,
> -- 
> 2.35.1
>
Darrick J. Wong May 3, 2022, 11:03 p.m. UTC | #2
On Tue, May 03, 2022 at 11:36:45PM +0200, Andreas Gruenbacher wrote:
> The @lend parameter of truncate_pagecache_range() should be the offset
> of the last byte of the hole, not the first byte beyond it.
> 
> Fixes: ae259a9c8593 ("fs: introduce iomap infrastructure")
> Signed-off-by: Andreas Gruenbacher <agruenba@redhat.com>

I'll queue this up for ... 5.19?  Testing infrastructure still sorta
tied up until I get at least two clean runs on 5.18-rcX, which <cough>
still hasn't happened yet.

Reviewed-by: Darrick J. Wong <djwong@kernel.org>

--D

> ---
>  fs/iomap/buffered-io.c | 3 ++-
>  1 file changed, 2 insertions(+), 1 deletion(-)
> 
> diff --git a/fs/iomap/buffered-io.c b/fs/iomap/buffered-io.c
> index 8ce8720093b9..358ee1fb6f0d 100644
> --- a/fs/iomap/buffered-io.c
> +++ b/fs/iomap/buffered-io.c
> @@ -531,7 +531,8 @@ iomap_write_failed(struct inode *inode, loff_t pos, unsigned len)
>  	 * write started inside the existing inode size.
>  	 */
>  	if (pos + len > i_size)
> -		truncate_pagecache_range(inode, max(pos, i_size), pos + len);
> +		truncate_pagecache_range(inode, max(pos, i_size),
> +					 pos + len - 1);
>  }
>  
>  static int iomap_read_folio_sync(loff_t block_start, struct folio *folio,
> -- 
> 2.35.1
>
Christoph Hellwig May 4, 2022, 2:09 p.m. UTC | #3
On Tue, May 03, 2022 at 11:36:45PM +0200, Andreas Gruenbacher wrote:
> The @lend parameter of truncate_pagecache_range() should be the offset
> of the last byte of the hole, not the first byte beyond it.
> 
> Fixes: ae259a9c8593 ("fs: introduce iomap infrastructure")
> Signed-off-by: Andreas Gruenbacher <agruenba@redhat.com>

Looks good:

Reviewed-by: Christoph Hellwig <hch@lst.de>

Patch

diff --git a/fs/iomap/buffered-io.c b/fs/iomap/buffered-io.c
index 8ce8720093b9..358ee1fb6f0d 100644
--- a/fs/iomap/buffered-io.c
+++ b/fs/iomap/buffered-io.c
@@ -531,7 +531,8 @@ iomap_write_failed(struct inode *inode, loff_t pos, unsigned len)
 	 * write started inside the existing inode size.
 	 */
 	if (pos + len > i_size)
-		truncate_pagecache_range(inode, max(pos, i_size), pos + len);
+		truncate_pagecache_range(inode, max(pos, i_size),
+					 pos + len - 1);
 }
 
 static int iomap_read_folio_sync(loff_t block_start, struct folio *folio,