Message ID: 20231126124720.1249310-2-hch@lst.de
State: New, archived
Series: [01/13] iomap: clear the per-folio dirty bits on all writeback failures
Christoph Hellwig <hch@lst.de> writes:

> write_cache_pages always clear the page dirty bit before calling into the
> file systems, and leaves folios with a writeback failure without the
> dirty bit after return.  We also clear the per-block writeback bits for

you mean per-block dirty bits, right?

> writeback failures unless no I/O has submitted, which will leave the
> folio in an inconsistent state where it doesn't have the folio dirty,
> but one or more per-block dirty bits.  This seems to be due the place
> where the iomap_clear_range_dirty call was inserted into the existing
> not very clearly structured code when adding per-block dirty bit support
> and not actually intentional.  Switch to always clearing the dirty on
> writeback failure.
>
> Fixes: 4ce02c679722 ("iomap: Add per-block dirty state tracking to improve performance")
> Signed-off-by: Christoph Hellwig <hch@lst.de>
> ---

Thanks for catching it. Small nit below.

>  fs/iomap/buffered-io.c | 10 ++++++----
>  1 file changed, 6 insertions(+), 4 deletions(-)
>
> diff --git a/fs/iomap/buffered-io.c b/fs/iomap/buffered-io.c
> index f72df2babe561a..98d52feb220f0a 100644
> --- a/fs/iomap/buffered-io.c
> +++ b/fs/iomap/buffered-io.c
> @@ -1849,10 +1849,6 @@ iomap_writepage_map(struct iomap_writepage_ctx *wpc,
>  		 */

	/*
	 * Let the filesystem know what portion of the current page
	 * failed to map. If the page hasn't been added to ioend, it
	 * won't be affected by I/O completion and we must unlock it
	 * now.
	 */

The comment to unlock it now becomes stale here.

>  		if (wpc->ops->discard_folio)
>  			wpc->ops->discard_folio(folio, pos);
> -		if (!count) {
> -			folio_unlock(folio);
> -			goto done;
> -		}
>  	}
>
>  	/*
> @@ -1861,6 +1857,12 @@ iomap_writepage_map(struct iomap_writepage_ctx *wpc,
>  	 * all the dirty bits in the folio here.
>  	 */
>  	iomap_clear_range_dirty(folio, 0, folio_size(folio));

Maybe why not move iomap_clear_range_dirty() before? i.e.

diff --git a/fs/iomap/buffered-io.c b/fs/iomap/buffered-io.c
index 200c26f95893..c875ba632dd8 100644
--- a/fs/iomap/buffered-io.c
+++ b/fs/iomap/buffered-io.c
@@ -1842,6 +1842,13 @@ iomap_writepage_map(struct iomap_writepage_ctx *wpc,
 	if (count)
 		wpc->ioend->io_folios++;
 
+	/*
+	 * We can have dirty bits set past end of file in page_mkwrite path
+	 * while mapping the last partial folio. Hence it's better to clear
+	 * all the dirty bits in the folio here.
+	 */
+	iomap_clear_range_dirty(folio, 0, folio_size(folio));
+
 	WARN_ON_ONCE(!wpc->ioend && !list_empty(&submit_list));
 	WARN_ON_ONCE(!folio_test_locked(folio));
 	WARN_ON_ONCE(folio_test_writeback(folio));
@@ -1867,13 +1874,6 @@ iomap_writepage_map(struct iomap_writepage_ctx *wpc,
 			goto done;
 		}
 	}
-
-	/*
-	 * We can have dirty bits set past end of file in page_mkwrite path
-	 * while mapping the last partial folio. Hence it's better to clear
-	 * all the dirty bits in the folio here.
-	 */
-	iomap_clear_range_dirty(folio, 0, folio_size(folio));
 
 	folio_start_writeback(folio);
 	folio_unlock(folio);

> +
> +	if (error && !count) {
> +		folio_unlock(folio);
> +		goto done;
> +	}
> +
>  	folio_start_writeback(folio);
>  	folio_unlock(folio);

-ritesh
On Mon, Nov 27, 2023 at 09:17:07AM +0530, Ritesh Harjani wrote:
> Christoph Hellwig <hch@lst.de> writes:
>
> > write_cache_pages always clear the page dirty bit before calling into the
> > file systems, and leaves folios with a writeback failure without the
> > dirty bit after return.  We also clear the per-block writeback bits for
>
> you mean per-block dirty bits, right?

Yes, sorry.

> 	/*
> 	 * Let the filesystem know what portion of the current page
> 	 * failed to map. If the page hasn't been added to ioend, it
> 	 * won't be affected by I/O completion and we must unlock it
> 	 * now.
> 	 */
>
> The comment to unlock it now becomes stale here.

Indeed.

> Maybe why not move iomap_clear_range_dirty() before?

A few patches down the ->discard_folio call moves into a helper, so I'd
rather avoid the churn here.  I'll move that part of the comment to the
new check below iomap_clear_range_dirty instead.
diff --git a/fs/iomap/buffered-io.c b/fs/iomap/buffered-io.c
index f72df2babe561a..98d52feb220f0a 100644
--- a/fs/iomap/buffered-io.c
+++ b/fs/iomap/buffered-io.c
@@ -1849,10 +1849,6 @@ iomap_writepage_map(struct iomap_writepage_ctx *wpc,
 		 */
 		if (wpc->ops->discard_folio)
 			wpc->ops->discard_folio(folio, pos);
-		if (!count) {
-			folio_unlock(folio);
-			goto done;
-		}
 	}
 
 	/*
@@ -1861,6 +1857,12 @@ iomap_writepage_map(struct iomap_writepage_ctx *wpc,
 	 * all the dirty bits in the folio here.
 	 */
 	iomap_clear_range_dirty(folio, 0, folio_size(folio));
+
+	if (error && !count) {
+		folio_unlock(folio);
+		goto done;
+	}
+
 	folio_start_writeback(folio);
 	folio_unlock(folio);
write_cache_pages always clears the page dirty bit before calling into the
file systems, and leaves folios with a writeback failure without the
dirty bit after return.  We also clear the per-block dirty bits for
writeback failures unless no I/O has been submitted, which will leave the
folio in an inconsistent state where the folio dirty bit is clear, but one
or more per-block dirty bits are still set.  This seems to be due to the
place where the iomap_clear_range_dirty call was inserted into the
existing, not very clearly structured code when adding per-block dirty bit
support, and not actually intentional.  Switch to always clearing the
dirty bits on writeback failure.

Fixes: 4ce02c679722 ("iomap: Add per-block dirty state tracking to improve performance")
Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 fs/iomap/buffered-io.c | 10 ++++++----
 1 file changed, 6 insertions(+), 4 deletions(-)