Message ID | 20211101203929.954622-21-willy@infradead.org (mailing list archive)
---|---
State | New, archived
Series | iomap/xfs folio patches
On Mon, Nov 01, 2021 at 08:39:28PM +0000, Matthew Wilcox (Oracle) wrote:
> If we're punching a hole in a multi-page folio, we need to remove the
> per-folio iomap data as the folio is about to be split and each page will
> need its own. If a dirty folio is only partially-uptodate, the iomap
> data contains the information about which blocks cannot be written back,
> so assert that a dirty folio is fully uptodate.
>
> Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>

Looks good,

Reviewed-by: Christoph Hellwig <hch@lst.de>
On Mon, Nov 01, 2021 at 08:39:28PM +0000, Matthew Wilcox (Oracle) wrote:
> If we're punching a hole in a multi-page folio, we need to remove the
> per-folio iomap data as the folio is about to be split and each page will
> need its own. If a dirty folio is only partially-uptodate, the iomap
> data contains the information about which blocks cannot be written back,
> so assert that a dirty folio is fully uptodate.
>
> Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>

Looks good to me,
Reviewed-by: Darrick J. Wong <djwong@kernel.org>

--D

> ---
>  fs/iomap/buffered-io.c | 9 +++++++--
>  1 file changed, 7 insertions(+), 2 deletions(-)
>
> diff --git a/fs/iomap/buffered-io.c b/fs/iomap/buffered-io.c
> index 3b93fdfedb72..9d7c91f9ec1d 100644
> --- a/fs/iomap/buffered-io.c
> +++ b/fs/iomap/buffered-io.c
> @@ -470,13 +470,18 @@ void iomap_invalidate_folio(struct folio *folio, size_t offset, size_t len)
>  	trace_iomap_invalidatepage(folio->mapping->host, offset, len);
>
>  	/*
> -	 * If we're invalidating the entire page, clear the dirty state from it
> -	 * and release it to avoid unnecessary buildup of the LRU.
> +	 * If we're invalidating the entire folio, clear the dirty state
> +	 * from it and release it to avoid unnecessary buildup of the LRU.
>  	 */
>  	if (offset == 0 && len == folio_size(folio)) {
>  		WARN_ON_ONCE(folio_test_writeback(folio));
>  		folio_cancel_dirty(folio);
>  		iomap_page_release(folio);
> +	} else if (folio_test_multi(folio)) {
> +		/* Must release the iop so the page can be split */
> +		WARN_ON_ONCE(!folio_test_uptodate(folio) &&
> +				folio_test_dirty(folio));
> +		iomap_page_release(folio);
>  	}
>  }
>  EXPORT_SYMBOL_GPL(iomap_invalidate_folio);
> --
> 2.33.0
>
diff --git a/fs/iomap/buffered-io.c b/fs/iomap/buffered-io.c
index 3b93fdfedb72..9d7c91f9ec1d 100644
--- a/fs/iomap/buffered-io.c
+++ b/fs/iomap/buffered-io.c
@@ -470,13 +470,18 @@ void iomap_invalidate_folio(struct folio *folio, size_t offset, size_t len)
 	trace_iomap_invalidatepage(folio->mapping->host, offset, len);

 	/*
-	 * If we're invalidating the entire page, clear the dirty state from it
-	 * and release it to avoid unnecessary buildup of the LRU.
+	 * If we're invalidating the entire folio, clear the dirty state
+	 * from it and release it to avoid unnecessary buildup of the LRU.
 	 */
 	if (offset == 0 && len == folio_size(folio)) {
 		WARN_ON_ONCE(folio_test_writeback(folio));
 		folio_cancel_dirty(folio);
 		iomap_page_release(folio);
+	} else if (folio_test_multi(folio)) {
+		/* Must release the iop so the page can be split */
+		WARN_ON_ONCE(!folio_test_uptodate(folio) &&
+				folio_test_dirty(folio));
+		iomap_page_release(folio);
 	}
 }
 EXPORT_SYMBOL_GPL(iomap_invalidate_folio);
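To make the new assertion concrete: the per-folio iomap data tracks which blocks of the folio are uptodate, and releasing it for a dirty folio only loses information if some block is not uptodate, because writeback would then no longer know which blocks it must not write back. Below is a minimal user-space model of that invariant; it is an illustrative sketch, not kernel code, and the struct and function names are made up for the example.

```c
#include <stdbool.h>
#include <stdio.h>

/* Toy stand-in for the per-folio state that iomap_page_release() frees.
 * The real code keeps (roughly) a per-block uptodate bitmap for folios
 * spanning more than one filesystem block; these names are illustrative. */
#define BLOCKS_PER_FOLIO 8

struct folio_model {
	bool dirty;                             /* folio-level dirty flag */
	bool block_uptodate[BLOCKS_PER_FOLIO];  /* per-block state held in the iop */
};

/* Mirrors the patch's WARN_ON_ONCE(!uptodate && dirty): dropping the
 * per-block bitmap loses information only when the folio is dirty but
 * not every block is uptodate. */
static bool release_loses_info(const struct folio_model *f)
{
	if (!f->dirty)
		return false;
	for (int i = 0; i < BLOCKS_PER_FOLIO; i++)
		if (!f->block_uptodate[i])
			return true;    /* writeback would forget this block is not valid */
	return false;
}

int main(void)
{
	struct folio_model f = { .dirty = true };
	f.block_uptodate[0] = true;     /* only the first block was ever read in */
	printf("releasing the iop loses info: %s\n",
	       release_loses_info(&f) ? "yes" : "no");
	return 0;
}
```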
If we're punching a hole in a multi-page folio, we need to remove the
per-folio iomap data as the folio is about to be split and each page will
need its own. If a dirty folio is only partially-uptodate, the iomap
data contains the information about which blocks cannot be written back,
so assert that a dirty folio is fully uptodate.

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 fs/iomap/buffered-io.c | 9 +++++++--
 1 file changed, 7 insertions(+), 2 deletions(-)
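As for the split part of the rationale: a multi-page folio carries a single tracking structure (struct iomap_page, attached via the folio's private pointer) sized for the whole folio, and once the folio is split into individual pages that one structure can neither be shared nor resized in place, so it is released up front and rebuilt on demand by later reads or writes. A hedged sketch of that idea in plain C, using illustrative names rather than the kernel's API:

```c
#include <stdlib.h>

/* Illustrative model only: one tracking object attached to a folio and
 * sized for all of its blocks, standing in for struct iomap_page. */
struct iop_model {
	unsigned int nr_blocks;      /* covers every block of the multi-page folio */
	unsigned char *uptodate;     /* bitmap sized for nr_blocks */
};

struct folio_model {
	unsigned int nr_pages;
	struct iop_model *private;   /* exactly one per folio */
};

/* Before the folio is split into nr_pages single-page folios, the shared
 * tracking object must go away: each resulting page allocates its own,
 * smaller one lazily when it is next read or written. */
static void release_private_before_split(struct folio_model *folio)
{
	if (!folio->private)
		return;
	free(folio->private->uptodate);
	free(folio->private);
	folio->private = NULL;
}
```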