
[v2,2/2] iomap: elide flush from partial eof zero range

Message ID: 20241031140449.439576-3-bfoster@redhat.com
State: New
Series: iomap: avoid flushes for partial eof zeroing

Commit Message

Brian Foster Oct. 31, 2024, 2:04 p.m. UTC
iomap zero range flushes pagecache in certain situations to
determine which parts of the range might require zeroing if dirty
data is present in pagecache. The kernel robot recently reported a
regression associated with this flushing in the following stress-ng
workload on XFS:

stress-ng --timeout 60 --times --verify --metrics --no-rand-seed --metamix 64

This workload involves repeated small, strided, extending writes. On
XFS, this produces a pattern of post-eof speculative preallocation,
conversion of preallocation from delalloc to unwritten, dirtying
pagecache over newly unwritten blocks, and then rinse and repeat
from the new EOF. This leads to repetitive flushing of the EOF folio
via the zero range call XFS uses for writes that start beyond
current EOF.

To mitigate this problem, special case EOF block zeroing to prefer
zeroing the folio over a flush when the EOF folio is already dirty.
To do this, split out and open code handling of an unaligned start
offset. This brings most of the performance back by avoiding flushes
on zero range calls via write and truncate extension operations. The
flush doesn't occur in these situations because the entire range is
post-eof and therefore the folio that overlaps EOF is the only one
in the range.

Signed-off-by: Brian Foster <bfoster@redhat.com>
---
 fs/iomap/buffered-io.c | 42 ++++++++++++++++++++++++++++++++++++++----
 1 file changed, 38 insertions(+), 4 deletions(-)

Comments

Darrick J. Wong Nov. 6, 2024, 12:11 a.m. UTC | #1
On Thu, Oct 31, 2024 at 10:04:48AM -0400, Brian Foster wrote:
> iomap zero range flushes pagecache in certain situations to
> determine which parts of the range might require zeroing if dirty
> data is present in pagecache. The kernel robot recently reported a
> regression associated with this flushing in the following stress-ng
> workload on XFS:
> 
> stress-ng --timeout 60 --times --verify --metrics --no-rand-seed --metamix 64
> 
> This workload involves repeated small, strided, extending writes. On
> XFS, this produces a pattern of post-eof speculative preallocation,
> conversion of preallocation from delalloc to unwritten, dirtying
> pagecache over newly unwritten blocks, and then rinse and repeat
> from the new EOF. This leads to repetitive flushing of the EOF folio
> via the zero range call XFS uses for writes that start beyond
> current EOF.
> 
> To mitigate this problem, special case EOF block zeroing to prefer
> zeroing the folio over a flush when the EOF folio is already dirty.
> To do this, split out and open code handling of an unaligned start
> offset. This brings most of the performance back by avoiding flushes
> on zero range calls via write and truncate extension operations. The
> flush doesn't occur in these situations because the entire range is
> post-eof and therefore the folio that overlaps EOF is the only one
> in the range.
> 
> Signed-off-by: Brian Foster <bfoster@redhat.com>

Cc: <stable@vger.kernel.org> # v6.12-rc1
Fixes: 7d9b474ee4cc37 ("iomap: make zero range flush conditional on unwritten mappings")

perhaps?

> ---
>  fs/iomap/buffered-io.c | 42 ++++++++++++++++++++++++++++++++++++++----
>  1 file changed, 38 insertions(+), 4 deletions(-)
> 
> diff --git a/fs/iomap/buffered-io.c b/fs/iomap/buffered-io.c
> index 60386cb7b9ef..343a2fa29bec 100644
> --- a/fs/iomap/buffered-io.c
> +++ b/fs/iomap/buffered-io.c
> @@ -227,6 +227,18 @@ static void ifs_free(struct folio *folio)
>  	kfree(ifs);
>  }
>  
> +/* helper to reset an iter for reuse */
> +static inline void
> +iomap_iter_init(struct iomap_iter *iter, struct inode *inode, loff_t pos,
> +		loff_t len, unsigned flags)

Nit: maybe call this iomap_iter_reset() ?

Also I wonder if it's really safe to zero iomap_iter::private?
Won't doing that leave a minor logic bomb?
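
To illustrate the concern with a hypothetical sketch (not from this patch):
a caller that stashes filesystem-private state in the iter before iterating
would have it silently wiped by a memset()-based reset helper. The zero
range path never sets ->private itself today, but a generic-looking init
helper invites reuse:

	struct iomap_iter iter = {
		.inode	 = inode,
		.pos	 = pos,
		.len	 = len,
		.flags	 = IOMAP_ZERO,
		.private = fs_state,	/* hypothetical fs-private cookie */
	};

	while ((ret = iomap_iter(&iter, ops)) > 0)
		iter.processed = iomap_zero_iter(&iter, did_zero);

	/* wipes everything, including the ->private it doesn't own */
	iomap_iter_init(&iter, inode, iter.pos, len - (iter.pos - pos),
			IOMAP_ZERO);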

> +{
> +	memset(iter, 0, sizeof(*iter));
> +	iter->inode = inode;
> +	iter->pos = pos;
> +	iter->len = len;
> +	iter->flags = flags;
> +}
> +
>  /*
>   * Calculate the range inside the folio that we actually need to read.
>   */
> @@ -1416,6 +1428,10 @@ iomap_zero_range(struct inode *inode, loff_t pos, loff_t len, bool *did_zero,
>  		.len		= len,
>  		.flags		= IOMAP_ZERO,
>  	};
> +	struct address_space *mapping = inode->i_mapping;
> +	unsigned int blocksize = i_blocksize(inode);
> +	unsigned int off = pos & (blocksize - 1);
> +	loff_t plen = min_t(loff_t, len, blocksize - off);
>  	int ret;
>  	bool range_dirty;
>  
> @@ -1425,12 +1441,30 @@ iomap_zero_range(struct inode *inode, loff_t pos, loff_t len, bool *did_zero,
>  	 * mapping converts on writeback completion and must be zeroed.
>  	 *
>  	 * The simplest way to deal with this is to flush pagecache and process
> -	 * the updated mappings. To avoid an unconditional flush, check dirty
> -	 * state and defer the flush until a combination of dirty pagecache and
> -	 * at least one mapping that might convert on writeback is seen.
> +	 * the updated mappings. First, special case the partial eof zeroing
> +	 * use case since it is more performance sensitive. Zero the start of
> +	 * the range if unaligned and already dirty in pagecache.
> +	 */
> +	if (off &&
> +	    filemap_range_needs_writeback(mapping, pos, pos + plen - 1)) {
> +		iter.len = plen;
> +		while ((ret = iomap_iter(&iter, ops)) > 0)
> +			iter.processed = iomap_zero_iter(&iter, did_zero);
> +
> +		/* reset iterator for the rest of the range */
> +		iomap_iter_init(&iter, inode, iter.pos,
> +			len - (iter.pos - pos), IOMAP_ZERO);

Nit: maybe one more tab ^ here?

Also from the previous thread: can you reset the original iter instead
of declaring a second one by zeroing the mappings/processed fields,
re-expanding iter::len, and resetting iter::flags?

I guess we'll still do the flush if the start of the zeroing range
aligns with an fsblock?  I guess if you're going to do a lot of small
extensions then once per fsblock isn't too bad?

--D

> +		if (ret || !iter.len)
> +			return ret;
> +	}
> +
> +	/*
> +	 * To avoid an unconditional flush, check dirty state and defer the
> +	 * flush until a combination of dirty pagecache and at least one
> +	 * mapping that might convert on writeback is seen.
>  	 */
>  	range_dirty = filemap_range_needs_writeback(inode->i_mapping,
> -					pos, pos + len - 1);
> +					iter.pos, iter.pos + iter.len - 1);
>  	while ((ret = iomap_iter(&iter, ops)) > 0) {
>  		const struct iomap *s = iomap_iter_srcmap(&iter);
>  		if (s->type == IOMAP_HOLE || s->type == IOMAP_UNWRITTEN) {
> -- 
> 2.46.2
> 
>
Brian Foster Nov. 6, 2024, 2:13 p.m. UTC | #2
On Tue, Nov 05, 2024 at 04:11:30PM -0800, Darrick J. Wong wrote:
> On Thu, Oct 31, 2024 at 10:04:48AM -0400, Brian Foster wrote:
> > iomap zero range flushes pagecache in certain situations to
> > determine which parts of the range might require zeroing if dirty
> > data is present in pagecache. The kernel robot recently reported a
> > regression associated with this flushing in the following stress-ng
> > workload on XFS:
> > 
> > stress-ng --timeout 60 --times --verify --metrics --no-rand-seed --metamix 64
> > 
> > This workload involves repeated small, strided, extending writes. On
> > XFS, this produces a pattern of post-eof speculative preallocation,
> > conversion of preallocation from delalloc to unwritten, dirtying
> > pagecache over newly unwritten blocks, and then rinse and repeat
> > from the new EOF. This leads to repetitive flushing of the EOF folio
> > via the zero range call XFS uses for writes that start beyond
> > current EOF.
> > 
> > To mitigate this problem, special case EOF block zeroing to prefer
> > zeroing the folio over a flush when the EOF folio is already dirty.
> > To do this, split out and open code handling of an unaligned start
> > offset. This brings most of the performance back by avoiding flushes
> > on zero range calls via write and truncate extension operations. The
> > flush doesn't occur in these situations because the entire range is
> > post-eof and therefore the folio that overlaps EOF is the only one
> > in the range.
> > 
> > Signed-off-by: Brian Foster <bfoster@redhat.com>
> 
> Cc: <stable@vger.kernel.org> # v6.12-rc1
> Fixes: 7d9b474ee4cc37 ("iomap: make zero range flush conditional on unwritten mappings")
> 
> perhaps?
> 

Hmm.. I'm reluctant just because I was never super convinced that this was
all that important. A test robot called it out and it just seemed easy
enough to improve.

> > ---
> >  fs/iomap/buffered-io.c | 42 ++++++++++++++++++++++++++++++++++++++----
> >  1 file changed, 38 insertions(+), 4 deletions(-)
> > 
> > diff --git a/fs/iomap/buffered-io.c b/fs/iomap/buffered-io.c
> > index 60386cb7b9ef..343a2fa29bec 100644
> > --- a/fs/iomap/buffered-io.c
> > +++ b/fs/iomap/buffered-io.c
> > @@ -227,6 +227,18 @@ static void ifs_free(struct folio *folio)
> >  	kfree(ifs);
> >  }
> >  
> > +/* helper to reset an iter for reuse */
> > +static inline void
> > +iomap_iter_init(struct iomap_iter *iter, struct inode *inode, loff_t pos,
> > +		loff_t len, unsigned flags)
> 
> Nit: maybe call this iomap_iter_reset() ?
> 

Sure, I like that.

> Also I wonder if it's really safe to zero iomap_iter::private?
> Won't doing that leave a minor logic bomb?
> 

Indeed, good catch.

> > +{
> > +	memset(iter, 0, sizeof(*iter));
> > +	iter->inode = inode;
> > +	iter->pos = pos;
> > +	iter->len = len;
> > +	iter->flags = flags;
> > +}
> > +
> >  /*
> >   * Calculate the range inside the folio that we actually need to read.
> >   */
> > @@ -1416,6 +1428,10 @@ iomap_zero_range(struct inode *inode, loff_t pos, loff_t len, bool *did_zero,
> >  		.len		= len,
> >  		.flags		= IOMAP_ZERO,
> >  	};
> > +	struct address_space *mapping = inode->i_mapping;
> > +	unsigned int blocksize = i_blocksize(inode);
> > +	unsigned int off = pos & (blocksize - 1);
> > +	loff_t plen = min_t(loff_t, len, blocksize - off);
> >  	int ret;
> >  	bool range_dirty;
> >  
> > @@ -1425,12 +1441,30 @@ iomap_zero_range(struct inode *inode, loff_t pos, loff_t len, bool *did_zero,
> >  	 * mapping converts on writeback completion and must be zeroed.
> >  	 *
> >  	 * The simplest way to deal with this is to flush pagecache and process
> > -	 * the updated mappings. To avoid an unconditional flush, check dirty
> > -	 * state and defer the flush until a combination of dirty pagecache and
> > -	 * at least one mapping that might convert on writeback is seen.
> > +	 * the updated mappings. First, special case the partial eof zeroing
> > +	 * use case since it is more performance sensitive. Zero the start of
> > +	 * the range if unaligned and already dirty in pagecache.
> > +	 */
> > +	if (off &&
> > +	    filemap_range_needs_writeback(mapping, pos, pos + plen - 1)) {
> > +		iter.len = plen;
> > +		while ((ret = iomap_iter(&iter, ops)) > 0)
> > +			iter.processed = iomap_zero_iter(&iter, did_zero);
> > +
> > +		/* reset iterator for the rest of the range */
> > +		iomap_iter_init(&iter, inode, iter.pos,
> > +			len - (iter.pos - pos), IOMAP_ZERO);
> 
> Nit: maybe one more tab ^ here?
> 
> Also from the previous thread: can you reset the original iter instead
> of declaring a second one by zeroing the mappings/processed fields,
> re-expanding iter::len, and resetting iter::flags?
> 

I'm not sure what you mean by "declaring a second one." I think maybe
you're suggesting that we could just zero out the fields that need it,
rather than reinitializing the whole thing...?

Context: I originally had this open-coded and created the helper to clean
up the code. I opted to memset the whole thing to try to avoid creating a
dependency that would have to be updated if the iter code ever changed,
but the ->private thing kind of shows how that problem goes both ways.

Hmmmmm.. what do you think about maybe just fixing up the iteration path
to reset these fields? We already clear them in iomap_iter_advance()
when another iteration is expected. On a first pass, I don't see
anywhere where the terminal case would care if they were reset there as
well.

I'll have to double check and test of course, but issues
notwithstanding, I suspect that would allow the original logic of just
tacking the remaining length onto iter.len and continuing on. Hm?
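
For concreteness, a rough sketch of what the caller side might look like if
iomap_iter_advance() also cleared processed/iomap/srcmap in the terminal
case (untested, just to illustrate the idea):

	if (off &&
	    filemap_range_needs_writeback(mapping, pos, pos + plen - 1)) {
		iter.len = plen;
		while ((ret = iomap_iter(&iter, ops)) > 0)
			iter.processed = iomap_zero_iter(&iter, did_zero);
		if (ret)
			return ret;

		/* tack the remainder of the range back onto the same iter */
		iter.len = len - (iter.pos - pos);
		if (!iter.len)
			return 0;
	}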

> I guess we'll still do the flush if the start of the zeroing range
> aligns with an fsblock?  I guess if you're going to do a lot of small
> extensions then once per fsblock isn't too bad?
> 

Yeah.. we wouldn't be partially zeroing the EOF block in that case, so
we'd fall back to the default behavior. Did you have another case/workload
in mind that you were concerned about?
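
For example, with 4k fs blocks (numbers purely illustrative): zeroing 5000
bytes from an old EOF of 8200 gives off = 8200 & (4096 - 1) = 8 and
plen = min(5000, 4096 - 8) = 4088, so the sub-block tail at EOF is zeroed
through the dirty folio and the remainder goes through the normal path; an
old EOF of 8192 gives off = 0, so the special case is skipped and the
existing flush-if-dirty logic runs from the start.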

BTW and just in case you missed the analysis in the original report
thread [1], the performance hit here could also be partially attributed
to commit 5ce5674187c34 ("xfs: convert delayed extents to unwritten when
zeroing post eof blocks"). I'm skeptical that an unconditional physical
block allocation per write extension is always a good idea over, say,
something more heuristic-based, but as is often the case with XFS I'm a bit
too apathetic about the obstruction^Wreview process to dig into that one..

I do have another minimal iomap patch to warn about the post-eof zero
range angle. I'll tack that onto the next version of this series for
discussion. Thanks for the feedback.

Brian

[1] https://lore.kernel.org/linux-xfs/ZxkE93Vz3ZQaAFO1@bfoster/

> --D
> 
> > +		if (ret || !iter.len)
> > +			return ret;
> > +	}
> > +
> > +	/*
> > +	 * To avoid an unconditional flush, check dirty state and defer the
> > +	 * flush until a combination of dirty pagecache and at least one
> > +	 * mapping that might convert on writeback is seen.
> >  	 */
> >  	range_dirty = filemap_range_needs_writeback(inode->i_mapping,
> > -					pos, pos + len - 1);
> > +					iter.pos, iter.pos + iter.len - 1);
> >  	while ((ret = iomap_iter(&iter, ops)) > 0) {
> >  		const struct iomap *s = iomap_iter_srcmap(&iter);
> >  		if (s->type == IOMAP_HOLE || s->type == IOMAP_UNWRITTEN) {
> > -- 
> > 2.46.2
> > 
> > 
>

Patch

diff --git a/fs/iomap/buffered-io.c b/fs/iomap/buffered-io.c
index 60386cb7b9ef..343a2fa29bec 100644
--- a/fs/iomap/buffered-io.c
+++ b/fs/iomap/buffered-io.c
@@ -227,6 +227,18 @@  static void ifs_free(struct folio *folio)
 	kfree(ifs);
 }
 
+/* helper to reset an iter for reuse */
+static inline void
+iomap_iter_init(struct iomap_iter *iter, struct inode *inode, loff_t pos,
+		loff_t len, unsigned flags)
+{
+	memset(iter, 0, sizeof(*iter));
+	iter->inode = inode;
+	iter->pos = pos;
+	iter->len = len;
+	iter->flags = flags;
+}
+
 /*
  * Calculate the range inside the folio that we actually need to read.
  */
@@ -1416,6 +1428,10 @@  iomap_zero_range(struct inode *inode, loff_t pos, loff_t len, bool *did_zero,
 		.len		= len,
 		.flags		= IOMAP_ZERO,
 	};
+	struct address_space *mapping = inode->i_mapping;
+	unsigned int blocksize = i_blocksize(inode);
+	unsigned int off = pos & (blocksize - 1);
+	loff_t plen = min_t(loff_t, len, blocksize - off);
 	int ret;
 	bool range_dirty;
 
@@ -1425,12 +1441,30 @@  iomap_zero_range(struct inode *inode, loff_t pos, loff_t len, bool *did_zero,
 	 * mapping converts on writeback completion and must be zeroed.
 	 *
 	 * The simplest way to deal with this is to flush pagecache and process
-	 * the updated mappings. To avoid an unconditional flush, check dirty
-	 * state and defer the flush until a combination of dirty pagecache and
-	 * at least one mapping that might convert on writeback is seen.
+	 * the updated mappings. First, special case the partial eof zeroing
+	 * use case since it is more performance sensitive. Zero the start of
+	 * the range if unaligned and already dirty in pagecache.
+	 */
+	if (off &&
+	    filemap_range_needs_writeback(mapping, pos, pos + plen - 1)) {
+		iter.len = plen;
+		while ((ret = iomap_iter(&iter, ops)) > 0)
+			iter.processed = iomap_zero_iter(&iter, did_zero);
+
+		/* reset iterator for the rest of the range */
+		iomap_iter_init(&iter, inode, iter.pos,
+			len - (iter.pos - pos), IOMAP_ZERO);
+		if (ret || !iter.len)
+			return ret;
+	}
+
+	/*
+	 * To avoid an unconditional flush, check dirty state and defer the
+	 * flush until a combination of dirty pagecache and at least one
+	 * mapping that might convert on writeback is seen.
 	 */
 	range_dirty = filemap_range_needs_writeback(inode->i_mapping,
-					pos, pos + len - 1);
+					iter.pos, iter.pos + iter.len - 1);
 	while ((ret = iomap_iter(&iter, ops)) > 0) {
 		const struct iomap *s = iomap_iter_srcmap(&iter);
 		if (s->type == IOMAP_HOLE || s->type == IOMAP_UNWRITTEN) {