
[RFC,v3,3/3] iomap: bound ioend size to 4096 pages

Message ID 20210517171722.1266878-4-bfoster@redhat.com (mailing list archive)
State Deferred, archived
Series iomap: avoid soft lockup warnings on large ioends

Commit Message

Brian Foster May 17, 2021, 5:17 p.m. UTC
The iomap writeback infrastructure is currently able to construct
extremely large bio chains (tens of GBs) associated with a single
ioend. This consolidation provides no significant value as bio
chains increase beyond a reasonable minimum size. On the other hand,
this does hold significant numbers of pages in the writeback
state across an unnecessarily large number of bios because the ioend
is not processed for completion until the final bio in the chain
completes. Cap an individual ioend to a reasonable size of 4096
pages (16MB with 4k pages) to avoid this condition.

Signed-off-by: Brian Foster <bfoster@redhat.com>
---
 fs/iomap/buffered-io.c |  6 ++++--
 include/linux/iomap.h  | 26 ++++++++++++++++++++++++++
 2 files changed, 30 insertions(+), 2 deletions(-)

Comments

Christoph Hellwig May 19, 2021, 1:28 p.m. UTC | #1
On Mon, May 17, 2021 at 01:17:22PM -0400, Brian Foster wrote:
> The iomap writeback infrastructure is currently able to construct
> extremely large bio chains (tens of GBs) associated with a single
> ioend. This consolidation provides no significant value as bio
> chains increase beyond a reasonable minimum size. On the other hand,
> this does hold significant numbers of pages in the writeback
> state across an unnecessarily large number of bios because the ioend
> is not processed for completion until the final bio in the chain
> completes. Cap an individual ioend to a reasonable size of 4096
> pages (16MB with 4k pages) to avoid this condition.

Note that once we get huge page/folio support in the page cache this
sucks as we can trivially handle much larger sizes with very little
iteration.

I wonder if both this limit and the previous one should be based on the
number of pages added instead.  And in fact maybe we only want the
limit at add-to-ioend time and can skip the defer-to-workqueue part
entirely.
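
As a concrete sketch of the page-count idea (not something this series
proposes), the cap could be expressed as a count of pages accumulated in
the ioend rather than bytes; io_pages below is a hypothetical counter
that iomap_add_to_ioend() would have to maintain:

#define IOEND_MAX_PAGES	4096

/*
 * Illustration only: struct iomap_ioend has no io_pages member in this
 * series.  With large folios in the page cache, a count of folios added
 * would grow much more slowly than io_size >> PAGE_SHIFT for the same
 * amount of data, which is the point of the suggestion above.
 */
static bool
iomap_ioend_under_page_limit(struct iomap_ioend *ioend, unsigned int nr_pages)
{
	return ioend->io_pages + nr_pages <= IOEND_MAX_PAGES;
}
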
Brian Foster May 19, 2021, 2:52 p.m. UTC | #2
On Wed, May 19, 2021 at 02:28:39PM +0100, Christoph Hellwig wrote:
> On Mon, May 17, 2021 at 01:17:22PM -0400, Brian Foster wrote:
> > The iomap writeback infrastructure is currently able to construct
> > extremely large bio chains (tens of GBs) associated with a single
> > ioend. This consolidation provides no significant value as bio
> > chains increase beyond a reasonable minimum size. On the other hand,
> > this does hold significant numbers of pages in the writeback
> > state across an unnecessarily large number of bios because the ioend
> > is not processed for completion until the final bio in the chain
> > completes. Cap an individual ioend to a reasonable size of 4096
> > pages (16MB with 4k pages) to avoid this condition.
> 
> Note that once we get huge page/folio support in the page cache this
> sucks as we can trivially handle much larger sizes with very little
> iteration.
> 
> I wonder if both this limit and the previous one should be based on the
> number of pages added instead.  And in fact maybe if we only want the
> limit at add to ioend time and skip the defer to workqueue part entirely.
> 

Both limits are already based on pages. I imagine they could change to
folios when appropriate.

The defer to workqueue part was based on your suggestion[1]. The primary
purpose of this series is to address the completion processing soft
lockup warning, so I don't have a strong preference on whether we do
that by capping ioend size, processing (and yielding) from non-atomic
context, or both.

Brian

[1] https://lore.kernel.org/linux-fsdevel/20200917080455.GY26262@infradead.org/
Darrick J. Wong May 20, 2021, 11:27 p.m. UTC | #3
On Mon, May 17, 2021 at 01:17:22PM -0400, Brian Foster wrote:
> The iomap writeback infrastructure is currently able to construct
> extremely large bio chains (tens of GBs) associated with a single
> ioend. This consolidation provides no significant value as bio
> chains increase beyond a reasonable minimum size. On the other hand,
> this does hold significant numbers of pages in the writeback
> state across an unnecessarily large number of bios because the ioend
> is not processed for completion until the final bio in the chain
> completes. Cap an individual ioend to a reasonable size of 4096
> pages (16MB with 4k pages) to avoid this condition.
> 
> Signed-off-by: Brian Foster <bfoster@redhat.com>
> ---
>  fs/iomap/buffered-io.c |  6 ++++--
>  include/linux/iomap.h  | 26 ++++++++++++++++++++++++++
>  2 files changed, 30 insertions(+), 2 deletions(-)
> 
> diff --git a/fs/iomap/buffered-io.c b/fs/iomap/buffered-io.c
> index 642422775e4e..f2890ee434d0 100644
> --- a/fs/iomap/buffered-io.c
> +++ b/fs/iomap/buffered-io.c
> @@ -1269,7 +1269,7 @@ iomap_chain_bio(struct bio *prev)
>  
>  static bool
>  iomap_can_add_to_ioend(struct iomap_writepage_ctx *wpc, loff_t offset,
> -		sector_t sector)
> +		unsigned len, sector_t sector)
>  {
>  	if ((wpc->iomap.flags & IOMAP_F_SHARED) !=
>  	    (wpc->ioend->io_flags & IOMAP_F_SHARED))
> @@ -1280,6 +1280,8 @@ iomap_can_add_to_ioend(struct iomap_writepage_ctx *wpc, loff_t offset,
>  		return false;
>  	if (sector != bio_end_sector(wpc->ioend->io_bio))
>  		return false;
> +	if (wpc->ioend->io_size + len > IOEND_MAX_IOSIZE)
> +		return false;
>  	return true;
>  }
>  
> @@ -1297,7 +1299,7 @@ iomap_add_to_ioend(struct inode *inode, loff_t offset, struct page *page,
>  	unsigned poff = offset & (PAGE_SIZE - 1);
>  	bool merged, same_page = false;
>  
> -	if (!wpc->ioend || !iomap_can_add_to_ioend(wpc, offset, sector)) {
> +	if (!wpc->ioend || !iomap_can_add_to_ioend(wpc, offset, len, sector)) {
>  		if (wpc->ioend)
>  			list_add(&wpc->ioend->io_list, iolist);
>  		wpc->ioend = iomap_alloc_ioend(inode, wpc, offset, sector, wbc);
> diff --git a/include/linux/iomap.h b/include/linux/iomap.h
> index 07f3f4e69084..89b15cc236d5 100644
> --- a/include/linux/iomap.h
> +++ b/include/linux/iomap.h
> @@ -203,6 +203,32 @@ struct iomap_ioend {
>  	struct bio		io_inline_bio;	/* MUST BE LAST! */
>  };
>  
> +/*
> + * Maximum ioend IO size is used to prevent ioends from becoming unbound in
> + * size. bios can reach 4GB in size if pages are contiguous, and bio chains are
> + * effectively unbound in length. Hence the only limits on the size of the bio
> + * chain is the contiguity of the extent on disk and the length of the run of
> + * sequential dirty pages in the page cache. This can be tens of GBs of physical
> + * extents and if memory is large enough, tens of millions of dirty pages.
> + * Locking them all under writeback until the final bio in the chain is
> + * submitted and completed locks all those pages for the legnth of time it takes

s/legnth/length/

> + * to write those many, many GBs of data to storage.
> + *
> + * Background writeback caps any single writepages call to half the device
> + * bandwidth to ensure fairness and prevent any one dirty inode causing
> + * writeback starvation. fsync() and other WB_SYNC_ALL writebacks have no such
> + * cap on wbc->nr_pages, and that's where the above massive bio chain lengths
> + * come from. We want large IOs to reach the storage, but we need to limit
> + * completion latencies, hence we need to control the maximum IO size we
> + * dispatch to the storage stack.
> + *
> + * We don't really have to care about the extra IO completion overhead here
> + * because iomap has contiguous IO completion merging. If the device can sustain

Assuming you're referring to iomap_finish_ioends, only XFS employs the
ioend completion merging, and only for ioends where it decides to
override the default bi_end_io.  iomap on its own never calls
iomap_ioend_try_merge.

This patch establishes a maximum ioend size of 4096 pages so that we
don't trip the lockup watchdog while clearing pagewriteback and also so
that we don't pin a large number of pages while constructing a big chain
of bios.  On gfs2 and zonefs, each ioend completion will now have to
clear up to 4096 pages from whatever context bio_endio is called.

For XFS it's a bit more complicated -- XFS already overrode the bio handler
for ioends that required further metadata updates (e.g. unwritten
conversion, eof extension, or cow) so that it could combine ioends when
possible.  XFS wants to combine ioends to amortize the cost of getting
the ILOCK and running transactions over a larger number of pages.
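
Roughly, that deferral pattern looks like the sketch below (not XFS's
actual code; the example_inode fields are invented stand-ins for the
ioend list, lock and work item XFS keeps on its inode, and ioend merging
and error propagation are omitted):

/* hypothetical per-inode state, standing in for what XFS keeps on xfs_inode */
struct example_inode {
	spinlock_t		ioend_lock;
	struct list_head	ioend_list;	/* completed, unprocessed ioends */
	struct work_struct	ioend_work;
};

static void example_ioend_completion_work(struct work_struct *work)
{
	struct example_inode *ei = container_of(work, struct example_inode,
						ioend_work);
	struct iomap_ioend *ioend;
	LIST_HEAD(completed);

	/* grab everything that completed since the worker last ran */
	spin_lock_irq(&ei->ioend_lock);
	list_splice_init(&ei->ioend_list, &completed);
	spin_unlock_irq(&ei->ioend_lock);

	iomap_sort_ioends(&completed);
	while ((ioend = list_first_entry_or_null(&completed,
			struct iomap_ioend, io_list))) {
		list_del_init(&ioend->io_list);
		/* XFS would try to merge adjacent ioends here */
		iomap_finish_ioends(ioend, 0);
		cond_resched();	/* safe: we are in process context */
	}
}
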

So I guess I see how the two changes dovetail nicely for XFS -- iomap
issues smaller write bios, and the xfs ioend worker can recombine
however many bios complete before the worker runs.  As a bonus, we don't
have to worry about situations like the device driver completing so many
bios from a single invocation of a bottom half handler that we run afoul
of the soft lockup timer.

Is that a correct understanding of how the two changes intersect with
each other?  TBH I was expecting the two thresholds to be closer in
value.

The other two users of iomap for buffered io (gfs2 and zonefs) don't
have a means to defer and combine ioends like xfs does.  Do you think
they should?  I think it's still possible to trip the softlockup there.

--D

> + * high throughput and large bios, the ioends are merged on completion and
> + * processed in large, efficient chunks with no additional IO latency.
> + */
> +#define IOEND_MAX_IOSIZE	(4096ULL << PAGE_SHIFT)
> +
>  struct iomap_writeback_ops {
>  	/*
>  	 * Required, maps the blocks so that writeback can be performed on
> -- 
> 2.26.3
>
Brian Foster May 24, 2021, 12:02 p.m. UTC | #4
On Thu, May 20, 2021 at 04:27:37PM -0700, Darrick J. Wong wrote:
> On Mon, May 17, 2021 at 01:17:22PM -0400, Brian Foster wrote:
> > The iomap writeback infrastructure is currently able to construct
> > extremely large bio chains (tens of GBs) associated with a single
> > ioend. This consolidation provides no significant value as bio
> > chains increase beyond a reasonable minimum size. On the other hand,
> > this does hold significant numbers of pages in the writeback
> > state across an unnecessarily large number of bios because the ioend
> > is not processed for completion until the final bio in the chain
> > completes. Cap an individual ioend to a reasonable size of 4096
> > pages (16MB with 4k pages) to avoid this condition.
> > 
> > Signed-off-by: Brian Foster <bfoster@redhat.com>
> > ---
> >  fs/iomap/buffered-io.c |  6 ++++--
> >  include/linux/iomap.h  | 26 ++++++++++++++++++++++++++
> >  2 files changed, 30 insertions(+), 2 deletions(-)
> > 
> > diff --git a/fs/iomap/buffered-io.c b/fs/iomap/buffered-io.c
> > index 642422775e4e..f2890ee434d0 100644
> > --- a/fs/iomap/buffered-io.c
> > +++ b/fs/iomap/buffered-io.c
> > @@ -1269,7 +1269,7 @@ iomap_chain_bio(struct bio *prev)
> >  
> >  static bool
> >  iomap_can_add_to_ioend(struct iomap_writepage_ctx *wpc, loff_t offset,
> > -		sector_t sector)
> > +		unsigned len, sector_t sector)
> >  {
> >  	if ((wpc->iomap.flags & IOMAP_F_SHARED) !=
> >  	    (wpc->ioend->io_flags & IOMAP_F_SHARED))
> > @@ -1280,6 +1280,8 @@ iomap_can_add_to_ioend(struct iomap_writepage_ctx *wpc, loff_t offset,
> >  		return false;
> >  	if (sector != bio_end_sector(wpc->ioend->io_bio))
> >  		return false;
> > +	if (wpc->ioend->io_size + len > IOEND_MAX_IOSIZE)
> > +		return false;
> >  	return true;
> >  }
> >  
> > @@ -1297,7 +1299,7 @@ iomap_add_to_ioend(struct inode *inode, loff_t offset, struct page *page,
> >  	unsigned poff = offset & (PAGE_SIZE - 1);
> >  	bool merged, same_page = false;
> >  
> > -	if (!wpc->ioend || !iomap_can_add_to_ioend(wpc, offset, sector)) {
> > +	if (!wpc->ioend || !iomap_can_add_to_ioend(wpc, offset, len, sector)) {
> >  		if (wpc->ioend)
> >  			list_add(&wpc->ioend->io_list, iolist);
> >  		wpc->ioend = iomap_alloc_ioend(inode, wpc, offset, sector, wbc);
> > diff --git a/include/linux/iomap.h b/include/linux/iomap.h
> > index 07f3f4e69084..89b15cc236d5 100644
> > --- a/include/linux/iomap.h
> > +++ b/include/linux/iomap.h
> > @@ -203,6 +203,32 @@ struct iomap_ioend {
> >  	struct bio		io_inline_bio;	/* MUST BE LAST! */
> >  };
> >  
> > +/*
> > + * Maximum ioend IO size is used to prevent ioends from becoming unbound in
> > + * size. bios can reach 4GB in size if pages are contiguous, and bio chains are
> > + * effectively unbound in length. Hence the only limits on the size of the bio
> > + * chain is the contiguity of the extent on disk and the length of the run of
> > + * sequential dirty pages in the page cache. This can be tens of GBs of physical
> > + * extents and if memory is large enough, tens of millions of dirty pages.
> > + * Locking them all under writeback until the final bio in the chain is
> > + * submitted and completed locks all those pages for the legnth of time it takes
> 
> s/legnth/length/
>

Fixed.
 
> > + * to write those many, many GBs of data to storage.
> > + *
> > + * Background writeback caps any single writepages call to half the device
> > + * bandwidth to ensure fairness and prevent any one dirty inode causing
> > + * writeback starvation. fsync() and other WB_SYNC_ALL writebacks have no such
> > + * cap on wbc->nr_pages, and that's where the above massive bio chain lengths
> > + * come from. We want large IOs to reach the storage, but we need to limit
> > + * completion latencies, hence we need to control the maximum IO size we
> > + * dispatch to the storage stack.
> > + *
> > + * We don't really have to care about the extra IO completion overhead here
> > + * because iomap has contiguous IO completion merging. If the device can sustain
> 
> Assuming you're referring to iomap_finish_ioends, only XFS employs the
> ioend completion merging, and only for ioends where it decides to
> override the default bi_end_io.  iomap on its own never calls
> iomap_ioend_try_merge.
> 
> This patch establishes a maximum ioend size of 4096 pages so that we
> don't trip the lockup watchdog while clearing pagewriteback and also so
> that we don't pin a large number of pages while constructing a big chain
> of bios.  On gfs2 and zonefs, each ioend completion will now have to
> clear up to 4096 pages from whatever context bio_endio is called.
> 
> For XFS it's a more complicated -- XFS already overrode the bio handler
> for ioends that required further metadata updates (e.g. unwritten
> conversion, eof extension, or cow) so that it could combine ioends when
> possible.  XFS wants to combine ioends to amortize the cost of getting
> the ILOCK and running transactions over a larger number of pages.
> 
> So I guess I see how the two changes dovetail nicely for XFS -- iomap
> issues smaller write bios, and the xfs ioend worker can recombine
> however many bios complete before the worker runs.  As a bonus, we don't
> have to worry about situations like the device driver completing so many
> bios from a single invocation of a bottom half handler that we run afoul
> of the soft lockup timer.
> 
> Is that a correct understanding of how the two changes intersect with
> each other?  TBH I was expecting the two thresholds to be closer in
> value.
> 

I think so. That's interesting because my inclination was to make them
farther apart (or more specifically, increase the threshold in this
patch and leave the previous). The primary goal of this series was to
address the soft lockup warning problem, hence the thresholds on earlier
versions started at rather conservative values. I think both values have
been reasonably justified in being reduced, though this patch has a more
broad impact than the previous in that it changes behavior for all iomap
based fs'. Of course that's something that could also be addressed with
a more dynamic tunable..
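
(For illustration only: the simplest form of such a tunable would
probably be a kernel parameter overriding the compile-time cap, along
the lines of the hypothetical sketch below.  Nothing like this exists in
the series.)

static unsigned long iomap_max_ioend_pages = 4096;
module_param(iomap_max_ioend_pages, ulong, 0644);
MODULE_PARM_DESC(iomap_max_ioend_pages,
		 "Maximum number of pages added to a single writeback ioend");

static inline loff_t iomap_max_ioend_size(void)
{
	/* would stand in for the fixed IOEND_MAX_IOSIZE in the size check */
	return (loff_t)iomap_max_ioend_pages << PAGE_SHIFT;
}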

> The other two users of iomap for buffered io (gfs2 and zonefs) don't
> have a means to defer and combine ioends like xfs does.  Do you think
> they should?  I think it's still possible to trip the softlockup there.
> 

I'm not sure. We'd probably want some feedback from developers of
filesystems other than XFS before incorporating a change like this. The
first patch in the series more just provides some infrastructure for
other filesystems to avoid the problem as they see fit.

Brian

> --D
> 
> > + * high throughput and large bios, the ioends are merged on completion and
> > + * processed in large, efficient chunks with no additional IO latency.
> > + */
> > +#define IOEND_MAX_IOSIZE	(4096ULL << PAGE_SHIFT)
> > +
> >  struct iomap_writeback_ops {
> >  	/*
> >  	 * Required, maps the blocks so that writeback can be performed on
> > -- 
> > 2.26.3
> > 
>
Darrick J. Wong May 25, 2021, 4:20 a.m. UTC | #5
On Mon, May 24, 2021 at 08:02:18AM -0400, Brian Foster wrote:
> On Thu, May 20, 2021 at 04:27:37PM -0700, Darrick J. Wong wrote:
> > On Mon, May 17, 2021 at 01:17:22PM -0400, Brian Foster wrote:
> > > The iomap writeback infrastructure is currently able to construct
> > > extremely large bio chains (tens of GBs) associated with a single
> > > ioend. This consolidation provides no significant value as bio
> > > chains increase beyond a reasonable minimum size. On the other hand,
> > > this does hold significant numbers of pages in the writeback
> > > state across an unnecessarily large number of bios because the ioend
> > > is not processed for completion until the final bio in the chain
> > > completes. Cap an individual ioend to a reasonable size of 4096
> > > pages (16MB with 4k pages) to avoid this condition.
> > > 
> > > Signed-off-by: Brian Foster <bfoster@redhat.com>
> > > ---
> > >  fs/iomap/buffered-io.c |  6 ++++--
> > >  include/linux/iomap.h  | 26 ++++++++++++++++++++++++++
> > >  2 files changed, 30 insertions(+), 2 deletions(-)
> > > 
> > > diff --git a/fs/iomap/buffered-io.c b/fs/iomap/buffered-io.c
> > > index 642422775e4e..f2890ee434d0 100644
> > > --- a/fs/iomap/buffered-io.c
> > > +++ b/fs/iomap/buffered-io.c
> > > @@ -1269,7 +1269,7 @@ iomap_chain_bio(struct bio *prev)
> > >  
> > >  static bool
> > >  iomap_can_add_to_ioend(struct iomap_writepage_ctx *wpc, loff_t offset,
> > > -		sector_t sector)
> > > +		unsigned len, sector_t sector)
> > >  {
> > >  	if ((wpc->iomap.flags & IOMAP_F_SHARED) !=
> > >  	    (wpc->ioend->io_flags & IOMAP_F_SHARED))
> > > @@ -1280,6 +1280,8 @@ iomap_can_add_to_ioend(struct iomap_writepage_ctx *wpc, loff_t offset,
> > >  		return false;
> > >  	if (sector != bio_end_sector(wpc->ioend->io_bio))
> > >  		return false;
> > > +	if (wpc->ioend->io_size + len > IOEND_MAX_IOSIZE)
> > > +		return false;
> > >  	return true;
> > >  }
> > >  
> > > @@ -1297,7 +1299,7 @@ iomap_add_to_ioend(struct inode *inode, loff_t offset, struct page *page,
> > >  	unsigned poff = offset & (PAGE_SIZE - 1);
> > >  	bool merged, same_page = false;
> > >  
> > > -	if (!wpc->ioend || !iomap_can_add_to_ioend(wpc, offset, sector)) {
> > > +	if (!wpc->ioend || !iomap_can_add_to_ioend(wpc, offset, len, sector)) {
> > >  		if (wpc->ioend)
> > >  			list_add(&wpc->ioend->io_list, iolist);
> > >  		wpc->ioend = iomap_alloc_ioend(inode, wpc, offset, sector, wbc);
> > > diff --git a/include/linux/iomap.h b/include/linux/iomap.h
> > > index 07f3f4e69084..89b15cc236d5 100644
> > > --- a/include/linux/iomap.h
> > > +++ b/include/linux/iomap.h
> > > @@ -203,6 +203,32 @@ struct iomap_ioend {
> > >  	struct bio		io_inline_bio;	/* MUST BE LAST! */
> > >  };
> > >  
> > > +/*
> > > + * Maximum ioend IO size is used to prevent ioends from becoming unbound in
> > > + * size. bios can reach 4GB in size if pages are contiguous, and bio chains are
> > > + * effectively unbound in length. Hence the only limits on the size of the bio
> > > + * chain is the contiguity of the extent on disk and the length of the run of
> > > + * sequential dirty pages in the page cache. This can be tens of GBs of physical
> > > + * extents and if memory is large enough, tens of millions of dirty pages.
> > > + * Locking them all under writeback until the final bio in the chain is
> > > + * submitted and completed locks all those pages for the legnth of time it takes
> > 
> > s/legnth/length/
> >
> 
> Fixed.
>  
> > > + * to write those many, many GBs of data to storage.
> > > + *
> > > + * Background writeback caps any single writepages call to half the device
> > > + * bandwidth to ensure fairness and prevent any one dirty inode causing
> > > + * writeback starvation. fsync() and other WB_SYNC_ALL writebacks have no such
> > > + * cap on wbc->nr_pages, and that's where the above massive bio chain lengths
> > > + * come from. We want large IOs to reach the storage, but we need to limit
> > > + * completion latencies, hence we need to control the maximum IO size we
> > > + * dispatch to the storage stack.
> > > + *
> > > + * We don't really have to care about the extra IO completion overhead here
> > > + * because iomap has contiguous IO completion merging. If the device can sustain
> > 
> > Assuming you're referring to iomap_finish_ioends, only XFS employs the
> > ioend completion merging, and only for ioends where it decides to
> > override the default bi_end_io.  iomap on its own never calls
> > iomap_ioend_try_merge.
> > 
> > This patch establishes a maximum ioend size of 4096 pages so that we
> > don't trip the lockup watchdog while clearing pagewriteback and also so
> > that we don't pin a large number of pages while constructing a big chain
> > of bios.  On gfs2 and zonefs, each ioend completion will now have to
> > clear up to 4096 pages from whatever context bio_endio is called.
> > 
> > For XFS it's a more complicated -- XFS already overrode the bio handler
> > for ioends that required further metadata updates (e.g. unwritten
> > conversion, eof extension, or cow) so that it could combine ioends when
> > possible.  XFS wants to combine ioends to amortize the cost of getting
> > the ILOCK and running transactions over a larger number of pages.
> > 
> > So I guess I see how the two changes dovetail nicely for XFS -- iomap
> > issues smaller write bios, and the xfs ioend worker can recombine
> > however many bios complete before the worker runs.  As a bonus, we don't
> > have to worry about situations like the device driver completing so many
> > bios from a single invocation of a bottom half handler that we run afoul
> > of the soft lockup timer.
> > 
> > Is that a correct understanding of how the two changes intersect with
> > each other?  TBH I was expecting the two thresholds to be closer in
> > value.
> > 
> 
> I think so. That's interesting because my inclination was to make them
> farther apart (or more specifically, increase the threshold in this
> patch and leave the previous). The primary goal of this series was to
> address the soft lockup warning problem, hence the thresholds on earlier
> versions started at rather conservative values. I think both values have
> been reasonably justified in being reduced, though this patch has a more
> broad impact than the previous in that it changes behavior for all iomap
> based fs'. Of course that's something that could also be addressed with
> a more dynamic tunable..

<shrug> I think I'm comfortable starting with 256 for xfs to bump an
ioend to a workqueue, and 4096 pages as the limit for an iomap ioend.
If people demonstrate a need to smart-tune or manual-tune we can always
add one later.

Though I guess I did kind of wonder if maybe a better limit for iomap
would be max_hw_sectors?  Since that's the maximum size of an IO that
the kernel will issue for that device?

(Hm, maybe not; my computers all have it set to 1280k, which is a
pathetic 20 pages on a 64k-page system.)

> > The other two users of iomap for buffered io (gfs2 and zonefs) don't
> > have a means to defer and combine ioends like xfs does.  Do you think
> > they should?  I think it's still possible to trip the softlockup there.
> > 
> 
> I'm not sure. We'd probably want some feedback from developers of
> filesystems other than XFS before incorporating a change like this. The
> first patch in the series more just provides some infrastructure for
> other filesystems to avoid the problem as they see fit.

Hmm.  Any input from the two other users of iomap buffered IO?  Who are
now directly in the to: list? :D

Catch-up TLDR: we're evaluating a proposal to limit the length of an
iomap writeback ioend to 4096 pages so that we don't trip the hangcheck
warning while clearing pagewriteback if the ioend completion happens to
run in softirq context (e.g. nvme completion).

--D

> Brian
> 
> > --D
> > 
> > > + * high throughput and large bios, the ioends are merged on completion and
> > > + * processed in large, efficient chunks with no additional IO latency.
> > > + */
> > > +#define IOEND_MAX_IOSIZE	(4096ULL << PAGE_SHIFT)
> > > +
> > >  struct iomap_writeback_ops {
> > >  	/*
> > >  	 * Required, maps the blocks so that writeback can be performed on
> > > -- 
> > > 2.26.3
> > > 
> > 
>
Damien Le Moal May 25, 2021, 4:29 a.m. UTC | #6
On 2021/05/25 13:20, Darrick J. Wong wrote:
> On Mon, May 24, 2021 at 08:02:18AM -0400, Brian Foster wrote:
>> On Thu, May 20, 2021 at 04:27:37PM -0700, Darrick J. Wong wrote:
>>> On Mon, May 17, 2021 at 01:17:22PM -0400, Brian Foster wrote:
>>>> The iomap writeback infrastructure is currently able to construct
>>>> extremely large bio chains (tens of GBs) associated with a single
>>>> ioend. This consolidation provides no significant value as bio
>>>> chains increase beyond a reasonable minimum size. On the other hand,
>>>> this does hold significant numbers of pages in the writeback
>>>> state across an unnecessarily large number of bios because the ioend
>>>> is not processed for completion until the final bio in the chain
>>>> completes. Cap an individual ioend to a reasonable size of 4096
>>>> pages (16MB with 4k pages) to avoid this condition.
>>>>
>>>> Signed-off-by: Brian Foster <bfoster@redhat.com>
>>>> ---
>>>>  fs/iomap/buffered-io.c |  6 ++++--
>>>>  include/linux/iomap.h  | 26 ++++++++++++++++++++++++++
>>>>  2 files changed, 30 insertions(+), 2 deletions(-)
>>>>
>>>> diff --git a/fs/iomap/buffered-io.c b/fs/iomap/buffered-io.c
>>>> index 642422775e4e..f2890ee434d0 100644
>>>> --- a/fs/iomap/buffered-io.c
>>>> +++ b/fs/iomap/buffered-io.c
>>>> @@ -1269,7 +1269,7 @@ iomap_chain_bio(struct bio *prev)
>>>>  
>>>>  static bool
>>>>  iomap_can_add_to_ioend(struct iomap_writepage_ctx *wpc, loff_t offset,
>>>> -		sector_t sector)
>>>> +		unsigned len, sector_t sector)
>>>>  {
>>>>  	if ((wpc->iomap.flags & IOMAP_F_SHARED) !=
>>>>  	    (wpc->ioend->io_flags & IOMAP_F_SHARED))
>>>> @@ -1280,6 +1280,8 @@ iomap_can_add_to_ioend(struct iomap_writepage_ctx *wpc, loff_t offset,
>>>>  		return false;
>>>>  	if (sector != bio_end_sector(wpc->ioend->io_bio))
>>>>  		return false;
>>>> +	if (wpc->ioend->io_size + len > IOEND_MAX_IOSIZE)
>>>> +		return false;
>>>>  	return true;
>>>>  }
>>>>  
>>>> @@ -1297,7 +1299,7 @@ iomap_add_to_ioend(struct inode *inode, loff_t offset, struct page *page,
>>>>  	unsigned poff = offset & (PAGE_SIZE - 1);
>>>>  	bool merged, same_page = false;
>>>>  
>>>> -	if (!wpc->ioend || !iomap_can_add_to_ioend(wpc, offset, sector)) {
>>>> +	if (!wpc->ioend || !iomap_can_add_to_ioend(wpc, offset, len, sector)) {
>>>>  		if (wpc->ioend)
>>>>  			list_add(&wpc->ioend->io_list, iolist);
>>>>  		wpc->ioend = iomap_alloc_ioend(inode, wpc, offset, sector, wbc);
>>>> diff --git a/include/linux/iomap.h b/include/linux/iomap.h
>>>> index 07f3f4e69084..89b15cc236d5 100644
>>>> --- a/include/linux/iomap.h
>>>> +++ b/include/linux/iomap.h
>>>> @@ -203,6 +203,32 @@ struct iomap_ioend {
>>>>  	struct bio		io_inline_bio;	/* MUST BE LAST! */
>>>>  };
>>>>  
>>>> +/*
>>>> + * Maximum ioend IO size is used to prevent ioends from becoming unbound in
>>>> + * size. bios can reach 4GB in size if pages are contiguous, and bio chains are
>>>> + * effectively unbound in length. Hence the only limits on the size of the bio
>>>> + * chain is the contiguity of the extent on disk and the length of the run of
>>>> + * sequential dirty pages in the page cache. This can be tens of GBs of physical
>>>> + * extents and if memory is large enough, tens of millions of dirty pages.
>>>> + * Locking them all under writeback until the final bio in the chain is
>>>> + * submitted and completed locks all those pages for the legnth of time it takes
>>>
>>> s/legnth/length/
>>>
>>
>> Fixed.
>>  
>>>> + * to write those many, many GBs of data to storage.
>>>> + *
>>>> + * Background writeback caps any single writepages call to half the device
>>>> + * bandwidth to ensure fairness and prevent any one dirty inode causing
>>>> + * writeback starvation. fsync() and other WB_SYNC_ALL writebacks have no such
>>>> + * cap on wbc->nr_pages, and that's where the above massive bio chain lengths
>>>> + * come from. We want large IOs to reach the storage, but we need to limit
>>>> + * completion latencies, hence we need to control the maximum IO size we
>>>> + * dispatch to the storage stack.
>>>> + *
>>>> + * We don't really have to care about the extra IO completion overhead here
>>>> + * because iomap has contiguous IO completion merging. If the device can sustain
>>>
>>> Assuming you're referring to iomap_finish_ioends, only XFS employs the
>>> ioend completion merging, and only for ioends where it decides to
>>> override the default bi_end_io.  iomap on its own never calls
>>> iomap_ioend_try_merge.
>>>
>>> This patch establishes a maximum ioend size of 4096 pages so that we
>>> don't trip the lockup watchdog while clearing pagewriteback and also so
>>> that we don't pin a large number of pages while constructing a big chain
>>> of bios.  On gfs2 and zonefs, each ioend completion will now have to
>>> clear up to 4096 pages from whatever context bio_endio is called.
>>>
>>> For XFS it's a more complicated -- XFS already overrode the bio handler
>>> for ioends that required further metadata updates (e.g. unwritten
>>> conversion, eof extension, or cow) so that it could combine ioends when
>>> possible.  XFS wants to combine ioends to amortize the cost of getting
>>> the ILOCK and running transactions over a larger number of pages.
>>>
>>> So I guess I see how the two changes dovetail nicely for XFS -- iomap
>>> issues smaller write bios, and the xfs ioend worker can recombine
>>> however many bios complete before the worker runs.  As a bonus, we don't
>>> have to worry about situations like the device driver completing so many
>>> bios from a single invocation of a bottom half handler that we run afoul
>>> of the soft lockup timer.
>>>
>>> Is that a correct understanding of how the two changes intersect with
>>> each other?  TBH I was expecting the two thresholds to be closer in
>>> value.
>>>
>>
>> I think so. That's interesting because my inclination was to make them
>> farther apart (or more specifically, increase the threshold in this
>> patch and leave the previous). The primary goal of this series was to
>> address the soft lockup warning problem, hence the thresholds on earlier
>> versions started at rather conservative values. I think both values have
>> been reasonably justified in being reduced, though this patch has a more
>> broad impact than the previous in that it changes behavior for all iomap
>> based fs'. Of course that's something that could also be addressed with
>> a more dynamic tunable..
> 
> <shrug> I think I'm comfortable starting with 256 for xfs to bump an
> ioend to a workqueue, and 4096 pages as the limit for an iomap ioend.
> If people demonstrate a need to smart-tune or manual-tune we can always
> add one later.
> 
> Though I guess I did kind of wonder if maybe a better limit for iomap
> would be max_hw_sectors?  Since that's the maximum size of an IO that
> the kernel will for that device?
> 
> (Hm, maybe not; my computers all have it set to 1280k, which is a
> pathetic 20 pages on a 64k-page system.)

Are you sure you are not looking at max_sectors (not max_hw_sectors)?
For an average SATA HDD, generally, you get:

# cat max_hw_sectors_kb
32764
# cat max_sectors_kb
1280

So 32MB max command size per hardware limitations. That one cannot be changed.
The block IO layer uses the 1280KB soft limit defined by max_sectors
(max_sectors_kb in sysfs) but the user can tune this from 1 sector up to
max_hw_sectors_kb.
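
In code terms both of those limits are visible through the request
queue, so a per-device byte cap derived from the soft limit would look
something like this rough sketch (illustration only, not something the
series does):

static loff_t example_ioend_byte_cap(struct block_device *bdev)
{
	struct request_queue *q = bdev_get_queue(bdev);

	/*
	 * queue_max_sectors() is the tunable soft limit (max_sectors_kb in
	 * sysfs), queue_max_hw_sectors() the hardware ceiling; both are in
	 * 512-byte sectors.
	 */
	return (loff_t)queue_max_sectors(q) << SECTOR_SHIFT;
}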

> 
>>> The other two users of iomap for buffered io (gfs2 and zonefs) don't
>>> have a means to defer and combine ioends like xfs does.  Do you think
>>> they should?  I think it's still possible to trip the softlockup there.
>>>
>>
>> I'm not sure. We'd probably want some feedback from developers of
>> filesystems other than XFS before incorporating a change like this. The
>> first patch in the series more just provides some infrastructure for
>> other filesystems to avoid the problem as they see fit.
> 
> Hmm.  Any input from the two other users of iomap buffered IO?  Who are
> now directly in the to: list? :D
> 
> Catch-up TLDR: we're evaluating a proposal to limit the length of an
> iomap writeback ioend to 4096 pages so that we don't trip the hangcheck
> warning while clearing pagewriteback if the ioend completion happens to
> run in softirq context (e.g. nvme completion).
> 
> --D
> 
>> Brian
>>
>>> --D
>>>
>>>> + * high throughput and large bios, the ioends are merged on completion and
>>>> + * processed in large, efficient chunks with no additional IO latency.
>>>> + */
>>>> +#define IOEND_MAX_IOSIZE	(4096ULL << PAGE_SHIFT)
>>>> +
>>>>  struct iomap_writeback_ops {
>>>>  	/*
>>>>  	 * Required, maps the blocks so that writeback can be performed on
>>>> -- 
>>>> 2.26.3
>>>>
>>>
>>
>
Dave Chinner May 25, 2021, 7:13 a.m. UTC | #7
On Mon, May 24, 2021 at 09:20:35PM -0700, Darrick J. Wong wrote:
> <shrug> I think I'm comfortable starting with 256 for xfs to bump an
> ioend to a workqueue, and 4096 pages as the limit for an iomap ioend.
> If people demonstrate a need to smart-tune or manual-tune we can always
> add one later.
> 
> Though I guess I did kind of wonder if maybe a better limit for iomap
> would be max_hw_sectors?  Since that's the maximum size of an IO that
> the kernel will for that device?
>
> (Hm, maybe not; my computers all have it set to 1280k, which is a
> pathetic 20 pages on a 64k-page system.)

I've got samsung nvme devices here that set max_hw_sectors_kb to
128kB....

But those per-device limits ignore that RAID devices advertise an
optimal IO size for submissions:

$ cat /sys/block/dm-0/queue/optimal_io_size
1048576
$ cat /sys/block/dm-0/queue/max_sectors_kb
128

IOWs, we might be trying to feed lots of devices through the one
submission, so using a "device" limit isn't really something we
should be doing. Ideally I think we should be looking at some
multiple of the  maximum optimal IO size that the underlying device
requests if it is set, otherwise a byte limit of some kind....

Cheers,

Dave.
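
Expressed as code, the policy Dave describes might look roughly like the
sketch below; the multiplier is arbitrary, queue_io_opt() returns 0 when
the device does not advertise an optimal size, and none of this is an
actual proposal from the thread:

#define IOEND_FALLBACK_MAX_IOSIZE	(4096ULL << PAGE_SHIFT)

static loff_t example_ioend_max_iosize(struct block_device *bdev)
{
	/* optimal_io_size in bytes, 0 if the device does not report one */
	unsigned int io_opt = queue_io_opt(bdev_get_queue(bdev));

	if (io_opt)
		return 16ULL * io_opt;	/* multiplier picked arbitrarily */
	return IOEND_FALLBACK_MAX_IOSIZE;
}
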
Andreas Gruenbacher May 25, 2021, 9:07 a.m. UTC | #8
On Tue, May 25, 2021 at 6:20 AM Darrick J. Wong <djwong@kernel.org> wrote:
> On Mon, May 24, 2021 at 08:02:18AM -0400, Brian Foster wrote:
> > On Thu, May 20, 2021 at 04:27:37PM -0700, Darrick J. Wong wrote:
> > > On Mon, May 17, 2021 at 01:17:22PM -0400, Brian Foster wrote:
> > > > The iomap writeback infrastructure is currently able to construct
> > > > extremely large bio chains (tens of GBs) associated with a single
> > > > ioend. This consolidation provides no significant value as bio
> > > > chains increase beyond a reasonable minimum size. On the other hand,
> > > > this does hold significant numbers of pages in the writeback
> > > > state across an unnecessarily large number of bios because the ioend
> > > > is not processed for completion until the final bio in the chain
> > > > completes. Cap an individual ioend to a reasonable size of 4096
> > > > pages (16MB with 4k pages) to avoid this condition.
> > > >
> > > > Signed-off-by: Brian Foster <bfoster@redhat.com>
> > > > ---
> > > >  fs/iomap/buffered-io.c |  6 ++++--
> > > >  include/linux/iomap.h  | 26 ++++++++++++++++++++++++++
> > > >  2 files changed, 30 insertions(+), 2 deletions(-)
> > > >
> > > > diff --git a/fs/iomap/buffered-io.c b/fs/iomap/buffered-io.c
> > > > index 642422775e4e..f2890ee434d0 100644
> > > > --- a/fs/iomap/buffered-io.c
> > > > +++ b/fs/iomap/buffered-io.c
> > > > @@ -1269,7 +1269,7 @@ iomap_chain_bio(struct bio *prev)
> > > >
> > > >  static bool
> > > >  iomap_can_add_to_ioend(struct iomap_writepage_ctx *wpc, loff_t offset,
> > > > -         sector_t sector)
> > > > +         unsigned len, sector_t sector)
> > > >  {
> > > >   if ((wpc->iomap.flags & IOMAP_F_SHARED) !=
> > > >       (wpc->ioend->io_flags & IOMAP_F_SHARED))
> > > > @@ -1280,6 +1280,8 @@ iomap_can_add_to_ioend(struct iomap_writepage_ctx *wpc, loff_t offset,
> > > >           return false;
> > > >   if (sector != bio_end_sector(wpc->ioend->io_bio))
> > > >           return false;
> > > > + if (wpc->ioend->io_size + len > IOEND_MAX_IOSIZE)
> > > > +         return false;
> > > >   return true;
> > > >  }
> > > >
> > > > @@ -1297,7 +1299,7 @@ iomap_add_to_ioend(struct inode *inode, loff_t offset, struct page *page,
> > > >   unsigned poff = offset & (PAGE_SIZE - 1);
> > > >   bool merged, same_page = false;
> > > >
> > > > - if (!wpc->ioend || !iomap_can_add_to_ioend(wpc, offset, sector)) {
> > > > + if (!wpc->ioend || !iomap_can_add_to_ioend(wpc, offset, len, sector)) {
> > > >           if (wpc->ioend)
> > > >                   list_add(&wpc->ioend->io_list, iolist);
> > > >           wpc->ioend = iomap_alloc_ioend(inode, wpc, offset, sector, wbc);
> > > > diff --git a/include/linux/iomap.h b/include/linux/iomap.h
> > > > index 07f3f4e69084..89b15cc236d5 100644
> > > > --- a/include/linux/iomap.h
> > > > +++ b/include/linux/iomap.h
> > > > @@ -203,6 +203,32 @@ struct iomap_ioend {
> > > >   struct bio              io_inline_bio;  /* MUST BE LAST! */
> > > >  };
> > > >
> > > > +/*
> > > > + * Maximum ioend IO size is used to prevent ioends from becoming unbound in
> > > > + * size. bios can reach 4GB in size if pages are contiguous, and bio chains are
> > > > + * effectively unbound in length. Hence the only limits on the size of the bio
> > > > + * chain is the contiguity of the extent on disk and the length of the run of
> > > > + * sequential dirty pages in the page cache. This can be tens of GBs of physical
> > > > + * extents and if memory is large enough, tens of millions of dirty pages.
> > > > + * Locking them all under writeback until the final bio in the chain is
> > > > + * submitted and completed locks all those pages for the legnth of time it takes
> > >
> > > s/legnth/length/
> > >
> >
> > Fixed.
> >
> > > > + * to write those many, many GBs of data to storage.
> > > > + *
> > > > + * Background writeback caps any single writepages call to half the device
> > > > + * bandwidth to ensure fairness and prevent any one dirty inode causing
> > > > + * writeback starvation. fsync() and other WB_SYNC_ALL writebacks have no such
> > > > + * cap on wbc->nr_pages, and that's where the above massive bio chain lengths
> > > > + * come from. We want large IOs to reach the storage, but we need to limit
> > > > + * completion latencies, hence we need to control the maximum IO size we
> > > > + * dispatch to the storage stack.
> > > > + *
> > > > + * We don't really have to care about the extra IO completion overhead here
> > > > + * because iomap has contiguous IO completion merging. If the device can sustain
> > >
> > > Assuming you're referring to iomap_finish_ioends, only XFS employs the
> > > ioend completion merging, and only for ioends where it decides to
> > > override the default bi_end_io.  iomap on its own never calls
> > > iomap_ioend_try_merge.
> > >
> > > This patch establishes a maximum ioend size of 4096 pages so that we
> > > don't trip the lockup watchdog while clearing pagewriteback and also so
> > > that we don't pin a large number of pages while constructing a big chain
> > > of bios.  On gfs2 and zonefs, each ioend completion will now have to
> > > clear up to 4096 pages from whatever context bio_endio is called.
> > >
> > > For XFS it's a more complicated -- XFS already overrode the bio handler
> > > for ioends that required further metadata updates (e.g. unwritten
> > > conversion, eof extension, or cow) so that it could combine ioends when
> > > possible.  XFS wants to combine ioends to amortize the cost of getting
> > > the ILOCK and running transactions over a larger number of pages.
> > >
> > > So I guess I see how the two changes dovetail nicely for XFS -- iomap
> > > issues smaller write bios, and the xfs ioend worker can recombine
> > > however many bios complete before the worker runs.  As a bonus, we don't
> > > have to worry about situations like the device driver completing so many
> > > bios from a single invocation of a bottom half handler that we run afoul
> > > of the soft lockup timer.
> > >
> > > Is that a correct understanding of how the two changes intersect with
> > > each other?  TBH I was expecting the two thresholds to be closer in
> > > value.
> > >
> >
> > I think so. That's interesting because my inclination was to make them
> > farther apart (or more specifically, increase the threshold in this
> > patch and leave the previous). The primary goal of this series was to
> > address the soft lockup warning problem, hence the thresholds on earlier
> > versions started at rather conservative values. I think both values have
> > been reasonably justified in being reduced, though this patch has a more
> > broad impact than the previous in that it changes behavior for all iomap
> > based fs'. Of course that's something that could also be addressed with
> > a more dynamic tunable..
>
> <shrug> I think I'm comfortable starting with 256 for xfs to bump an
> ioend to a workqueue, and 4096 pages as the limit for an iomap ioend.
> If people demonstrate a need to smart-tune or manual-tune we can always
> add one later.
>
> Though I guess I did kind of wonder if maybe a better limit for iomap
> would be max_hw_sectors?  Since that's the maximum size of an IO that
> the kernel will for that device?
>
> (Hm, maybe not; my computers all have it set to 1280k, which is a
> pathetic 20 pages on a 64k-page system.)
>
> > > The other two users of iomap for buffered io (gfs2 and zonefs) don't
> > > have a means to defer and combine ioends like xfs does.  Do you think
> > > they should?  I think it's still possible to trip the softlockup there.
> > >
> >
> > I'm not sure. We'd probably want some feedback from developers of
> > filesystems other than XFS before incorporating a change like this. The
> > first patch in the series more just provides some infrastructure for
> > other filesystems to avoid the problem as they see fit.
>
> Hmm.  Any input from the two other users of iomap buffered IO?  Who are
> now directly in the to: list? :D
>
> Catch-up TLDR: we're evaluating a proposal to limit the length of an
> iomap writeback ioend to 4096 pages so that we don't trip the hangcheck
> warning while clearing pagewriteback if the ioend completion happens to
> run in softirq context (e.g. nvme completion).

That's fine for gfs2. Due to the way our allocator works, our extents
typically are at most 509 blocks long (< 2 MB), which already limits
the maximum size. We have plans for fixing that, but even then, any
somewhat sane limit should do.

Thanks,
Andreas
Matthew Wilcox May 26, 2021, 2:12 a.m. UTC | #9
On Mon, May 24, 2021 at 09:20:35PM -0700, Darrick J. Wong wrote:
> > > This patch establishes a maximum ioend size of 4096 pages so that we
> > > don't trip the lockup watchdog while clearing pagewriteback and also so
> > > that we don't pin a large number of pages while constructing a big chain
> > > of bios.  On gfs2 and zonefs, each ioend completion will now have to
> > > clear up to 4096 pages from whatever context bio_endio is called.
> > > 
> > > For XFS it's a more complicated -- XFS already overrode the bio handler
> > > for ioends that required further metadata updates (e.g. unwritten
> > > conversion, eof extension, or cow) so that it could combine ioends when
> > > possible.  XFS wants to combine ioends to amortize the cost of getting
> > > the ILOCK and running transactions over a larger number of pages.
> > > 
> > > So I guess I see how the two changes dovetail nicely for XFS -- iomap
> > > issues smaller write bios, and the xfs ioend worker can recombine
> > > however many bios complete before the worker runs.  As a bonus, we don't
> > > have to worry about situations like the device driver completing so many
> > > bios from a single invocation of a bottom half handler that we run afoul
> > > of the soft lockup timer.
> > > 
> > > Is that a correct understanding of how the two changes intersect with
> > > each other?  TBH I was expecting the two thresholds to be closer in
> > > value.
> > > 
> > 
> > I think so. That's interesting because my inclination was to make them
> > farther apart (or more specifically, increase the threshold in this
> > patch and leave the previous). The primary goal of this series was to
> > address the soft lockup warning problem, hence the thresholds on earlier
> > versions started at rather conservative values. I think both values have
> > been reasonably justified in being reduced, though this patch has a more
> > broad impact than the previous in that it changes behavior for all iomap
> > based fs'. Of course that's something that could also be addressed with
> > a more dynamic tunable..
> 
> <shrug> I think I'm comfortable starting with 256 for xfs to bump an
> ioend to a workqueue, and 4096 pages as the limit for an iomap ioend.
> If people demonstrate a need to smart-tune or manual-tune we can always
> add one later.
> 
> Though I guess I did kind of wonder if maybe a better limit for iomap
> would be max_hw_sectors?  Since that's the maximum size of an IO that
> the kernel will for that device?

I think you're looking at this wrong.  The question is whether the
system can tolerate the additional latency of bumping to a workqueue vs
servicing directly.

If the I/O is large, then clearly it can.  It already waited for all
those DMAs to happen which took a certain amount of time on the I/O bus.
If the I/O is small, then maybe it can and maybe it can't.  So we should
be conservative and complete it in interrupt context.

This is why I think "number of pages" is really a red herring.  Sure,
that's the amount of work to be done, but really the question is "can
this I/O tolerate the extra delay".  Short of passing that information
in from the caller, number of bytes really is our best way of knowing.
And that doesn't scale with anything to do with the device or the
system bus.
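
A byte-based completion policy of that sort could be shaped roughly like
the sketch below.  The threshold and the io_work field are hypothetical;
iomap's real default handler, iomap_writepage_end_bio(), always
completes inline and only XFS currently overrides it:

/* hypothetical threshold below which inline completion is assumed cheap */
#define IOEND_INLINE_COMPLETION_BYTES	(256UL << PAGE_SHIFT)

/* hypothetical worker that would call iomap_finish_ioends() from process context */
static void example_deferred_ioend_work(struct work_struct *work);

static void example_writepage_end_bio(struct bio *bio)
{
	struct iomap_ioend *ioend = bio->bi_private;
	int error = blk_status_to_errno(bio->bi_status);

	if (ioend->io_size <= IOEND_INLINE_COMPLETION_BYTES) {
		/* small IO: finish right here, the waiter may care about latency */
		iomap_finish_ioends(ioend, error);
		return;
	}
	/*
	 * Large IO: the submitter already absorbed the DMA time, so punt the
	 * page walk to process context.  io_work does not exist in struct
	 * iomap_ioend, and the error would need to be stashed as well.
	 */
	INIT_WORK(&ioend->io_work, example_deferred_ioend_work);
	queue_work(system_unbound_wq, &ioend->io_work);
}
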
Darrick J. Wong May 26, 2021, 3:32 a.m. UTC | #10
On Wed, May 26, 2021 at 03:12:50AM +0100, Matthew Wilcox wrote:
> On Mon, May 24, 2021 at 09:20:35PM -0700, Darrick J. Wong wrote:
> > > > This patch establishes a maximum ioend size of 4096 pages so that we
> > > > don't trip the lockup watchdog while clearing pagewriteback and also so
> > > > that we don't pin a large number of pages while constructing a big chain
> > > > of bios.  On gfs2 and zonefs, each ioend completion will now have to
> > > > clear up to 4096 pages from whatever context bio_endio is called.
> > > > 
> > > > For XFS it's a more complicated -- XFS already overrode the bio handler
> > > > for ioends that required further metadata updates (e.g. unwritten
> > > > conversion, eof extension, or cow) so that it could combine ioends when
> > > > possible.  XFS wants to combine ioends to amortize the cost of getting
> > > > the ILOCK and running transactions over a larger number of pages.
> > > > 
> > > > So I guess I see how the two changes dovetail nicely for XFS -- iomap
> > > > issues smaller write bios, and the xfs ioend worker can recombine
> > > > however many bios complete before the worker runs.  As a bonus, we don't
> > > > have to worry about situations like the device driver completing so many
> > > > bios from a single invocation of a bottom half handler that we run afoul
> > > > of the soft lockup timer.
> > > > 
> > > > Is that a correct understanding of how the two changes intersect with
> > > > each other?  TBH I was expecting the two thresholds to be closer in
> > > > value.
> > > > 
> > > 
> > > I think so. That's interesting because my inclination was to make them
> > > farther apart (or more specifically, increase the threshold in this
> > > patch and leave the previous). The primary goal of this series was to
> > > address the soft lockup warning problem, hence the thresholds on earlier
> > > versions started at rather conservative values. I think both values have
> > > been reasonably justified in being reduced, though this patch has a more
> > > broad impact than the previous in that it changes behavior for all iomap
> > > based fs'. Of course that's something that could also be addressed with
> > > a more dynamic tunable..
> > 
> > <shrug> I think I'm comfortable starting with 256 for xfs to bump an
> > ioend to a workqueue, and 4096 pages as the limit for an iomap ioend.
> > If people demonstrate a need to smart-tune or manual-tune we can always
> > add one later.
> > 
> > Though I guess I did kind of wonder if maybe a better limit for iomap
> > would be max_hw_sectors?  Since that's the maximum size of an IO that
> > the kernel will for that device?
> 
> I think you're looking at this wrong.  The question is whether the
> system can tolerate the additional latency of bumping to a workqueue vs
> servicing directly.
> 
> If the I/O is large, then clearly it can.  It already waited for all
> those DMAs to happen which took a certain amount of time on the I/O bus.
> If the I/O is small, then maybe it can and maybe it can't.  So we should
> be conservative and complete it in interrupt context.
> 
> This is why I think "number of pages" is really a red herring.  Sure,
> that's the amount of work to be done, but really the question is "can
> this I/O tolerate the extra delay".  Short of passing that information
> in from the caller, number of bytes really is our best way of knowing.
> And that doesn't scale with anything to do with the device or the
> system bus.  

It doesn't matter whether the process(es) that triggered writeback will
tolerate the extra latency of a workqueue.  The hangcheck timer trips,
which means we've been doing things in softirq context too long.

The next thing that happens is that the kind of people who treat **ANY**
stack trace in dmesg as grounds to file a bug and escalate it will file
a bug and escalate it, and now I'm working 10 hour days trying to stomp
down all 6 escalations, run a QA botnet, review patches, and make any
incremental progress on long term goals when I can squeeze out five
minutes of free time.

Yeah, it'd be nice to rebuild writeback with some sort of QOS system so
that it could pick different strategies based on the amount of work to
do and the impatience levels of the processes waiting for it.  But that
is a project of its own.  This is a starter fix to take the heat off.

The reason I've been running at 110% burnout for the last 9 months is
exactly this -- someone submits a patchset to fix or improve something,
but then the reviewers pile on with "No no no, you should consider
building this far more elaborate solution", withhold review tags, but
then seem to be too busy to participate in building the elaborate thing.

At least in this case I can do something about it.  We're nearly to rc4
so barring anything weird showing up in QA runs overnight I plan to
stuff this in for 5.14.

--D

Patch

diff --git a/fs/iomap/buffered-io.c b/fs/iomap/buffered-io.c
index 642422775e4e..f2890ee434d0 100644
--- a/fs/iomap/buffered-io.c
+++ b/fs/iomap/buffered-io.c
@@ -1269,7 +1269,7 @@  iomap_chain_bio(struct bio *prev)
 
 static bool
 iomap_can_add_to_ioend(struct iomap_writepage_ctx *wpc, loff_t offset,
-		sector_t sector)
+		unsigned len, sector_t sector)
 {
 	if ((wpc->iomap.flags & IOMAP_F_SHARED) !=
 	    (wpc->ioend->io_flags & IOMAP_F_SHARED))
@@ -1280,6 +1280,8 @@  iomap_can_add_to_ioend(struct iomap_writepage_ctx *wpc, loff_t offset,
 		return false;
 	if (sector != bio_end_sector(wpc->ioend->io_bio))
 		return false;
+	if (wpc->ioend->io_size + len > IOEND_MAX_IOSIZE)
+		return false;
 	return true;
 }
 
@@ -1297,7 +1299,7 @@  iomap_add_to_ioend(struct inode *inode, loff_t offset, struct page *page,
 	unsigned poff = offset & (PAGE_SIZE - 1);
 	bool merged, same_page = false;
 
-	if (!wpc->ioend || !iomap_can_add_to_ioend(wpc, offset, sector)) {
+	if (!wpc->ioend || !iomap_can_add_to_ioend(wpc, offset, len, sector)) {
 		if (wpc->ioend)
 			list_add(&wpc->ioend->io_list, iolist);
 		wpc->ioend = iomap_alloc_ioend(inode, wpc, offset, sector, wbc);
diff --git a/include/linux/iomap.h b/include/linux/iomap.h
index 07f3f4e69084..89b15cc236d5 100644
--- a/include/linux/iomap.h
+++ b/include/linux/iomap.h
@@ -203,6 +203,32 @@  struct iomap_ioend {
 	struct bio		io_inline_bio;	/* MUST BE LAST! */
 };
 
+/*
+ * Maximum ioend IO size is used to prevent ioends from becoming unbound in
+ * size. bios can reach 4GB in size if pages are contiguous, and bio chains are
+ * effectively unbound in length. Hence the only limits on the size of the bio
+ * chain is the contiguity of the extent on disk and the length of the run of
+ * sequential dirty pages in the page cache. This can be tens of GBs of physical
+ * extents and if memory is large enough, tens of millions of dirty pages.
+ * Locking them all under writeback until the final bio in the chain is
+ * submitted and completed locks all those pages for the legnth of time it takes
+ * to write those many, many GBs of data to storage.
+ *
+ * Background writeback caps any single writepages call to half the device
+ * bandwidth to ensure fairness and prevent any one dirty inode causing
+ * writeback starvation. fsync() and other WB_SYNC_ALL writebacks have no such
+ * cap on wbc->nr_pages, and that's where the above massive bio chain lengths
+ * come from. We want large IOs to reach the storage, but we need to limit
+ * completion latencies, hence we need to control the maximum IO size we
+ * dispatch to the storage stack.
+ *
+ * We don't really have to care about the extra IO completion overhead here
+ * because iomap has contiguous IO completion merging. If the device can sustain
+ * high throughput and large bios, the ioends are merged on completion and
+ * processed in large, efficient chunks with no additional IO latency.
+ */
+#define IOEND_MAX_IOSIZE	(4096ULL << PAGE_SHIFT)
+
 struct iomap_writeback_ops {
 	/*
 	 * Required, maps the blocks so that writeback can be performed on