
mm/page-writeback: Raise wb_thresh to prevent write blocking with strictlimit

Message ID 20241023100032.62952-1-jimzhao.ai@gmail.com (mailing list archive)
State New

Commit Message

Jim Zhao Oct. 23, 2024, 10 a.m. UTC
With the strictlimit flag, wb_thresh acts as a hard limit in
balance_dirty_pages() and wb_position_ratio(). When device write
operations are inactive, wb_thresh can drop to 0, causing writes to
be blocked. The issue occasionally occurs in fuse fs, particularly
with network backends, where the write thread is blocked frequently
for a period of time. To address this, this patch raises the minimum
wb_thresh to a controllable level, similar to the non-strictlimit case.

Signed-off-by: Jim Zhao <jimzhao.ai@gmail.com>
---
 mm/page-writeback.c | 25 ++++++++++++++++++++++---
 1 file changed, 22 insertions(+), 3 deletions(-)

Comments

Andrew Morton Oct. 23, 2024, 11:24 p.m. UTC | #1
On Wed, 23 Oct 2024 18:00:32 +0800 Jim Zhao <jimzhao.ai@gmail.com> wrote:

> With the strictlimit flag, wb_thresh acts as a hard limit in
> balance_dirty_pages() and wb_position_ratio(). When device write
> operations are inactive, wb_thresh can drop to 0, causing writes to
> be blocked. The issue occasionally occurs in fuse fs, particularly
> with network backends, the write thread is blocked frequently during
> a period. To address it, this patch raises the minimum wb_thresh to a
> controllable level, similar to the non-strictlimit case.

Please tell us more about the userspace-visible effects of this.  It
*sounds* like a serious (but occasional) problem, but that is unclear.

And, very much relatedly, do you feel this fix is needed in earlier
(-stable) kernels?
Jim Zhao Oct. 24, 2024, 6:09 a.m. UTC | #2
> On Wed, 23 Oct 2024 18:00:32 +0800 Jim Zhao <jimzhao.ai@gmail.com> wrote:

> > With the strictlimit flag, wb_thresh acts as a hard limit in
> > balance_dirty_pages() and wb_position_ratio(). When device write
> > operations are inactive, wb_thresh can drop to 0, causing writes to
> > be blocked. The issue occasionally occurs in fuse fs, particularly
> > with network backends, the write thread is blocked frequently during
> > a period. To address it, this patch raises the minimum wb_thresh to a
> > controllable level, similar to the non-strictlimit case.

> Please tell us more about the userspace-visible effects of this.  It
> *sounds* like a serious (but occasional) problem, but that is unclear.

> And, very much relatedly, do you feel this fix is needed in earlier
> (-stable) kernels?

The problem exists in two scenarios:
1. FUSE Write Transition from Inactive to Active

Sometimes, active writes require several pauses to ramp up to the appropriate wb_thresh.
As shown in the trace below, both bdi_setpoint and task_ratelimit are 0, meaning wb_thresh is 0.
The dd process pauses multiple times before reaching a normal state.

dd-1206590 [003] .... 62988.324049: balance_dirty_pages: bdi 0:51: limit=295073 setpoint=259360 dirty=454 bdi_setpoint=0 bdi_dirty=32 dirty_ratelimit=18716 task_ratelimit=0 dirtied=32 dirtied_pause=32 paused=0 pause=4 period=4 think=0 cgroup_ino=1
dd-1206590 [003] .... 62988.332063: balance_dirty_pages: bdi 0:51: limit=295073 setpoint=259453 dirty=454 bdi_setpoint=0 bdi_dirty=33 dirty_ratelimit=18716 task_ratelimit=0 dirtied=1 dirtied_pause=0 paused=0 pause=4 period=4 think=4 cgroup_ino=1
dd-1206590 [003] .... 62988.340064: balance_dirty_pages: bdi 0:51: limit=295073 setpoint=259526 dirty=454 bdi_setpoint=0 bdi_dirty=34 dirty_ratelimit=18716 task_ratelimit=0 dirtied=1 dirtied_pause=0 paused=0 pause=4 period=4 think=4 cgroup_ino=1
dd-1206590 [003] .... 62988.348061: balance_dirty_pages: bdi 0:51: limit=295073 setpoint=259531 dirty=489 bdi_setpoint=0 bdi_dirty=35 dirty_ratelimit=18716 task_ratelimit=0 dirtied=1 dirtied_pause=0 paused=0 pause=4 period=4 think=4 cgroup_ino=1
dd-1206590 [003] .... 62988.356063: balance_dirty_pages: bdi 0:51: limit=295073 setpoint=259531 dirty=490 bdi_setpoint=0 bdi_dirty=36 dirty_ratelimit=18716 task_ratelimit=0 dirtied=1 dirtied_pause=0 paused=0 pause=4 period=4 think=4 cgroup_ino=1
...

2. FUSE with Unstable Network Backends and Occasional Writes
This is not easy to reproduce, but when it occurs in this scenario,
the write thread experiences more pauses and longer durations.


Currently, some code is in place to improve this situation, but it seems
insufficient (in wb_update_dirty_ratelimit()):

	if (unlikely(wb->bdi->capabilities & BDI_CAP_STRICTLIMIT)) {
		dirty = dtc->wb_dirty;
		if (dtc->wb_dirty < 8)
			setpoint = dtc->wb_dirty + 1;
		else
			setpoint = (dtc->wb_thresh + dtc->wb_bg_thresh) / 2;
	}

So the patch raises the minimum wb_thresh so that occasional writes won't be
blocked and active writes can ramp up the threshold quickly.

--

Thanks,
Jim Zhao
Andrew Morton Oct. 24, 2024, 6:20 a.m. UTC | #3
On Thu, 24 Oct 2024 14:09:54 +0800 Jim Zhao <jimzhao.ai@gmail.com> wrote:

> > On Wed, 23 Oct 2024 18:00:32 +0800 Jim Zhao <jimzhao.ai@gmail.com> wrote:
> 
> > > With the strictlimit flag, wb_thresh acts as a hard limit in
> > > balance_dirty_pages() and wb_position_ratio(). When device write
> > > operations are inactive, wb_thresh can drop to 0, causing writes to
> > > be blocked. The issue occasionally occurs in fuse fs, particularly
> > > with network backends, the write thread is blocked frequently during
> > > a period. To address it, this patch raises the minimum wb_thresh to a
> > > controllable level, similar to the non-strictlimit case.
> 
> > Please tell us more about the userspace-visible effects of this.  It
> > *sounds* like a serious (but occasional) problem, but that is unclear.
> 
> > And, very much relatedly, do you feel this fix is needed in earlier
> > (-stable) kernels?
> 
> The problem exists in two scenarios:
> 1. FUSE Write Transition from Inactive to Active
> 
> sometimes, active writes require several pauses to ramp up to the appropriate wb_thresh.
> As shown in the trace below, both bdi_setpoint and task_ratelimit are 0, means wb_thresh is 0. 
> The dd process pauses multiple times before reaching a normal state.
> 
> dd-1206590 [003] .... 62988.324049: balance_dirty_pages: bdi 0:51: limit=295073 setpoint=259360 dirty=454 bdi_setpoint=0 bdi_dirty=32 dirty_ratelimit=18716 task_ratelimit=0 dirtied=32 dirtied_pause=32 paused=0 pause=4 period=4 think=0 cgroup_ino=1
> dd-1206590 [003] .... 62988.332063: balance_dirty_pages: bdi 0:51: limit=295073 setpoint=259453 dirty=454 bdi_setpoint=0 bdi_dirty=33 dirty_ratelimit=18716 task_ratelimit=0 dirtied=1 dirtied_pause=0 paused=0 pause=4 period=4 think=4 cgroup_ino=1
> dd-1206590 [003] .... 62988.340064: balance_dirty_pages: bdi 0:51: limit=295073 setpoint=259526 dirty=454 bdi_setpoint=0 bdi_dirty=34 dirty_ratelimit=18716 task_ratelimit=0 dirtied=1 dirtied_pause=0 paused=0 pause=4 period=4 think=4 cgroup_ino=1
> dd-1206590 [003] .... 62988.348061: balance_dirty_pages: bdi 0:51: limit=295073 setpoint=259531 dirty=489 bdi_setpoint=0 bdi_dirty=35 dirty_ratelimit=18716 task_ratelimit=0 dirtied=1 dirtied_pause=0 paused=0 pause=4 period=4 think=4 cgroup_ino=1
> dd-1206590 [003] .... 62988.356063: balance_dirty_pages: bdi 0:51: limit=295073 setpoint=259531 dirty=490 bdi_setpoint=0 bdi_dirty=36 dirty_ratelimit=18716 task_ratelimit=0 dirtied=1 dirtied_pause=0 paused=0 pause=4 period=4 think=4 cgroup_ino=1
> ...
> 
> 2. FUSE with Unstable Network Backends and Occasional Writes
> Not easy to reproduce, but when it occurs in this scenario, 
> it causes the write thread to experience more pauses and longer durations.

Thanks, but it's still unclear how this impacts our users.  How lengthy
are these pauses?
Jim Zhao Oct. 24, 2024, 7:29 a.m. UTC | #4
> On Thu, 24 Oct 2024 14:09:54 +0800 Jim Zhao <jimzhao.ai@gmail.com> wrote:

> > > On Wed, 23 Oct 2024 18:00:32 +0800 Jim Zhao <jimzhao.ai@gmail.com> wrote:
> > 
> > > > With the strictlimit flag, wb_thresh acts as a hard limit in
> > > > balance_dirty_pages() and wb_position_ratio(). When device write
> > > > operations are inactive, wb_thresh can drop to 0, causing writes to
> > > > be blocked. The issue occasionally occurs in fuse fs, particularly
> > > > with network backends, the write thread is blocked frequently during
> > > > a period. To address it, this patch raises the minimum wb_thresh to a
> > > > controllable level, similar to the non-strictlimit case.
> > 
> > > Please tell us more about the userspace-visible effects of this.  It
> > > *sounds* like a serious (but occasional) problem, but that is unclear.
> > 
> > > And, very much relatedly, do you feel this fix is needed in earlier
> > > (-stable) kernels?
> > 
> > The problem exists in two scenarios:
> > 1. FUSE Write Transition from Inactive to Active
> > 
> > sometimes, active writes require several pauses to ramp up to the appropriate wb_thresh.
> > As shown in the trace below, both bdi_setpoint and task_ratelimit are 0, means wb_thresh is 0. 
> > The dd process pauses multiple times before reaching a normal state.
> > 
> > dd-1206590 [003] .... 62988.324049: balance_dirty_pages: bdi 0:51: limit=295073 setpoint=259360 dirty=454 bdi_setpoint=0 bdi_dirty=32 dirty_ratelimit=18716 task_ratelimit=0 dirtied=32 dirtied_pause=32 paused=0 pause=4 period=4 think=0 cgroup_ino=1
> > dd-1206590 [003] .... 62988.332063: balance_dirty_pages: bdi 0:51: limit=295073 setpoint=259453 dirty=454 bdi_setpoint=0 bdi_dirty=33 dirty_ratelimit=18716 task_ratelimit=0 dirtied=1 dirtied_pause=0 paused=0 pause=4 period=4 think=4 cgroup_ino=1
> > dd-1206590 [003] .... 62988.340064: balance_dirty_pages: bdi 0:51: limit=295073 setpoint=259526 dirty=454 bdi_setpoint=0 bdi_dirty=34 dirty_ratelimit=18716 task_ratelimit=0 dirtied=1 dirtied_pause=0 paused=0 pause=4 period=4 think=4 cgroup_ino=1
> > dd-1206590 [003] .... 62988.348061: balance_dirty_pages: bdi 0:51: limit=295073 setpoint=259531 dirty=489 bdi_setpoint=0 bdi_dirty=35 dirty_ratelimit=18716 task_ratelimit=0 dirtied=1 dirtied_pause=0 paused=0 pause=4 period=4 think=4 cgroup_ino=1
> > dd-1206590 [003] .... 62988.356063: balance_dirty_pages: bdi 0:51: limit=295073 setpoint=259531 dirty=490 bdi_setpoint=0 bdi_dirty=36 dirty_ratelimit=18716 task_ratelimit=0 dirtied=1 dirtied_pause=0 paused=0 pause=4 period=4 think=4 cgroup_ino=1
> > ...
> > 
> > 2. FUSE with Unstable Network Backends and Occasional Writes
> > Not easy to reproduce, but when it occurs in this scenario, 
> > it causes the write thread to experience more pauses and longer durations.

> Thanks, but it's still unclear how this impacts our users.  How lengthy
> are these pauses?

The length is related to the device's writeback bandwidth.
Under normal bandwidth, each pause may last around 4ms and occur several times, as shown in the trace above (5 times).
In extreme cases, such as fuse with unstable network backends,
if pauses occur frequently and bandwidth is low, each pause can exceed 10ms, and the total duration of pauses can accumulate to over a second.


Thanks,
Jim Zhao
Andrew Morton Oct. 26, 2024, 12:02 a.m. UTC | #5
On Thu, 24 Oct 2024 15:29:19 +0800 Jim Zhao <jimzhao.ai@gmail.com> wrote:

> > > 2. FUSE with Unstable Network Backends and Occasional Writes
> > > Not easy to reproduce, but when it occurs in this scenario, 
> > > it causes the write thread to experience more pauses and longer durations.
> 
> > Thanks, but it's still unclear how this impacts our users.  How lengthy
> > are these pauses?
> 
> The length is related to the device's writeback bandwidth.
> Under normal bandwidth, each pause may last around 4ms and occur several times, as shown in the trace above (5 times).
> In extreme cases, such as fuse with unstable network backends,
> if pauses occur frequently and bandwidth is low, each pause can exceed 10ms, and the total duration of pauses can accumulate to over a second.

Thanks.  I'll assume that the userspace impact isn't serious enough to warrant
a backport into -stable kernels.

If you disagree with this, please let me know and send along additional
changelog text which helps others understand why we think our users
will significantly benefit from this change.
Jim Zhao Nov. 1, 2024, 7:17 a.m. UTC | #6
> On Thu, 24 Oct 2024 15:29:19 +0800 Jim Zhao <jimzhao.ai@gmail.com> wrote:
> 
> > > > 2. FUSE with Unstable Network Backends and Occasional Writes
> > > > Not easy to reproduce, but when it occurs in this scenario, 
> > > > it causes the write thread to experience more pauses and longer durations.
> > 
> > > Thanks, but it's still unclear how this impacts our users.  How lengthy
> > > are these pauses?
> > 
> > The length is related to the device's writeback bandwidth.
> > Under normal bandwidth, each pause may last around 4ms and occur several times, as shown in the trace above (5 times).
> > In extreme cases, such as fuse with unstable network backends,
> > if pauses occur frequently and bandwidth is low, each pause can exceed 10ms, and the total duration of pauses can accumulate to over a second.
> 
> Thanks.  I'll assume that the userspace impact isn't serious enough to warrant
> a backport into -stable kernels.
> 
> If you disagree with this, please let me know and send along additional
> changelog text which helps others understand why we think our users
> will significantly benefit from this change.

It’s acceptable not to backport this to earlier kernels.
After additional testing, under normal conditions the impact on userspace is limited, with blocking times generally in the millisecond range.
However, I recommend including this patch in the next kernel version.
In cases of low writeback bandwidth and high writeback delay, blocking times can increase significantly.
This patch helps eliminate unnecessary blocking in those scenarios.
Thanks.
Jan Kara Nov. 7, 2024, 3:32 p.m. UTC | #7
On Wed 23-10-24 18:00:32, Jim Zhao wrote:
> With the strictlimit flag, wb_thresh acts as a hard limit in
> balance_dirty_pages() and wb_position_ratio(). When device write
> operations are inactive, wb_thresh can drop to 0, causing writes to
> be blocked. The issue occasionally occurs in fuse fs, particularly
> with network backends, the write thread is blocked frequently during
> a period. To address it, this patch raises the minimum wb_thresh to a
> controllable level, similar to the non-strictlimit case.
> 
> Signed-off-by: Jim Zhao <jimzhao.ai@gmail.com>

...

> +	/*
> +	 * With strictlimit flag, the wb_thresh is treated as
> +	 * a hard limit in balance_dirty_pages() and wb_position_ratio().
> +	 * It's possible that wb_thresh is close to zero, not because
> +	 * the device is slow, but because it has been inactive.
> +	 * To prevent occasional writes from being blocked, we raise wb_thresh.
> +	 */
> +	if (unlikely(wb->bdi->capabilities & BDI_CAP_STRICTLIMIT)) {
> +		unsigned long limit = hard_dirty_limit(dom, dtc->thresh);
> +		u64 wb_scale_thresh = 0;
> +
> +		if (limit > dtc->dirty)
> +			wb_scale_thresh = (limit - dtc->dirty) / 100;
> +		wb_thresh = max(wb_thresh, min(wb_scale_thresh, wb_max_thresh / 4));
> +	}

What you propose makes sense in principle although I'd say this is mostly a
userspace setup issue - with strictlimit enabled, you're kind of expected
to set min_ratio exactly if you want to avoid these startup issues. But I
tend to agree that we can provide a bit of a slack for a bdi without
min_ratio configured to ramp up.

But I'd rather pick the logic like:

	/*
	 * If bdi does not have min_ratio configured and it was inactive,
	 * bump its min_ratio to 0.1% to provide it some room to ramp up.
	 */
	if (!wb_min_ratio && !numerator)
		wb_min_ratio = min(BDI_RATIO_SCALE / 10, wb_max_ratio / 2);

That would seem like a bit more systematic way than the formula you propose
above...

								Honza
Jim Zhao Nov. 8, 2024, 3:19 a.m. UTC | #8
> On Wed 23-10-24 18:00:32, Jim Zhao wrote:
> > With the strictlimit flag, wb_thresh acts as a hard limit in
> > balance_dirty_pages() and wb_position_ratio(). When device write
> > operations are inactive, wb_thresh can drop to 0, causing writes to
> > be blocked. The issue occasionally occurs in fuse fs, particularly
> > with network backends, the write thread is blocked frequently during
> > a period. To address it, this patch raises the minimum wb_thresh to a
> > controllable level, similar to the non-strictlimit case.
> > 
> > Signed-off-by: Jim Zhao <jimzhao.ai@gmail.com>
> 
> ...
> 
> > +	/*
> > +	 * With strictlimit flag, the wb_thresh is treated as
> > +	 * a hard limit in balance_dirty_pages() and wb_position_ratio().
> > +	 * It's possible that wb_thresh is close to zero, not because
> > +	 * the device is slow, but because it has been inactive.
> > +	 * To prevent occasional writes from being blocked, we raise wb_thresh.
> > +	 */
> > +	if (unlikely(wb->bdi->capabilities & BDI_CAP_STRICTLIMIT)) {
> > +		unsigned long limit = hard_dirty_limit(dom, dtc->thresh);
> > +		u64 wb_scale_thresh = 0;
> > +
> > +		if (limit > dtc->dirty)
> > +			wb_scale_thresh = (limit - dtc->dirty) / 100;
> > +		wb_thresh = max(wb_thresh, min(wb_scale_thresh, wb_max_thresh / 4));
> > +	}
> 
> What you propose makes sense in principle although I'd say this is mostly a
> userspace setup issue - with strictlimit enabled, you're kind of expected
> to set min_ratio exactly if you want to avoid these startup issues. But I
> tend to agree that we can provide a bit of a slack for a bdi without
> min_ratio configured to ramp up.
> 
> But I'd rather pick the logic like:
> 
> 	/*
> 	 * If bdi does not have min_ratio configured and it was inactive,
> 	 * bump its min_ratio to 0.1% to provide it some room to ramp up.
> 	 */
> 	if (!wb_min_ratio && !numerator)
> 		wb_min_ratio = min(BDI_RATIO_SCALE / 10, wb_max_ratio / 2);
> 
> That would seem like a bit more systematic way than the formula you propose
> above...

Thanks for the advice.
Here's the explanation of the formula:
1. When writes are small and intermittent, wb_thresh can approach 0 (not just reach 0), making the numerator value difficult to verify.
2. The ramp-up margin, whether 0.1% or another value, needs consideration.
I based this on the logic of wb_position_ratio() in the non-strictlimit scenario:
wb_thresh = max(wb_thresh, (limit - dtc->dirty) / 8);
It seems to provide more room and ensures ramping up within a controllable range.

--

Thanks,
Jim Zhao
Jan Kara Nov. 8, 2024, 10:02 p.m. UTC | #9
On Fri 08-11-24 11:19:49, Jim Zhao wrote:
> > On Wed 23-10-24 18:00:32, Jim Zhao wrote:
> > > With the strictlimit flag, wb_thresh acts as a hard limit in
> > > balance_dirty_pages() and wb_position_ratio(). When device write
> > > operations are inactive, wb_thresh can drop to 0, causing writes to
> > > be blocked. The issue occasionally occurs in fuse fs, particularly
> > > with network backends, the write thread is blocked frequently during
> > > a period. To address it, this patch raises the minimum wb_thresh to a
> > > controllable level, similar to the non-strictlimit case.
> > > 
> > > Signed-off-by: Jim Zhao <jimzhao.ai@gmail.com>
> > 
> > ...
> > 
> > > +	/*
> > > +	 * With strictlimit flag, the wb_thresh is treated as
> > > +	 * a hard limit in balance_dirty_pages() and wb_position_ratio().
> > > +	 * It's possible that wb_thresh is close to zero, not because
> > > +	 * the device is slow, but because it has been inactive.
> > > +	 * To prevent occasional writes from being blocked, we raise wb_thresh.
> > > +	 */
> > > +	if (unlikely(wb->bdi->capabilities & BDI_CAP_STRICTLIMIT)) {
> > > +		unsigned long limit = hard_dirty_limit(dom, dtc->thresh);
> > > +		u64 wb_scale_thresh = 0;
> > > +
> > > +		if (limit > dtc->dirty)
> > > +			wb_scale_thresh = (limit - dtc->dirty) / 100;
> > > +		wb_thresh = max(wb_thresh, min(wb_scale_thresh, wb_max_thresh / 4));
> > > +	}
> > 
> > What you propose makes sense in principle although I'd say this is mostly a
> > userspace setup issue - with strictlimit enabled, you're kind of expected
> > to set min_ratio exactly if you want to avoid these startup issues. But I
> > tend to agree that we can provide a bit of a slack for a bdi without
> > min_ratio configured to ramp up.
> > 
> > But I'd rather pick the logic like:
> > 
> > 	/*
> > 	 * If bdi does not have min_ratio configured and it was inactive,
> > 	 * bump its min_ratio to 0.1% to provide it some room to ramp up.
> > 	 */
> > 	if (!wb_min_ratio && !numerator)
> > 		wb_min_ratio = min(BDI_RATIO_SCALE / 10, wb_max_ratio / 2);
> > 
> > That would seem like a bit more systematic way than the formula you propose
> > above...
> 
> Thanks for the advice.
> Here's the explanation of the formula:
> 1. When writes are small and intermittent, wb_thresh can approach 0 (not
> just reach 0), making the numerator value difficult to verify.

I see, ok.

> 2. The ramp-up margin, whether 0.1% or another value, needs
> consideration.
> I based this on the logic of wb_position_ratio() in the non-strictlimit
> scenario: wb_thresh = max(wb_thresh, (limit - dtc->dirty) / 8); It seems
> to provide more room and ensures ramping up within a controllable range.

I see, thanks for explanation. So I was thinking how to make the code more
consistent instead of adding another special constant and workaround. What
I'd suggest is:

1) There's already code that's supposed to handle ramping up with
strictlimit in wb_update_dirty_ratelimit():

        /*
         * For strictlimit case, calculations above were based on wb counters
         * and limits (starting from pos_ratio = wb_position_ratio() and up to
         * balanced_dirty_ratelimit = task_ratelimit * write_bw / dirty_rate).
         * Hence, to calculate "step" properly, we have to use wb_dirty as
         * "dirty" and wb_setpoint as "setpoint".
         *
         * We rampup dirty_ratelimit forcibly if wb_dirty is low because
         * it's possible that wb_thresh is close to zero due to inactivity
         * of backing device.
         */
        if (unlikely(wb->bdi->capabilities & BDI_CAP_STRICTLIMIT)) {
                dirty = dtc->wb_dirty;
                if (dtc->wb_dirty < 8)
                        setpoint = dtc->wb_dirty + 1;
                else
                        setpoint = (dtc->wb_thresh + dtc->wb_bg_thresh) / 2;
        }

Now I agree that increasing wb_thresh directly is more understandable and
transparent so I'd just drop this special case.

2) I'd just handle all the bumping of wb_thresh in a single place instead
of having it spread over multiple places. So __wb_calc_thresh() could have
code like:

        wb_thresh = (thresh * (100 * BDI_RATIO_SCALE - bdi_min_ratio)) / (100 * BDI_RATIO_SCALE);
        wb_thresh *= numerator;
        wb_thresh = div64_ul(wb_thresh, denominator);

        wb_min_max_ratio(dtc->wb, &wb_min_ratio, &wb_max_ratio);

        wb_thresh += (thresh * wb_min_ratio) / (100 * BDI_RATIO_SCALE);
        limit = hard_dirty_limit(dtc_dom(dtc), dtc->thresh);
        /*
         * It's very possible that wb_thresh is close to 0 not because the
         * device is slow, but because it has remained inactive for a long
         * time. Honour such devices a reasonably good (hopefully IO
         * efficient) threshold, so that occasional writes won't be blocked
         * and active writes can ramp up the threshold quickly.
         */
        if (limit > dtc->dirty)
                wb_thresh = max(wb_thresh, (limit - dtc->dirty) / 8);
        if (wb_thresh > (thresh * wb_max_ratio) / (100 * BDI_RATIO_SCALE))
                wb_thresh = thresh * wb_max_ratio / (100 * BDI_RATIO_SCALE);

and we can drop the bumping from wb_position_ratio(). This way we have the
wb_thresh bumping in a single logical place. Since we still limit wb_thresh
with max_ratio, untrusted bdis for which max_ratio should be configured
(otherwise they can grow the amount of dirty pages up to the global
threshold anyway) are still under control.

If we really wanted, we could introduce a different bumping in case of
strictlimit, but at this point I don't think it is warranted so I'd leave
that as an option if someone comes up with a situation where this bumping
proves to be too aggressive.

								Honza
Jim Zhao Nov. 12, 2024, 8:45 a.m. UTC | #10
> On Fri 08-11-24 11:19:49, Jim Zhao wrote:
> > > On Wed 23-10-24 18:00:32, Jim Zhao wrote:
> > > > With the strictlimit flag, wb_thresh acts as a hard limit in
> > > > balance_dirty_pages() and wb_position_ratio(). When device write
> > > > operations are inactive, wb_thresh can drop to 0, causing writes to
> > > > be blocked. The issue occasionally occurs in fuse fs, particularly
> > > > with network backends, the write thread is blocked frequently during
> > > > a period. To address it, this patch raises the minimum wb_thresh to a
> > > > controllable level, similar to the non-strictlimit case.
> > > >
> > > > Signed-off-by: Jim Zhao <jimzhao.ai@gmail.com>
> > >
> > > ...
> > >
> > > > +       /*
> > > > +        * With strictlimit flag, the wb_thresh is treated as
> > > > +        * a hard limit in balance_dirty_pages() and wb_position_ratio().
> > > > +        * It's possible that wb_thresh is close to zero, not because
> > > > +        * the device is slow, but because it has been inactive.
> > > > +        * To prevent occasional writes from being blocked, we raise wb_thresh.
> > > > +        */
> > > > +       if (unlikely(wb->bdi->capabilities & BDI_CAP_STRICTLIMIT)) {
> > > > +               unsigned long limit = hard_dirty_limit(dom, dtc->thresh);
> > > > +               u64 wb_scale_thresh = 0;
> > > > +
> > > > +               if (limit > dtc->dirty)
> > > > +                       wb_scale_thresh = (limit - dtc->dirty) / 100;
> > > > +               wb_thresh = max(wb_thresh, min(wb_scale_thresh, wb_max_thresh / 4));
> > > > +       }
> > >
> > > What you propose makes sense in principle although I'd say this is mostly a
> > > userspace setup issue - with strictlimit enabled, you're kind of expected
> > > to set min_ratio exactly if you want to avoid these startup issues. But I
> > > tend to agree that we can provide a bit of a slack for a bdi without
> > > min_ratio configured to ramp up.
> > >
> > > But I'd rather pick the logic like:
> > >
> > >   /*
> > >    * If bdi does not have min_ratio configured and it was inactive,
> > >    * bump its min_ratio to 0.1% to provide it some room to ramp up.
> > >    */
> > >   if (!wb_min_ratio && !numerator)
> > >           wb_min_ratio = min(BDI_RATIO_SCALE / 10, wb_max_ratio / 2);
> > >
> > > That would seem like a bit more systematic way than the formula you propose
> > > above...
> >
> > Thanks for the advice.
> > Here's the explanation of the formula:
> > 1. When writes are small and intermittent, wb_thresh can approach 0, not
> > just reach 0, making the numerator value difficult to verify.
>
> I see, ok.
>
> > 2. The ramp-up margin, whether 0.1% or another value, needs
> > consideration.
> > I based this on the logic of wb_position_ratio in the non-strictlimit
> > scenario: wb_thresh = max(wb_thresh, (limit - dtc->dirty) / 8); It seems
> > to provide more room and ensures ramping up within a controllable range.
>
> I see, thanks for explanation. So I was thinking how to make the code more
> consistent instead of adding another special constant and workaround. What
> I'd suggest is:
>
> 1) There's already code that's supposed to handle ramping up with
> strictlimit in wb_update_dirty_ratelimit():
>
>         /*
>          * For strictlimit case, calculations above were based on wb counters
>          * and limits (starting from pos_ratio = wb_position_ratio() and up to
>          * balanced_dirty_ratelimit = task_ratelimit * write_bw / dirty_rate).
>          * Hence, to calculate "step" properly, we have to use wb_dirty as
>          * "dirty" and wb_setpoint as "setpoint".
>          *
>          * We rampup dirty_ratelimit forcibly if wb_dirty is low because
>          * it's possible that wb_thresh is close to zero due to inactivity
>          * of backing device.
>          */
>         if (unlikely(wb->bdi->capabilities & BDI_CAP_STRICTLIMIT)) {
>                 dirty = dtc->wb_dirty;
>                 if (dtc->wb_dirty < 8)
>                         setpoint = dtc->wb_dirty + 1;
>                 else
>                         setpoint = (dtc->wb_thresh + dtc->wb_bg_thresh) / 2;
>         }
>
> Now I agree that increasing wb_thresh directly is more understandable and
> transparent so I'd just drop this special case.

yes, I agree.

> 2) I'd just handle all the bumping of wb_thresh in a single place instead
> of having is spread over multiple places. So __wb_calc_thresh() could have
> a code like:
>
> >         wb_thresh = (thresh * (100 * BDI_RATIO_SCALE - bdi_min_ratio)) / (100 * BDI_RATIO_SCALE);
>         wb_thresh *= numerator;
>         wb_thresh = div64_ul(wb_thresh, denominator);
>
>         wb_min_max_ratio(dtc->wb, &wb_min_ratio, &wb_max_ratio);
>
>         wb_thresh += (thresh * wb_min_ratio) / (100 * BDI_RATIO_SCALE);
>       limit = hard_dirty_limit(dtc_dom(dtc), dtc->thresh);
>         /*
>          * It's very possible that wb_thresh is close to 0 not because the
>          * device is slow, but that it has remained inactive for long time.
>          * Honour such devices a reasonable good (hopefully IO efficient)
>          * threshold, so that the occasional writes won't be blocked and active
>          * writes can rampup the threshold quickly.
>          */
>       if (limit > dtc->dirty)
>               wb_thresh = max(wb_thresh, (limit - dtc->dirty) / 8);
>       if (wb_thresh > (thresh * wb_max_ratio) / (100 * BDI_RATIO_SCALE))
>               wb_thresh = thresh * wb_max_ratio / (100 * BDI_RATIO_SCALE);
>
> and we can drop the bumping from wb_position_ratio(). This way we have the
> wb_thresh bumping in a single logical place. Since we still limit wb_thresh
> with max_ratio, untrusted bdis for which max_ratio should be configured
> (otherwise they can grow the amount of dirty pages up to the global
> threshold anyway) are still under control.
>
> If we really wanted, we could introduce a different bumping in case of
> strictlimit, but at this point I don't think it is warranted so I'd leave
> that as an option if someone comes with a situation where this bumping
> proves to be too aggressive.

Thank you, this is very helpful. And I have 2 concerns:

1.
In the current non-strictlimit logic, wb_thresh is only bumped within wb_position_ratio() for calculating pos_ratio, and this bump isn't restricted by max_ratio.
I'm unsure whether moving this adjustment to __wb_calc_thresh() would affect existing behavior.
Would it be possible to keep the current logic for the non-strictlimit case?

2. Regarding the formula:
wb_thresh = max(wb_thresh, (limit - dtc->dirty) / 8);

Consider a case: 
With 100 fuse devices (with high max_ratio) experiencing high writeback delays, the pages being written back are accounted in NR_WRITEBACK_TEMP, not dtc->dirty.
As a result, the bumped wb_thresh may remain high. While individual devices are under control, the total could exceed expectations.

Although lowering the max_ratio can avoid this issue, how about reducing the bumped wb_thresh?

The formula in my patch:
wb_scale_thresh = (limit - dtc->dirty) / 100;
The intention is to use the default fuse max_ratio (1%) as the multiplier.


Thanks
Jim Zhao
Jan Kara Nov. 13, 2024, 10:07 a.m. UTC | #11
On Tue 12-11-24 16:45:39, Jim Zhao wrote:
> > On Fri 08-11-24 11:19:49, Jim Zhao wrote:
> > > > On Wed 23-10-24 18:00:32, Jim Zhao wrote:
> > > > > With the strictlimit flag, wb_thresh acts as a hard limit in
> > > > > balance_dirty_pages() and wb_position_ratio(). When device write
> > > > > operations are inactive, wb_thresh can drop to 0, causing writes to
> > > > > be blocked. The issue occasionally occurs in fuse fs, particularly
> > > > > with network backends, the write thread is blocked frequently during
> > > > > a period. To address it, this patch raises the minimum wb_thresh to a
> > > > > controllable level, similar to the non-strictlimit case.
> > > > >
> > > > > Signed-off-by: Jim Zhao <jimzhao.ai@gmail.com>
> > > >
> > > > ...
> > > >
> > > > > +       /*
> > > > > +        * With strictlimit flag, the wb_thresh is treated as
> > > > > +        * a hard limit in balance_dirty_pages() and wb_position_ratio().
> > > > > +        * It's possible that wb_thresh is close to zero, not because
> > > > > +        * the device is slow, but because it has been inactive.
> > > > > +        * To prevent occasional writes from being blocked, we raise wb_thresh.
> > > > > +        */
> > > > > +       if (unlikely(wb->bdi->capabilities & BDI_CAP_STRICTLIMIT)) {
> > > > > +               unsigned long limit = hard_dirty_limit(dom, dtc->thresh);
> > > > > +               u64 wb_scale_thresh = 0;
> > > > > +
> > > > > +               if (limit > dtc->dirty)
> > > > > +                       wb_scale_thresh = (limit - dtc->dirty) / 100;
> > > > > +               wb_thresh = max(wb_thresh, min(wb_scale_thresh, wb_max_thresh / 4));
> > > > > +       }
> > > >
> > > > What you propose makes sense in principle although I'd say this is mostly a
> > > > userspace setup issue - with strictlimit enabled, you're kind of expected
> > > > to set min_ratio exactly if you want to avoid these startup issues. But I
> > > > tend to agree that we can provide a bit of a slack for a bdi without
> > > > min_ratio configured to ramp up.
> > > >
> > > > But I'd rather pick the logic like:
> > > >
> > > >   /*
> > > >    * If bdi does not have min_ratio configured and it was inactive,
> > > >    * bump its min_ratio to 0.1% to provide it some room to ramp up.
> > > >    */
> > > >   if (!wb_min_ratio && !numerator)
> > > >           wb_min_ratio = min(BDI_RATIO_SCALE / 10, wb_max_ratio / 2);
> > > >
> > > > That would seem like a bit more systematic way than the formula you propose
> > > > above...
> > >
> > > Thanks for the advice.
> > > Here's the explanation of the formula:
> > > 1. When writes are small and intermittent, wb_thresh can approach 0,
> > > not just 0, making the numerator value difficult to verify.
> >
> > I see, ok.
> >
> > > 2. The ramp-up margin, whether 0.1% or another value, needs
> > > consideration.
> > > I based this on the logic of wb_position_ratio in the non-strictlimit
> > > scenario: wb_thresh = max(wb_thresh, (limit - dtc->dirty) / 8); It seems
> > > provides more room and ensures ramping up within a controllable range.
> >
> > I see, thanks for explanation. So I was thinking how to make the code more
> > consistent instead of adding another special constant and workaround. What
> > I'd suggest is:
> >
> > 1) There's already code that's supposed to handle ramping up with
> > strictlimit in wb_update_dirty_ratelimit():
> >
> >         /*
> >          * For strictlimit case, calculations above were based on wb counters
> >          * and limits (starting from pos_ratio = wb_position_ratio() and up to
> >          * balanced_dirty_ratelimit = task_ratelimit * write_bw / dirty_rate).
> >          * Hence, to calculate "step" properly, we have to use wb_dirty as
> >          * "dirty" and wb_setpoint as "setpoint".
> >          *
> >          * We rampup dirty_ratelimit forcibly if wb_dirty is low because
> >          * it's possible that wb_thresh is close to zero due to inactivity
> >          * of backing device.
> >          */
> >         if (unlikely(wb->bdi->capabilities & BDI_CAP_STRICTLIMIT)) {
> >                 dirty = dtc->wb_dirty;
> >                 if (dtc->wb_dirty < 8)
> >                         setpoint = dtc->wb_dirty + 1;
> >                 else
> >                         setpoint = (dtc->wb_thresh + dtc->wb_bg_thresh) / 2;
> >         }
> >
> > Now I agree that increasing wb_thresh directly is more understandable and
> > transparent so I'd just drop this special case.
> 
> yes, I agree.
> 
> > 2) I'd just handle all the bumping of wb_thresh in a single place instead
> > of having it spread over multiple places. So __wb_calc_thresh() could have
> > a code like:
> >
> >         wb_thresh = (thresh * (100 * BDI_RATIO_SCALE - bdi_min_ratio)) / (100 * BDI_RATIO_SCALE)
> >         wb_thresh *= numerator;
> >         wb_thresh = div64_ul(wb_thresh, denominator);
> >
> >         wb_min_max_ratio(dtc->wb, &wb_min_ratio, &wb_max_ratio);
> >
> >         wb_thresh += (thresh * wb_min_ratio) / (100 * BDI_RATIO_SCALE);
> >       limit = hard_dirty_limit(dtc_dom(dtc), dtc->thresh);
> >         /*
> >          * It's very possible that wb_thresh is close to 0 not because the
> >          * device is slow, but that it has remained inactive for long time.
> >          * Honour such devices a reasonable good (hopefully IO efficient)
> >          * threshold, so that the occasional writes won't be blocked and active
> >          * writes can rampup the threshold quickly.
> >          */
> >       if (limit > dtc->dirty)
> >               wb_thresh = max(wb_thresh, (limit - dtc->dirty) / 8);
> >       if (wb_thresh > (thresh * wb_max_ratio) / (100 * BDI_RATIO_SCALE))
> >               wb_thresh = thresh * wb_max_ratio / (100 * BDI_RATIO_SCALE);
> >
> > and we can drop the bumping from wb_position_ratio(). This way we have
> > the wb_thresh bumping in a single logical place. Since we still limit
> > wb_thresh with max_ratio, untrusted bdis for which max_ratio should be
> > configured (otherwise they can grow the amount of dirty pages up to the
> > global threshold anyway) are still under control.
> >
> > If we really wanted, we could introduce a different bumping in case of
> > strictlimit, but at this point I don't think it is warranted so I'd leave
> > that as an option if someone comes with a situation where this bumping
> > proves to be too aggressive.
> 
> Thank you, this is very helpful. And I have 2 concerns:
> 
> 1.
> In the current non-strictlimit logic, wb_thresh is only bumped within
> wb_position_ratio() for calculating pos_ratio, and this bump isn’t
> restricted by max_ratio.  I’m unsure if moving this adjustment to
> __wb_calc_thresh() would effect existing behavior.  Would it be possible
> to keep the current logic for non-strictlimit case?

You are correct that current bumping is not affected by max_ratio and that
is actually a bug. wb_thresh should never exceed what is corresponding to
the configured max_ratio. Furthermore in practical configurations I don't
think the max_ratio limiting will actually make a big difference because
bumping should happen when wb_thresh is really low. So for consistency I
would apply it also to the non-strictlimit case.

> 2. Regarding the formula:
> wb_thresh = max(wb_thresh, (limit - dtc->dirty) / 8);
> 
> Consider a case: 
> With 100 fuse devices (with high max_ratio) experiencing high writeback
> delays, the pages being written back are accounted in NR_WRITEBACK_TEMP,
> not dtc->dirty.  As a result, the bumped wb_thresh may remain high. While
> individual devices are under control, the total could exceed
> expectations.

I agree but this is a potential problem with any kind of bumping based on
'limit - dtc->dirty'. It is just a matter of how many fuse devices you have
and how exactly you have max_ratio configured.

> Although lowering the max_ratio can avoid this issue, how about reducing
> the bumped wb_thresh?
> 
> The formula in my patch:
> wb_scale_thresh = (limit - dtc->dirty) / 100;
> The intention is to use the default fuse max_ratio (1%) as the multiplier.

So basically you propose to use the "/ 8" factor for the normal case and "/
100" factor for the strictlimit case. My position is that I would not
complicate the logic unless somebody comes with a real world setup where
the simpler logic is causing real problems. But if you feel strongly about
this, I'm fine with that option.

								Honza
Patch

diff --git a/mm/page-writeback.c b/mm/page-writeback.c
index 72a5d8836425..f21d856c408b 100644
--- a/mm/page-writeback.c
+++ b/mm/page-writeback.c
@@ -917,7 +917,9 @@  static unsigned long __wb_calc_thresh(struct dirty_throttle_control *dtc,
 				      unsigned long thresh)
 {
 	struct wb_domain *dom = dtc_dom(dtc);
+	struct bdi_writeback *wb = dtc->wb;
 	u64 wb_thresh;
+	u64 wb_max_thresh;
 	unsigned long numerator, denominator;
 	unsigned long wb_min_ratio, wb_max_ratio;
 
@@ -931,11 +933,28 @@  static unsigned long __wb_calc_thresh(struct dirty_throttle_control *dtc,
 	wb_thresh *= numerator;
 	wb_thresh = div64_ul(wb_thresh, denominator);
 
-	wb_min_max_ratio(dtc->wb, &wb_min_ratio, &wb_max_ratio);
+	wb_min_max_ratio(wb, &wb_min_ratio, &wb_max_ratio);
 
 	wb_thresh += (thresh * wb_min_ratio) / (100 * BDI_RATIO_SCALE);
-	if (wb_thresh > (thresh * wb_max_ratio) / (100 * BDI_RATIO_SCALE))
-		wb_thresh = thresh * wb_max_ratio / (100 * BDI_RATIO_SCALE);
+	wb_max_thresh = thresh * wb_max_ratio / (100 * BDI_RATIO_SCALE);
+	if (wb_thresh > wb_max_thresh)
+		wb_thresh = wb_max_thresh;
+
+	/*
+	 * With strictlimit flag, the wb_thresh is treated as
+	 * a hard limit in balance_dirty_pages() and wb_position_ratio().
+	 * It's possible that wb_thresh is close to zero, not because
+	 * the device is slow, but because it has been inactive.
+	 * To prevent occasional writes from being blocked, we raise wb_thresh.
+	 */
+	if (unlikely(wb->bdi->capabilities & BDI_CAP_STRICTLIMIT)) {
+		unsigned long limit = hard_dirty_limit(dom, dtc->thresh);
+		u64 wb_scale_thresh = 0;
+
+		if (limit > dtc->dirty)
+			wb_scale_thresh = (limit - dtc->dirty) / 100;
+		wb_thresh = max(wb_thresh, min(wb_scale_thresh, wb_max_thresh / 4));
+	}
 
 	return wb_thresh;
 }