
[2/2] mm: Avoid overflows in dirty throttling logic

Message ID 20240621144246.11148-2-jack@suse.cz (mailing list archive)
State New
Series mm: Avoid possible overflows in dirty throttling

Commit Message

Jan Kara June 21, 2024, 2:42 p.m. UTC
The dirty throttling logic is interspersed with assumptions that dirty
limits in PAGE_SIZE units fit into 32 bits (so that various
multiplications fit into 64 bits). If the limits end up larger, we
will hit overflows, possible divisions by zero, etc. Fix these problems by
never allowing such large dirty limits, as they have dubious practical
value anyway. For the dirty_bytes / dirty_background_bytes interfaces we can
just refuse to set such large limits. For dirty_ratio /
dirty_background_ratio it isn't that simple, as the dirty limit is computed
from the amount of available memory, which can change due to memory
hotplug etc. So when converting dirty limits from ratios to numbers of
pages, we just don't allow the result to exceed UINT_MAX.

Reported-by: Zach O'Keefe <zokeefe@google.com>
Signed-off-by: Jan Kara <jack@suse.cz>
---
 mm/page-writeback.c | 30 ++++++++++++++++++++++++++----
 1 file changed, 26 insertions(+), 4 deletions(-)

Comments

Andrew Morton June 21, 2024, 5:10 p.m. UTC | #1
On Fri, 21 Jun 2024 16:42:38 +0200 Jan Kara <jack@suse.cz> wrote:

> The dirty throttling logic is interspersed with assumptions that dirty
> limits in PAGE_SIZE units fit into 32 bits (so that various
> multiplications fit into 64 bits). If the limits end up larger, we
> will hit overflows, possible divisions by zero, etc. Fix these problems by
> never allowing such large dirty limits, as they have dubious practical
> value anyway. For the dirty_bytes / dirty_background_bytes interfaces we can
> just refuse to set such large limits. For dirty_ratio /
> dirty_background_ratio it isn't that simple, as the dirty limit is computed
> from the amount of available memory, which can change due to memory
> hotplug etc. So when converting dirty limits from ratios to numbers of
> pages, we just don't allow the result to exceed UINT_MAX.
> 

Shouldn't this also be cc:stable?
Zach O'Keefe June 21, 2024, 5:29 p.m. UTC | #2
On Fri, Jun 21, 2024 at 10:11 AM Andrew Morton
<akpm@linux-foundation.org> wrote:
>
> On Fri, 21 Jun 2024 16:42:38 +0200 Jan Kara <jack@suse.cz> wrote:
>
> > [...]
> >
>
> Shouldn't this also be cc:stable?

+1 for stable, following previous attempts to address this.

Either way, it seems this closes the door on the particular overflow
issue encountered previously. Thanks for the proper fix.

Reviewed-by: Zach O'Keefe <zokeefe@google.com>
Jan Kara June 24, 2024, 8:16 a.m. UTC | #3
On Fri 21-06-24 10:10:58, Andrew Morton wrote:
> On Fri, 21 Jun 2024 16:42:38 +0200 Jan Kara <jack@suse.cz> wrote:
> 
> > [...]
> 
> Shouldn't this also be cc:stable?

So this is a root-only triggerable problem and kind of a "don't do it when it
hurts" issue (who really wants to set dirty limits to > 16 TB?). So I'm not
sure CC stable is warranted, but I won't object.

								Honza

Patch

diff --git a/mm/page-writeback.c b/mm/page-writeback.c
index 2573e2d504af..8a1c92090129 100644
--- a/mm/page-writeback.c
+++ b/mm/page-writeback.c
@@ -415,13 +415,20 @@  static void domain_dirty_limits(struct dirty_throttle_control *dtc)
 	else
 		bg_thresh = (bg_ratio * available_memory) / PAGE_SIZE;
 
-	if (bg_thresh >= thresh)
-		bg_thresh = thresh / 2;
 	tsk = current;
 	if (rt_task(tsk)) {
 		bg_thresh += bg_thresh / 4 + global_wb_domain.dirty_limit / 32;
 		thresh += thresh / 4 + global_wb_domain.dirty_limit / 32;
 	}
+	/*
+	 * Dirty throttling logic assumes the limits in page units fit into
+	 * 32-bits. This gives 16TB dirty limits max which is hopefully enough.
+	 */
+	if (thresh > UINT_MAX)
+		thresh = UINT_MAX;
+	/* This makes sure bg_thresh is within 32-bits as well */
+	if (bg_thresh >= thresh)
+		bg_thresh = thresh / 2;
 	dtc->thresh = thresh;
 	dtc->bg_thresh = bg_thresh;
 
@@ -471,7 +478,11 @@  static unsigned long node_dirty_limit(struct pglist_data *pgdat)
 	if (rt_task(tsk))
 		dirty += dirty / 4;
 
-	return dirty;
+	/*
+	 * Dirty throttling logic assumes the limits in page units fit into
+	 * 32-bits. This gives 16TB dirty limits max which is hopefully enough.
+	 */
+	return min_t(unsigned long, dirty, UINT_MAX);
 }
 
 /**
@@ -508,10 +519,17 @@  static int dirty_background_bytes_handler(struct ctl_table *table, int write,
 		void *buffer, size_t *lenp, loff_t *ppos)
 {
 	int ret;
+	unsigned long old_bytes = dirty_background_bytes;
 
 	ret = proc_doulongvec_minmax(table, write, buffer, lenp, ppos);
-	if (ret == 0 && write)
+	if (ret == 0 && write) {
+		if (DIV_ROUND_UP(dirty_background_bytes, PAGE_SIZE) >
+								UINT_MAX) {
+			dirty_background_bytes = old_bytes;
+			return -ERANGE;
+		}
 		dirty_background_ratio = 0;
+	}
 	return ret;
 }
 
@@ -537,6 +555,10 @@  static int dirty_bytes_handler(struct ctl_table *table, int write,
 
 	ret = proc_doulongvec_minmax(table, write, buffer, lenp, ppos);
 	if (ret == 0 && write && vm_dirty_bytes != old_bytes) {
+		if (DIV_ROUND_UP(vm_dirty_bytes, PAGE_SIZE) > UINT_MAX) {
+			vm_dirty_bytes = old_bytes;
+			return -ERANGE;
+		}
 		writeback_set_ratelimit();
 		vm_dirty_ratio = 0;
 	}