diff mbox series

[v3,1/4] mm/ksm: add ksm advisor

Message ID 20231204234906.1237478-2-shr@devkernel.io (mailing list archive)
State New
Series mm/ksm: Add ksm advisor

Commit Message

Stefan Roesch Dec. 4, 2023, 11:49 p.m. UTC
This adds the ksm advisor. The ksm advisor automatically manages the
pages_to_scan setting to achieve a target scan time. The target scan
time defines how many seconds it should take to scan all the candidate
KSM pages. In other words, the pages_to_scan rate is adjusted by the
advisor to achieve the target scan time. The algorithm has a max and a
min value to:
- guarantee responsiveness to changes
- limit CPU resource consumption

The respective parameters are:
- ksm_advisor_target_scan_time (how many seconds a scan should take)
- ksm_advisor_min_cpu (minimum value for cpu percent usage)
- ksm_advisor_max_cpu (maximum value for cpu percent usage)
- ksm_advisor_min_pages (minimum value for pages_to_scan per batch)
- ksm_advisor_max_pages (maximum value for pages_to_scan per batch)

The algorithm calculates the change value based on the target scan time
and the previous scan time. To avoid perturbations, an exponentially
weighted moving average is applied.

The advisor is managed by two main parameters: the target scan time and
the maximum CPU time for the ksmd background thread. These parameters
determine how aggressively ksmd scans.

In addition, there are min and max values for the pages_to_scan parameter
to make sure that its initial and maximum values are not set too low or
too high. This ensures that the advisor can react to changes quickly
enough.

The default values are:
- target scan time: 200 secs
- max cpu: 70%
- min pages: 500
- max pages: 30000

By default the advisor is disabled. Currently there are two advisors:
none and scan-time.

Tests with various workloads have shown considerable CPU savings. Most
of the workloads I have investigated have more candidate pages during
startup; once the workload is stable in terms of memory, the number of
candidate pages is reduced. Without the advisor, pages_to_scan needs to
be sized for the maximum number of candidate pages, so having this
advisor definitely helps in reducing CPU consumption.

For the Instagram workload, the advisor achieves a 25% CPU reduction.
Once the memory is stable, the pages_to_scan parameter is reduced to
about 40% of its maximum value.

Signed-off-by: Stefan Roesch <shr@devkernel.io>
---
 mm/ksm.c | 175 ++++++++++++++++++++++++++++++++++++++++++++++++++++++-
 1 file changed, 174 insertions(+), 1 deletion(-)

Comments

kernel test robot Dec. 6, 2023, 3:17 p.m. UTC | #1
Hi Stefan,

kernel test robot noticed the following build errors:

[auto build test ERROR on 12d04a7bf0da67321229d2bc8b1a7074d65415a9]

url:    https://github.com/intel-lab-lkp/linux/commits/Stefan-Roesch/mm-ksm-add-ksm-advisor/20231205-075118
base:   12d04a7bf0da67321229d2bc8b1a7074d65415a9
patch link:    https://lore.kernel.org/r/20231204234906.1237478-2-shr%40devkernel.io
patch subject: [PATCH v3 1/4] mm/ksm: add ksm advisor
config: i386-randconfig-011-20231206 (https://download.01.org/0day-ci/archive/20231206/202312062353.lBYSA7Eb-lkp@intel.com/config)
compiler: gcc-12 (Debian 12.2.0-14) 12.2.0
reproduce (this is a W=1 build): (https://download.01.org/0day-ci/archive/20231206/202312062353.lBYSA7Eb-lkp@intel.com/reproduce)

If you fix the issue in a separate patch/commit (i.e. not just a new version of
the same patch/commit), kindly add following tags
| Reported-by: kernel test robot <lkp@intel.com>
| Closes: https://lore.kernel.org/oe-kbuild-all/202312062353.lBYSA7Eb-lkp@intel.com/

All errors (new ones prefixed by >>):

   ld: mm/ksm.o: in function `scan_time_advisor':
>> mm/ksm.c:423: undefined reference to `__divdi3'
>> ld: mm/ksm.c:435: undefined reference to `__divdi3'
   ld: mm/ksm.c:428: undefined reference to `__divdi3'


vim +423 mm/ksm.c

   383	
   384	/*
   385	 * The scan time advisor is based on the current scan rate and the target
   386	 * scan rate.
   387	 *
   388	 *      new_pages_to_scan = pages_to_scan * (scan_time / target_scan_time)
   389	 *
   390	 * To avoid pertubations it calculates a change factor of previous changes.
   391	 * A new change factor is calculated for each iteration and it uses an
   392	 * exponentially weighted moving average. The new pages_to_scan value is
   393	 * multiplied with that change factor:
   394	 *
   395	 *      new_pages_to_scan *= change facor
   396	 *
   397	 * In addition the new pages_to_scan value is capped by the max and min
   398	 * limits.
   399	 */
   400	static void scan_time_advisor(void)
   401	{
   402		unsigned int cpu_percent;
   403		unsigned long cpu_time;
   404		unsigned long cpu_time_diff;
   405		unsigned long cpu_time_diff_ms;
   406		unsigned long pages;
   407		unsigned long per_page_cost;
   408		unsigned long factor;
   409		unsigned long change;
   410		unsigned long last_scan_time;
   411		s64 scan_time;
   412	
   413		/* Convert scan time to seconds */
   414		scan_time = advisor_stop_scan();
   415		scan_time = div_s64(scan_time, MSEC_PER_SEC);
   416		scan_time = scan_time ? scan_time : 1;
   417	
   418		/* Calculate CPU consumption of ksmd background thread */
   419		cpu_time = task_sched_runtime(current);
   420		cpu_time_diff = cpu_time - advisor_ctx.cpu_time;
   421		cpu_time_diff_ms = cpu_time_diff / 1000 / 1000;
   422	
 > 423		cpu_percent = (cpu_time_diff_ms * 100) / (scan_time * 1000);
   424		cpu_percent = cpu_percent ? cpu_percent : 1;
   425		last_scan_time = prev_scan_time(&advisor_ctx, scan_time);
   426	
   427		/* Calculate scan time as percentage of target scan time */
   428		factor = ksm_advisor_target_scan_time * 100 / scan_time;
   429		factor = factor ? factor : 1;
   430	
   431		/*
   432		 * Calculate scan time as percentage of last scan time and use
   433		 * exponentially weighted average to smooth it
   434		 */
 > 435		change = scan_time * 100 / last_scan_time;
   436		change = change ? change : 1;
   437		change = ewma(advisor_ctx.change, change);
   438	
   439		/* Calculate new scan rate based on target scan rate. */
   440		pages = ksm_thread_pages_to_scan * 100 / factor;
   441		/* Update pages_to_scan by weighted change percentage. */
   442		pages = pages * change / 100;
   443	
   444		/* Cap new pages_to_scan value */
   445		per_page_cost = ksm_thread_pages_to_scan / cpu_percent;
   446		per_page_cost = per_page_cost ? per_page_cost : 1;
   447	
   448		pages = min(pages, per_page_cost * ksm_advisor_max_cpu);
   449		pages = max(pages, per_page_cost * ksm_advisor_min_cpu);
   450		pages = min(pages, ksm_advisor_max_pages);
   451	
   452		/* Update advisor context */
   453		advisor_ctx.change = change;
   454		advisor_ctx.scan_time = scan_time;
   455		advisor_ctx.cpu_time = cpu_time;
   456	
   457		ksm_thread_pages_to_scan = pages;
   458	}
   459
David Hildenbrand Dec. 12, 2023, 1:44 p.m. UTC | #2
[...]

> +
> +/**
> + * struct advisor_ctx - metadata for KSM advisor
> + * @start_scan: start time of the current scan
> + * @scan_time: scan time of previous scan
> + * @change: change in percent to pages_to_scan parameter
> + * @cpu_time: cpu time consumed by the ksmd thread in the previous scan
> + */
> +struct advisor_ctx {
> +	ktime_t start_scan;
> +	unsigned long scan_time;
> +	unsigned long change;
> +	unsigned long long cpu_time;
> +};
> +static struct advisor_ctx advisor_ctx;
> +
> +/* Define different advisor's */
> +enum ksm_advisor_type {
> +	KSM_ADVISOR_NONE,
> +	KSM_ADVISOR_SCAN_TIME,
> +};
> +static enum ksm_advisor_type ksm_advisor;
> +
> +static void init_advisor(void)
> +{
> +	advisor_ctx = (const struct advisor_ctx){ 0 };
> +}

Again, you can drop this completely. The static values are already 
initialized to 0.

Or is there any reason to initialize to 0 explicitly?

> +
> +static void set_advisor_defaults(void)
> +{
> +	if (ksm_advisor == KSM_ADVISOR_NONE)
> +		ksm_thread_pages_to_scan = DEFAULT_PAGES_TO_SCAN;
> +	else if (ksm_advisor == KSM_ADVISOR_SCAN_TIME)
> +		ksm_thread_pages_to_scan = ksm_advisor_min_pages;
> +}

That function is unused?

> +
> +static inline void advisor_start_scan(void)
> +{
> +	if (ksm_advisor == KSM_ADVISOR_SCAN_TIME)
> +		advisor_ctx.start_scan = ktime_get();
> +}
> +
> +static inline s64 advisor_stop_scan(void)
> +{
> +	return ktime_ms_delta(ktime_get(), advisor_ctx.start_scan);
> +}

Just inline that into the caller. Then rename run_advisor() into 
advisor_stop_scan(). So in scan_get_next_rmap_item() you have paired 
start+stop hooks.

> +
> +/*
> + * Use previous scan time if available, otherwise use current scan time as an
> + * approximation for the previous scan time.
> + */
> +static inline unsigned long prev_scan_time(struct advisor_ctx *ctx,
> +					   unsigned long scan_time)
> +{
> +	return ctx->scan_time ? ctx->scan_time : scan_time;
> +}
> +
> +/* Calculate exponential weighted moving average */
> +static unsigned long ewma(unsigned long prev, unsigned long curr)
> +{
> +	return ((100 - EWMA_WEIGHT) * prev + EWMA_WEIGHT * curr) / 100;
> +}
> +
> +/*
> + * The scan time advisor is based on the current scan rate and the target
> + * scan rate.
> + *
> + *      new_pages_to_scan = pages_to_scan * (scan_time / target_scan_time)
> + *
> + * To avoid pertubations it calculates a change factor of previous changes.

s/pertubations/perturbations/

Do you also want to describe how min/max CPU comes into play?

> + * A new change factor is calculated for each iteration and it uses an
> + * exponentially weighted moving average. The new pages_to_scan value is
> + * multiplied with that change factor:
> + *
> + *      new_pages_to_scan *= change facor
> + *
> + * In addition the new pages_to_scan value is capped by the max and min
> + * limits.
> + */



With that, LGTM

Acked-by: David Hildenbrand <david@redhat.com>
Stefan Roesch Dec. 12, 2023, 6:32 p.m. UTC | #3
On Tue, Dec 12, 2023, at 5:44 AM, David Hildenbrand wrote:
> [...]
>
>> +
>> +/**
>> + * struct advisor_ctx - metadata for KSM advisor
>> + * @start_scan: start time of the current scan
>> + * @scan_time: scan time of previous scan
>> + * @change: change in percent to pages_to_scan parameter
>> + * @cpu_time: cpu time consumed by the ksmd thread in the previous scan
>> + */
>> +struct advisor_ctx {
>> +	ktime_t start_scan;
>> +	unsigned long scan_time;
>> +	unsigned long change;
>> +	unsigned long long cpu_time;
>> +};
>> +static struct advisor_ctx advisor_ctx;
>> +
>> +/* Define different advisor's */
>> +enum ksm_advisor_type {
>> +	KSM_ADVISOR_NONE,
>> +	KSM_ADVISOR_SCAN_TIME,
>> +};
>> +static enum ksm_advisor_type ksm_advisor;
>> +
>> +static void init_advisor(void)
>> +{
>> +	advisor_ctx = (const struct advisor_ctx){ 0 };
>> +}
>
> Again, you can drop this completely. The static values are already 
> initialized to 0.
>

It is needed for patch 2, I folded it into set_advisor_defaults

> Or is there any reason to initialize to 0 explicitly?
>
>> +
>> +static void set_advisor_defaults(void)
>> +{
>> +	if (ksm_advisor == KSM_ADVISOR_NONE)
>> +		ksm_thread_pages_to_scan = DEFAULT_PAGES_TO_SCAN;
>> +	else if (ksm_advisor == KSM_ADVISOR_SCAN_TIME)
>> +		ksm_thread_pages_to_scan = ksm_advisor_min_pages;
>> +}
>
> That function is unused?
>

I think you already saw it, it is used in patch 2, moving the function to patch 2.

>> +
>> +static inline void advisor_start_scan(void)
>> +{
>> +	if (ksm_advisor == KSM_ADVISOR_SCAN_TIME)
>> +		advisor_ctx.start_scan = ktime_get();
>> +}
>> +
>> +static inline s64 advisor_stop_scan(void)
>> +{
>> +	return ktime_ms_delta(ktime_get(), advisor_ctx.start_scan);
>> +}
>
> Just inline that into the caller. Then rename run_advisor() into 
> advisor_stop_scan(). So in scan_get_next_rmap_item)( you have paired 
> start+stop hooks.
>

The next version has this change.

>> +
>> +/*
>> + * Use previous scan time if available, otherwise use current scan time as an
>> + * approximation for the previous scan time.
>> + */
>> +static inline unsigned long prev_scan_time(struct advisor_ctx *ctx,
>> +					   unsigned long scan_time)
>> +{
>> +	return ctx->scan_time ? ctx->scan_time : scan_time;
>> +}
>> +
>> +/* Calculate exponential weighted moving average */
>> +static unsigned long ewma(unsigned long prev, unsigned long curr)
>> +{
>> +	return ((100 - EWMA_WEIGHT) * prev + EWMA_WEIGHT * curr) / 100;
>> +}
>> +
>> +/*
>> + * The scan time advisor is based on the current scan rate and the target
>> + * scan rate.
>> + *
>> + *      new_pages_to_scan = pages_to_scan * (scan_time / target_scan_time)
>> + *
>> + * To avoid pertubations it calculates a change factor of previous changes.
>
> s/pertubations/perturbations/

Fixed.

>
> Do you also want to describe how min/max CPU comes into play?
>

I added additional documentation for it in the next version of the patch.

>> + * A new change factor is calculated for each iteration and it uses an
>> + * exponentially weighted moving average. The new pages_to_scan value is
>> + * multiplied with that change factor:
>> + *
>> + *      new_pages_to_scan *= change facor
>> + *
>> + * In addition the new pages_to_scan value is capped by the max and min
>> + * limits.
>> + */
>
>
>
> With that, LGTM
>
> Acked-by: David Hildenbrand <david@redhat.com>
>
> -- 
> Cheers,
>
> David / dhildenb

Patch

diff --git a/mm/ksm.c b/mm/ksm.c
index 7efcc68ccc6ea..b27010fa2e946 100644
--- a/mm/ksm.c
+++ b/mm/ksm.c
@@ -21,6 +21,7 @@ 
 #include <linux/sched.h>
 #include <linux/sched/mm.h>
 #include <linux/sched/coredump.h>
+#include <linux/sched/cputime.h>
 #include <linux/rwsem.h>
 #include <linux/pagemap.h>
 #include <linux/rmap.h>
@@ -248,6 +249,9 @@  static struct kmem_cache *rmap_item_cache;
 static struct kmem_cache *stable_node_cache;
 static struct kmem_cache *mm_slot_cache;
 
+/* Default number of pages to scan per batch */
+#define DEFAULT_PAGES_TO_SCAN 100
+
 /* The number of pages scanned */
 static unsigned long ksm_pages_scanned;
 
@@ -276,7 +280,7 @@  static unsigned int ksm_stable_node_chains_prune_millisecs = 2000;
 static int ksm_max_page_sharing = 256;
 
 /* Number of pages ksmd should scan in one batch */
-static unsigned int ksm_thread_pages_to_scan = 100;
+static unsigned int ksm_thread_pages_to_scan = DEFAULT_PAGES_TO_SCAN;
 
 /* Milliseconds ksmd should sleep between batches */
 static unsigned int ksm_thread_sleep_millisecs = 20;
@@ -297,6 +301,168 @@  unsigned long ksm_zero_pages;
 /* The number of pages that have been skipped due to "smart scanning" */
 static unsigned long ksm_pages_skipped;
 
+/* Don't scan more than max pages per batch. */
+static unsigned long ksm_advisor_max_pages = 30000;
+
+/* At least scan this many pages per batch. */
+static unsigned long ksm_advisor_min_pages = 500;
+
+/* Min CPU for scanning pages per scan */
+static unsigned int ksm_advisor_min_cpu =  10;
+
+/* Max CPU for scanning pages per scan */
+static unsigned int ksm_advisor_max_cpu =  70;
+
+/* Target scan time in seconds to analyze all KSM candidate pages. */
+static unsigned long ksm_advisor_target_scan_time = 200;
+
+/* Exponentially weighted moving average. */
+#define EWMA_WEIGHT 30
+
+/**
+ * struct advisor_ctx - metadata for KSM advisor
+ * @start_scan: start time of the current scan
+ * @scan_time: scan time of previous scan
+ * @change: change in percent to pages_to_scan parameter
+ * @cpu_time: cpu time consumed by the ksmd thread in the previous scan
+ */
+struct advisor_ctx {
+	ktime_t start_scan;
+	unsigned long scan_time;
+	unsigned long change;
+	unsigned long long cpu_time;
+};
+static struct advisor_ctx advisor_ctx;
+
+/* Define different advisor's */
+enum ksm_advisor_type {
+	KSM_ADVISOR_NONE,
+	KSM_ADVISOR_SCAN_TIME,
+};
+static enum ksm_advisor_type ksm_advisor;
+
+static void init_advisor(void)
+{
+	advisor_ctx = (const struct advisor_ctx){ 0 };
+}
+
+static void set_advisor_defaults(void)
+{
+	if (ksm_advisor == KSM_ADVISOR_NONE)
+		ksm_thread_pages_to_scan = DEFAULT_PAGES_TO_SCAN;
+	else if (ksm_advisor == KSM_ADVISOR_SCAN_TIME)
+		ksm_thread_pages_to_scan = ksm_advisor_min_pages;
+}
+
+static inline void advisor_start_scan(void)
+{
+	if (ksm_advisor == KSM_ADVISOR_SCAN_TIME)
+		advisor_ctx.start_scan = ktime_get();
+}
+
+static inline s64 advisor_stop_scan(void)
+{
+	return ktime_ms_delta(ktime_get(), advisor_ctx.start_scan);
+}
+
+/*
+ * Use previous scan time if available, otherwise use current scan time as an
+ * approximation for the previous scan time.
+ */
+static inline unsigned long prev_scan_time(struct advisor_ctx *ctx,
+					   unsigned long scan_time)
+{
+	return ctx->scan_time ? ctx->scan_time : scan_time;
+}
+
+/* Calculate exponential weighted moving average */
+static unsigned long ewma(unsigned long prev, unsigned long curr)
+{
+	return ((100 - EWMA_WEIGHT) * prev + EWMA_WEIGHT * curr) / 100;
+}
+
+/*
+ * The scan time advisor is based on the current scan rate and the target
+ * scan rate.
+ *
+ *      new_pages_to_scan = pages_to_scan * (scan_time / target_scan_time)
+ *
+ * To avoid pertubations it calculates a change factor of previous changes.
+ * A new change factor is calculated for each iteration and it uses an
+ * exponentially weighted moving average. The new pages_to_scan value is
+ * multiplied with that change factor:
+ *
+ *      new_pages_to_scan *= change facor
+ *
+ * In addition the new pages_to_scan value is capped by the max and min
+ * limits.
+ */
+static void scan_time_advisor(void)
+{
+	unsigned int cpu_percent;
+	unsigned long cpu_time;
+	unsigned long cpu_time_diff;
+	unsigned long cpu_time_diff_ms;
+	unsigned long pages;
+	unsigned long per_page_cost;
+	unsigned long factor;
+	unsigned long change;
+	unsigned long last_scan_time;
+	s64 scan_time;
+
+	/* Convert scan time to seconds */
+	scan_time = advisor_stop_scan();
+	scan_time = div_s64(scan_time, MSEC_PER_SEC);
+	scan_time = scan_time ? scan_time : 1;
+
+	/* Calculate CPU consumption of ksmd background thread */
+	cpu_time = task_sched_runtime(current);
+	cpu_time_diff = cpu_time - advisor_ctx.cpu_time;
+	cpu_time_diff_ms = cpu_time_diff / 1000 / 1000;
+
+	cpu_percent = (cpu_time_diff_ms * 100) / (scan_time * 1000);
+	cpu_percent = cpu_percent ? cpu_percent : 1;
+	last_scan_time = prev_scan_time(&advisor_ctx, scan_time);
+
+	/* Calculate scan time as percentage of target scan time */
+	factor = ksm_advisor_target_scan_time * 100 / scan_time;
+	factor = factor ? factor : 1;
+
+	/*
+	 * Calculate scan time as percentage of last scan time and use
+	 * exponentially weighted average to smooth it
+	 */
+	change = scan_time * 100 / last_scan_time;
+	change = change ? change : 1;
+	change = ewma(advisor_ctx.change, change);
+
+	/* Calculate new scan rate based on target scan rate. */
+	pages = ksm_thread_pages_to_scan * 100 / factor;
+	/* Update pages_to_scan by weighted change percentage. */
+	pages = pages * change / 100;
+
+	/* Cap new pages_to_scan value */
+	per_page_cost = ksm_thread_pages_to_scan / cpu_percent;
+	per_page_cost = per_page_cost ? per_page_cost : 1;
+
+	pages = min(pages, per_page_cost * ksm_advisor_max_cpu);
+	pages = max(pages, per_page_cost * ksm_advisor_min_cpu);
+	pages = min(pages, ksm_advisor_max_pages);
+
+	/* Update advisor context */
+	advisor_ctx.change = change;
+	advisor_ctx.scan_time = scan_time;
+	advisor_ctx.cpu_time = cpu_time;
+
+	ksm_thread_pages_to_scan = pages;
+}
+
+static void run_advisor(void)
+{
+	if (ksm_advisor == KSM_ADVISOR_SCAN_TIME)
+		scan_time_advisor();
+}
+
 #ifdef CONFIG_NUMA
 /* Zeroed when merging across nodes is not allowed */
 static unsigned int ksm_merge_across_nodes = 1;
@@ -2401,6 +2567,7 @@  static struct ksm_rmap_item *scan_get_next_rmap_item(struct page **page)
 
 	mm_slot = ksm_scan.mm_slot;
 	if (mm_slot == &ksm_mm_head) {
+		advisor_start_scan();
 		trace_ksm_start_scan(ksm_scan.seqnr, ksm_rmap_items);
 
 		/*
@@ -2558,6 +2725,8 @@  static struct ksm_rmap_item *scan_get_next_rmap_item(struct page **page)
 	if (mm_slot != &ksm_mm_head)
 		goto next_mm;
 
+	run_advisor();
+
 	trace_ksm_stop_scan(ksm_scan.seqnr, ksm_rmap_items);
 	ksm_scan.seqnr++;
 	return NULL;
@@ -3244,6 +3413,9 @@  static ssize_t pages_to_scan_store(struct kobject *kobj,
 	unsigned int nr_pages;
 	int err;
 
+	if (ksm_advisor != KSM_ADVISOR_NONE)
+		return -EINVAL;
+
 	err = kstrtouint(buf, 10, &nr_pages);
 	if (err)
 		return -EINVAL;
@@ -3603,6 +3775,7 @@  static int __init ksm_init(void)
 	zero_checksum = calc_checksum(ZERO_PAGE(0));
 	/* Default to false for backwards compatibility */
 	ksm_use_zero_pages = false;
+	init_advisor();
 
 	err = ksm_slab_init();
 	if (err)