[V5] mm: compaction: support triggering of proactive compaction by user

Message ID 1627653207-12317-1-git-send-email-charante@codeaurora.org (mailing list archive)
State New
Series [V5] mm: compaction: support triggering of proactive compaction by user

Commit Message

Charan Teja Kalla July 30, 2021, 1:53 p.m. UTC
Proactive compaction[1] is triggered every 500 msec and runs compaction
on the node for COMPACTION_HPAGE_ORDER (usually order-9) pages, based
on the value set in sysctl.compaction_proactiveness. Triggering
compaction every 500 msec in search of COMPACTION_HPAGE_ORDER pages is
not needed for all applications, especially on embedded systems which
may have only a few MBs of RAM: enabling proactive compaction in this
state will end up with it running almost continuously on such systems.

On the other hand, proactive compaction can still be very useful for
getting a set of higher-order pages in a controllable manner
(controlled via sysctl.compaction_proactiveness). So, on systems where
keeping proactive compaction always enabled may not be required, the
user can trigger it from user space by writing to its sysctl
interface. As an example, say an app launcher decides to launch a
memory-heavy application that launches faster if it gets more
higher-order pages; the launcher can then prepare the system in
advance by triggering proactive compaction from userspace.

This triggering of proactive compaction is done on a write to
sysctl.compaction_proactiveness by the user.
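
As an illustration only (not part of this patch), a minimal userspace
sketch of such a trigger; the file path comes from the sysctl above,
while the value 20 and the error handling are arbitrary choices:

/* Hypothetical example: trigger proactive compaction from user space
 * by writing a non-zero value to the compaction_proactiveness sysctl.
 * Requires root and a kernel built with CONFIG_COMPACTION.
 */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
	const char val[] = "20";	/* any value in [1, 100] triggers */
	int fd = open("/proc/sys/vm/compaction_proactiveness", O_WRONLY);

	if (fd < 0) {
		perror("open");
		return 1;
	}
	if (write(fd, val, sizeof(val) - 1) < 0)
		perror("write");
	close(fd);
	return 0;
}

The same effect can be had from a shell with
"echo 20 > /proc/sys/vm/compaction_proactiveness".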

[1]https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit?id=facdaa917c4d5a376d09d25865f5a863f906234a

Signed-off-by: Charan Teja Reddy <charante@codeaurora.org>
---
 Changes in V5:
 	-- Avoid unnecessary wakeup of proactive compaction when it is disabled.
	-- No changes in the logic of triggering the proactive compaction.

 Changes in V4:
	-- Changed the code as the 'proactive_defer' counter is removed.
	-- No changes in the logic of triggering the proactive compaction.
	-- https://lore.kernel.org/patchwork/patch/1448777/

 Changes in V3:
        -- Addressed review comments from Vlastimil and others.
        -- https://lore.kernel.org/patchwork/patch/1438211/

 Changes in V2:
	-- Removed the /proc/../proactive_compact_memory interface for triggering proactive compaction.
        -- The intention is the same: add a way for the user to trigger proactive compaction.
        -- https://lore.kernel.org/patchwork/patch/1431283/

 Changes in V1:
	-- Created the new /proc/sys/vm/proactive_compact_memory
	   interface to trigger proactive compaction from user space.
        -- https://lore.kernel.org/lkml/1619098678-8501-1-git-send-email-charante@codeaurora.org/

 Documentation/admin-guide/sysctl/vm.rst |  3 ++-
 include/linux/compaction.h              |  2 ++
 include/linux/mmzone.h                  |  1 +
 kernel/sysctl.c                         |  2 +-
 mm/compaction.c                         | 38 +++++++++++++++++++++++++++++++--
 5 files changed, 42 insertions(+), 4 deletions(-)

Comments

Vlastimil Babka July 30, 2021, 2:06 p.m. UTC | #1
On 7/30/21 3:53 PM, Charan Teja Reddy wrote:
> Proactive compaction[1] is triggered every 500 msec and runs compaction
> on the node for COMPACTION_HPAGE_ORDER (usually order-9) pages, based
> on the value set in sysctl.compaction_proactiveness. Triggering
> compaction every 500 msec in search of COMPACTION_HPAGE_ORDER pages is
> not needed for all applications, especially on embedded systems which
> may have only a few MBs of RAM: enabling proactive compaction in this
> state will end up with it running almost continuously on such systems.
> 
> On the other hand, proactive compaction can still be very useful for
> getting a set of higher-order pages in a controllable manner
> (controlled via sysctl.compaction_proactiveness). So, on systems where
> keeping proactive compaction always enabled may not be required, the
> user can trigger it from user space by writing to its sysctl
> interface. As an example, say an app launcher decides to launch a
> memory-heavy application that launches faster if it gets more
> higher-order pages; the launcher can then prepare the system in
> advance by triggering proactive compaction from userspace.
> 
> This triggering of proactive compaction is done on a write to
> sysctl.compaction_proactiveness by the user.
> 
> [1]https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit?id=facdaa917c4d5a376d09d25865f5a863f906234a
> 
> Signed-off-by: Charan Teja Reddy <charante@codeaurora.org>

Acked-by: Vlastimil Babka <vbabka@suse.cz>

> @@ -2895,9 +2920,16 @@ static int kcompactd(void *p)
>  	while (!kthread_should_stop()) {
>  		unsigned long pflags;
>  
> +		/*
> +		 * Avoid the unnecessary wakeup for proactive compaction
> +		 * when it is disabled.
> +		 */
> +		if (!sysctl_compaction_proactiveness)
> +			timeout = MAX_SCHEDULE_TIMEOUT;

Does this part actually logically belong more to your previous patch that
optimized the deferred timeouts?

>  		trace_mm_compaction_kcompactd_sleep(pgdat->node_id);
>  		if (wait_event_freezable_timeout(pgdat->kcompactd_wait,
> -			kcompactd_work_requested(pgdat), timeout)) {
> +			kcompactd_work_requested(pgdat), timeout) &&
> +			!pgdat->proactive_compact_trigger) {
>  
>  			psi_memstall_enter(&pflags);
>  			kcompactd_do_work(pgdat);
> @@ -2932,6 +2964,8 @@ static int kcompactd(void *p)
>  				timeout =
>  				   default_timeout << COMPACT_MAX_DEFER_SHIFT;
>  		}
> +		if (unlikely(pgdat->proactive_compact_trigger))
> +			pgdat->proactive_compact_trigger = false;
>  	}
>  
>  	return 0;
>
Charan Teja Kalla July 30, 2021, 2:46 p.m. UTC | #2
Thanks Vlastimil!!

On 7/30/2021 7:36 PM, Vlastimil Babka wrote:
>> Proactive compaction[1] is triggered every 500 msec and runs compaction
>> on the node for COMPACTION_HPAGE_ORDER (usually order-9) pages, based
>> on the value set in sysctl.compaction_proactiveness. Triggering
>> compaction every 500 msec in search of COMPACTION_HPAGE_ORDER pages is
>> not needed for all applications, especially on embedded systems which
>> may have only a few MBs of RAM: enabling proactive compaction in this
>> state will end up with it running almost continuously on such systems.
>>
>> On the other hand, proactive compaction can still be very useful for
>> getting a set of higher-order pages in a controllable manner
>> (controlled via sysctl.compaction_proactiveness). So, on systems where
>> keeping proactive compaction always enabled may not be required, the
>> user can trigger it from user space by writing to its sysctl
>> interface. As an example, say an app launcher decides to launch a
>> memory-heavy application that launches faster if it gets more
>> higher-order pages; the launcher can then prepare the system in
>> advance by triggering proactive compaction from userspace.
>>
>> This triggering of proactive compaction is done on a write to
>> sysctl.compaction_proactiveness by the user.
>>
>> [1]https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit?id=facdaa917c4d5a376d09d25865f5a863f906234a
>>
>> Signed-off-by: Charan Teja Reddy <charante@codeaurora.org>
> Acked-by: Vlastimil Babka <vbabka@suse.cz>

Thanks for the tag here.

> 
>> @@ -2895,9 +2920,16 @@ static int kcompactd(void *p)
>>  	while (!kthread_should_stop()) {
>>  		unsigned long pflags;
>>  
>> +		/*
>> +		 * Avoid the unnecessary wakeup for proactive compaction
>> +		 * when it is disabled.
>> +		 */
>> +		if (!sysctl_compaction_proactiveness)
>> +			timeout = MAX_SCHEDULE_TIMEOUT;
> Does this part actually logically belong more to your previous patch that
> optimized the deferred timeouts?

IMO, it won't fit there. The reason is that when the user writes
sysctl_compaction_proactiveness = 0, kcompactd goes to sleep with
MAX_SCHEDULE_TIMEOUT. Say the user then writes a non-zero value to
sysctl_compaction_proactiveness; there is no condition to wake it up
for proactive compaction, meaning it will still be asleep with
MAX_SCHEDULE_TIMEOUT.

Thus this logic is put in this patch, where proactive compaction work
is scheduled immediately when the proactiveness value switches from
zero to a non-zero one.
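
To make the handshake concrete, here is a condensed sketch (simplified
from the patch, not the literal kernel code) of the two sides:

/* sysctl write path (condensed): a 0 -> non-zero switch must wake
 * kcompactd explicitly, since it may be sleeping without a timeout.
 */
if (write && sysctl_compaction_proactiveness) {
	pgdat->proactive_compact_trigger = true;	/* for each online node */
	wake_up_interruptible(&pgdat->kcompactd_wait);
}

/* kcompactd loop (condensed): with proactiveness == 0 it sleeps with
 * MAX_SCHEDULE_TIMEOUT, so only the wakeup above can resume it; the
 * trigger flag then routes the wakeup to proactive compaction.
 */
while (!kthread_should_stop()) {
	if (!sysctl_compaction_proactiveness)
		timeout = MAX_SCHEDULE_TIMEOUT;
	wait_event_freezable_timeout(pgdat->kcompactd_wait,
			kcompactd_work_requested(pgdat), timeout);
	/* ... run proactive compaction if woken, or on timeout ... */
	pgdat->proactive_compact_trigger = false;
}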

>
Vlastimil Babka July 30, 2021, 2:47 p.m. UTC | #3
On 7/30/21 4:46 PM, Charan Teja Kalla wrote:
> Thanks Vlastimil!!
> 
> On 7/30/2021 7:36 PM, Vlastimil Babka wrote:
>>> Proactive compaction[1] is triggered every 500 msec and runs compaction
>>> on the node for COMPACTION_HPAGE_ORDER (usually order-9) pages, based
>>> on the value set in sysctl.compaction_proactiveness. Triggering
>>> compaction every 500 msec in search of COMPACTION_HPAGE_ORDER pages is
>>> not needed for all applications, especially on embedded systems which
>>> may have only a few MBs of RAM: enabling proactive compaction in this
>>> state will end up with it running almost continuously on such systems.
>>>
>>> On the other hand, proactive compaction can still be very useful for
>>> getting a set of higher-order pages in a controllable manner
>>> (controlled via sysctl.compaction_proactiveness). So, on systems where
>>> keeping proactive compaction always enabled may not be required, the
>>> user can trigger it from user space by writing to its sysctl
>>> interface. As an example, say an app launcher decides to launch a
>>> memory-heavy application that launches faster if it gets more
>>> higher-order pages; the launcher can then prepare the system in
>>> advance by triggering proactive compaction from userspace.
>>>
>>> This triggering of proactive compaction is done on a write to
>>> sysctl.compaction_proactiveness by the user.
>>>
>>> [1]https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit?id=facdaa917c4d5a376d09d25865f5a863f906234a
>>>
>>> Signed-off-by: Charan Teja Reddy <charante@codeaurora.org>
>> Acked-by: Vlastimil Babka <vbabka@suse.cz>
> 
> Thanks for the tag here.

Np.

>> 
>>> @@ -2895,9 +2920,16 @@ static int kcompactd(void *p)
>>>  	while (!kthread_should_stop()) {
>>>  		unsigned long pflags;
>>>  
>>> +		/*
>>> +		 * Avoid the unnecessary wakeup for proactive compaction
>>> +		 * when it is disabled.
>>> +		 */
>>> +		if (!sysctl_compaction_proactiveness)
>>> +			timeout = MAX_SCHEDULE_TIMEOUT;
>> Does this part actually logically belong more to your previous patch that
>> optimized the deferred timeouts?
> 
> IMO, it won't fit there. The reason is that when the user writes
> sysctl_compaction_proactiveness = 0, kcompactd goes to sleep with
> MAX_SCHEDULE_TIMEOUT. Say the user then writes a non-zero value to
> sysctl_compaction_proactiveness; there is no condition to wake it up
> for proactive compaction, meaning it will still be asleep with
> MAX_SCHEDULE_TIMEOUT.

Good point!

> Thus this logic is put in this patch, where proactive compaction work
> is scheduled immediately when the proactiveness value switches from
> zero to a non-zero one.

Agreed. Thanks!

>> 
>
Mike Rapoport July 30, 2021, 7:31 p.m. UTC | #4
On Fri, Jul 30, 2021 at 07:23:27PM +0530, Charan Teja Reddy wrote:
> Proactive compaction[1] is triggered every 500 msec and runs compaction
> on the node for COMPACTION_HPAGE_ORDER (usually order-9) pages, based
> on the value set in sysctl.compaction_proactiveness. Triggering
> compaction every 500 msec in search of COMPACTION_HPAGE_ORDER pages is
> not needed for all applications, especially on embedded systems which
> may have only a few MBs of RAM: enabling proactive compaction in this
> state will end up with it running almost continuously on such systems.
> 
> On the other hand, proactive compaction can still be very useful for
> getting a set of higher-order pages in a controllable manner
> (controlled via sysctl.compaction_proactiveness). So, on systems where
> keeping proactive compaction always enabled may not be required, the
> user can trigger it from user space by writing to its sysctl
> interface. As an example, say an app launcher decides to launch a
> memory-heavy application that launches faster if it gets more
> higher-order pages; the launcher can then prepare the system in
> advance by triggering proactive compaction from userspace.
> 
> This triggering of proactive compaction is done on a write to
> sysctl.compaction_proactiveness by the user.
> 
> [1]https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit?id=facdaa917c4d5a376d09d25865f5a863f906234a
> 
> Signed-off-by: Charan Teja Reddy <charante@codeaurora.org>
> ---
>  Changes in V5:
>  	-- Avoid unnecessary wakeup of proactive compaction when it is disabled.
> 	-- No changes in the logic of triggering the proactive compaction.
> 
>  Changes in V4:
> 	-- Changed the code as the 'proactive_defer' counter is removed.
> 	-- No changes in the logic of triggering the proactive compaction.
> 	-- https://lore.kernel.org/patchwork/patch/1448777/
> 
>  Changes in V3:
>         -- Addressed review comments from Vlastimil and others.
>         -- https://lore.kernel.org/patchwork/patch/1438211/
> 
>  Changes in V2:
> 	-- Removed the /proc/../proactive_compact_memory interface for triggering proactive compaction.
>         -- The intention is the same: add a way for the user to trigger proactive compaction.
>         -- https://lore.kernel.org/patchwork/patch/1431283/
> 
>  Changes in V1:
> 	-- Created the new /proc/sys/vm/proactive_compact_memory
> 	   interface to trigger proactive compaction from user space.
>         -- https://lore.kernel.org/lkml/1619098678-8501-1-git-send-email-charante@codeaurora.org/
> 
>  Documentation/admin-guide/sysctl/vm.rst |  3 ++-
>  include/linux/compaction.h              |  2 ++
>  include/linux/mmzone.h                  |  1 +
>  kernel/sysctl.c                         |  2 +-
>  mm/compaction.c                         | 38 +++++++++++++++++++++++++++++++--
>  5 files changed, 42 insertions(+), 4 deletions(-)
> 
> diff --git a/Documentation/admin-guide/sysctl/vm.rst b/Documentation/admin-guide/sysctl/vm.rst
> index 003d5cc..b526cf6 100644
> --- a/Documentation/admin-guide/sysctl/vm.rst
> +++ b/Documentation/admin-guide/sysctl/vm.rst
> @@ -118,7 +118,8 @@ compaction_proactiveness
>  
>  This tunable takes a value in the range [0, 100] with a default value of
>  20. This tunable determines how aggressively compaction is done in the
> -background. Setting it to 0 disables proactive compaction.
> +background. On write of non zero value to this tunable will immediately

Nit: I think "Write of non zero ..."

> +trigger the proactive compaction. Setting it to 0 disables proactive compaction.
>  
>  Note that compaction has a non-trivial system-wide impact as pages
>  belonging to different processes are moved around, which could also lead
> diff --git a/include/linux/compaction.h b/include/linux/compaction.h
> index c24098c..34bce35 100644
> --- a/include/linux/compaction.h
> +++ b/include/linux/compaction.h
> @@ -84,6 +84,8 @@ static inline unsigned long compact_gap(unsigned int order)
>  extern unsigned int sysctl_compaction_proactiveness;
>  extern int sysctl_compaction_handler(struct ctl_table *table, int write,
>  			void *buffer, size_t *length, loff_t *ppos);
> +extern int compaction_proactiveness_sysctl_handler(struct ctl_table *table,
> +		int write, void *buffer, size_t *length, loff_t *ppos);
>  extern int sysctl_extfrag_threshold;
>  extern int sysctl_compact_unevictable_allowed;
>  
> diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
> index 4610750..6a1d79d 100644
> --- a/include/linux/mmzone.h
> +++ b/include/linux/mmzone.h
> @@ -853,6 +853,7 @@ typedef struct pglist_data {
>  	enum zone_type kcompactd_highest_zoneidx;
>  	wait_queue_head_t kcompactd_wait;
>  	struct task_struct *kcompactd;
> +	bool proactive_compact_trigger;
>  #endif
>  	/*
>  	 * This is a per-node reserve of pages that are not available
> diff --git a/kernel/sysctl.c b/kernel/sysctl.c
> index 82d6ff6..65bc6f7 100644
> --- a/kernel/sysctl.c
> +++ b/kernel/sysctl.c
> @@ -2871,7 +2871,7 @@ static struct ctl_table vm_table[] = {
>  		.data		= &sysctl_compaction_proactiveness,
>  		.maxlen		= sizeof(sysctl_compaction_proactiveness),
>  		.mode		= 0644,
> -		.proc_handler	= proc_dointvec_minmax,
> +		.proc_handler	= compaction_proactiveness_sysctl_handler,
>  		.extra1		= SYSCTL_ZERO,
>  		.extra2		= &one_hundred,
>  	},
> diff --git a/mm/compaction.c b/mm/compaction.c
> index f984ad0..fbc60f9 100644
> --- a/mm/compaction.c
> +++ b/mm/compaction.c
> @@ -2700,6 +2700,30 @@ static void compact_nodes(void)
>   */
>  unsigned int __read_mostly sysctl_compaction_proactiveness = 20;
>  
> +int compaction_proactiveness_sysctl_handler(struct ctl_table *table, int write,
> +		void *buffer, size_t *length, loff_t *ppos)
> +{
> +	int rc, nid;
> +
> +	rc = proc_dointvec_minmax(table, write, buffer, length, ppos);
> +	if (rc)
> +		return rc;
> +
> +	if (write && sysctl_compaction_proactiveness) {
> +		for_each_online_node(nid) {
> +			pg_data_t *pgdat = NODE_DATA(nid);
> +
> +			if (pgdat->proactive_compact_trigger)
> +				continue;
> +
> +			pgdat->proactive_compact_trigger = true;
> +			wake_up_interruptible(&pgdat->kcompactd_wait);
> +		}
> +	}
> +
> +	return 0;
> +}
> +
>  /*
>   * This is the entry point for compacting all nodes via
>   * /proc/sys/vm/compact_memory
> @@ -2744,7 +2768,8 @@ void compaction_unregister_node(struct node *node)
>  
>  static inline bool kcompactd_work_requested(pg_data_t *pgdat)
>  {
> -	return pgdat->kcompactd_max_order > 0 || kthread_should_stop();
> +	return pgdat->kcompactd_max_order > 0 || kthread_should_stop() ||
> +		pgdat->proactive_compact_trigger;
>  }
>  
>  static bool kcompactd_node_suitable(pg_data_t *pgdat)
> @@ -2895,9 +2920,16 @@ static int kcompactd(void *p)
>  	while (!kthread_should_stop()) {
>  		unsigned long pflags;
>  
> +		/*
> +		 * Avoid the unnecessary wakeup for proactive compaction
> +		 * when it is disabled.
> +		 */
> +		if (!sysctl_compaction_proactiveness)
> +			timeout = MAX_SCHEDULE_TIMEOUT;
>  		trace_mm_compaction_kcompactd_sleep(pgdat->node_id);
>  		if (wait_event_freezable_timeout(pgdat->kcompactd_wait,
> -			kcompactd_work_requested(pgdat), timeout)) {
> +			kcompactd_work_requested(pgdat), timeout) &&
> +			!pgdat->proactive_compact_trigger) {
>  
>  			psi_memstall_enter(&pflags);
>  			kcompactd_do_work(pgdat);
> @@ -2932,6 +2964,8 @@ static int kcompactd(void *p)
>  				timeout =
>  				   default_timeout << COMPACT_MAX_DEFER_SHIFT;
>  		}
> +		if (unlikely(pgdat->proactive_compact_trigger))
> +			pgdat->proactive_compact_trigger = false;
>  	}
>  
>  	return 0;
> -- 
> QUALCOMM INDIA, on behalf of Qualcomm Innovation Center, Inc. is a
> member of the Code Aurora Forum, hosted by The Linux Foundation
> 
>
Charan Teja Kalla Aug. 2, 2021, noon UTC | #5
Thanks Mike for the review!!

On 7/31/2021 1:01 AM, Mike Rapoport wrote:
>> diff --git a/Documentation/admin-guide/sysctl/vm.rst b/Documentation/admin-guide/sysctl/vm.rst
>> index 003d5cc..b526cf6 100644
>> --- a/Documentation/admin-guide/sysctl/vm.rst
>> +++ b/Documentation/admin-guide/sysctl/vm.rst
>> @@ -118,7 +118,8 @@ compaction_proactiveness
>>  
>>  This tunable takes a value in the range [0, 100] with a default value of
>>  20. This tunable determines how aggressively compaction is done in the
>> -background. Setting it to 0 disables proactive compaction.
>> +background. On write of non zero value to this tunable will immediately
> Nit: I think "Write of non zero ..."

Can Andrew change it while applying the patch?

>
Andrew Morton Aug. 2, 2021, 8:41 p.m. UTC | #6
On Mon, 2 Aug 2021 17:30:16 +0530 Charan Teja Kalla <charante@codeaurora.org> wrote:

> Thanks Mike for the review!!
> 
> On 7/31/2021 1:01 AM, Mike Rapoport wrote:
> >> diff --git a/Documentation/admin-guide/sysctl/vm.rst b/Documentation/admin-guide/sysctl/vm.rst
> >> index 003d5cc..b526cf6 100644
> >> --- a/Documentation/admin-guide/sysctl/vm.rst
> >> +++ b/Documentation/admin-guide/sysctl/vm.rst
> >> @@ -118,7 +118,8 @@ compaction_proactiveness
> >>  
> >>  This tunable takes a value in the range [0, 100] with a default value of
> >>  20. This tunable determines how aggressively compaction is done in the
> >> -background. Setting it to 0 disables proactive compaction.
> >> +background. On write of non zero value to this tunable will immediately
> > Nit: I think "Write of non zero ..."
> 
> Can Andrew change it while applying the patch ?

I have done so, thanks.
Rafael Aquini Aug. 4, 2021, 9:11 p.m. UTC | #7
On Fri, Jul 30, 2021 at 07:23:27PM +0530, Charan Teja Reddy wrote:
> Proactive compaction[1] is triggered every 500 msec and runs compaction
> on the node for COMPACTION_HPAGE_ORDER (usually order-9) pages, based
> on the value set in sysctl.compaction_proactiveness. Triggering
> compaction every 500 msec in search of COMPACTION_HPAGE_ORDER pages is
> not needed for all applications, especially on embedded systems which
> may have only a few MBs of RAM: enabling proactive compaction in this
> state will end up with it running almost continuously on such systems.
> 
> On the other hand, proactive compaction can still be very useful for
> getting a set of higher-order pages in a controllable manner
> (controlled via sysctl.compaction_proactiveness). So, on systems where
> keeping proactive compaction always enabled may not be required, the
> user can trigger it from user space by writing to its sysctl
> interface. As an example, say an app launcher decides to launch a
> memory-heavy application that launches faster if it gets more
> higher-order pages; the launcher can then prepare the system in
> advance by triggering proactive compaction from userspace.
> 
> This triggering of proactive compaction is done on a write to
> sysctl.compaction_proactiveness by the user.
> 
> [1]https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit?id=facdaa917c4d5a376d09d25865f5a863f906234a
> 
> Signed-off-by: Charan Teja Reddy <charante@codeaurora.org>

Acked-by: Rafael Aquini <aquini@redhat.com>

> ---
>  Changes in V5:
>  	-- Avoid unnecessary wakeup of proactive compaction when it is disabled.
> 	-- No changes in the logic of triggering the proactive compaction.
> 
>  Changes in V4:
> 	-- Changed the code as the 'proactive_defer' counter is removed.
> 	-- No changes in the logic of triggering the proactive compaction.
> 	-- https://lore.kernel.org/patchwork/patch/1448777/
> 
>  Changes in V3:
>         -- Addressed review comments from Vlastimil and others.
>         -- https://lore.kernel.org/patchwork/patch/1438211/
> 
>  Changes in V2:
> 	-- Removed the /proc/../proactive_compact_memory interface for triggering proactive compaction.
>         -- The intention is the same: add a way for the user to trigger proactive compaction.
>         -- https://lore.kernel.org/patchwork/patch/1431283/
> 
>  Changes in V1:
> 	-- Created the new /proc/sys/vm/proactive_compact_memory
> 	   interface to trigger proactive compaction from user space.
>         -- https://lore.kernel.org/lkml/1619098678-8501-1-git-send-email-charante@codeaurora.org/
> 
>  Documentation/admin-guide/sysctl/vm.rst |  3 ++-
>  include/linux/compaction.h              |  2 ++
>  include/linux/mmzone.h                  |  1 +
>  kernel/sysctl.c                         |  2 +-
>  mm/compaction.c                         | 38 +++++++++++++++++++++++++++++++--
>  5 files changed, 42 insertions(+), 4 deletions(-)
> 
> diff --git a/Documentation/admin-guide/sysctl/vm.rst b/Documentation/admin-guide/sysctl/vm.rst
> index 003d5cc..b526cf6 100644
> --- a/Documentation/admin-guide/sysctl/vm.rst
> +++ b/Documentation/admin-guide/sysctl/vm.rst
> @@ -118,7 +118,8 @@ compaction_proactiveness
>  
>  This tunable takes a value in the range [0, 100] with a default value of
>  20. This tunable determines how aggressively compaction is done in the
> -background. Setting it to 0 disables proactive compaction.
> +background. On write of non zero value to this tunable will immediately
> +trigger the proactive compaction. Setting it to 0 disables proactive compaction.
>  
>  Note that compaction has a non-trivial system-wide impact as pages
>  belonging to different processes are moved around, which could also lead
> diff --git a/include/linux/compaction.h b/include/linux/compaction.h
> index c24098c..34bce35 100644
> --- a/include/linux/compaction.h
> +++ b/include/linux/compaction.h
> @@ -84,6 +84,8 @@ static inline unsigned long compact_gap(unsigned int order)
>  extern unsigned int sysctl_compaction_proactiveness;
>  extern int sysctl_compaction_handler(struct ctl_table *table, int write,
>  			void *buffer, size_t *length, loff_t *ppos);
> +extern int compaction_proactiveness_sysctl_handler(struct ctl_table *table,
> +		int write, void *buffer, size_t *length, loff_t *ppos);
>  extern int sysctl_extfrag_threshold;
>  extern int sysctl_compact_unevictable_allowed;
>  
> diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
> index 4610750..6a1d79d 100644
> --- a/include/linux/mmzone.h
> +++ b/include/linux/mmzone.h
> @@ -853,6 +853,7 @@ typedef struct pglist_data {
>  	enum zone_type kcompactd_highest_zoneidx;
>  	wait_queue_head_t kcompactd_wait;
>  	struct task_struct *kcompactd;
> +	bool proactive_compact_trigger;
>  #endif
>  	/*
>  	 * This is a per-node reserve of pages that are not available
> diff --git a/kernel/sysctl.c b/kernel/sysctl.c
> index 82d6ff6..65bc6f7 100644
> --- a/kernel/sysctl.c
> +++ b/kernel/sysctl.c
> @@ -2871,7 +2871,7 @@ static struct ctl_table vm_table[] = {
>  		.data		= &sysctl_compaction_proactiveness,
>  		.maxlen		= sizeof(sysctl_compaction_proactiveness),
>  		.mode		= 0644,
> -		.proc_handler	= proc_dointvec_minmax,
> +		.proc_handler	= compaction_proactiveness_sysctl_handler,
>  		.extra1		= SYSCTL_ZERO,
>  		.extra2		= &one_hundred,
>  	},
> diff --git a/mm/compaction.c b/mm/compaction.c
> index f984ad0..fbc60f9 100644
> --- a/mm/compaction.c
> +++ b/mm/compaction.c
> @@ -2700,6 +2700,30 @@ static void compact_nodes(void)
>   */
>  unsigned int __read_mostly sysctl_compaction_proactiveness = 20;
>  
> +int compaction_proactiveness_sysctl_handler(struct ctl_table *table, int write,
> +		void *buffer, size_t *length, loff_t *ppos)
> +{
> +	int rc, nid;
> +
> +	rc = proc_dointvec_minmax(table, write, buffer, length, ppos);
> +	if (rc)
> +		return rc;
> +
> +	if (write && sysctl_compaction_proactiveness) {
> +		for_each_online_node(nid) {
> +			pg_data_t *pgdat = NODE_DATA(nid);
> +
> +			if (pgdat->proactive_compact_trigger)
> +				continue;
> +
> +			pgdat->proactive_compact_trigger = true;
> +			wake_up_interruptible(&pgdat->kcompactd_wait);
> +		}
> +	}
> +
> +	return 0;
> +}
> +
>  /*
>   * This is the entry point for compacting all nodes via
>   * /proc/sys/vm/compact_memory
> @@ -2744,7 +2768,8 @@ void compaction_unregister_node(struct node *node)
>  
>  static inline bool kcompactd_work_requested(pg_data_t *pgdat)
>  {
> -	return pgdat->kcompactd_max_order > 0 || kthread_should_stop();
> +	return pgdat->kcompactd_max_order > 0 || kthread_should_stop() ||
> +		pgdat->proactive_compact_trigger;
>  }
>  
>  static bool kcompactd_node_suitable(pg_data_t *pgdat)
> @@ -2895,9 +2920,16 @@ static int kcompactd(void *p)
>  	while (!kthread_should_stop()) {
>  		unsigned long pflags;
>  
> +		/*
> +		 * Avoid the unnecessary wakeup for proactive compaction
> +		 * when it is disabled.
> +		 */
> +		if (!sysctl_compaction_proactiveness)
> +			timeout = MAX_SCHEDULE_TIMEOUT;
>  		trace_mm_compaction_kcompactd_sleep(pgdat->node_id);
>  		if (wait_event_freezable_timeout(pgdat->kcompactd_wait,
> -			kcompactd_work_requested(pgdat), timeout)) {
> +			kcompactd_work_requested(pgdat), timeout) &&
> +			!pgdat->proactive_compact_trigger) {
>  
>  			psi_memstall_enter(&pflags);
>  			kcompactd_do_work(pgdat);
> @@ -2932,6 +2964,8 @@ static int kcompactd(void *p)
>  				timeout =
>  				   default_timeout << COMPACT_MAX_DEFER_SHIFT;
>  		}
> +		if (unlikely(pgdat->proactive_compact_trigger))
> +			pgdat->proactive_compact_trigger = false;
>  	}
>  
>  	return 0;
> -- 
> QUALCOMM INDIA, on behalf of Qualcomm Innovation Center, Inc. is a
> member of the Code Aurora Forum, hosted by The Linux Foundation
> 
>

Patch

diff --git a/Documentation/admin-guide/sysctl/vm.rst b/Documentation/admin-guide/sysctl/vm.rst
index 003d5cc..b526cf6 100644
--- a/Documentation/admin-guide/sysctl/vm.rst
+++ b/Documentation/admin-guide/sysctl/vm.rst
@@ -118,7 +118,8 @@ compaction_proactiveness
 
 This tunable takes a value in the range [0, 100] with a default value of
 20. This tunable determines how aggressively compaction is done in the
-background. Setting it to 0 disables proactive compaction.
+background. On write of non zero value to this tunable will immediately
+trigger the proactive compaction. Setting it to 0 disables proactive compaction.
 
 Note that compaction has a non-trivial system-wide impact as pages
 belonging to different processes are moved around, which could also lead
diff --git a/include/linux/compaction.h b/include/linux/compaction.h
index c24098c..34bce35 100644
--- a/include/linux/compaction.h
+++ b/include/linux/compaction.h
@@ -84,6 +84,8 @@ static inline unsigned long compact_gap(unsigned int order)
 extern unsigned int sysctl_compaction_proactiveness;
 extern int sysctl_compaction_handler(struct ctl_table *table, int write,
 			void *buffer, size_t *length, loff_t *ppos);
+extern int compaction_proactiveness_sysctl_handler(struct ctl_table *table,
+		int write, void *buffer, size_t *length, loff_t *ppos);
 extern int sysctl_extfrag_threshold;
 extern int sysctl_compact_unevictable_allowed;
 
diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
index 4610750..6a1d79d 100644
--- a/include/linux/mmzone.h
+++ b/include/linux/mmzone.h
@@ -853,6 +853,7 @@ typedef struct pglist_data {
 	enum zone_type kcompactd_highest_zoneidx;
 	wait_queue_head_t kcompactd_wait;
 	struct task_struct *kcompactd;
+	bool proactive_compact_trigger;
 #endif
 	/*
 	 * This is a per-node reserve of pages that are not available
diff --git a/kernel/sysctl.c b/kernel/sysctl.c
index 82d6ff6..65bc6f7 100644
--- a/kernel/sysctl.c
+++ b/kernel/sysctl.c
@@ -2871,7 +2871,7 @@ static struct ctl_table vm_table[] = {
 		.data		= &sysctl_compaction_proactiveness,
 		.maxlen		= sizeof(sysctl_compaction_proactiveness),
 		.mode		= 0644,
-		.proc_handler	= proc_dointvec_minmax,
+		.proc_handler	= compaction_proactiveness_sysctl_handler,
 		.extra1		= SYSCTL_ZERO,
 		.extra2		= &one_hundred,
 	},
diff --git a/mm/compaction.c b/mm/compaction.c
index f984ad0..fbc60f9 100644
--- a/mm/compaction.c
+++ b/mm/compaction.c
@@ -2700,6 +2700,30 @@ static void compact_nodes(void)
  */
 unsigned int __read_mostly sysctl_compaction_proactiveness = 20;
 
+int compaction_proactiveness_sysctl_handler(struct ctl_table *table, int write,
+		void *buffer, size_t *length, loff_t *ppos)
+{
+	int rc, nid;
+
+	rc = proc_dointvec_minmax(table, write, buffer, length, ppos);
+	if (rc)
+		return rc;
+
+	if (write && sysctl_compaction_proactiveness) {
+		for_each_online_node(nid) {
+			pg_data_t *pgdat = NODE_DATA(nid);
+
+			if (pgdat->proactive_compact_trigger)
+				continue;
+
+			pgdat->proactive_compact_trigger = true;
+			wake_up_interruptible(&pgdat->kcompactd_wait);
+		}
+	}
+
+	return 0;
+}
+
 /*
  * This is the entry point for compacting all nodes via
  * /proc/sys/vm/compact_memory
@@ -2744,7 +2768,8 @@ void compaction_unregister_node(struct node *node)
 
 static inline bool kcompactd_work_requested(pg_data_t *pgdat)
 {
-	return pgdat->kcompactd_max_order > 0 || kthread_should_stop();
+	return pgdat->kcompactd_max_order > 0 || kthread_should_stop() ||
+		pgdat->proactive_compact_trigger;
 }
 
 static bool kcompactd_node_suitable(pg_data_t *pgdat)
@@ -2895,9 +2920,16 @@ static int kcompactd(void *p)
 	while (!kthread_should_stop()) {
 		unsigned long pflags;
 
+		/*
+		 * Avoid the unnecessary wakeup for proactive compaction
+		 * when it is disabled.
+		 */
+		if (!sysctl_compaction_proactiveness)
+			timeout = MAX_SCHEDULE_TIMEOUT;
 		trace_mm_compaction_kcompactd_sleep(pgdat->node_id);
 		if (wait_event_freezable_timeout(pgdat->kcompactd_wait,
-			kcompactd_work_requested(pgdat), timeout)) {
+			kcompactd_work_requested(pgdat), timeout) &&
+			!pgdat->proactive_compact_trigger) {
 
 			psi_memstall_enter(&pflags);
 			kcompactd_do_work(pgdat);
@@ -2932,6 +2964,8 @@  static int kcompactd(void *p)
 				timeout =
 				   default_timeout << COMPACT_MAX_DEFER_SHIFT;
 		}
+		if (unlikely(pgdat->proactive_compact_trigger))
+			pgdat->proactive_compact_trigger = false;
 	}
 
 	return 0;