
[v2,1/2] mm: clear pte for folios that are zero filled

Message ID 20240604105950.1134192-2-usamaarif642@gmail.com (mailing list archive)
State New
Series: mm: clear pte for folios that are zero filled

Commit Message

Usama Arif June 4, 2024, 10:58 a.m. UTC
Approximately 10-20% of the pages to be swapped out are zero-filled [1].
Rather than reading/writing these pages to flash, which results in
increased I/O and flash wear, the pte can be cleared for those
addresses at unmap time while shrinking the folio list. When this
later causes a page fault, do_pte_missing() will take care of the page.
With this patch, NVMe writes in Meta's server fleet decreased by
almost 10% with a conventional swap setup (zswap disabled).

[1] https://lore.kernel.org/all/20171018104832epcms5p1b2232e2236258de3d03d1344dde9fce0@epcms5p1/

Signed-off-by: Usama Arif <usamaarif642@gmail.com>
---
 include/linux/rmap.h |   1 +
 mm/rmap.c            | 163 ++++++++++++++++++++++---------------------
 mm/vmscan.c          |  89 ++++++++++++++++-------
 3 files changed, 150 insertions(+), 103 deletions(-)

Comments

Matthew Wilcox (Oracle) June 4, 2024, 12:18 p.m. UTC | #1
On Tue, Jun 04, 2024 at 11:58:24AM +0100, Usama Arif wrote:
> +++ b/mm/rmap.c
> @@ -1819,96 +1819,101 @@ static bool try_to_unmap_one(struct folio *folio, struct vm_area_struct *vma,
>  			 */
>  			dec_mm_counter(mm, mm_counter(folio));
>  		} else if (folio_test_anon(folio)) {
> -			swp_entry_t entry = page_swap_entry(subpage);
> -			pte_t swp_pte;
> -			/*
> -			 * Store the swap location in the pte.
> -			 * See handle_pte_fault() ...
> -			 */
> -			if (unlikely(folio_test_swapbacked(folio) !=
> -					folio_test_swapcache(folio))) {
> +			if (flags & TTU_ZERO_FOLIO) {
> +				pte_clear(mm, address, pvmw.pte);
> +				dec_mm_counter(mm, MM_ANONPAGES);
> +			} else {

This is very hard to review.  Is what you've done the same as:

			if (flags & TTU_ZERO_FOLIO) {
				pte_clear(mm, address, pvmw.pte);
				dec_mm_counter(mm, MM_ANONPAGES);
				goto discard;
			}

?  I genuinely can't tell.
David Hildenbrand June 4, 2024, 12:30 p.m. UTC | #2
On 04.06.24 12:58, Usama Arif wrote:
> Approximately 10-20% of pages to be swapped out are zero pages [1].
> Rather than reading/writing these pages to flash resulting
> in increased I/O and flash wear, the pte can be cleared for those
> addresses at unmap time while shrinking folio list. When this
> causes a page fault, do_pte_missing will take care of this page.
> With this patch, NVMe writes in Meta server fleet decreased
> by almost 10% with conventional swap setup (zswap disabled).
> 
> [1] https://lore.kernel.org/all/20171018104832epcms5p1b2232e2236258de3d03d1344dde9fce0@epcms5p1/
> 
> Signed-off-by: Usama Arif <usamaarif642@gmail.com>
> ---
>   include/linux/rmap.h |   1 +
>   mm/rmap.c            | 163 ++++++++++++++++++++++---------------------
>   mm/vmscan.c          |  89 ++++++++++++++++-------
>   3 files changed, 150 insertions(+), 103 deletions(-)
> 
> diff --git a/include/linux/rmap.h b/include/linux/rmap.h
> index bb53e5920b88..b36db1e886e4 100644
> --- a/include/linux/rmap.h
> +++ b/include/linux/rmap.h
> @@ -100,6 +100,7 @@ enum ttu_flags {
>   					 * do a final flush if necessary */
>   	TTU_RMAP_LOCKED		= 0x80,	/* do not grab rmap lock:
>   					 * caller holds it */
> +	TTU_ZERO_FOLIO		= 0x100,/* zero folio */
>   };
>   
>   #ifdef CONFIG_MMU
> diff --git a/mm/rmap.c b/mm/rmap.c
> index 52357d79917c..d98f70876327 100644
> --- a/mm/rmap.c
> +++ b/mm/rmap.c
> @@ -1819,96 +1819,101 @@ static bool try_to_unmap_one(struct folio *folio, struct vm_area_struct *vma,
>   			 */
>   			dec_mm_counter(mm, mm_counter(folio));
>   		} else if (folio_test_anon(folio)) {
> -			swp_entry_t entry = page_swap_entry(subpage);
> -			pte_t swp_pte;
> -			/*
> -			 * Store the swap location in the pte.
> -			 * See handle_pte_fault() ...
> -			 */
> -			if (unlikely(folio_test_swapbacked(folio) !=
> -					folio_test_swapcache(folio))) {
> +			if (flags & TTU_ZERO_FOLIO) {
> +				pte_clear(mm, address, pvmw.pte);
> +				dec_mm_counter(mm, MM_ANONPAGES);

Is there an easy way to reduce the code churn and highlight the added code?

Like

} else if (folio_test_anon(folio) && (flags & TTU_ZERO_FOLIO)) {

} else if (folio_test_anon(folio)) {
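
Spelled out a bit more, with the pte_clear()/dec_mm_counter() from the
patch moved into that first branch and reusing the existing discard:
label in try_to_unmap_one(), it might look something like this (just a
sketch, not the actual diff):

		} else if (folio_test_anon(folio) && (flags & TTU_ZERO_FOLIO)) {
			/*
			 * Zero-filled anon folio: drop the mapping instead of
			 * writing a swap entry; a later fault goes through
			 * do_pte_missing() and gets a zero-filled page again.
			 */
			pte_clear(mm, address, pvmw.pte);
			dec_mm_counter(mm, MM_ANONPAGES);
			goto discard;
		} else if (folio_test_anon(folio)) {
			/* existing swap-entry path, unchanged */
			swp_entry_t entry = page_swap_entry(subpage);
			pte_t swp_pte;
			...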



Also two concerns that I want to spell out:

(a) what stops the page from getting modified in the meantime? The CPU
     can write it until the TLB was flushed.

(b) do you properly handle if the page is pinned (or just got pinned)
     and we must not discard it?
Usama Arif June 4, 2024, 12:42 p.m. UTC | #3
On 04/06/2024 13:18, Matthew Wilcox wrote:
> On Tue, Jun 04, 2024 at 11:58:24AM +0100, Usama Arif wrote:
>> +++ b/mm/rmap.c
>> @@ -1819,96 +1819,101 @@ static bool try_to_unmap_one(struct folio *folio, struct vm_area_struct *vma,
>>   			 */
>>   			dec_mm_counter(mm, mm_counter(folio));
>>   		} else if (folio_test_anon(folio)) {
>> -			swp_entry_t entry = page_swap_entry(subpage);
>> -			pte_t swp_pte;
>> -			/*
>> -			 * Store the swap location in the pte.
>> -			 * See handle_pte_fault() ...
>> -			 */
>> -			if (unlikely(folio_test_swapbacked(folio) !=
>> -					folio_test_swapcache(folio))) {
>> +			if (flags & TTU_ZERO_FOLIO) {
>> +				pte_clear(mm, address, pvmw.pte);
>> +				dec_mm_counter(mm, MM_ANONPAGES);
>> +			} else {
> This is very hard to review.  Is what you've done the same as:
>
> 			if (flags & TTU_ZERO_FOLIO) {
> 				pte_clear(mm, address, pvmw.pte);
> 				dec_mm_counter(mm, MM_ANONPAGES);
> 				goto discard;
> 			}
>
> ?  I genuinely can't tell.
>
Yes, that's what I am doing; I will switch to the above in the next revision. Thanks!
David Hildenbrand June 4, 2024, 12:43 p.m. UTC | #4
On 04.06.24 14:30, David Hildenbrand wrote:
> On 04.06.24 12:58, Usama Arif wrote:
>> Approximately 10-20% of pages to be swapped out are zero pages [1].
>> Rather than reading/writing these pages to flash resulting
>> in increased I/O and flash wear, the pte can be cleared for those
>> addresses at unmap time while shrinking folio list. When this
>> causes a page fault, do_pte_missing will take care of this page.
>> With this patch, NVMe writes in Meta server fleet decreased
>> by almost 10% with conventional swap setup (zswap disabled).
>>
>> [1] https://lore.kernel.org/all/20171018104832epcms5p1b2232e2236258de3d03d1344dde9fce0@epcms5p1/
>>
>> Signed-off-by: Usama Arif <usamaarif642@gmail.com>
>> ---
>>    include/linux/rmap.h |   1 +
>>    mm/rmap.c            | 163 ++++++++++++++++++++++---------------------
>>    mm/vmscan.c          |  89 ++++++++++++++++-------
>>    3 files changed, 150 insertions(+), 103 deletions(-)
>>
>> diff --git a/include/linux/rmap.h b/include/linux/rmap.h
>> index bb53e5920b88..b36db1e886e4 100644
>> --- a/include/linux/rmap.h
>> +++ b/include/linux/rmap.h
>> @@ -100,6 +100,7 @@ enum ttu_flags {
>>    					 * do a final flush if necessary */
>>    	TTU_RMAP_LOCKED		= 0x80,	/* do not grab rmap lock:
>>    					 * caller holds it */
>> +	TTU_ZERO_FOLIO		= 0x100,/* zero folio */
>>    };
>>    
>>    #ifdef CONFIG_MMU
>> diff --git a/mm/rmap.c b/mm/rmap.c
>> index 52357d79917c..d98f70876327 100644
>> --- a/mm/rmap.c
>> +++ b/mm/rmap.c
>> @@ -1819,96 +1819,101 @@ static bool try_to_unmap_one(struct folio *folio, struct vm_area_struct *vma,
>>    			 */
>>    			dec_mm_counter(mm, mm_counter(folio));
>>    		} else if (folio_test_anon(folio)) {
>> -			swp_entry_t entry = page_swap_entry(subpage);
>> -			pte_t swp_pte;
>> -			/*
>> -			 * Store the swap location in the pte.
>> -			 * See handle_pte_fault() ...
>> -			 */
>> -			if (unlikely(folio_test_swapbacked(folio) !=
>> -					folio_test_swapcache(folio))) {
>> +			if (flags & TTU_ZERO_FOLIO) {
>> +				pte_clear(mm, address, pvmw.pte);
>> +				dec_mm_counter(mm, MM_ANONPAGES);
> 
> Is there an easy way to reduce the code churn and highlight the added code?
> 
> Like
> 
> } else if (folio_test_anon(folio) && (flags & TTU_ZERO_FOLIO)) {
> 
> } else if (folio_test_anon(folio)) {
> 
> 
> 
> Also two concerns that I want to spell out:
> 
> (a) what stops the page from getting modified in the meantime? The CPU
>       can write it until the TLB was flushed.
> 
> (b) do you properly handle if the page is pinned (or just got pinned)
>       and we must not discard it?

Oh, and I forgot, are you handling userfaultfd as expected? IIRC there 
are some really nasty side-effects with userfaultfd even when 
userfaultfd is currently not registered for a VMA [1].

[1] 
https://lore.kernel.org/linux-mm/3a4b1027-df6e-31b8-b0de-ff202828228d@redhat.com/

What should work is replacing all-zero anonymous pages by the shared 
zeropage iff the anonymous page is not pinned and we synchronize against 
GUP fast. Well, and we handle possible concurrent writes accordingly.

KSM does essentially that when told to de-duplicate the shared zeropage, 
and I was thinking a while ago if we would want a zeropage-only KSM 
version that doesn't need stable trees and all that, but only 
deduplicates zero-filled pages into the shared zeropage in a safe way.
Shakeel Butt June 5, 2024, 8:55 a.m. UTC | #5
On Tue, Jun 04, 2024 at 11:58:24AM GMT, Usama Arif wrote:
[...]
>  
> +static bool is_folio_page_zero_filled(struct folio *folio, int i)
> +{
> +	unsigned long *data;
> +	unsigned int pos, last_pos = PAGE_SIZE / sizeof(*data) - 1;
> +	bool ret = false;
> +
> +	data = kmap_local_folio(folio, i * PAGE_SIZE);
> +
> +	if (data[last_pos])
> +		goto out;
> +

Use memchr_inv() instead of the following.

> +	for (pos = 0; pos < last_pos; pos++) {
> +		if (data[pos])
> +			goto out;
> +	}
> +	ret = true;
> +out:
> +	kunmap_local(data);
> +	return ret;
> +}
> +
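
For reference, a minimal sketch of the helper built around memchr_inv(),
keeping the same helper name and the kmap_local_folio()/kunmap_local()
calls from the patch (untested):

static bool is_folio_page_zero_filled(struct folio *folio, int i)
{
	void *data;
	bool ret;

	data = kmap_local_folio(folio, i * PAGE_SIZE);
	/* memchr_inv() returns NULL iff every byte of the page is zero. */
	ret = memchr_inv(data, 0, PAGE_SIZE) == NULL;
	kunmap_local(data);

	return ret;
}
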
[...]
> +
>  /*
>   * shrink_folio_list() returns the number of reclaimed pages
>   */
> @@ -1053,6 +1085,7 @@ static unsigned int shrink_folio_list(struct list_head *folio_list,
>  		enum folio_references references = FOLIOREF_RECLAIM;
>  		bool dirty, writeback;
>  		unsigned int nr_pages;
> +		bool folio_zero_filled = false;
>  
>  		cond_resched();
>  
> @@ -1270,6 +1303,7 @@ static unsigned int shrink_folio_list(struct list_head *folio_list,
>  			nr_pages = 1;
>  		}
>  
> +		folio_zero_filled = is_folio_zero_filled(folio);

You need to check for zeroes after the unmap below, otherwise you may
lose data. So you need to do two rmap walks. Most probably the first one
would be the standard one (inserting swap entries in the ptes), but the
second one would be different: there the swap entries should be replaced
by the zeropage. Also, at the end you need to make sure to release all
the swap resources associated with the given page/folio.

>  		/*
>  		 * The folio is mapped into the page tables of one or more
>  		 * processes. Try to unmap it here.
> @@ -1295,6 +1329,9 @@ static unsigned int shrink_folio_list(struct list_head *folio_list,
>  			if (folio_test_large(folio) && list_empty(&folio->_deferred_list))
>  				flags |= TTU_SYNC;
>  
> +			if (folio_zero_filled)
> +				flags |= TTU_ZERO_FOLIO;
> +
>  			try_to_unmap(folio, flags);
>  			if (folio_mapped(folio)) {
>  				stat->nr_unmap_fail += nr_pages;
Usama Arif June 7, 2024, 10:24 a.m. UTC | #6
On 04/06/2024 13:43, David Hildenbrand wrote:
> On 04.06.24 14:30, David Hildenbrand wrote:
>> On 04.06.24 12:58, Usama Arif wrote:
>>> Approximately 10-20% of pages to be swapped out are zero pages [1].
>>> Rather than reading/writing these pages to flash resulting
>>> in increased I/O and flash wear, the pte can be cleared for those
>>> addresses at unmap time while shrinking folio list. When this
>>> causes a page fault, do_pte_missing will take care of this page.
>>> With this patch, NVMe writes in Meta server fleet decreased
>>> by almost 10% with conventional swap setup (zswap disabled).
>>>
>>> [1] 
>>> https://lore.kernel.org/all/20171018104832epcms5p1b2232e2236258de3d03d1344dde9fce0@epcms5p1/
>>>
>>> Signed-off-by: Usama Arif <usamaarif642@gmail.com>
>>> ---
>>>    include/linux/rmap.h |   1 +
>>>    mm/rmap.c            | 163 
>>> ++++++++++++++++++++++---------------------
>>>    mm/vmscan.c          |  89 ++++++++++++++++-------
>>>    3 files changed, 150 insertions(+), 103 deletions(-)
>>>
>>> diff --git a/include/linux/rmap.h b/include/linux/rmap.h
>>> index bb53e5920b88..b36db1e886e4 100644
>>> --- a/include/linux/rmap.h
>>> +++ b/include/linux/rmap.h
>>> @@ -100,6 +100,7 @@ enum ttu_flags {
>>>                         * do a final flush if necessary */
>>>        TTU_RMAP_LOCKED        = 0x80,    /* do not grab rmap lock:
>>>                         * caller holds it */
>>> +    TTU_ZERO_FOLIO        = 0x100,/* zero folio */
>>>    };
>>>       #ifdef CONFIG_MMU
>>> diff --git a/mm/rmap.c b/mm/rmap.c
>>> index 52357d79917c..d98f70876327 100644
>>> --- a/mm/rmap.c
>>> +++ b/mm/rmap.c
>>> @@ -1819,96 +1819,101 @@ static bool try_to_unmap_one(struct folio 
>>> *folio, struct vm_area_struct *vma,
>>>                 */
>>>                dec_mm_counter(mm, mm_counter(folio));
>>>            } else if (folio_test_anon(folio)) {
>>> -            swp_entry_t entry = page_swap_entry(subpage);
>>> -            pte_t swp_pte;
>>> -            /*
>>> -             * Store the swap location in the pte.
>>> -             * See handle_pte_fault() ...
>>> -             */
>>> -            if (unlikely(folio_test_swapbacked(folio) !=
>>> -                    folio_test_swapcache(folio))) {
>>> +            if (flags & TTU_ZERO_FOLIO) {
>>> +                pte_clear(mm, address, pvmw.pte);
>>> +                dec_mm_counter(mm, MM_ANONPAGES);
>>
>> Is there an easy way to reduce the code churn and highlight the added 
>> code?
>>
>> Like
>>
>> } else if (folio_test_anon(folio) && (flags & TTU_ZERO_FOLIO)) {
>>
>> } else if (folio_test_anon(folio)) {
>>
>>
>>
>> Also two concerns that I want to spell out:
>>
>> (a) what stops the page from getting modified in the meantime? The CPU
>>       can write it until the TLB was flushed.
>>
Thanks for pointing this out, David and Shakeel. This is a big issue in 
this v2, and as Shakeel pointed out in [1] we need to do a second rmap 
walk. Looking at how KSM deals with this in try_to_merge_one_page, which 
calls write_protect_page for each VMA (i.e. basically an rmap walk), 
this would be much more CPU expensive and complicated compared to v1 
[2], where the swap subsystem handles all the complexity. I will go 
back to my v1 solution for the next revision: it is much simpler, and 
its memory usage is very low (0.003%) as pointed out by Johannes [3], 
which would likely go away with the memory savings of not having a 
zswap_entry for zero-filled pages. A valid v2 approach would end up a 
lot more complicated than that.


[1] 
https://lore.kernel.org/all/nes73bwc5p6yhwt5tw3upxcqrn5kenn6lvqb6exrf4yppmz6jx@ywhuevpkxlvh/

[2] 
https://lore.kernel.org/all/20240530102126.357438-1-usamaarif642@gmail.com/

[3] https://lore.kernel.org/all/20240530122715.GB1222079@cmpxchg.org/

>> (b) do you properly handle if the page is pinned (or just got pinned)
>>       and we must not discard it?
>
> Oh, and I forgot, are you handling userfaultfd as expected? IIRC there 
> are some really nasty side-effects with userfaultfd even when 
> userfaultfd is currently not registered for a VMA [1].
>
> [1] 
> https://lore.kernel.org/linux-mm/3a4b1027-df6e-31b8-b0de-ff202828228d@redhat.com/
>
> What should work is replacing all-zero anonymous pages by the shared 
> zeropage iff the anonymous page is not pinned and we synchronize 
> against GUP fast. Well, and we handle possible concurrent writes 
> accordingly.
>
> KSM does essentially that when told to de-duplicate the shared 
> zeropage, and I was thinking a while ago if we would want a 
> zeropage-only KSM version that doesn't need stable trees and all that, 
> but only deduplicates zero-filled pages into the shared zeropage in a 
> safe way.
>
Thanks for the pointer to the KSM code.
Usama Arif June 7, 2024, 10:40 a.m. UTC | #7
On 05/06/2024 09:55, Shakeel Butt wrote:
> On Tue, Jun 04, 2024 at 11:58:24AM GMT, Usama Arif wrote:
> [...]
>>   
>> +static bool is_folio_page_zero_filled(struct folio *folio, int i)
>> +{
>> +	unsigned long *data;
>> +	unsigned int pos, last_pos = PAGE_SIZE / sizeof(*data) - 1;
>> +	bool ret = false;
>> +
>> +	data = kmap_local_folio(folio, i * PAGE_SIZE);
>> +
>> +	if (data[last_pos])
>> +		goto out;
>> +
> Use memchr_inv() instead of the following.

I had done some benchmarking before sending v1, and this version is 35% 
faster than using memchr_inv(). It's likely because this version does 
unsigned long comparisons, while memchr_inv() does byte comparisons via 
check_bytes8 [1]. I will stick with the current version for my next 
revision. I have included the kernel module I used for benchmarking below:

[308797.975269] Time taken for orig: 2850 ms
[308801.911439] Time taken for memchr_inv: 3936 ms

[1] https://elixir.bootlin.com/linux/v6.9.3/source/lib/string.c#L800


#include <linux/time.h>
#include <linux/ktime.h>
#include <linux/init.h>
#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/slab.h>
#include <linux/string.h>

#define ITERATIONS 10000000
static int is_page_zero_filled(void *ptr, unsigned long *value)
{
     unsigned long *page;
     unsigned long val;
     unsigned int pos, last_pos = PAGE_SIZE / sizeof(*page) - 1;

     page = (unsigned long *)ptr;
     val = page[0];

     if (page[last_pos] != 0)
         return 0;

     for (pos = 1; pos < last_pos; pos++) {
         if (page[pos] != 0)
             return 0;
     }

     *value = val;

     return 1;
}

static int is_page_zero_filled_memchr_inv(void *ptr, unsigned long *value)
{
     unsigned long *page;
     unsigned long val;
     unsigned long *ret;
     page = (unsigned long *)ptr;

     val = page[0];
     *value = val;

     ret = memchr_inv(ptr, 0, PAGE_SIZE);

     return ret == NULL ? 1: 0;
}

static int __init zsmalloc_test_init(void)
{
     unsigned long *src;
     unsigned long value;
     ktime_t start_time, end_time;
     volatile int res = 0;
     unsigned long milliseconds;

     src = kmalloc(PAGE_SIZE, GFP_KERNEL);
     if (!src)
         return -ENOMEM;

     for (unsigned int pos = 0; pos <= PAGE_SIZE / sizeof(*src) - 1; pos++) {
         src[pos] = 0x0;
     }

     start_time = ktime_get();
     for (int i = 0; i < ITERATIONS; i++)
         res = is_page_zero_filled(src, &value);
     end_time = ktime_get();
     milliseconds = ktime_ms_delta(end_time, start_time);
     // printk(KERN_INFO "Result: %d, Value: %lu\n", res, value);
     printk(KERN_INFO "Time taken for orig: %lu ms\n", milliseconds);

     start_time = ktime_get();
     for (int i = 0; i < ITERATIONS; i++)
         res = is_page_zero_filled_memchr_inv(src, &value);
     end_time = ktime_get();
     milliseconds = ktime_ms_delta(end_time, start_time);
     // printk(KERN_INFO "Result: %d, Value: %lu\n", res, value);
     printk(KERN_INFO "Time taken for memchr_inv: %lu ms\n", milliseconds);

     kfree(src);
     // Fail init on purpose so the module is not kept loaded and the test can be re-run
     return -1;
}

module_init(zsmalloc_test_init);
MODULE_LICENSE("GPL");
David Hildenbrand June 7, 2024, 11:16 a.m. UTC | #8
On 07.06.24 12:24, Usama Arif wrote:
> 
> On 04/06/2024 13:43, David Hildenbrand wrote:
>> On 04.06.24 14:30, David Hildenbrand wrote:
>>> On 04.06.24 12:58, Usama Arif wrote:
>>>> Approximately 10-20% of pages to be swapped out are zero pages [1].
>>>> Rather than reading/writing these pages to flash resulting
>>>> in increased I/O and flash wear, the pte can be cleared for those
>>>> addresses at unmap time while shrinking folio list. When this
>>>> causes a page fault, do_pte_missing will take care of this page.
>>>> With this patch, NVMe writes in Meta server fleet decreased
>>>> by almost 10% with conventional swap setup (zswap disabled).
>>>>
>>>> [1]
>>>> https://lore.kernel.org/all/20171018104832epcms5p1b2232e2236258de3d03d1344dde9fce0@epcms5p1/
>>>>
>>>> Signed-off-by: Usama Arif <usamaarif642@gmail.com>
>>>> ---
>>>>     include/linux/rmap.h |   1 +
>>>>     mm/rmap.c            | 163
>>>> ++++++++++++++++++++++---------------------
>>>>     mm/vmscan.c          |  89 ++++++++++++++++-------
>>>>     3 files changed, 150 insertions(+), 103 deletions(-)
>>>>
>>>> diff --git a/include/linux/rmap.h b/include/linux/rmap.h
>>>> index bb53e5920b88..b36db1e886e4 100644
>>>> --- a/include/linux/rmap.h
>>>> +++ b/include/linux/rmap.h
>>>> @@ -100,6 +100,7 @@ enum ttu_flags {
>>>>                          * do a final flush if necessary */
>>>>         TTU_RMAP_LOCKED        = 0x80,    /* do not grab rmap lock:
>>>>                          * caller holds it */
>>>> +    TTU_ZERO_FOLIO        = 0x100,/* zero folio */
>>>>     };
>>>>        #ifdef CONFIG_MMU
>>>> diff --git a/mm/rmap.c b/mm/rmap.c
>>>> index 52357d79917c..d98f70876327 100644
>>>> --- a/mm/rmap.c
>>>> +++ b/mm/rmap.c
>>>> @@ -1819,96 +1819,101 @@ static bool try_to_unmap_one(struct folio
>>>> *folio, struct vm_area_struct *vma,
>>>>                  */
>>>>                 dec_mm_counter(mm, mm_counter(folio));
>>>>             } else if (folio_test_anon(folio)) {
>>>> -            swp_entry_t entry = page_swap_entry(subpage);
>>>> -            pte_t swp_pte;
>>>> -            /*
>>>> -             * Store the swap location in the pte.
>>>> -             * See handle_pte_fault() ...
>>>> -             */
>>>> -            if (unlikely(folio_test_swapbacked(folio) !=
>>>> -                    folio_test_swapcache(folio))) {
>>>> +            if (flags & TTU_ZERO_FOLIO) {
>>>> +                pte_clear(mm, address, pvmw.pte);
>>>> +                dec_mm_counter(mm, MM_ANONPAGES);
>>>
>>> Is there an easy way to reduce the code churn and highlight the added
>>> code?
>>>
>>> Like
>>>
>>> } else if (folio_test_anon(folio) && (flags & TTU_ZERO_FOLIO)) {
>>>
>>> } else if (folio_test_anon(folio)) {
>>>
>>>
>>>
>>> Also to concerns that I want to spell out:
>>>
>>> (a) what stops the page from getting modified in the meantime? The CPU
>>>        can write it until the TLB was flushed.
>>>
> Thanks for pointing this out David and Shakeel. This is a big issue in
> this v2, and as Shakeel pointed out in [1] we need to do a second rmap
> walk. Looking at how ksm deals with this in try_to_merge_one_page which
> calls write_protect_page for each vma (i.e. basically an rmap walk),
> this would be much more CPU expensive and complicated compared to v1
> [2], where the swap subsystem can handle all complexities. I will go
> back to my v1 solution for the next revision as its much more simpler
> and the memory usage is very low (0.003%) as pointed out by Johannes [3]
> which would likely go away with the memory savings of not having a
> zswap_entry for zero filled pages, and the solution being a lot simpler
> than what a valid v2 approach would look like.

Agreed.

Patch

diff --git a/include/linux/rmap.h b/include/linux/rmap.h
index bb53e5920b88..b36db1e886e4 100644
--- a/include/linux/rmap.h
+++ b/include/linux/rmap.h
@@ -100,6 +100,7 @@  enum ttu_flags {
 					 * do a final flush if necessary */
 	TTU_RMAP_LOCKED		= 0x80,	/* do not grab rmap lock:
 					 * caller holds it */
+	TTU_ZERO_FOLIO		= 0x100,/* zero folio */
 };
 
 #ifdef CONFIG_MMU
diff --git a/mm/rmap.c b/mm/rmap.c
index 52357d79917c..d98f70876327 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -1819,96 +1819,101 @@  static bool try_to_unmap_one(struct folio *folio, struct vm_area_struct *vma,
 			 */
 			dec_mm_counter(mm, mm_counter(folio));
 		} else if (folio_test_anon(folio)) {
-			swp_entry_t entry = page_swap_entry(subpage);
-			pte_t swp_pte;
-			/*
-			 * Store the swap location in the pte.
-			 * See handle_pte_fault() ...
-			 */
-			if (unlikely(folio_test_swapbacked(folio) !=
-					folio_test_swapcache(folio))) {
+			if (flags & TTU_ZERO_FOLIO) {
+				pte_clear(mm, address, pvmw.pte);
+				dec_mm_counter(mm, MM_ANONPAGES);
+			} else {
+				swp_entry_t entry = page_swap_entry(subpage);
+				pte_t swp_pte;
 				/*
-				 * unmap_huge_pmd_locked() will unmark a
-				 * PMD-mapped folio as lazyfree if the folio or
-				 * its PMD was redirtied.
+				 * Store the swap location in the pte.
+				 * See handle_pte_fault() ...
 				 */
-				if (!pmd_mapped)
-					WARN_ON_ONCE(1);
-				goto walk_done_err;
-			}
+				if (unlikely(folio_test_swapbacked(folio) !=
+						folio_test_swapcache(folio))) {
+					/*
+					 * unmap_huge_pmd_locked() will unmark a
+					 * PMD-mapped folio as lazyfree if the folio or
+					 * its PMD was redirtied.
+					 */
+					if (!pmd_mapped)
+						WARN_ON_ONCE(1);
+					goto walk_done_err;
+				}
 
-			/* MADV_FREE page check */
-			if (!folio_test_swapbacked(folio)) {
-				int ref_count, map_count;
+				/* MADV_FREE page check */
+				if (!folio_test_swapbacked(folio)) {
+					int ref_count, map_count;
 
-				/*
-				 * Synchronize with gup_pte_range():
-				 * - clear PTE; barrier; read refcount
-				 * - inc refcount; barrier; read PTE
-				 */
-				smp_mb();
+					/*
+					 * Synchronize with gup_pte_range():
+					 * - clear PTE; barrier; read refcount
+					 * - inc refcount; barrier; read PTE
+					 */
+					smp_mb();
 
-				ref_count = folio_ref_count(folio);
-				map_count = folio_mapcount(folio);
+					ref_count = folio_ref_count(folio);
+					map_count = folio_mapcount(folio);
 
-				/*
-				 * Order reads for page refcount and dirty flag
-				 * (see comments in __remove_mapping()).
-				 */
-				smp_rmb();
+					/*
+					 * Order reads for page refcount and dirty flag
+					 * (see comments in __remove_mapping()).
+					 */
+					smp_rmb();
 
-				/*
-				 * The only page refs must be one from isolation
-				 * plus the rmap(s) (dropped by discard:).
-				 */
-				if (ref_count == 1 + map_count &&
-				    !folio_test_dirty(folio)) {
-					dec_mm_counter(mm, MM_ANONPAGES);
-					goto discard;
-				}
+					/*
+					 * The only page refs must be one from isolation
+					 * plus the rmap(s) (dropped by discard:).
+					 */
+					if (ref_count == 1 + map_count &&
+					    !folio_test_dirty(folio)) {
+						dec_mm_counter(mm, MM_ANONPAGES);
+						goto discard;
+					}
 
-				/*
-				 * If the folio was redirtied, it cannot be
-				 * discarded. Remap the page to page table.
-				 */
-				set_pte_at(mm, address, pvmw.pte, pteval);
-				folio_set_swapbacked(folio);
-				goto walk_done_err;
-			}
+					/*
+					 * If the folio was redirtied, it cannot be
+					 * discarded. Remap the page to page table.
+					 */
+					set_pte_at(mm, address, pvmw.pte, pteval);
+					folio_set_swapbacked(folio);
+					goto walk_done_err;
+				}
 
-			if (swap_duplicate(entry) < 0) {
-				set_pte_at(mm, address, pvmw.pte, pteval);
-				goto walk_done_err;
-			}
-			if (arch_unmap_one(mm, vma, address, pteval) < 0) {
-				swap_free(entry);
-				set_pte_at(mm, address, pvmw.pte, pteval);
-				goto walk_done_err;
-			}
+				if (swap_duplicate(entry) < 0) {
+					set_pte_at(mm, address, pvmw.pte, pteval);
+					goto walk_done_err;
+				}
+				if (arch_unmap_one(mm, vma, address, pteval) < 0) {
+					swap_free(entry);
+					set_pte_at(mm, address, pvmw.pte, pteval);
+					goto walk_done_err;
+				}
 
-			/* See folio_try_share_anon_rmap(): clear PTE first. */
-			if (anon_exclusive &&
-			    folio_try_share_anon_rmap_pte(folio, subpage)) {
-				swap_free(entry);
-				set_pte_at(mm, address, pvmw.pte, pteval);
-				goto walk_done_err;
-			}
-			if (list_empty(&mm->mmlist)) {
-				spin_lock(&mmlist_lock);
-				if (list_empty(&mm->mmlist))
-					list_add(&mm->mmlist, &init_mm.mmlist);
-				spin_unlock(&mmlist_lock);
+				/* See folio_try_share_anon_rmap(): clear PTE first. */
+				if (anon_exclusive &&
+				    folio_try_share_anon_rmap_pte(folio, subpage)) {
+					swap_free(entry);
+					set_pte_at(mm, address, pvmw.pte, pteval);
+					goto walk_done_err;
+				}
+				if (list_empty(&mm->mmlist)) {
+					spin_lock(&mmlist_lock);
+					if (list_empty(&mm->mmlist))
+						list_add(&mm->mmlist, &init_mm.mmlist);
+					spin_unlock(&mmlist_lock);
+				}
+				dec_mm_counter(mm, MM_ANONPAGES);
+				inc_mm_counter(mm, MM_SWAPENTS);
+				swp_pte = swp_entry_to_pte(entry);
+				if (anon_exclusive)
+					swp_pte = pte_swp_mkexclusive(swp_pte);
+				if (pte_soft_dirty(pteval))
+					swp_pte = pte_swp_mksoft_dirty(swp_pte);
+				if (pte_uffd_wp(pteval))
+					swp_pte = pte_swp_mkuffd_wp(swp_pte);
+				set_pte_at(mm, address, pvmw.pte, swp_pte);
 			}
-			dec_mm_counter(mm, MM_ANONPAGES);
-			inc_mm_counter(mm, MM_SWAPENTS);
-			swp_pte = swp_entry_to_pte(entry);
-			if (anon_exclusive)
-				swp_pte = pte_swp_mkexclusive(swp_pte);
-			if (pte_soft_dirty(pteval))
-				swp_pte = pte_swp_mksoft_dirty(swp_pte);
-			if (pte_uffd_wp(pteval))
-				swp_pte = pte_swp_mkuffd_wp(swp_pte);
-			set_pte_at(mm, address, pvmw.pte, swp_pte);
 		} else {
 			/*
 			 * This is a locked file-backed folio,
diff --git a/mm/vmscan.c b/mm/vmscan.c
index b9170f767353..d54f44b556f0 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -1026,6 +1026,38 @@  static bool may_enter_fs(struct folio *folio, gfp_t gfp_mask)
 	return !data_race(folio_swap_flags(folio) & SWP_FS_OPS);
 }
 
+static bool is_folio_page_zero_filled(struct folio *folio, int i)
+{
+	unsigned long *data;
+	unsigned int pos, last_pos = PAGE_SIZE / sizeof(*data) - 1;
+	bool ret = false;
+
+	data = kmap_local_folio(folio, i * PAGE_SIZE);
+
+	if (data[last_pos])
+		goto out;
+
+	for (pos = 0; pos < last_pos; pos++) {
+		if (data[pos])
+			goto out;
+	}
+	ret = true;
+out:
+	kunmap_local(data);
+	return ret;
+}
+
+static bool is_folio_zero_filled(struct folio *folio)
+{
+	unsigned int i;
+
+	for (i = 0; i < folio_nr_pages(folio); i++) {
+		if (!is_folio_page_zero_filled(folio, i))
+			return false;
+	}
+	return true;
+}
+
 /*
  * shrink_folio_list() returns the number of reclaimed pages
  */
@@ -1053,6 +1085,7 @@  static unsigned int shrink_folio_list(struct list_head *folio_list,
 		enum folio_references references = FOLIOREF_RECLAIM;
 		bool dirty, writeback;
 		unsigned int nr_pages;
+		bool folio_zero_filled = false;
 
 		cond_resched();
 
@@ -1270,6 +1303,7 @@  static unsigned int shrink_folio_list(struct list_head *folio_list,
 			nr_pages = 1;
 		}
 
+		folio_zero_filled = is_folio_zero_filled(folio);
 		/*
 		 * The folio is mapped into the page tables of one or more
 		 * processes. Try to unmap it here.
@@ -1295,6 +1329,9 @@  static unsigned int shrink_folio_list(struct list_head *folio_list,
 			if (folio_test_large(folio) && list_empty(&folio->_deferred_list))
 				flags |= TTU_SYNC;
 
+			if (folio_zero_filled)
+				flags |= TTU_ZERO_FOLIO;
+
 			try_to_unmap(folio, flags);
 			if (folio_mapped(folio)) {
 				stat->nr_unmap_fail += nr_pages;
@@ -1358,32 +1395,36 @@  static unsigned int shrink_folio_list(struct list_head *folio_list,
 			 * starts and then write it out here.
 			 */
 			try_to_unmap_flush_dirty();
-			switch (pageout(folio, mapping, &plug)) {
-			case PAGE_KEEP:
-				goto keep_locked;
-			case PAGE_ACTIVATE:
-				goto activate_locked;
-			case PAGE_SUCCESS:
-				stat->nr_pageout += nr_pages;
+			if (folio_zero_filled) {
+				folio_clear_dirty(folio);
+			} else {
+				switch (pageout(folio, mapping, &plug)) {
+				case PAGE_KEEP:
+					goto keep_locked;
+				case PAGE_ACTIVATE:
+					goto activate_locked;
+				case PAGE_SUCCESS:
+					stat->nr_pageout += nr_pages;
 
-				if (folio_test_writeback(folio))
-					goto keep;
-				if (folio_test_dirty(folio))
-					goto keep;
+					if (folio_test_writeback(folio))
+						goto keep;
+					if (folio_test_dirty(folio))
+						goto keep;
 
-				/*
-				 * A synchronous write - probably a ramdisk.  Go
-				 * ahead and try to reclaim the folio.
-				 */
-				if (!folio_trylock(folio))
-					goto keep;
-				if (folio_test_dirty(folio) ||
-				    folio_test_writeback(folio))
-					goto keep_locked;
-				mapping = folio_mapping(folio);
-				fallthrough;
-			case PAGE_CLEAN:
-				; /* try to free the folio below */
+					/*
+					 * A synchronous write - probably a ramdisk.  Go
+					 * ahead and try to reclaim the folio.
+					 */
+					if (!folio_trylock(folio))
+						goto keep;
+					if (folio_test_dirty(folio) ||
+					    folio_test_writeback(folio))
+						goto keep_locked;
+					mapping = folio_mapping(folio);
+					fallthrough;
+				case PAGE_CLEAN:
+					; /* try to free the folio below */
+				}
 			}
 		}