[v2] mm: use clear_user_(high)page() for arch with special user folio handling

Message ID 20241206174246.2799715-1-ziy@nvidia.com (mailing list archive)
State New
Series [v2] mm: use clear_user_(high)page() for arch with special user folio handling

Commit Message

Zi Yan Dec. 6, 2024, 5:42 p.m. UTC
For architectures that set cpu_dcache_is_aliasing() to true, which require
cache flushing, and arc, which changes folio->flags after clearing a user
folio, __GFP_ZERO using only clear_page() is not enough to zero user
folios; clear_user_(high)page() must be used instead. Otherwise, user data
will be corrupted.

Fix it by always clearing user folios with clear_user_(high)page() when
cpu_dcache_is_aliasing() is true or the architecture is arc. Rename
alloc_zeroed() to alloc_need_zeroing() and invert the logic to clarify its
intent.

Fixes: 5708d96da20b ("mm: avoid zeroing user movable page twice with init_on_alloc=1")
Reported-by: Geert Uytterhoeven <geert+renesas@glider.be>
Closes: https://lore.kernel.org/linux-mm/CAMuHMdV1hRp_NtR5YnJo=HsfgKQeH91J537Gh4gKk3PFZhSkbA@mail.gmail.com/
Tested-by: Geert Uytterhoeven <geert+renesas@glider.be>
Signed-off-by: Zi Yan <ziy@nvidia.com>
---
 include/linux/highmem.h |  8 +++++++-
 include/linux/mm.h      | 17 +++++++++++++++++
 mm/huge_memory.c        |  9 +++++----
 mm/internal.h           |  6 ------
 mm/memory.c             | 10 +++++-----
 5 files changed, 34 insertions(+), 16 deletions(-)

Comments

Mathieu Desnoyers Dec. 7, 2024, 3:31 p.m. UTC | #1
On 2024-12-06 12:42, Zi Yan wrote:
> For architectures setting cpu_dcache_is_aliasing() to true, which require
> flushing cache, and arc, which changes folio->flags after clearing a user
> folio, __GFP_ZERO using only clear_page() is not enough to zero user
> folios and clear_user_(high)page() must be used. Otherwise, user data
> will be corrupted.
> 
> Fix it by always clearing user folios with clear_user_(high)page() when
> cpu_dcache_is_aliasing() is true or architecture is arc. Rename
> alloc_zeroed() to alloc_need_zeroing() and invert the logic to clarify its
> intend.
> 
> Fixes: 5708d96da20b ("mm: avoid zeroing user movable page twice with init_on_alloc=1")
> Reported-by: Geert Uytterhoeven <geert+renesas@glider.be>
> Closes: https://lore.kernel.org/linux-mm/CAMuHMdV1hRp_NtR5YnJo=HsfgKQeH91J537Gh4gKk3PFZhSkbA@mail.gmail.com/
> Tested-by: Geert Uytterhoeven <geert+renesas@glider.be>
> Signed-off-by: Zi Yan <ziy@nvidia.com>
> ---
>   include/linux/highmem.h |  8 +++++++-
>   include/linux/mm.h      | 17 +++++++++++++++++
>   mm/huge_memory.c        |  9 +++++----
>   mm/internal.h           |  6 ------
>   mm/memory.c             | 10 +++++-----
>   5 files changed, 34 insertions(+), 16 deletions(-)
> 
> diff --git a/include/linux/highmem.h b/include/linux/highmem.h
> index 6e452bd8e7e3..d9beb8371daa 100644
> --- a/include/linux/highmem.h
> +++ b/include/linux/highmem.h
> @@ -224,7 +224,13 @@ static inline
>   struct folio *vma_alloc_zeroed_movable_folio(struct vm_area_struct *vma,
>   				   unsigned long vaddr)
>   {
> -	return vma_alloc_folio(GFP_HIGHUSER_MOVABLE | __GFP_ZERO, 0, vma, vaddr);
> +	struct folio *folio;
> +
> +	folio = vma_alloc_folio(GFP_HIGHUSER_MOVABLE, 0, vma, vaddr);
> +	if (folio && alloc_need_zeroing())
> +		clear_user_highpage(&folio->page, vaddr);
> +
> +	return folio;
>   }
>   #endif
>   
> diff --git a/include/linux/mm.h b/include/linux/mm.h
> index c39c4945946c..ca8df5871213 100644
> --- a/include/linux/mm.h
> +++ b/include/linux/mm.h
> @@ -31,6 +31,7 @@
>   #include <linux/kasan.h>
>   #include <linux/memremap.h>
>   #include <linux/slab.h>
> +#include <linux/cacheinfo.h>
>   
>   struct mempolicy;
>   struct anon_vma;
> @@ -4175,6 +4176,22 @@ static inline int do_mseal(unsigned long start, size_t len_in, unsigned long fla
>   }
>   #endif
>   
> +/*
> + * alloc_need_zeroing checks if a user folio from page allocator needs to be
> + * zeroed or not.
> + */
> +static inline bool alloc_need_zeroing(void)
> +{
> +	/*
> +	 * for user folios, arch with cache aliasing requires cache flush and
> +	 * arc changes folio->flags, so always return false to make caller use
> +	 * clear_user_page()/clear_user_highpage()
> +	 */
> +	return (cpu_dcache_is_aliasing() || IS_ENABLED(CONFIG_ARC)) ||

Nack.

Can we please not go back to re-introducing arch-specific
conditionals in generic mm code after the cleanup I did when
introducing cpu_dcache_is_aliasing() in commit 8690bbcf3b70?

Based on commit eacd0e950dc2, AFAIU what you appear to need here
is to introduce a:

cpu_icache_is_aliasing() -> note the "i" for instruction cache

It would typically be directly set to

#define cpu_icache_is_aliasing() cpu_dcache_is_aliasing()

except on architectures like ARC where the icache vs dcache
is aliasing, but not dcache vs dcache.

So for ARC it would be defined as:

#define cpu_dcache_is_aliasing() false
#define cpu_icache_is_aliasing() true

And the Kconfig ARCH_HAS_CPU_CACHE_ALIASING=y would be set for ARC
again.

I'm not entirely sure if we want to go for the wording "is_aliasing"
or "is_incoherent" when talking about icache vs dcache, so I'm open
to ideas here.

Thanks,

Mathieu

> +	       !static_branch_maybe(CONFIG_INIT_ON_ALLOC_DEFAULT_ON,
> +				   &init_on_alloc);
> +}
> +
>   int arch_get_shadow_stack_status(struct task_struct *t, unsigned long __user *status);
>   int arch_set_shadow_stack_status(struct task_struct *t, unsigned long status);
>   int arch_lock_shadow_stack_status(struct task_struct *t, unsigned long status);
> diff --git a/mm/huge_memory.c b/mm/huge_memory.c
> index ee335d96fc39..107130a5413a 100644
> --- a/mm/huge_memory.c
> +++ b/mm/huge_memory.c
> @@ -1176,11 +1176,12 @@ static struct folio *vma_alloc_anon_folio_pmd(struct vm_area_struct *vma,
>   	folio_throttle_swaprate(folio, gfp);
>   
>          /*
> -	* When a folio is not zeroed during allocation (__GFP_ZERO not used),
> -	* folio_zero_user() is used to make sure that the page corresponding
> -	* to the faulting address will be hot in the cache after zeroing.
> +	* When a folio is not zeroed during allocation (__GFP_ZERO not used)
> +	* or user folios require special handling, folio_zero_user() is used to
> +	* make sure that the page corresponding to the faulting address will be
> +	* hot in the cache after zeroing.
>   	*/
> -	if (!alloc_zeroed())
> +	if (alloc_need_zeroing())
>   		folio_zero_user(folio, addr);
>   	/*
>   	 * The memory barrier inside __folio_mark_uptodate makes sure that
> diff --git a/mm/internal.h b/mm/internal.h
> index cb8d8e8e3ffa..3bd08bafad04 100644
> --- a/mm/internal.h
> +++ b/mm/internal.h
> @@ -1285,12 +1285,6 @@ void touch_pud(struct vm_area_struct *vma, unsigned long addr,
>   void touch_pmd(struct vm_area_struct *vma, unsigned long addr,
>   	       pmd_t *pmd, bool write);
>   
> -static inline bool alloc_zeroed(void)
> -{
> -	return static_branch_maybe(CONFIG_INIT_ON_ALLOC_DEFAULT_ON,
> -			&init_on_alloc);
> -}
> -
>   /*
>    * Parses a string with mem suffixes into its order. Useful to parse kernel
>    * parameters.
> diff --git a/mm/memory.c b/mm/memory.c
> index 75c2dfd04f72..cf1611791856 100644
> --- a/mm/memory.c
> +++ b/mm/memory.c
> @@ -4733,12 +4733,12 @@ static struct folio *alloc_anon_folio(struct vm_fault *vmf)
>   			folio_throttle_swaprate(folio, gfp);
>   			/*
>   			 * When a folio is not zeroed during allocation
> -			 * (__GFP_ZERO not used), folio_zero_user() is used
> -			 * to make sure that the page corresponding to the
> -			 * faulting address will be hot in the cache after
> -			 * zeroing.
> +			 * (__GFP_ZERO not used) or user folios require special
> +			 * handling, folio_zero_user() is used to make sure
> +			 * that the page corresponding to the faulting address
> +			 * will be hot in the cache after zeroing.
>   			 */
> -			if (!alloc_zeroed())
> +			if (alloc_need_zeroing())
>   				folio_zero_user(folio, vmf->address);
>   			return folio;
>   		}
Zi Yan Dec. 7, 2024, 4:20 p.m. UTC | #2
On 7 Dec 2024, at 10:31, Mathieu Desnoyers wrote:

> On 2024-12-06 12:42, Zi Yan wrote:
>> For architectures setting cpu_dcache_is_aliasing() to true, which require
>> flushing cache, and arc, which changes folio->flags after clearing a user
>> folio, __GFP_ZERO using only clear_page() is not enough to zero user
>> folios and clear_user_(high)page() must be used. Otherwise, user data
>> will be corrupted.
>>
>> Fix it by always clearing user folios with clear_user_(high)page() when
>> cpu_dcache_is_aliasing() is true or architecture is arc. Rename
>> alloc_zeroed() to alloc_need_zeroing() and invert the logic to clarify its
>> intend.
>>
>> Fixes: 5708d96da20b ("mm: avoid zeroing user movable page twice with init_on_alloc=1")
>> Reported-by: Geert Uytterhoeven <geert+renesas@glider.be>
>> Closes: https://lore.kernel.org/linux-mm/CAMuHMdV1hRp_NtR5YnJo=HsfgKQeH91J537Gh4gKk3PFZhSkbA@mail.gmail.com/
>> Tested-by: Geert Uytterhoeven <geert+renesas@glider.be>
>> Signed-off-by: Zi Yan <ziy@nvidia.com>
>> ---
>>   include/linux/highmem.h |  8 +++++++-
>>   include/linux/mm.h      | 17 +++++++++++++++++
>>   mm/huge_memory.c        |  9 +++++----
>>   mm/internal.h           |  6 ------
>>   mm/memory.c             | 10 +++++-----
>>   5 files changed, 34 insertions(+), 16 deletions(-)
>>
>> diff --git a/include/linux/highmem.h b/include/linux/highmem.h
>> index 6e452bd8e7e3..d9beb8371daa 100644
>> --- a/include/linux/highmem.h
>> +++ b/include/linux/highmem.h
>> @@ -224,7 +224,13 @@ static inline
>>   struct folio *vma_alloc_zeroed_movable_folio(struct vm_area_struct *vma,
>>   				   unsigned long vaddr)
>>   {
>> -	return vma_alloc_folio(GFP_HIGHUSER_MOVABLE | __GFP_ZERO, 0, vma, vaddr);
>> +	struct folio *folio;
>> +
>> +	folio = vma_alloc_folio(GFP_HIGHUSER_MOVABLE, 0, vma, vaddr);
>> +	if (folio && alloc_need_zeroing())
>> +		clear_user_highpage(&folio->page, vaddr);
>> +
>> +	return folio;
>>   }
>>   #endif
>>  diff --git a/include/linux/mm.h b/include/linux/mm.h
>> index c39c4945946c..ca8df5871213 100644
>> --- a/include/linux/mm.h
>> +++ b/include/linux/mm.h
>> @@ -31,6 +31,7 @@
>>   #include <linux/kasan.h>
>>   #include <linux/memremap.h>
>>   #include <linux/slab.h>
>> +#include <linux/cacheinfo.h>
>>    struct mempolicy;
>>   struct anon_vma;
>> @@ -4175,6 +4176,22 @@ static inline int do_mseal(unsigned long start, size_t len_in, unsigned long fla
>>   }
>>   #endif
>>  +/*
>> + * alloc_need_zeroing checks if a user folio from page allocator needs to be
>> + * zeroed or not.
>> + */
>> +static inline bool alloc_need_zeroing(void)
>> +{
>> +	/*
>> +	 * for user folios, arch with cache aliasing requires cache flush and
>> +	 * arc changes folio->flags, so always return false to make caller use
>> +	 * clear_user_page()/clear_user_highpage()
>> +	 */
>> +	return (cpu_dcache_is_aliasing() || IS_ENABLED(CONFIG_ARC)) ||
>
> Nack.
>
> Can we please not go back to re-introducing arch-specific
> conditionals in generic mm code after the cleanup I did when
> introducing cpu_dcache_is_aliasing() in commit 8690bbcf3b70 ?

OK

>
> Based on commit eacd0e950dc2, AFAIU what you appear to need here
> is to introduce a:
>
> cpu_icache_is_aliasing() -> note the "i" for instruction cache
>
> It would typically be directly set to
>
> #define cpu_icache_is_aliasing() cpu_dcache_is_aliasing()
>
> except on architecture like ARC when the icache vs dcache
> is aliasing, but not dcache vs dcache.
>
> So for ARC it would be defined as:
>
> #define cpu_dcache_is_aliasing() false
> #define cpu_icache_is_aliasing() true
>
> And the Kconfig ARCH_HAS_CPU_CACHE_ALIASING=y would be set for ARC
> again.
>

Sounds good to me.

> I'm not entirely sure if we want to go for the wording "is_aliasing"
> or "is_incoherent" when talking about icache vs dcache, so I'm open
> to ideas here.
>

Let me know if the code below looks good to you. I will use
(cpu_icache_is_aliasing() || cpu_dcache_is_aliasing())
instead of
(cpu_dcache_is_aliasing() || IS_ENABLED(CONFIG_ARC))
in my next version.

diff --git a/arch/arc/Kconfig b/arch/arc/Kconfig
index 5b2488142041..e96935373796 100644
--- a/arch/arc/Kconfig
+++ b/arch/arc/Kconfig
@@ -6,6 +6,7 @@
 config ARC
 	def_bool y
 	select ARC_TIMERS
+	select ARCH_HAS_CPU_CACHE_ALIASING
 	select ARCH_HAS_CACHE_LINE_SIZE
 	select ARCH_HAS_DEBUG_VM_PGTABLE
 	select ARCH_HAS_DMA_PREP_COHERENT
diff --git a/arch/arc/include/asm/cachetype.h b/arch/arc/include/asm/cachetype.h
new file mode 100644
index 000000000000..acd3b6cb4bf5
--- /dev/null
+++ b/arch/arc/include/asm/cachetype.h
@@ -0,0 +1,8 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+#ifndef __ASM_ARC_CACHETYPE_H
+#define __ASM_ARC_CACHETYPE_H
+
+#define cpu_dcache_is_aliasing()	false
+#define cpu_icache_is_aliasing()	true
+
+#endif
diff --git a/arch/arm/include/asm/cachetype.h b/arch/arm/include/asm/cachetype.h
index b9dbe1d4c8fe..80725010c38f 100644
--- a/arch/arm/include/asm/cachetype.h
+++ b/arch/arm/include/asm/cachetype.h
@@ -21,6 +21,7 @@ extern unsigned int cacheid;
 #define icache_is_pipt()		cacheid_is(CACHEID_PIPT)

 #define cpu_dcache_is_aliasing()	(cache_is_vivt() || cache_is_vipt_aliasing())
+#define cpu_icache_is_aliasing() cpu_dcache_is_aliasing()

 /*
  * __LINUX_ARM_ARCH__ is the minimum supported CPU architecture
diff --git a/arch/csky/include/asm/cachetype.h b/arch/csky/include/asm/cachetype.h
index 98cbe3af662f..838c7e708d19 100644
--- a/arch/csky/include/asm/cachetype.h
+++ b/arch/csky/include/asm/cachetype.h
@@ -5,5 +5,6 @@
 #include <linux/types.h>

 #define cpu_dcache_is_aliasing()	true
+#define cpu_icache_is_aliasing() cpu_dcache_is_aliasing()

 #endif
diff --git a/arch/m68k/include/asm/cachetype.h b/arch/m68k/include/asm/cachetype.h
index 7fad5d9ab8fe..4993846354f4 100644
--- a/arch/m68k/include/asm/cachetype.h
+++ b/arch/m68k/include/asm/cachetype.h
@@ -5,5 +5,6 @@
 #include <linux/types.h>

 #define cpu_dcache_is_aliasing()	true
+#define cpu_icache_is_aliasing() cpu_dcache_is_aliasing()

 #endif
diff --git a/arch/mips/include/asm/cachetype.h b/arch/mips/include/asm/cachetype.h
index 9f4ba2fe1155..aac25ee27cd1 100644
--- a/arch/mips/include/asm/cachetype.h
+++ b/arch/mips/include/asm/cachetype.h
@@ -5,5 +5,6 @@
 #include <asm/cpu-features.h>

 #define cpu_dcache_is_aliasing()	cpu_has_dc_aliases
+#define cpu_icache_is_aliasing() cpu_dcache_is_aliasing()

 #endif
diff --git a/arch/nios2/include/asm/cachetype.h b/arch/nios2/include/asm/cachetype.h
index eb9c416b8a1c..4b492fbb4a39 100644
--- a/arch/nios2/include/asm/cachetype.h
+++ b/arch/nios2/include/asm/cachetype.h
@@ -6,5 +6,6 @@
 #include <asm/cache.h>

 #define cpu_dcache_is_aliasing()	(NIOS2_DCACHE_SIZE > PAGE_SIZE)
+#define cpu_icache_is_aliasing() cpu_dcache_is_aliasing()

 #endif
diff --git a/arch/parisc/include/asm/cachetype.h b/arch/parisc/include/asm/cachetype.h
index e0868a1d3c47..53180ff9d09a 100644
--- a/arch/parisc/include/asm/cachetype.h
+++ b/arch/parisc/include/asm/cachetype.h
@@ -5,5 +5,6 @@
 #include <linux/types.h>

 #define cpu_dcache_is_aliasing()	true
+#define cpu_icache_is_aliasing() cpu_dcache_is_aliasing()

 #endif
diff --git a/arch/sh/include/asm/cachetype.h b/arch/sh/include/asm/cachetype.h
index a5fffe536068..f711631b9451 100644
--- a/arch/sh/include/asm/cachetype.h
+++ b/arch/sh/include/asm/cachetype.h
@@ -5,5 +5,6 @@
 #include <linux/types.h>

 #define cpu_dcache_is_aliasing()	true
+#define cpu_icache_is_aliasing() cpu_dcache_is_aliasing()

 #endif
diff --git a/arch/xtensa/include/asm/cachetype.h b/arch/xtensa/include/asm/cachetype.h
index 51bd49e2a1c5..89560b325006 100644
--- a/arch/xtensa/include/asm/cachetype.h
+++ b/arch/xtensa/include/asm/cachetype.h
@@ -6,5 +6,6 @@
 #include <asm/page.h>

 #define cpu_dcache_is_aliasing()	(DCACHE_WAY_SIZE > PAGE_SIZE)
+#define cpu_icache_is_aliasing() cpu_dcache_is_aliasing()

 #endif
diff --git a/include/linux/cacheinfo.h b/include/linux/cacheinfo.h
index 108060612bb8..a2dd3d00e080 100644
--- a/include/linux/cacheinfo.h
+++ b/include/linux/cacheinfo.h
@@ -155,6 +155,7 @@ static inline int get_cpu_cacheinfo_id(int cpu, int level)

 #ifndef CONFIG_ARCH_HAS_CPU_CACHE_ALIASING
 #define cpu_dcache_is_aliasing()	false
+#define cpu_icache_is_aliasing() cpu_dcache_is_aliasing()
 #else
 #include <asm/cachetype.h>
 #endif
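
With that in place, the mm-side helper from this patch would then read
roughly as below (an untested sketch, assuming alloc_need_zeroing()
otherwise keeps the shape it has in this patch):

static inline bool alloc_need_zeroing(void)
{
	/*
	 * User folios on architectures with an aliasing dcache, or with an
	 * icache that is not coherent with the dcache (e.g. arc), must be
	 * cleared with clear_user_page()/clear_user_highpage(), so tell the
	 * caller to zero the folio itself instead of relying on __GFP_ZERO.
	 */
	return cpu_dcache_is_aliasing() || cpu_icache_is_aliasing() ||
	       !static_branch_maybe(CONFIG_INIT_ON_ALLOC_DEFAULT_ON,
				    &init_on_alloc);
}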



Best Regards,
Yan, Zi
Zi Yan Dec. 7, 2024, 4:23 p.m. UTC | #3
On 7 Dec 2024, at 11:20, Zi Yan wrote:

> On 7 Dec 2024, at 10:31, Mathieu Desnoyers wrote:
>
>> On 2024-12-06 12:42, Zi Yan wrote:
>>> For architectures setting cpu_dcache_is_aliasing() to true, which require
>>> flushing cache, and arc, which changes folio->flags after clearing a user
>>> folio, __GFP_ZERO using only clear_page() is not enough to zero user
>>> folios and clear_user_(high)page() must be used. Otherwise, user data
>>> will be corrupted.
>>>
>>> Fix it by always clearing user folios with clear_user_(high)page() when
>>> cpu_dcache_is_aliasing() is true or architecture is arc. Rename
>>> alloc_zeroed() to alloc_need_zeroing() and invert the logic to clarify its
>>> intend.
>>>
>>> Fixes: 5708d96da20b ("mm: avoid zeroing user movable page twice with init_on_alloc=1")
>>> Reported-by: Geert Uytterhoeven <geert+renesas@glider.be>
>>> Closes: https://lore.kernel.org/linux-mm/CAMuHMdV1hRp_NtR5YnJo=HsfgKQeH91J537Gh4gKk3PFZhSkbA@mail.gmail.com/
>>> Tested-by: Geert Uytterhoeven <geert+renesas@glider.be>
>>> Signed-off-by: Zi Yan <ziy@nvidia.com>
>>> ---
>>>   include/linux/highmem.h |  8 +++++++-
>>>   include/linux/mm.h      | 17 +++++++++++++++++
>>>   mm/huge_memory.c        |  9 +++++----
>>>   mm/internal.h           |  6 ------
>>>   mm/memory.c             | 10 +++++-----
>>>   5 files changed, 34 insertions(+), 16 deletions(-)
>>>
>>> diff --git a/include/linux/highmem.h b/include/linux/highmem.h
>>> index 6e452bd8e7e3..d9beb8371daa 100644
>>> --- a/include/linux/highmem.h
>>> +++ b/include/linux/highmem.h
>>> @@ -224,7 +224,13 @@ static inline
>>>   struct folio *vma_alloc_zeroed_movable_folio(struct vm_area_struct *vma,
>>>   				   unsigned long vaddr)
>>>   {
>>> -	return vma_alloc_folio(GFP_HIGHUSER_MOVABLE | __GFP_ZERO, 0, vma, vaddr);
>>> +	struct folio *folio;
>>> +
>>> +	folio = vma_alloc_folio(GFP_HIGHUSER_MOVABLE, 0, vma, vaddr);
>>> +	if (folio && alloc_need_zeroing())
>>> +		clear_user_highpage(&folio->page, vaddr);
>>> +
>>> +	return folio;
>>>   }
>>>   #endif
>>>  diff --git a/include/linux/mm.h b/include/linux/mm.h
>>> index c39c4945946c..ca8df5871213 100644
>>> --- a/include/linux/mm.h
>>> +++ b/include/linux/mm.h
>>> @@ -31,6 +31,7 @@
>>>   #include <linux/kasan.h>
>>>   #include <linux/memremap.h>
>>>   #include <linux/slab.h>
>>> +#include <linux/cacheinfo.h>
>>>    struct mempolicy;
>>>   struct anon_vma;
>>> @@ -4175,6 +4176,22 @@ static inline int do_mseal(unsigned long start, size_t len_in, unsigned long fla
>>>   }
>>>   #endif
>>>  +/*
>>> + * alloc_need_zeroing checks if a user folio from page allocator needs to be
>>> + * zeroed or not.
>>> + */
>>> +static inline bool alloc_need_zeroing(void)
>>> +{
>>> +	/*
>>> +	 * for user folios, arch with cache aliasing requires cache flush and
>>> +	 * arc changes folio->flags, so always return false to make caller use
>>> +	 * clear_user_page()/clear_user_highpage()
>>> +	 */
>>> +	return (cpu_dcache_is_aliasing() || IS_ENABLED(CONFIG_ARC)) ||
>>
>> Nack.
>>
>> Can we please not go back to re-introducing arch-specific
>> conditionals in generic mm code after the cleanup I did when
>> introducing cpu_dcache_is_aliasing() in commit 8690bbcf3b70 ?
>
> OK
>
>>
>> Based on commit eacd0e950dc2, AFAIU what you appear to need here
>> is to introduce a:
>>
>> cpu_icache_is_aliasing() -> note the "i" for instruction cache
>>
>> It would typically be directly set to
>>
>> #define cpu_icache_is_aliasing() cpu_dcache_is_aliasing()
>>
>> except on architecture like ARC when the icache vs dcache
>> is aliasing, but not dcache vs dcache.
>>
>> So for ARC it would be defined as:
>>
>> #define cpu_dcache_is_aliasing() false
>> #define cpu_icache_is_aliasing() true
>>
>> And the Kconfig ARCH_HAS_CPU_CACHE_ALIASING=y would be set for ARC
>> again.
>>
>
> Sounds good to me.
>
>> I'm not entirely sure if we want to go for the wording "is_aliasing"
>> or "is_incoherent" when talking about icache vs dcache, so I'm open
>> to ideas here.
>>
>
> Let me know if the code below looks good to you. I will use
> (cpu_icache_is_aliasing() || cpu_dcache_is_aliasing())
> instead of
> (cpu_dcache_is_aliasing() || IS_ENABLED(CONFIG_ARC))
> in my next version.

Or this one, which sets cpu_icache_is_aliasing()’s default value
in the generic header:

diff --git a/arch/arc/Kconfig b/arch/arc/Kconfig
index 5b2488142041..e96935373796 100644
--- a/arch/arc/Kconfig
+++ b/arch/arc/Kconfig
@@ -6,6 +6,7 @@
 config ARC
 	def_bool y
 	select ARC_TIMERS
+	select ARCH_HAS_CPU_CACHE_ALIASING
 	select ARCH_HAS_CACHE_LINE_SIZE
 	select ARCH_HAS_DEBUG_VM_PGTABLE
 	select ARCH_HAS_DMA_PREP_COHERENT
diff --git a/arch/arc/include/asm/cachetype.h b/arch/arc/include/asm/cachetype.h
new file mode 100644
index 000000000000..acd3b6cb4bf5
--- /dev/null
+++ b/arch/arc/include/asm/cachetype.h
@@ -0,0 +1,8 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+#ifndef __ASM_ARC_CACHETYPE_H
+#define __ASM_ARC_CACHETYPE_H
+
+#define cpu_dcache_is_aliasing()	false
+#define cpu_icache_is_aliasing()	true
+
+#endif
diff --git a/include/linux/cacheinfo.h b/include/linux/cacheinfo.h
index 108060612bb8..61a46cdff1dc 100644
--- a/include/linux/cacheinfo.h
+++ b/include/linux/cacheinfo.h
@@ -155,8 +155,14 @@ static inline int get_cpu_cacheinfo_id(int cpu, int level)

 #ifndef CONFIG_ARCH_HAS_CPU_CACHE_ALIASING
 #define cpu_dcache_is_aliasing()	false
+#define cpu_icache_is_aliasing() cpu_dcache_is_aliasing()
 #else
 #include <asm/cachetype.h>
+
+#ifndef cpu_icache_is_aliasing
+#define cpu_icache_is_aliasing() cpu_dcache_is_aliasing()
+#endif
+
 #endif

 #endif /* _LINUX_CACHEINFO_H */


Best Regards,
Yan, Zi
Mathieu Desnoyers Dec. 7, 2024, 4:38 p.m. UTC | #4
On 2024-12-07 11:20, Zi Yan wrote:
> On 7 Dec 2024, at 10:31, Mathieu Desnoyers wrote:
[...]
>> I'm not entirely sure if we want to go for the wording "is_aliasing"
>> or "is_incoherent" when talking about icache vs dcache, so I'm open
>> to ideas here.
>>
> 
> Let me know if the code below looks good to you. I will use
> (cpu_icache_is_aliasing() || cpu_dcache_is_aliasing())
> instead of
> (cpu_dcache_is_aliasing() || IS_ENABLED(CONFIG_ARC))
> in my next version.

Yes, this is in line with what I have in mind.

And I prefer your second version with the generic
cpu_icache_is_aliasing rather than defining it for
each architecture.

Thanks,

Mathieu
Mathieu Desnoyers Dec. 7, 2024, 4:41 p.m. UTC | #5
On 2024-12-07 11:23, Zi Yan wrote:
> On 7 Dec 2024, at 11:20, Zi Yan wrote:
[...]
> 
> Or this one, which set cpu_icache_is_aliasing()’s default value
> in the generic header:

Yes, this looks good.

Just be careful to keep the same tabs vs spaces style
as the nearby code.

Thanks!

Mathieu

> 
> diff --git a/arch/arc/Kconfig b/arch/arc/Kconfig
> index 5b2488142041..e96935373796 100644
> --- a/arch/arc/Kconfig
> +++ b/arch/arc/Kconfig
> @@ -6,6 +6,7 @@
>   config ARC
>   	def_bool y
>   	select ARC_TIMERS
> +	select ARCH_HAS_CPU_CACHE_ALIASING
>   	select ARCH_HAS_CACHE_LINE_SIZE
>   	select ARCH_HAS_DEBUG_VM_PGTABLE
>   	select ARCH_HAS_DMA_PREP_COHERENT
> diff --git a/arch/arc/include/asm/cachetype.h b/arch/arc/include/asm/cachetype.h
> new file mode 100644
> index 000000000000..acd3b6cb4bf5
> --- /dev/null
> +++ b/arch/arc/include/asm/cachetype.h
> @@ -0,0 +1,8 @@
> +/* SPDX-License-Identifier: GPL-2.0 */
> +#ifndef __ASM_ARC_CACHETYPE_H
> +#define __ASM_ARC_CACHETYPE_H
> +
> +#define cpu_dcache_is_aliasing()	false
> +#define cpu_icache_is_aliasing()	true
> +
> +#endif
> diff --git a/include/linux/cacheinfo.h b/include/linux/cacheinfo.h
> index 108060612bb8..61a46cdff1dc 100644
> --- a/include/linux/cacheinfo.h
> +++ b/include/linux/cacheinfo.h
> @@ -155,8 +155,14 @@ static inline int get_cpu_cacheinfo_id(int cpu, int level)
> 
>   #ifndef CONFIG_ARCH_HAS_CPU_CACHE_ALIASING
>   #define cpu_dcache_is_aliasing()	false
> +#define cpu_icache_is_aliasing() cpu_dcache_is_aliasing()
>   #else
>   #include <asm/cachetype.h>
> +
> +#ifndef cpu_icache_is_aliasing
> +#define cpu_icache_is_aliasing() cpu_dcache_is_aliasing()
> +#endif
> +
>   #endif
> 
>   #endif /* _LINUX_CACHEINFO_H */
> 
> 
> Best Regards,
> Yan, Zi

Patch

diff --git a/include/linux/highmem.h b/include/linux/highmem.h
index 6e452bd8e7e3..d9beb8371daa 100644
--- a/include/linux/highmem.h
+++ b/include/linux/highmem.h
@@ -224,7 +224,13 @@  static inline
 struct folio *vma_alloc_zeroed_movable_folio(struct vm_area_struct *vma,
 				   unsigned long vaddr)
 {
-	return vma_alloc_folio(GFP_HIGHUSER_MOVABLE | __GFP_ZERO, 0, vma, vaddr);
+	struct folio *folio;
+
+	folio = vma_alloc_folio(GFP_HIGHUSER_MOVABLE, 0, vma, vaddr);
+	if (folio && alloc_need_zeroing())
+		clear_user_highpage(&folio->page, vaddr);
+
+	return folio;
 }
 #endif
 
diff --git a/include/linux/mm.h b/include/linux/mm.h
index c39c4945946c..ca8df5871213 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -31,6 +31,7 @@ 
 #include <linux/kasan.h>
 #include <linux/memremap.h>
 #include <linux/slab.h>
+#include <linux/cacheinfo.h>
 
 struct mempolicy;
 struct anon_vma;
@@ -4175,6 +4176,22 @@  static inline int do_mseal(unsigned long start, size_t len_in, unsigned long fla
 }
 #endif
 
+/*
+ * alloc_need_zeroing checks if a user folio from page allocator needs to be
+ * zeroed or not.
+ */
+static inline bool alloc_need_zeroing(void)
+{
+	/*
+	 * for user folios, arch with cache aliasing requires cache flush and
+	 * arc changes folio->flags, so always return false to make caller use
+	 * clear_user_page()/clear_user_highpage()
+	 */
+	return (cpu_dcache_is_aliasing() || IS_ENABLED(CONFIG_ARC)) ||
+	       !static_branch_maybe(CONFIG_INIT_ON_ALLOC_DEFAULT_ON,
+				   &init_on_alloc);
+}
+
 int arch_get_shadow_stack_status(struct task_struct *t, unsigned long __user *status);
 int arch_set_shadow_stack_status(struct task_struct *t, unsigned long status);
 int arch_lock_shadow_stack_status(struct task_struct *t, unsigned long status);
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index ee335d96fc39..107130a5413a 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -1176,11 +1176,12 @@  static struct folio *vma_alloc_anon_folio_pmd(struct vm_area_struct *vma,
 	folio_throttle_swaprate(folio, gfp);
 
        /*
-	* When a folio is not zeroed during allocation (__GFP_ZERO not used),
-	* folio_zero_user() is used to make sure that the page corresponding
-	* to the faulting address will be hot in the cache after zeroing.
+	* When a folio is not zeroed during allocation (__GFP_ZERO not used)
+	* or user folios require special handling, folio_zero_user() is used to
+	* make sure that the page corresponding to the faulting address will be
+	* hot in the cache after zeroing.
 	*/
-	if (!alloc_zeroed())
+	if (alloc_need_zeroing())
 		folio_zero_user(folio, addr);
 	/*
 	 * The memory barrier inside __folio_mark_uptodate makes sure that
diff --git a/mm/internal.h b/mm/internal.h
index cb8d8e8e3ffa..3bd08bafad04 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -1285,12 +1285,6 @@  void touch_pud(struct vm_area_struct *vma, unsigned long addr,
 void touch_pmd(struct vm_area_struct *vma, unsigned long addr,
 	       pmd_t *pmd, bool write);
 
-static inline bool alloc_zeroed(void)
-{
-	return static_branch_maybe(CONFIG_INIT_ON_ALLOC_DEFAULT_ON,
-			&init_on_alloc);
-}
-
 /*
  * Parses a string with mem suffixes into its order. Useful to parse kernel
  * parameters.
diff --git a/mm/memory.c b/mm/memory.c
index 75c2dfd04f72..cf1611791856 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -4733,12 +4733,12 @@  static struct folio *alloc_anon_folio(struct vm_fault *vmf)
 			folio_throttle_swaprate(folio, gfp);
 			/*
 			 * When a folio is not zeroed during allocation
-			 * (__GFP_ZERO not used), folio_zero_user() is used
-			 * to make sure that the page corresponding to the
-			 * faulting address will be hot in the cache after
-			 * zeroing.
+			 * (__GFP_ZERO not used) or user folios require special
+			 * handling, folio_zero_user() is used to make sure
+			 * that the page corresponding to the faulting address
+			 * will be hot in the cache after zeroing.
 			 */
-			if (!alloc_zeroed())
+			if (alloc_need_zeroing())
 				folio_zero_user(folio, vmf->address);
 			return folio;
 		}