[RFC,13/16] arm64: mm: Reset pkey in __tlb_remove_table()

Message ID 20241206101110.1646108-14-kevin.brodsky@arm.com (mailing list archive)
State New
Series pkeys-based page table hardening

Commit Message

Kevin Brodsky Dec. 6, 2024, 10:11 a.m. UTC
Page table pages are typically freed via tlb_remove_table() and
friends. Ensure that the linear mapping for those pages is reset to
the default pkey when CONFIG_KPKEYS_HARDENED_PGTABLES is enabled.

This patch is a no-op if CONFIG_KPKEYS_HARDENED_PGTABLES is disabled
(default).
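
For clarity, the no-op case relies on <linux/kpkeys.h> providing a stub
along these lines when CONFIG_KPKEYS_HARDENED_PGTABLES=n (a hypothetical
sketch only; the actual helper, its return type and the meaning of its
second argument are defined earlier in this series):

static inline int kpkeys_unprotect_pgtable_memory(unsigned long addr, int order)
{
	/* Nothing to do: the linear map already uses the default pkey */
	return 0;
}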

Signed-off-by: Kevin Brodsky <kevin.brodsky@arm.com>
---
 arch/arm64/include/asm/tlb.h | 6 +++++-
 1 file changed, 5 insertions(+), 1 deletion(-)

Comments

Peter Zijlstra Dec. 9, 2024, 10:29 a.m. UTC | #1
On Fri, Dec 06, 2024 at 10:11:07AM +0000, Kevin Brodsky wrote:
> Page table pages are typically freed via tlb_remove_table() and
> friends. Ensure that the linear mapping for those pages is reset to
> the default pkey when CONFIG_KPKEYS_HARDENED_PGTABLES is enabled.
> 
> This patch is a no-op if CONFIG_KPKEYS_HARDENED_PGTABLES is disabled
> (default).
> 
> Signed-off-by: Kevin Brodsky <kevin.brodsky@arm.com>
> ---
>  arch/arm64/include/asm/tlb.h | 6 +++++-
>  1 file changed, 5 insertions(+), 1 deletion(-)
> 
> diff --git a/arch/arm64/include/asm/tlb.h b/arch/arm64/include/asm/tlb.h
> index a947c6e784ed..d1611ffa6d91 100644
> --- a/arch/arm64/include/asm/tlb.h
> +++ b/arch/arm64/include/asm/tlb.h
> @@ -10,10 +10,14 @@
>  
>  #include <linux/pagemap.h>
>  #include <linux/swap.h>
> +#include <linux/kpkeys.h>
>  
>  static inline void __tlb_remove_table(void *_table)
>  {
> -	free_page_and_swap_cache((struct page *)_table);
> +	struct page *page = (struct page *)_table;
> +
> +	kpkeys_unprotect_pgtable_memory((unsigned long)page_address(page), 1);
> +	free_page_and_swap_cache(page);
>  }

Same as for the others, perhaps stick this in generic code instead of in
the arch code?
Kevin Brodsky Dec. 10, 2024, 9:28 a.m. UTC | #2
On 09/12/2024 11:29, Peter Zijlstra wrote:
> On Fri, Dec 06, 2024 at 10:11:07AM +0000, Kevin Brodsky wrote:
>> [...]
>>
>> diff --git a/arch/arm64/include/asm/tlb.h b/arch/arm64/include/asm/tlb.h
>> index a947c6e784ed..d1611ffa6d91 100644
>> --- a/arch/arm64/include/asm/tlb.h
>> +++ b/arch/arm64/include/asm/tlb.h
>> @@ -10,10 +10,14 @@
>>  
>>  #include <linux/pagemap.h>
>>  #include <linux/swap.h>
>> +#include <linux/kpkeys.h>
>>  
>>  static inline void __tlb_remove_table(void *_table)
>>  {
>> -	free_page_and_swap_cache((struct page *)_table);
>> +	struct page *page = (struct page *)_table;
>> +
>> +	kpkeys_unprotect_pgtable_memory((unsigned long)page_address(page), 1);
>> +	free_page_and_swap_cache(page);
>>  }
> Same as for the others, perhaps stick this in generic code instead of in
> the arch code?

This should be doable, with some refactoring. __tlb_remove_table() is
currently called from two functions in mm/mmu_gather.c; I suppose I
could create a wrapper there that calls
kpkeys_unprotect_pgtable_memory() and then __tlb_remove_table(). As in
the p4d case, however, I do wonder how robust this is, as
__tlb_remove_table() could end up being called from other places.
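
Something along these lines, as a rough and untested sketch (the wrapper
name is made up; the struct page cast mirrors the arm64
__tlb_remove_table() above; other architectures may interpret the table
pointer differently):

static void tlb_remove_table_unprotect(void *table)
{
	struct page *page = (struct page *)table;

	/* Reset the linear mapping to the default pkey before freeing */
	kpkeys_unprotect_pgtable_memory((unsigned long)page_address(page), 1);
	__tlb_remove_table(table);
}

The two call sites in mm/mmu_gather.c would then call this wrapper
instead of __tlb_remove_table() directly.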

- Kevin
Peter Zijlstra Dec. 10, 2024, 12:27 p.m. UTC | #3
On Tue, Dec 10, 2024 at 10:28:44AM +0100, Kevin Brodsky wrote:
> On 09/12/2024 11:29, Peter Zijlstra wrote:
> > On Fri, Dec 06, 2024 at 10:11:07AM +0000, Kevin Brodsky wrote:
> >> [...]
> >>
> >> diff --git a/arch/arm64/include/asm/tlb.h b/arch/arm64/include/asm/tlb.h
> >> index a947c6e784ed..d1611ffa6d91 100644
> >> --- a/arch/arm64/include/asm/tlb.h
> >> +++ b/arch/arm64/include/asm/tlb.h
> >> @@ -10,10 +10,14 @@
> >>  
> >>  #include <linux/pagemap.h>
> >>  #include <linux/swap.h>
> >> +#include <linux/kpkeys.h>
> >>  
> >>  static inline void __tlb_remove_table(void *_table)
> >>  {
> >> -	free_page_and_swap_cache((struct page *)_table);
> >> +	struct page *page = (struct page *)_table;
> >> +
> >> +	kpkeys_unprotect_pgtable_memory((unsigned long)page_address(page), 1);
> >> +	free_page_and_swap_cache(page);
> >>  }
> > Same as for the others, perhaps stick this in generic code instead of in
> > the arch code?
> 
> This should be doable, with some refactoring. __tlb_remove_table() is
> currently called from two functions in mm/mmu_gather.c; I suppose I
> could create a wrapper there that calls
> kpkeys_unprotect_pgtable_memory() and then __tlb_remove_table(). As in
> the p4d case, however, I do wonder how robust this is, as
> __tlb_remove_table() could end up being called from other places.

I don't foresee other __tlb_remove_table() users; this is all rather
specific code. But if there ever were to be new users, it is something
they would have to take into consideration.
Kevin Brodsky Dec. 11, 2024, 1:37 p.m. UTC | #4
On 10/12/2024 13:27, Peter Zijlstra wrote:
> On Tue, Dec 10, 2024 at 10:28:44AM +0100, Kevin Brodsky wrote:
>> On 09/12/2024 11:29, Peter Zijlstra wrote:
>>> On Fri, Dec 06, 2024 at 10:11:07AM +0000, Kevin Brodsky wrote:
>>>> [...]
>>>>
>>>> diff --git a/arch/arm64/include/asm/tlb.h b/arch/arm64/include/asm/tlb.h
>>>> index a947c6e784ed..d1611ffa6d91 100644
>>>> --- a/arch/arm64/include/asm/tlb.h
>>>> +++ b/arch/arm64/include/asm/tlb.h
>>>> @@ -10,10 +10,14 @@
>>>>  
>>>>  #include <linux/pagemap.h>
>>>>  #include <linux/swap.h>
>>>> +#include <linux/kpkeys.h>
>>>>  
>>>>  static inline void __tlb_remove_table(void *_table)
>>>>  {
>>>> -	free_page_and_swap_cache((struct page *)_table);
>>>> +	struct page *page = (struct page *)_table;
>>>> +
>>>> +	kpkeys_unprotect_pgtable_memory((unsigned long)page_address(page), 1);
>>>> +	free_page_and_swap_cache(page);
>>>>  }
>>> Same as for the others, perhaps stick this in generic code instead of in
>>> the arch code?
>> This should be doable, with some refactoring. __tlb_remove_table() is
>> currently called from two functions in mm/mmu_gather.c; I suppose I
>> could create a wrapper there that calls
>> kpkeys_unprotect_pgtable_memory() and then __tlb_remove_table(). As in
>> the p4d case, however, I do wonder how robust this is, as
>> __tlb_remove_table() could end up being called from other places.
> I don't foresee other __tlb_remove_table() users; this is all rather
> specific code. But if there ever were to be new users, it is something
> they would have to take into consideration.

Fair enough, I'll handle that in mm/mmu_gather.c then.

- Kevin

Patch

diff --git a/arch/arm64/include/asm/tlb.h b/arch/arm64/include/asm/tlb.h
index a947c6e784ed..d1611ffa6d91 100644
--- a/arch/arm64/include/asm/tlb.h
+++ b/arch/arm64/include/asm/tlb.h
@@ -10,10 +10,14 @@ 
 
 #include <linux/pagemap.h>
 #include <linux/swap.h>
+#include <linux/kpkeys.h>
 
 static inline void __tlb_remove_table(void *_table)
 {
-	free_page_and_swap_cache((struct page *)_table);
+	struct page *page = (struct page *)_table;
+
+	kpkeys_unprotect_pgtable_memory((unsigned long)page_address(page), 1);
+	free_page_and_swap_cache(page);
 }
 
 #define tlb_flush tlb_flush