
[RFC,12/16] arm64: mm: Map p4d/pgd with privileged pkey

Message ID: 20241206101110.1646108-13-kevin.brodsky@arm.com
State: New
Series: pkeys-based page table hardening

Commit Message

Kevin Brodsky Dec. 6, 2024, 10:11 a.m. UTC
If CONFIG_KPKEYS_HARDENED_PGTABLES is enabled, map p4d/pgd pages
using a privileged pkey (KPKEYS_PKEY_PGTABLES), so that they can
only be written under guard(kpkeys_hardened_pgtables).

The case where pgd is not page-sized is not currently handled -
this is pending support for pkeys in kmem_cache.

This patch is a no-op if CONFIG_KPKEYS_HARDENED_PGTABLES is disabled
(default).

Signed-off-by: Kevin Brodsky <kevin.brodsky@arm.com>
---
 arch/arm64/include/asm/pgalloc.h | 21 ++++++++++++++++++---
 arch/arm64/mm/pgd.c              | 30 ++++++++++++++++++++++++++++--
 2 files changed, 46 insertions(+), 5 deletions(-)

Comments

Peter Zijlstra Dec. 9, 2024, 10:24 a.m. UTC | #1
On Fri, Dec 06, 2024 at 10:11:06AM +0000, Kevin Brodsky wrote:
> If CONFIG_KPKEYS_HARDENED_PGTABLES is enabled, map p4d/pgd pages
> using a privileged pkey (KPKEYS_PKEY_PGTABLES), so that they can
> only be written under guard(kpkeys_hardened_pgtables).
> 
> The case where pgd is not page-sized is not currently handled -
> this is pending support for pkeys in kmem_cache.
> 
> This patch is a no-op if CONFIG_KPKEYS_HARDENED_PGTABLES is disabled
> (default).

Should not this live in pagetable_*_[cd]tor() in generic code?

Kevin Brodsky Dec. 10, 2024, 9:27 a.m. UTC | #2
On 09/12/2024 11:24, Peter Zijlstra wrote:
> On Fri, Dec 06, 2024 at 10:11:06AM +0000, Kevin Brodsky wrote:
>> If CONFIG_KPKEYS_HARDENED_PGTABLES is enabled, map p4d/pgd pages
>> using a privileged pkey (KPKEYS_PKEY_PGTABLES), so that they can
>> only be written under guard(kpkeys_hardened_pgtables).
>>
>> The case where pgd is not page-sized is not currently handled -
>> this is pending support for pkeys in kmem_cache.
>>
>> This patch is a no-op if CONFIG_KPKEYS_HARDENED_PGTABLES is disabled
>> (default).
> Should not this live in pagetable_*_[cd]tor() in generic code?

This would certainly be preferable but it doesn't look like such helpers
exist for p4d/pgd. For p4d, we could potentially handle this in the
generic __p4d_alloc(), but I'm not sure we can assume that
p4d_alloc_one() won't be called from somewhere else. pgd_alloc() is
entirely arch-specific so not much we can do there.

- Kevin

Peter Zijlstra Dec. 10, 2024, 12:23 p.m. UTC | #3
On Tue, Dec 10, 2024 at 10:27:56AM +0100, Kevin Brodsky wrote:
> On 09/12/2024 11:24, Peter Zijlstra wrote:
> > On Fri, Dec 06, 2024 at 10:11:06AM +0000, Kevin Brodsky wrote:
> >> If CONFIG_KPKEYS_HARDENED_PGTABLES is enabled, map p4d/pgd pages
> >> using a privileged pkey (KPKEYS_PKEY_PGTABLES), so that they can
> >> only be written under guard(kpkeys_hardened_pgtables).
> >>
> >> The case where pgd is not page-sized is not currently handled -
> >> this is pending support for pkeys in kmem_cache.
> >>
> >> This patch is a no-op if CONFIG_KPKEYS_HARDENED_PGTABLES is disabled
> >> (default).
> > Should not this live in pagetable_*_[cd]tor() in generic code?
> 
> This would certainly be preferable but it doesn't look like such helpers
> exist for p4d/pgd. For p4d, we could potentially handle this in the
> generic __p4d_alloc(), but I'm not sure we can assume that
> p4d_alloc_one() won't be called from somewhere else. pgd_alloc() is
> entirely arch-specific so not much we can do there.

Can't we add the missing pagetable_{p4d,pgd}_[cd]tor() functions. Yes,
it will mean touching a bunch of arch code, but it shouldn't be hard.

Kevin Brodsky Dec. 11, 2024, 1:35 p.m. UTC | #4
On 10/12/2024 13:23, Peter Zijlstra wrote:
> On Tue, Dec 10, 2024 at 10:27:56AM +0100, Kevin Brodsky wrote:
>> On 09/12/2024 11:24, Peter Zijlstra wrote:
>>> On Fri, Dec 06, 2024 at 10:11:06AM +0000, Kevin Brodsky wrote:
>>>> If CONFIG_KPKEYS_HARDENED_PGTABLES is enabled, map p4d/pgd pages
>>>> using a privileged pkey (KPKEYS_PKEY_PGTABLES), so that they can
>>>> only be written under guard(kpkeys_hardened_pgtables).
>>>>
>>>> The case where pgd is not page-sized is not currently handled -
>>>> this is pending support for pkeys in kmem_cache.
>>>>
>>>> This patch is a no-op if CONFIG_KPKEYS_HARDENED_PGTABLES is disabled
>>>> (default).
>>> Should not this live in pagetable_*_[cd]tor() in generic code?
>> This would certainly be preferable but it doesn't look like such helpers
>> exist for p4d/pgd. For p4d, we could potentially handle this in the
>> generic __p4d_alloc(), but I'm not sure we can assume that
>> p4d_alloc_one() won't be called from somewhere else. pgd_alloc() is
>> entirely arch-specific so not much we can do there.
> Can't we add the missing pagetable_{p4d,pgd}_[cd]tor() functions. Yes,
> it will mean touching a bunch of arch code, but it shouldn't be hard.

It does look doable. The p4d level shouldn't be an issue; it's unclear
why it doesn't already follow the same pattern as pud. pgd will be more
involved (there is no generic layer at all), but as you say it should
just be a matter of some churn in arch code.

An extra complication is that the pgd level may be smaller than a page,
at least on arm64 (see pgd_alloc() in arch/arm64/mm/pgd.c). I suppose
affected architectures will have to define their own pgd_alloc_one().

I'll give it a try and post something separately if it looks sensible.

- Kevin

Patch

diff --git a/arch/arm64/include/asm/pgalloc.h b/arch/arm64/include/asm/pgalloc.h
index e75422864d1b..c006aecd6ba5 100644
--- a/arch/arm64/include/asm/pgalloc.h
+++ b/arch/arm64/include/asm/pgalloc.h
@@ -88,18 +88,33 @@  static inline void pgd_populate(struct mm_struct *mm, pgd_t *pgdp, p4d_t *p4dp)
 static inline p4d_t *p4d_alloc_one(struct mm_struct *mm, unsigned long addr)
 {
 	gfp_t gfp = GFP_PGTABLE_USER;
+	int ret;
 
 	if (mm == &init_mm)
 		gfp = GFP_PGTABLE_KERNEL;
-	return (p4d_t *)get_zeroed_page(gfp);
+
+	addr = get_zeroed_page(gfp);
+	if (!addr)
+		return NULL;
+
+	ret = kpkeys_protect_pgtable_memory(addr, 1);
+	if (ret) {
+		free_page(addr);
+		return NULL;
+	}
+
+	return (p4d_t *)addr;
 }
 
 static inline void p4d_free(struct mm_struct *mm, p4d_t *p4d)
 {
+	unsigned long addr = (unsigned long)p4d;
+
 	if (!pgtable_l5_enabled())
 		return;
-	BUG_ON((unsigned long)p4d & (PAGE_SIZE-1));
-	free_page((unsigned long)p4d);
+	BUG_ON(addr & (PAGE_SIZE-1));
+	kpkeys_unprotect_pgtable_memory(addr, 1);
+	free_page(addr);
 }
 
 #define __p4d_free_tlb(tlb, p4d, addr)  p4d_free((tlb)->mm, p4d)
diff --git a/arch/arm64/mm/pgd.c b/arch/arm64/mm/pgd.c
index 0c501cabc238..3577cc1821af 100644
--- a/arch/arm64/mm/pgd.c
+++ b/arch/arm64/mm/pgd.c
@@ -28,12 +28,38 @@  static bool pgdir_is_page_size(void)
 	return false;
 }
 
+static pgd_t *pgd_page_alloc(gfp_t gfp)
+{
+	unsigned long addr;
+	int ret;
+
+	addr = __get_free_page(gfp);
+	if (!addr)
+		return NULL;
+
+	ret = kpkeys_protect_pgtable_memory(addr, 1);
+	if (ret) {
+		free_page(addr);
+		return NULL;
+	}
+
+	return (pgd_t *)addr;
+}
+
+static void pgd_page_free(pgd_t *pgd)
+{
+	unsigned long addr = (unsigned long)pgd;
+
+	kpkeys_unprotect_pgtable_memory(addr, 1);
+	free_page(addr);
+}
+
 pgd_t *pgd_alloc(struct mm_struct *mm)
 {
 	gfp_t gfp = GFP_PGTABLE_USER;
 
 	if (pgdir_is_page_size())
-		return (pgd_t *)__get_free_page(gfp);
+		return pgd_page_alloc(gfp);
 	else
 		return kmem_cache_alloc(pgd_cache, gfp);
 }
@@ -41,7 +67,7 @@  pgd_t *pgd_alloc(struct mm_struct *mm)
 void pgd_free(struct mm_struct *mm, pgd_t *pgd)
 {
 	if (pgdir_is_page_size())
-		free_page((unsigned long)pgd);
+		pgd_page_free(pgd);
 	else
 		kmem_cache_free(pgd_cache, pgd);
 }