Message ID | 20201119133953.15585-2-vladimir.murzin@arm.com (mailing list archive)
---|---
State | New, archived
Series | arm64: Support Enhanced PAN
On Thu, Nov 19, 2020 at 01:39:52PM +0000, Vladimir Murzin wrote:
> diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
> index 4ff12a7..e4ab9e0 100644
> --- a/arch/arm64/include/asm/pgtable.h
> +++ b/arch/arm64/include/asm/pgtable.h
> @@ -114,7 +114,7 @@ extern unsigned long empty_zero_page[PAGE_SIZE / sizeof(unsigned long)];
>
>  #define pte_valid(pte)		(!!(pte_val(pte) & PTE_VALID))
>  #define pte_valid_not_user(pte) \
> -	((pte_val(pte) & (PTE_VALID | PTE_USER)) == PTE_VALID)
> +	((pte_val(pte) & (PTE_VALID | PTE_USER | PTE_UXN)) == (PTE_VALID | PTE_UXN))
>  #define pte_valid_young(pte) \
>  	((pte_val(pte) & (PTE_VALID | PTE_AF)) == (PTE_VALID | PTE_AF))
>  #define pte_valid_user(pte) \

I was wondering if pte_valid_user() needs changing as well. It currently
checks for PTE_VALID | PTE_USER. In theory, a !PTE_UXN pte is also
user-accessible, but it's only used in gup_pte_range() via
pte_access_permitted(). If "access" here means only read/write, we
should be ok. Still parsing this code.

Otherwise the patch is fine.
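For illustration, a minimal stand-alone sketch (not kernel code; the bit positions mirror arm64's pgtable-hwdef.h definitions, but the program is mocked up for user space) of why pte_valid_not_user() needs the PTE_UXN term: an execute-only user pte has both PTE_USER and PTE_UXN clear, so the old check would misclassify it as a kernel mapping.

```c
#include <stdio.h>
#include <stdint.h>

#define PTE_VALID	(1ULL << 0)
#define PTE_USER	(1ULL << 6)	/* AP[1]: EL0 access */
#define PTE_UXN		(1ULL << 54)	/* user execute-never */

static int old_valid_not_user(uint64_t pte)
{
	return (pte & (PTE_VALID | PTE_USER)) == PTE_VALID;
}

static int new_valid_not_user(uint64_t pte)
{
	return (pte & (PTE_VALID | PTE_USER | PTE_UXN)) ==
	       (PTE_VALID | PTE_UXN);
}

int main(void)
{
	/* kernel mappings are never user-executable */
	uint64_t kernel_pte   = PTE_VALID | PTE_UXN;
	/* user exec-only: !PTE_USER (no EL0 read) and !PTE_UXN (EL0 exec) */
	uint64_t execonly_pte = PTE_VALID;

	printf("kernel:    old=%d new=%d\n",
	       old_valid_not_user(kernel_pte), new_valid_not_user(kernel_pte));
	/* old check wrongly reports 1 for the exec-only user pte */
	printf("exec-only: old=%d new=%d\n",
	       old_valid_not_user(execonly_pte), new_valid_not_user(execonly_pte));
	return 0;
}
```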
On Thu, Nov 19, 2020 at 01:39:52PM +0000, Vladimir Murzin wrote:
> Enhanced Privileged Access Never (EPAN) allows Privileged Access Never
> to be used with Execute-only mappings.
>
> Absence of such support was a reason for 24cecc377463 ("arm64: Revert
> support for execute-only user mappings"). Thus now it can be revisited
> and re-enabled.
>
> Cc: Kees Cook <keescook@chromium.org>
> Cc: Catalin Marinas <catalin.marinas@arm.com>
> Signed-off-by: Vladimir Murzin <vladimir.murzin@arm.com>
> ---
>  arch/arm64/Kconfig                    | 17 +++++++++++++++++
>  arch/arm64/include/asm/cpucaps.h      |  5 +++--
>  arch/arm64/include/asm/pgtable-prot.h |  5 +++--
>  arch/arm64/include/asm/pgtable.h      | 14 +++++++++++++-
>  arch/arm64/include/asm/sysreg.h       |  1 +
>  arch/arm64/kernel/cpufeature.c        | 21 +++++++++++++++++++++
>  arch/arm64/mm/fault.c                 |  3 +++
>  7 files changed, 61 insertions(+), 5 deletions(-)

[...]

> diff --git a/arch/arm64/mm/fault.c b/arch/arm64/mm/fault.c
> index 1ee9400..b93222e 100644
> --- a/arch/arm64/mm/fault.c
> +++ b/arch/arm64/mm/fault.c
> @@ -467,6 +467,9 @@ static int __kprobes do_page_fault(unsigned long addr, unsigned int esr,
>  	if (faulthandler_disabled() || !mm)
>  		goto no_context;
>
> +	if (cpus_have_const_cap(ARM64_HAS_EPAN))
> +		vm_flags &= ~VM_EXEC;
> +

IIUC, this would be telling __do_page_fault() that the access would have
succeeded with any kind of permission except for write access, which
doesn't seem right.

Also, isn't vm_flags just overwritten by the code after the hunk?

The logic in __do_page_fault() looks like it might not have been written
with the assumption that there might be more than a single set bit in
vm_flags.

But I'm not familiar with this code, and might be totally
misunderstanding what's going on here...

>  	if (user_mode(regs))
>  		mm_flags |= FAULT_FLAG_USER;

[...]

Cheers
---Dave
On Thu, Nov 19, 2020 at 06:52:05PM +0000, Dave P Martin wrote:
> On Thu, Nov 19, 2020 at 01:39:52PM +0000, Vladimir Murzin wrote:
> > Enhanced Privileged Access Never (EPAN) allows Privileged Access Never
> > to be used with Execute-only mappings.
> >
> > Absence of such support was a reason for 24cecc377463 ("arm64: Revert
> > support for execute-only user mappings"). Thus now it can be revisited
> > and re-enabled.
> >
> > Cc: Kees Cook <keescook@chromium.org>
> > Cc: Catalin Marinas <catalin.marinas@arm.com>
> > Signed-off-by: Vladimir Murzin <vladimir.murzin@arm.com>
> > ---
> >  arch/arm64/Kconfig                    | 17 +++++++++++++++++
> >  arch/arm64/include/asm/cpucaps.h      |  5 +++--
> >  arch/arm64/include/asm/pgtable-prot.h |  5 +++--
> >  arch/arm64/include/asm/pgtable.h      | 14 +++++++++++++-
> >  arch/arm64/include/asm/sysreg.h       |  1 +
> >  arch/arm64/kernel/cpufeature.c        | 21 +++++++++++++++++++++
> >  arch/arm64/mm/fault.c                 |  3 +++
> >  7 files changed, 61 insertions(+), 5 deletions(-)
>
> [...]
>
> > diff --git a/arch/arm64/mm/fault.c b/arch/arm64/mm/fault.c
> > index 1ee9400..b93222e 100644
> > --- a/arch/arm64/mm/fault.c
> > +++ b/arch/arm64/mm/fault.c
> > @@ -467,6 +467,9 @@ static int __kprobes do_page_fault(unsigned long addr, unsigned int esr,
> >  	if (faulthandler_disabled() || !mm)
> >  		goto no_context;
> >
> > +	if (cpus_have_const_cap(ARM64_HAS_EPAN))
> > +		vm_flags &= ~VM_EXEC;
> > +
>
> IIUC, this would be telling __do_page_fault() that the access would have
> succeeded with any kind of permission except for write access, which
> doesn't seem right.

I always have trouble remembering what vm_flags does. So
__do_page_fault() checks vma->vm_flags & vm_flags and returns an error
if the intersection is empty.

We start with all rwx permissions but narrow them further down in
do_page_fault(): if it was an exec fault, we set vm_flags to VM_EXEC
only, as that's what we want to check against vma->vm_flags; similarly,
if it was a write fault, we want to check VM_WRITE only. If it's
neither an exec nor a write fault (i.e. a read), we leave it as rwx,
since both write and exec (prior to EPAN) imply read.

With the EPAN patches, exec no longer implies read, so if it's neither
an exec nor a write fault, we want vm_flags to be VM_READ|VM_WRITE,
since only write now implies read.

> Also, isn't vm_flags just overwritten by the code after the hunk?
>
> The logic in __do_page_fault() looks like it might not have been
> written with the assumption that there might be more than a single set
> bit in vm_flags.

I think it was; it's checking the intersection. We could do with some
comments in this code, otherwise next time someone asks I'll spend
another 30 min reading it ;).
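To make that narrowing concrete, here is a hedged sketch in the shape of arm64's do_page_fault() (simplified, not the literal upstream code; esr, vma and the helpers are assumed to be in scope as in arch/arm64/mm/fault.c):

```c
unsigned long vm_flags;
unsigned long mm_flags = FAULT_FLAG_DEFAULT;

if (is_el0_instruction_abort(esr)) {
	/* exec fault: only a VM_EXEC vma can satisfy it */
	vm_flags = VM_EXEC;
} else if (is_write_abort(esr)) {
	/* write fault: only a VM_WRITE vma can satisfy it */
	vm_flags = VM_WRITE;
	mm_flags |= FAULT_FLAG_WRITE;
} else {
	/*
	 * Read fault. Pre-EPAN this could stay VM_READ|VM_WRITE|VM_EXEC,
	 * since both write and exec implied read; with EPAN, exec no
	 * longer implies read, so only read/write vmas qualify.
	 */
	vm_flags = VM_READ | VM_WRITE;
}

/* later, in __do_page_fault(): empty intersection => bad access */
if (!(vma->vm_flags & vm_flags))
	return VM_FAULT_BADACCESS;
```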
On Thu, Nov 19, 2020 at 01:39:52PM +0000, Vladimir Murzin wrote:
> diff --git a/arch/arm64/include/asm/sysreg.h b/arch/arm64/include/asm/sysreg.h
> index 174817b..19147b6 100644
> --- a/arch/arm64/include/asm/sysreg.h
> +++ b/arch/arm64/include/asm/sysreg.h
> @@ -579,6 +579,7 @@
>  #endif
>
>  /* SCTLR_EL1 specific flags. */
> +#define SCTLR_EL1_EPAN		(BIT(57))
>  #define SCTLR_EL1_ATA0		(BIT(42))
>
>  #define SCTLR_EL1_TCF0_SHIFT	38
> diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
> index dcc165b..540245c 100644
> --- a/arch/arm64/kernel/cpufeature.c
> +++ b/arch/arm64/kernel/cpufeature.c
> @@ -1602,6 +1602,14 @@ static void cpu_enable_pan(const struct arm64_cpu_capabilities *__unused)
>  }
>  #endif /* CONFIG_ARM64_PAN */
>
> +#ifdef CONFIG_ARM64_EPAN
> +static void cpu_enable_epan(const struct arm64_cpu_capabilities *__unused)
> +{
> +	sysreg_clear_set(sctlr_el1, 0, SCTLR_EL1_EPAN);
> +	local_flush_tlb_all();
> +}
> +#endif /* CONFIG_ARM64_EPAN */

Thinking about this, can we not set the SCTLR_EL1.EPAN bit in proc.S
directly, regardless of whether the system supports it or not (it should
be write-ignored)? It would go in INIT_SCTLR_EL1_MMU_ON. We use the
cpufeature entry only for detection, not enabling.
On 12/2/20 6:23 PM, Catalin Marinas wrote:
> On Thu, Nov 19, 2020 at 01:39:52PM +0000, Vladimir Murzin wrote:
>> diff --git a/arch/arm64/include/asm/sysreg.h b/arch/arm64/include/asm/sysreg.h
>> index 174817b..19147b6 100644
>> --- a/arch/arm64/include/asm/sysreg.h
>> +++ b/arch/arm64/include/asm/sysreg.h
>> @@ -579,6 +579,7 @@
>>  #endif
>>
>>  /* SCTLR_EL1 specific flags. */
>> +#define SCTLR_EL1_EPAN		(BIT(57))
>>  #define SCTLR_EL1_ATA0		(BIT(42))
>>
>>  #define SCTLR_EL1_TCF0_SHIFT	38
>> diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
>> index dcc165b..540245c 100644
>> --- a/arch/arm64/kernel/cpufeature.c
>> +++ b/arch/arm64/kernel/cpufeature.c
>> @@ -1602,6 +1602,14 @@ static void cpu_enable_pan(const struct arm64_cpu_capabilities *__unused)
>>  }
>>  #endif /* CONFIG_ARM64_PAN */
>>
>> +#ifdef CONFIG_ARM64_EPAN
>> +static void cpu_enable_epan(const struct arm64_cpu_capabilities *__unused)
>> +{
>> +	sysreg_clear_set(sctlr_el1, 0, SCTLR_EL1_EPAN);
>> +	local_flush_tlb_all();
>> +}
>> +#endif /* CONFIG_ARM64_EPAN */
>
> Thinking about this, can we not set the SCTLR_EL1.EPAN bit in proc.S
> directly, regardless of whether the system supports it or not (it should
> be write-ignored)? It would go in INIT_SCTLR_EL1_MMU_ON. We use the
> cpufeature entry only for detection, not enabling.

I'll try to restructure that way.

Cheers
Vladimir
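For context, the restructuring Catalin suggests would look roughly like the following sketch of asm/sysreg.h (abridged: the upstream INIT_SCTLR_EL1_MMU_ON mask carries more bits than shown here). With the bit in the mask, __cpu_setup in proc.S sets EPAN unconditionally, and on CPUs without the feature the write is simply ignored; the cpufeature entry then drops its .cpu_enable callback and keeps only the detection fields.

```c
/* asm/sysreg.h -- abridged sketch, not the full upstream definition */
#define SCTLR_EL1_EPAN		(BIT(57))

#define INIT_SCTLR_EL1_MMU_ON \
	(SCTLR_ELx_M  | SCTLR_ELx_C    | SCTLR_ELx_SA   | SCTLR_EL1_SA0 | \
	 SCTLR_ELx_I  | SCTLR_EL1_SPAN | SCTLR_EL1_EPAN | \
	 /* ...remaining feature and RES1 bits elided... */ \
	 SCTLR_EL1_RES1)
```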
diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index 1515f6f..6639244 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -1056,6 +1056,9 @@ config ARCH_WANT_HUGE_PMD_SHARE
 config ARCH_HAS_CACHE_LINE_SIZE
 	def_bool y
 
+config ARCH_HAS_FILTER_PGPROT
+	def_bool y
+
 config ARCH_ENABLE_SPLIT_PMD_PTLOCK
 	def_bool y if PGTABLE_LEVELS > 2
 
@@ -1688,6 +1691,20 @@ config ARM64_MTE
 
 endmenu
 
+menu "ARMv8.7 architectural features"
+
+config ARM64_EPAN
+	bool "Enable support for Enhanced Privileged Access Never (EPAN)"
+	default y
+	depends on ARM64_PAN
+	help
+	  Enhanced Privileged Access Never (EPAN) allows Privileged
+	  Access Never to be used with Execute-only mappings.
+
+	  The feature is detected at runtime, and will remain disabled
+	  if the cpu does not implement the feature.
+endmenu
+
 config ARM64_SVE
 	bool "ARM Scalable Vector Extension support"
 	default y
diff --git a/arch/arm64/include/asm/cpucaps.h b/arch/arm64/include/asm/cpucaps.h
index e7d9899..3ea4fbdf 100644
--- a/arch/arm64/include/asm/cpucaps.h
+++ b/arch/arm64/include/asm/cpucaps.h
@@ -22,7 +22,7 @@
 #define ARM64_WORKAROUND_CAVIUM_27456		12
 #define ARM64_HAS_32BIT_EL0			13
 #define ARM64_HARDEN_EL2_VECTORS		14
-#define ARM64_HAS_CNP				15
+#define ARM64_HAS_EPAN				15
 #define ARM64_HAS_NO_FPSIMD			16
 #define ARM64_WORKAROUND_REPEAT_TLBI		17
 #define ARM64_WORKAROUND_QCOM_FALKOR_E1003	18
@@ -66,7 +66,8 @@
 #define ARM64_HAS_TLB_RANGE			56
 #define ARM64_MTE				57
 #define ARM64_WORKAROUND_1508412		58
+#define ARM64_HAS_CNP				59
 
-#define ARM64_NCAPS				59
+#define ARM64_NCAPS				60
 
 #endif /* __ASM_CPUCAPS_H */
diff --git a/arch/arm64/include/asm/pgtable-prot.h b/arch/arm64/include/asm/pgtable-prot.h
index 046be78..f91c2aa 100644
--- a/arch/arm64/include/asm/pgtable-prot.h
+++ b/arch/arm64/include/asm/pgtable-prot.h
@@ -88,12 +88,13 @@ extern bool arm64_use_ng_mappings;
 #define PAGE_SHARED_EXEC	__pgprot(_PAGE_DEFAULT | PTE_USER | PTE_RDONLY | PTE_NG | PTE_PXN | PTE_WRITE)
 #define PAGE_READONLY		__pgprot(_PAGE_DEFAULT | PTE_USER | PTE_RDONLY | PTE_NG | PTE_PXN | PTE_UXN)
 #define PAGE_READONLY_EXEC	__pgprot(_PAGE_DEFAULT | PTE_USER | PTE_RDONLY | PTE_NG | PTE_PXN)
+#define PAGE_EXECONLY		__pgprot(_PAGE_DEFAULT | PTE_RDONLY | PTE_NG | PTE_PXN)
 
 #define __P000  PAGE_NONE
 #define __P001  PAGE_READONLY
 #define __P010  PAGE_READONLY
 #define __P011  PAGE_READONLY
-#define __P100  PAGE_READONLY_EXEC
+#define __P100  PAGE_EXECONLY
 #define __P101  PAGE_READONLY_EXEC
 #define __P110  PAGE_READONLY_EXEC
 #define __P111  PAGE_READONLY_EXEC
@@ -102,7 +103,7 @@ extern bool arm64_use_ng_mappings;
 #define __S001  PAGE_READONLY
 #define __S010  PAGE_SHARED
 #define __S011  PAGE_SHARED
-#define __S100  PAGE_READONLY_EXEC
+#define __S100  PAGE_EXECONLY
 #define __S101  PAGE_READONLY_EXEC
 #define __S110  PAGE_SHARED_EXEC
 #define __S111  PAGE_SHARED_EXEC
diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
index 4ff12a7..e4ab9e0 100644
--- a/arch/arm64/include/asm/pgtable.h
+++ b/arch/arm64/include/asm/pgtable.h
@@ -114,7 +114,7 @@ extern unsigned long empty_zero_page[PAGE_SIZE / sizeof(unsigned long)];
 
 #define pte_valid(pte)		(!!(pte_val(pte) & PTE_VALID))
 #define pte_valid_not_user(pte) \
-	((pte_val(pte) & (PTE_VALID | PTE_USER)) == PTE_VALID)
+	((pte_val(pte) & (PTE_VALID | PTE_USER | PTE_UXN)) == (PTE_VALID | PTE_UXN))
 #define pte_valid_young(pte) \
 	((pte_val(pte) & (PTE_VALID | PTE_AF)) == (PTE_VALID | PTE_AF))
 #define pte_valid_user(pte) \
@@ -974,6 +974,18 @@ static inline bool arch_faults_on_old_pte(void)
 }
 #define arch_faults_on_old_pte arch_faults_on_old_pte
 
+static inline pgprot_t arch_filter_pgprot(pgprot_t prot)
+{
+	if (cpus_have_const_cap(ARM64_HAS_EPAN))
+		return prot;
+
+	if (pgprot_val(prot) != pgprot_val(PAGE_EXECONLY))
+		return prot;
+
+	return PAGE_READONLY_EXEC;
+}
+
+
 #endif /* !__ASSEMBLY__ */
 
 #endif /* __ASM_PGTABLE_H */
diff --git a/arch/arm64/include/asm/sysreg.h b/arch/arm64/include/asm/sysreg.h
index 174817b..19147b6 100644
--- a/arch/arm64/include/asm/sysreg.h
+++ b/arch/arm64/include/asm/sysreg.h
@@ -579,6 +579,7 @@
 #endif
 
 /* SCTLR_EL1 specific flags. */
+#define SCTLR_EL1_EPAN		(BIT(57))
 #define SCTLR_EL1_ATA0		(BIT(42))
 
 #define SCTLR_EL1_TCF0_SHIFT	38
diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index dcc165b..540245c 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -1602,6 +1602,14 @@ static void cpu_enable_pan(const struct arm64_cpu_capabilities *__unused)
 }
 #endif /* CONFIG_ARM64_PAN */
 
+#ifdef CONFIG_ARM64_EPAN
+static void cpu_enable_epan(const struct arm64_cpu_capabilities *__unused)
+{
+	sysreg_clear_set(sctlr_el1, 0, SCTLR_EL1_EPAN);
+	local_flush_tlb_all();
+}
+#endif /* CONFIG_ARM64_EPAN */
+
 #ifdef CONFIG_ARM64_RAS_EXTN
 static void cpu_clear_disr(const struct arm64_cpu_capabilities *__unused)
 {
@@ -1750,6 +1758,19 @@ static const struct arm64_cpu_capabilities arm64_features[] = {
 		.cpu_enable = cpu_enable_pan,
 	},
 #endif /* CONFIG_ARM64_PAN */
+#ifdef CONFIG_ARM64_EPAN
+	{
+		.desc = "Enhanced Privileged Access Never",
+		.capability = ARM64_HAS_EPAN,
+		.type = ARM64_CPUCAP_SYSTEM_FEATURE,
+		.matches = has_cpuid_feature,
+		.sys_reg = SYS_ID_AA64MMFR1_EL1,
+		.field_pos = ID_AA64MMFR1_PAN_SHIFT,
+		.sign = FTR_UNSIGNED,
+		.min_field_value = 3,
+		.cpu_enable = cpu_enable_epan,
+	},
+#endif /* CONFIG_ARM64_EPAN */
 #ifdef CONFIG_ARM64_LSE_ATOMICS
 	{
 		.desc = "LSE atomic instructions",
diff --git a/arch/arm64/mm/fault.c b/arch/arm64/mm/fault.c
index 1ee9400..b93222e 100644
--- a/arch/arm64/mm/fault.c
+++ b/arch/arm64/mm/fault.c
@@ -467,6 +467,9 @@ static int __kprobes do_page_fault(unsigned long addr, unsigned int esr,
 	if (faulthandler_disabled() || !mm)
 		goto no_context;
 
+	if (cpus_have_const_cap(ARM64_HAS_EPAN))
+		vm_flags &= ~VM_EXEC;
+
 	if (user_mode(regs))
 		mm_flags |= FAULT_FLAG_USER;
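To illustrate what the series enables from user space, here is a hypothetical demo of an execute-only mapping (illustrative only, not part of the patch; on CPUs without EPAN, the arch_filter_pgprot() hook above transparently upgrades the protection to read+exec):

```c
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
	long page = sysconf(_SC_PAGESIZE);

	/* stage the code in a writable mapping first */
	unsigned char *p = mmap(NULL, page, PROT_READ | PROT_WRITE,
				MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	if (p == MAP_FAILED)
		return 1;

	/* aarch64 "ret" instruction encoding */
	unsigned char ret_insn[] = { 0xc0, 0x03, 0x5f, 0xd6 };
	memcpy(p, ret_insn, sizeof(ret_insn));
	__builtin___clear_cache((char *)p, (char *)p + sizeof(ret_insn));

	/* then drop to execute-only: no PROT_READ, no PROT_WRITE */
	if (mprotect(p, page, PROT_EXEC) != 0)
		return 1;

	/* executing works (data-to-function cast is a POSIX-ism) */
	((void (*)(void))p)();
	printf("executed exec-only page\n");

	/*
	 * A direct load such as 'unsigned char c = p[0];' would now
	 * SIGSEGV on an EPAN-capable CPU, since the pte has no
	 * PTE_USER read permission.
	 */
	return 0;
}
```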
Enhanced Privileged Access Never (EPAN) allows Privileged Access Never
to be used with Execute-only mappings.

Absence of such support was a reason for 24cecc377463 ("arm64: Revert
support for execute-only user mappings"). Thus now it can be revisited
and re-enabled.

Cc: Kees Cook <keescook@chromium.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Vladimir Murzin <vladimir.murzin@arm.com>
---
 arch/arm64/Kconfig                    | 17 +++++++++++++++++
 arch/arm64/include/asm/cpucaps.h      |  5 +++--
 arch/arm64/include/asm/pgtable-prot.h |  5 +++--
 arch/arm64/include/asm/pgtable.h      | 14 +++++++++++++-
 arch/arm64/include/asm/sysreg.h       |  1 +
 arch/arm64/kernel/cpufeature.c        | 21 +++++++++++++++++++++
 arch/arm64/mm/fault.c                 |  3 +++
 7 files changed, 61 insertions(+), 5 deletions(-)