Message ID: 20220310111545.10852-4-bharata@amd.com
State: New
Series: x86/AMD: Userspace address tagging
On 10/03/2022 11:15, Bharata B Rao wrote:
> diff --git a/arch/x86/kernel/setup.c b/arch/x86/kernel/setup.c
> index f7a132eb794d..12615b1b4af5 100644
> --- a/arch/x86/kernel/setup.c
> +++ b/arch/x86/kernel/setup.c
> @@ -740,6 +740,12 @@ dump_kernel_offset(struct notifier_block *self, unsigned long v, void *p)
>  	return 0;
>  }
>
> +static inline void __init uai_enable(void)
> +{
> +	if (boot_cpu_has(X86_FEATURE_UAI))
> +		msr_set_bit(MSR_EFER, _EFER_UAI);
> +}
> +
>  /*
>   * Determine if we were loaded by an EFI loader. If so, then we have also been
>   * passed the efi memmap, systab, etc., so we should use these data structures
> @@ -1146,6 +1152,8 @@ void __init setup_arch(char **cmdline_p)
>
>  	x86_init.paging.pagetable_init();
>
> +	uai_enable();

I would think incredibly carefully before enabling UAI by default.

Suffice it to say that Intel were talked down from 7 bits to 6, and
apparently AMD didn't get the same memo from the original requesters.

The problem is that UAI + LA57 means that all the poison pointers cease
functioning as a defence-in-depth mechanism, and become legal pointers
pointing at random positions in user or kernel space.

~Andrew
From: Andrew Cooper
> Sent: 10 March 2022 19:47
>
> On 10/03/2022 11:15, Bharata B Rao wrote:
> > diff --git a/arch/x86/kernel/setup.c b/arch/x86/kernel/setup.c
> > index f7a132eb794d..12615b1b4af5 100644
> > --- a/arch/x86/kernel/setup.c
> > +++ b/arch/x86/kernel/setup.c
> > @@ -740,6 +740,12 @@ dump_kernel_offset(struct notifier_block *self, unsigned long v, void *p)
> >  	return 0;
> >  }
> >
> > +static inline void __init uai_enable(void)
> > +{
> > +	if (boot_cpu_has(X86_FEATURE_UAI))
> > +		msr_set_bit(MSR_EFER, _EFER_UAI);
> > +}
> > +
> >  /*
> >   * Determine if we were loaded by an EFI loader. If so, then we have also been
> >   * passed the efi memmap, systab, etc., so we should use these data structures
> > @@ -1146,6 +1152,8 @@ void __init setup_arch(char **cmdline_p)
> >
> >  	x86_init.paging.pagetable_init();
> >
> > +	uai_enable();
>
> I would think incredibly carefully before enabling UAI by default.
>
> Suffice it to say that Intel were talked down from 7 bits to 6, and
> apparently AMD didn't get the same memo from the original requesters.
>
> The problem is that UAI + LA57 means that all the poison pointers cease
> functioning as a defence-in-depth mechanism, and become legal pointers
> pointing at random positions in user or kernel space.

Isn't that true regardless of how many bits are 'ignored'?

AFAICT the only sane thing would be to have something in the cpu that
verifies the 'ignored' bits match values set in the PTE.
That could be used to ensure (well, make it more likely) that stack
accesses stay in the stack and pointers to mmap()ed data stay pointing
to the correct pages.

Just letting user address space be aliased a lot of times doesn't
seem like a security feature to me.
It must have some strange use case.

	David
On 3/10/22 14:37, David Laight wrote:
> Just letting user address space be aliased a lot of times doesn't
> seem like a security feature to me.
> It must have some strange use case.

This should have been in the changelogs... sheesh...

Right now, address sanitizers keep pointer metadata in various spots.
But, it requires recompiling apps and libraries.  These compiler-based
things are also so slow that production use is rare.

These masking things (ARM TBI, AMD UAI, Intel LAM) _theoretically_ let
you plumb enough metadata around with pointers to do address sanitizer
implementations in production.

I think LAM is the most sane of the three, but I'm biased.
> diff --git a/arch/x86/kernel/setup.c b/arch/x86/kernel/setup.c
> index f7a132eb794d..12615b1b4af5 100644
> --- a/arch/x86/kernel/setup.c
> +++ b/arch/x86/kernel/setup.c
> @@ -740,6 +740,12 @@ dump_kernel_offset(struct notifier_block *self, unsigned long v, void *p)
>  	return 0;
>  }
>
> +static inline void __init uai_enable(void)
> +{
> +	if (boot_cpu_has(X86_FEATURE_UAI))

cpu_feature_enabled

> +		msr_set_bit(MSR_EFER, _EFER_UAI);
> +}
diff --git a/arch/x86/include/asm/msr-index.h b/arch/x86/include/asm/msr-index.h
index a4a39c3e0f19..ce763952278f 100644
--- a/arch/x86/include/asm/msr-index.h
+++ b/arch/x86/include/asm/msr-index.h
@@ -30,6 +30,7 @@
 #define _EFER_SVME		12 /* Enable virtualization */
 #define _EFER_LMSLE		13 /* Long Mode Segment Limit Enable */
 #define _EFER_FFXSR		14 /* Enable Fast FXSAVE/FXRSTOR */
+#define _EFER_UAI		20 /* Enable Upper Address Ignore */
 
 #define EFER_SCE		(1<<_EFER_SCE)
 #define EFER_LME		(1<<_EFER_LME)
@@ -38,6 +39,7 @@
 #define EFER_SVME		(1<<_EFER_SVME)
 #define EFER_LMSLE		(1<<_EFER_LMSLE)
 #define EFER_FFXSR		(1<<_EFER_FFXSR)
+#define EFER_UAI		(1<<_EFER_UAI)
 
 /* Intel MSRs. Some also available on other CPUs */
diff --git a/arch/x86/kernel/setup.c b/arch/x86/kernel/setup.c
index f7a132eb794d..12615b1b4af5 100644
--- a/arch/x86/kernel/setup.c
+++ b/arch/x86/kernel/setup.c
@@ -740,6 +740,12 @@ dump_kernel_offset(struct notifier_block *self, unsigned long v, void *p)
 	return 0;
 }
 
+static inline void __init uai_enable(void)
+{
+	if (boot_cpu_has(X86_FEATURE_UAI))
+		msr_set_bit(MSR_EFER, _EFER_UAI);
+}
+
 /*
  * Determine if we were loaded by an EFI loader. If so, then we have also been
  * passed the efi memmap, systab, etc., so we should use these data structures
@@ -1146,6 +1152,8 @@ void __init setup_arch(char **cmdline_p)
 
 	x86_init.paging.pagetable_init();
 
+	uai_enable();
+
 	kasan_init();
 
 	/*
The UAI feature is enabled by setting bit 20 in the EFER MSR.

Signed-off-by: Bharata B Rao <bharata@amd.com>
---
 arch/x86/include/asm/msr-index.h | 2 ++
 arch/x86/kernel/setup.c          | 8 ++++++++
 2 files changed, 10 insertions(+)