| Message ID | 20220511022751.65540-10-kirill.shutemov@linux.intel.com (mailing list archive) |
|---|---|
| State | New |
| Series | Linear Address Masking enabling |
On Wed, May 11 2022 at 05:27, Kirill A. Shutemov wrote:
> LAM_U48 steals bits above 47-bit for tags and makes it impossible for
> userspace to use full address space on 5-level paging machine.
> Make these features mutually exclusive: whichever gets enabled first
> blocks the other one.

So this patch prevents a mapping above 47bit when LAM48 is enabled, but
I fail to spot how an already existing mapping above 47bit would prevent
LAM48 from being enabled.

Maybe I'm missing something which makes this magically mutually
exclusive.

Thanks,

        tglx
On Thu, May 12, 2022 at 03:36:31PM +0200, Thomas Gleixner wrote:
> On Wed, May 11 2022 at 05:27, Kirill A. Shutemov wrote:
> > LAM_U48 steals bits above 47-bit for tags and makes it impossible for
> > userspace to use full address space on 5-level paging machine.
> >
> > Make these features mutually exclusive: whichever gets enabled first
> > blocks the other one.
>
> So this patch prevents a mapping above 47bit when LAM48 is enabled, but
> I fail to spot how an already existing mapping above 47bit would prevent
> LAM48 from being enabled.
>
> Maybe I'm missing something which makes this magically mutually
> exclusive.

It is in 09/10. See lam_u48_allowed().
On Sat, May 14 2022 at 02:22, Kirill A. Shutemov wrote:
> On Thu, May 12, 2022 at 03:36:31PM +0200, Thomas Gleixner wrote:
>> On Wed, May 11 2022 at 05:27, Kirill A. Shutemov wrote:
>> > LAM_U48 steals bits above 47-bit for tags and makes it impossible for
>> > userspace to use full address space on 5-level paging machine.
>> >
>> > Make these features mutually exclusive: whichever gets enabled first
>> > blocks the other one.
>>
>> So this patch prevents a mapping above 47bit when LAM48 is enabled, but
>> I fail to spot how an already existing mapping above 47bit would prevent
>> LAM48 from being enabled.
>>
>> Maybe I'm missing something which makes this magically mutually
>> exclusive.
>
> It is in 09/10. See lam_u48_allowed()

Sure, but that does not make this changelog any more correct.
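For readers following the exchange: lam_u48_allowed() lives in patch 09/10 and is not shown in this thread. The sketch below is a hypothetical userspace model of the symmetry being discussed, not the actual kernel code — the function shape, argument, and window constant are all assumptions.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/*
 * Hypothetical model: enabling LAM_U48 should be refused once any
 * mapping already exists above the 47-bit window, mirroring how
 * full_va_allowed() refuses high mappings once LAM_U48 is enabled.
 * The constant matches the usual 4-level DEFAULT_MAP_WINDOW shape
 * but is illustrative only.
 */
#define SKETCH_MAP_WINDOW ((1ULL << 47) - 4096)

static bool lam_u48_allowed_sketch(uint64_t highest_vma_end)
{
	/* Refuse LAM_U48 if userspace already mapped above 47-bit. */
	return highest_vma_end <= SKETCH_MAP_WINDOW;
}
```

Under this model the two directions together give the mutual exclusion tglx is asking about: a pre-existing high mapping blocks LAM_U48, and an enabled LAM_U48 blocks new high mappings.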
On 5/11/2022 7:57 AM, Kirill A. Shutemov wrote:
> LAM_U48 steals bits above 47-bit for tags and makes it impossible for
> userspace to use full address space on 5-level paging machine.
>
> Make these features mutually exclusive: whichever gets enabled first
> blocks the other one.
>
> Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
> ---
>  arch/x86/include/asm/elf.h         |  3 ++-
>  arch/x86/include/asm/mmu_context.h | 13 +++++++++++++
>  arch/x86/kernel/sys_x86_64.c       |  5 +++--
>  arch/x86/mm/hugetlbpage.c          |  6 ++++--
>  arch/x86/mm/mmap.c                 |  9 ++++++++-
>  5 files changed, 30 insertions(+), 6 deletions(-)
>
> diff --git a/arch/x86/include/asm/elf.h b/arch/x86/include/asm/elf.h
> index 29fea180a665..53b96b0c8cc3 100644
> --- a/arch/x86/include/asm/elf.h
> +++ b/arch/x86/include/asm/elf.h
> @@ -328,7 +328,8 @@ static inline int mmap_is_ia32(void)
>  extern unsigned long task_size_32bit(void);
>  extern unsigned long task_size_64bit(int full_addr_space);
>  extern unsigned long get_mmap_base(int is_legacy);
> -extern bool mmap_address_hint_valid(unsigned long addr, unsigned long len);
> +extern bool mmap_address_hint_valid(struct mm_struct *mm,
> +				    unsigned long addr, unsigned long len);
>  extern unsigned long get_sigframe_size(void);
>
>  #ifdef CONFIG_X86_32
> diff --git a/arch/x86/include/asm/mmu_context.h b/arch/x86/include/asm/mmu_context.h
> index 27516046117a..c8a6d80dfec3 100644
> --- a/arch/x86/include/asm/mmu_context.h
> +++ b/arch/x86/include/asm/mmu_context.h
> @@ -218,6 +218,19 @@ static inline bool arch_vma_access_permitted(struct vm_area_struct *vma,
>
>  unsigned long __get_current_cr3_fast(void);
>
> +#ifdef CONFIG_X86_5LEVEL
> +static inline bool full_va_allowed(struct mm_struct *mm)
> +{
> +	/* LAM_U48 steals VA bits above 47-bit for tags */
> +	return mm->context.lam != LAM_U48;
> +}
> +#else

This is called from X86 common code but appears to be LAM-specific.
What would mm->context.lam contain if X86_FEATURE_LAM isn't set?

Regards,
Bharata.
On Wed, May 18, 2022 at 02:13:06PM +0530, Bharata B Rao wrote:
> On 5/11/2022 7:57 AM, Kirill A. Shutemov wrote:
> > +#ifdef CONFIG_X86_5LEVEL
> > +static inline bool full_va_allowed(struct mm_struct *mm)
> > +{
> > +	/* LAM_U48 steals VA bits above 47-bit for tags */
> > +	return mm->context.lam != LAM_U48;
> > +}
> > +#else
>
> This is called from X86 common code but appears to be LAM-specific.
> What would mm->context.lam contain if X86_FEATURE_LAM isn't set?

0. So full_va_allowed() will always return true.
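Kirill's answer can be illustrated with a minimal standalone model of the check. The enum values below are illustrative, not the kernel's actual encoding, and in the patch the mode lives in mm->context.lam rather than being passed as an argument; the point is only that a zero-initialized mode never restricts the address space.

```c
#include <assert.h>
#include <stdbool.h>

/*
 * Minimal model of full_va_allowed() from the patch. mm->context.lam
 * starts out zero for every new mm, so on hardware (or kernels)
 * without X86_FEATURE_LAM it stays 0 and the check is a no-op.
 * The numeric values here are illustrative only.
 */
enum lam_mode { LAM_NONE = 0, LAM_U57 = 1, LAM_U48 = 2 };

static bool full_va_allowed_model(enum lam_mode lam)
{
	/* Only LAM_U48 steals VA bits above 47-bit for tags. */
	return lam != LAM_U48;
}
```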
diff --git a/arch/x86/include/asm/elf.h b/arch/x86/include/asm/elf.h
index 29fea180a665..53b96b0c8cc3 100644
--- a/arch/x86/include/asm/elf.h
+++ b/arch/x86/include/asm/elf.h
@@ -328,7 +328,8 @@ static inline int mmap_is_ia32(void)
 extern unsigned long task_size_32bit(void);
 extern unsigned long task_size_64bit(int full_addr_space);
 extern unsigned long get_mmap_base(int is_legacy);
-extern bool mmap_address_hint_valid(unsigned long addr, unsigned long len);
+extern bool mmap_address_hint_valid(struct mm_struct *mm,
+				    unsigned long addr, unsigned long len);
 extern unsigned long get_sigframe_size(void);
 
 #ifdef CONFIG_X86_32
diff --git a/arch/x86/include/asm/mmu_context.h b/arch/x86/include/asm/mmu_context.h
index 27516046117a..c8a6d80dfec3 100644
--- a/arch/x86/include/asm/mmu_context.h
+++ b/arch/x86/include/asm/mmu_context.h
@@ -218,6 +218,19 @@ static inline bool arch_vma_access_permitted(struct vm_area_struct *vma,
 
 unsigned long __get_current_cr3_fast(void);
 
+#ifdef CONFIG_X86_5LEVEL
+static inline bool full_va_allowed(struct mm_struct *mm)
+{
+	/* LAM_U48 steals VA bits above 47-bit for tags */
+	return mm->context.lam != LAM_U48;
+}
+#else
+static inline bool full_va_allowed(struct mm_struct *mm)
+{
+	return false;
+}
+#endif
+
 #include <asm-generic/mmu_context.h>
 
 #endif /* _ASM_X86_MMU_CONTEXT_H */
diff --git a/arch/x86/kernel/sys_x86_64.c b/arch/x86/kernel/sys_x86_64.c
index 660b78827638..4526e8fadfd2 100644
--- a/arch/x86/kernel/sys_x86_64.c
+++ b/arch/x86/kernel/sys_x86_64.c
@@ -21,6 +21,7 @@
 
 #include <asm/elf.h>
 #include <asm/ia32.h>
+#include <asm/mmu_context.h>
 
 /*
  * Align a virtual address to avoid aliasing in the I$ on AMD F15h.
@@ -185,7 +186,7 @@ arch_get_unmapped_area_topdown(struct file *filp, const unsigned long addr0,
 	/* requesting a specific address */
 	if (addr) {
 		addr &= PAGE_MASK;
-		if (!mmap_address_hint_valid(addr, len))
+		if (!mmap_address_hint_valid(mm, addr, len))
 			goto get_unmapped_area;
 
 		vma = find_vma(mm, addr);
@@ -206,7 +207,7 @@ arch_get_unmapped_area_topdown(struct file *filp, const unsigned long addr0,
 	 * !in_32bit_syscall() check to avoid high addresses for x32
 	 * (and make it no op on native i386).
	 */
-	if (addr > DEFAULT_MAP_WINDOW && !in_32bit_syscall())
+	if (addr > DEFAULT_MAP_WINDOW && !in_32bit_syscall() && full_va_allowed(mm))
 		info.high_limit += TASK_SIZE_MAX - DEFAULT_MAP_WINDOW;
 
 	info.align_mask = 0;
diff --git a/arch/x86/mm/hugetlbpage.c b/arch/x86/mm/hugetlbpage.c
index a0d023cb4292..9fdc8db42365 100644
--- a/arch/x86/mm/hugetlbpage.c
+++ b/arch/x86/mm/hugetlbpage.c
@@ -18,6 +18,7 @@
 #include <asm/tlb.h>
 #include <asm/tlbflush.h>
 #include <asm/elf.h>
+#include <asm/mmu_context.h>
 
 #if 0	/* This is just for testing */
 struct page *
@@ -103,6 +104,7 @@ static unsigned long hugetlb_get_unmapped_area_topdown(struct file *file,
 		unsigned long pgoff, unsigned long flags)
 {
 	struct hstate *h = hstate_file(file);
+	struct mm_struct *mm = current->mm;
 	struct vm_unmapped_area_info info;
 
 	info.flags = VM_UNMAPPED_AREA_TOPDOWN;
@@ -114,7 +116,7 @@ static unsigned long hugetlb_get_unmapped_area_topdown(struct file *file,
 	 * If hint address is above DEFAULT_MAP_WINDOW, look for unmapped area
 	 * in the full address space.
 	 */
-	if (addr > DEFAULT_MAP_WINDOW && !in_32bit_syscall())
+	if (addr > DEFAULT_MAP_WINDOW && !in_32bit_syscall() && full_va_allowed(mm))
 		info.high_limit += TASK_SIZE_MAX - DEFAULT_MAP_WINDOW;
 
 	info.align_mask = PAGE_MASK & ~huge_page_mask(h);
@@ -161,7 +163,7 @@ hugetlb_get_unmapped_area(struct file *file, unsigned long addr,
 
 	if (addr) {
 		addr &= huge_page_mask(h);
-		if (!mmap_address_hint_valid(addr, len))
+		if (!mmap_address_hint_valid(mm, addr, len))
 			goto get_unmapped_area;
 
 		vma = find_vma(mm, addr);
diff --git a/arch/x86/mm/mmap.c b/arch/x86/mm/mmap.c
index c90c20904a60..f9ca824729de 100644
--- a/arch/x86/mm/mmap.c
+++ b/arch/x86/mm/mmap.c
@@ -21,6 +21,7 @@
 #include <linux/elf-randomize.h>
 #include <asm/elf.h>
 #include <asm/io.h>
+#include <asm/mmu_context.h>
 
 #include "physaddr.h"
 
@@ -35,6 +36,8 @@ unsigned long task_size_32bit(void)
 
 unsigned long task_size_64bit(int full_addr_space)
 {
+	if (!full_va_allowed(current->mm))
+		return DEFAULT_MAP_WINDOW;
 	return full_addr_space ? TASK_SIZE_MAX : DEFAULT_MAP_WINDOW;
 }
 
@@ -206,11 +209,15 @@ const char *arch_vma_name(struct vm_area_struct *vma)
  * the failure of such a fixed mapping request, so the restriction is not
  * applied.
  */
-bool mmap_address_hint_valid(unsigned long addr, unsigned long len)
+bool mmap_address_hint_valid(struct mm_struct *mm,
+			     unsigned long addr, unsigned long len)
 {
 	if (TASK_SIZE - len < addr)
 		return false;
 
+	if (addr + len > DEFAULT_MAP_WINDOW && !full_va_allowed(mm))
+		return false;
+
 	return (addr > DEFAULT_MAP_WINDOW) == (addr + len > DEFAULT_MAP_WINDOW);
 }
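To see what the patched mmap_address_hint_valid() accepts and rejects, here is a userspace model of its logic. TASK_SIZE is simplified to a fixed constant (in the kernel it is per-task), and full_va stands in for full_va_allowed(mm); the constants are illustrative.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/*
 * Userspace model of the patched mmap_address_hint_valid().
 * MODEL_TASK_SIZE_MAX and MODEL_DEFAULT_MAP_WINDOW are simplified
 * stand-ins for the kernel constants; full_va models what
 * full_va_allowed(mm) would return for this mm.
 */
#define MODEL_TASK_SIZE_MAX      (1ULL << 56)          /* simplified */
#define MODEL_DEFAULT_MAP_WINDOW ((1ULL << 47) - 4096) /* 4-level window */

static bool hint_valid_model(bool full_va, uint64_t addr, uint64_t len)
{
	if (MODEL_TASK_SIZE_MAX - len < addr)
		return false;

	/* New in this patch: with LAM_U48, hints above the window fail. */
	if (addr + len > MODEL_DEFAULT_MAP_WINDOW && !full_va)
		return false;

	/* A hint must not straddle the 47-bit boundary. */
	return (addr > MODEL_DEFAULT_MAP_WINDOW) ==
	       (addr + len > MODEL_DEFAULT_MAP_WINDOW);
}
```

Under this model, a low hint is valid regardless of LAM state, a hint above the window is valid only while the full VA is still allowed, and any hint crossing the 47-bit boundary stays invalid, as before the patch.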
LAM_U48 steals bits above 47-bit for tags and makes it impossible for
userspace to use full address space on 5-level paging machine.

Make these features mutually exclusive: whichever gets enabled first
blocks the other one.

Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
---
 arch/x86/include/asm/elf.h         |  3 ++-
 arch/x86/include/asm/mmu_context.h | 13 +++++++++++++
 arch/x86/kernel/sys_x86_64.c       |  5 +++--
 arch/x86/mm/hugetlbpage.c          |  6 ++++--
 arch/x86/mm/mmap.c                 |  9 ++++++++-
 5 files changed, 30 insertions(+), 6 deletions(-)