
[v3,00/12] Cover a guard gap corner case

Message ID 20240312222843.2505560-1-rick.p.edgecombe@intel.com

Rick Edgecombe March 12, 2024, 10:28 p.m. UTC
Hi,

For v3, the changes are in the struct vm_unmapped_area_info zeroing
patches. Per discussion[0], they are switched to a method of initializing
the struct at the callers that doesn't leave useless statements behind
after cleanup and is a bit easier to manually inspect for bugs. The
arches that acked the old versions are kept as separate patches. The
remaining callers are converted in a single treewide patch.
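
For reference, the per-caller pattern looks roughly like the standalone
sketch below. This is not a patch excerpt; the stand-in struct only
mimics the fields of the kernel's struct vm_unmapped_area_info (see
include/linux/mm.h for the real definition). The point is just that a
designated initializer zeroes every field the caller does not name:

#include <stdio.h>

/* Stand-in mirroring the kernel's struct vm_unmapped_area_info
 * (field list is illustrative only). */
struct vm_unmapped_area_info {
	unsigned long flags;
	unsigned long length;
	unsigned long low_limit;
	unsigned long high_limit;
	unsigned long align_mask;
	unsigned long align_offset;
};

int main(void)
{
	/*
	 * Fields not named here (flags, align_mask, align_offset) are
	 * zeroed by the initializer, so a caller that predates a newly
	 * added field still passes zero for it rather than stack garbage.
	 */
	struct vm_unmapped_area_info info = {
		.length     = 0x1000,
		.low_limit  = 0x10000,
		.high_limit = 0x7ffffffff000,
	};

	printf("flags=%lu mask=%lu offset=%lu\n",
	       info.flags, info.align_mask, info.align_offset);
	return 0;
}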

It seems like a more straightforward change now, but I would still
appreciate it if anyone could double check the treewide patch.

Also, the series is rebased to v6.8.

[0] https://lore.kernel.org/lkml/e617dea592ec336e991c4362e48cd8c648ba7b49.camel@intel.com/

v2:
https://lore.kernel.org/lkml/20240226190951.3240433-1-rick.p.edgecombe@intel.com/

v1:
https://lore.kernel.org/lkml/20240215231332.1556787-1-rick.p.edgecombe@intel.com/

=======

While working on x86’s shadow stack feature, I came across some
limitations around the kernel’s handling of guard gaps. AFAICT these
limitations are not too important for the traditional stack usage of
guard gaps, but they have a bigger impact on shadow stack’s usage. And
now, in addition to x86, we have two other architectures implementing
shadow-stack-like features that plan to use guard gaps. I wanted to see
about addressing these limitations, but I have not worked on mmap()
placement related code before, so I would greatly appreciate it if
people could take a look and point me in the right direction.

The nature of the limitations of concern is as follows. In order to ensure 
guard gaps between mappings, mmap() would need to consider two things:
 1. That the new mapping isn’t placed in an any existing mapping’s guard
    gap.
 2. That the new mapping isn’t placed such that any existing mappings
    end up in *its* guard gap.
Currently mmap() never considers (2), and (1) is not considered in some
situations.
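
To make the two conditions concrete, here is a toy userspace check. It
is purely illustrative (placement_ok() is a made-up helper, not the
kernel's logic) and assumes the guard gap sits below its mapping, as it
does for grows-down stacks and shadow stacks:

#include <stdbool.h>
#include <stdio.h>

static bool placement_ok(unsigned long exist_start, unsigned long exist_end,
			 unsigned long exist_gap,
			 unsigned long new_start, unsigned long new_len,
			 unsigned long new_gap)
{
	unsigned long new_end = new_start + new_len;

	/* (1) new mapping must not overlap [exist_start - exist_gap, exist_end) */
	if (new_end > exist_start - exist_gap && new_start < exist_end)
		return false;

	/* (2) existing mapping must not overlap [new_start - new_gap, new_end) */
	if (exist_end > new_start - new_gap && exist_start < new_end)
		return false;

	return true;
}

int main(void)
{
	/* Existing mapping at [0x200000, 0x201000) with a one page gap below it. */

	/* Lands exactly in the existing mapping's gap: violates (1). */
	printf("%d\n", placement_ok(0x200000, 0x201000, 0x1000,
				    0x1ff000, 0x1000, 0x1000));
	/* Placed just above, so the existing mapping fills the new
	 * mapping's gap: violates (2). */
	printf("%d\n", placement_ok(0x200000, 0x201000, 0x1000,
				    0x201000, 0x1000, 0x1000));
	/* Far enough below that both gaps are respected. */
	printf("%d\n", placement_ok(0x200000, 0x201000, 0x1000,
				    0x1fd000, 0x1000, 0x1000));
	return 0;
}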

When no address hint is passed, or when one is passed without
MAP_FIXED_NOREPLACE, (1) is enforced. With MAP_FIXED_NOREPLACE, (1) is
not enforced. With MAP_FIXED, (1) is not considered, but this seems
expected since MAP_FIXED can already clobber existing mappings. For
MAP_FIXED_NOREPLACE I would have guessed it should respect the guard
gaps of existing mappings, but it is probably a little ambiguous.
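
For concreteness, the three userspace cases look roughly like this
(illustrative only; the hint address is arbitrary and error handling is
omitted):

#define _GNU_SOURCE
#include <stdio.h>
#include <sys/mman.h>

#ifndef MAP_FIXED_NOREPLACE
#define MAP_FIXED_NOREPLACE 0x100000	/* fallback for older headers */
#endif

int main(void)
{
	size_t len = 0x1000;
	void *hint = (void *)0x700000000000UL;	/* arbitrary, for illustration */

	/* No hint: the kernel picks the address, and this is the case
	 * where existing mappings' guard gaps are respected (1). */
	void *a = mmap(NULL, len, PROT_READ | PROT_WRITE,
		       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

	/* MAP_FIXED_NOREPLACE: fails with EEXIST if the range overlaps
	 * an existing mapping, but that mapping's guard gap is not
	 * checked. */
	void *b = mmap(hint, len, PROT_READ | PROT_WRITE,
		       MAP_PRIVATE | MAP_ANONYMOUS | MAP_FIXED_NOREPLACE, -1, 0);

	/* MAP_FIXED: places the mapping unconditionally, clobbering
	 * anything already there, guard gap or not. */
	void *c = mmap(hint, len, PROT_READ | PROT_WRITE,
		       MAP_PRIVATE | MAP_ANONYMOUS | MAP_FIXED, -1, 0);

	printf("%p %p %p\n", a, b, c);
	return 0;
}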

In this series I just tried to add enforcement of (2) for the normal
(no address hint) case, and only for the newer shadow stack memory (not
ordinary stacks). The reason is that in the no-address-hint situation,
landing next to a guard gap could come up naturally, and so be more
influenceable by attackers, such that two shadow stacks could end up
adjacent without a guard gap. Whereas the address-hint scenarios would
require more control - being able to call mmap() with specific
arguments. As for why not just fix the other corner cases anyway: I
thought that might have a greater chance of affecting existing apps.

Thanks,

Rick

Rick Edgecombe (12):
  mm: Switch mm->get_unmapped_area() to a flag
  mm: Introduce arch_get_unmapped_area_vmflags()
  mm: Use get_unmapped_area_vmflags()
  thp: Add thp_get_unmapped_area_vmflags()
  csky: Use initializer for struct vm_unmapped_area_info
  parisc: Use initializer for struct vm_unmapped_area_info
  powerpc: Use initializer for struct vm_unmapped_area_info
  treewide: Use initializer for struct vm_unmapped_area_info
  mm: Take placement mappings gap into account
  x86/mm: Implement HAVE_ARCH_UNMAPPED_AREA_VMFLAGS
  x86/mm: Care about shadow stack guard gap during placement
  selftests/x86: Add placement guard gap test for shstk

 arch/alpha/kernel/osf_sys.c                   |   5 +-
 arch/arc/mm/mmap.c                            |   4 +-
 arch/arm/mm/mmap.c                            |   5 +-
 arch/csky/abiv1/mmap.c                        |  12 +-
 arch/loongarch/mm/mmap.c                      |   3 +-
 arch/mips/mm/mmap.c                           |   3 +-
 arch/parisc/kernel/sys_parisc.c               |   6 +-
 arch/powerpc/mm/book3s64/slice.c              |  23 ++--
 arch/s390/mm/hugetlbpage.c                    |   9 +-
 arch/s390/mm/mmap.c                           |  15 +--
 arch/sh/mm/mmap.c                             |   5 +-
 arch/sparc/kernel/sys_sparc_32.c              |   3 +-
 arch/sparc/kernel/sys_sparc_64.c              |  20 ++--
 arch/sparc/mm/hugetlbpage.c                   |   9 +-
 arch/x86/include/asm/pgtable_64.h             |   1 +
 arch/x86/kernel/cpu/sgx/driver.c              |   2 +-
 arch/x86/kernel/sys_x86_64.c                  |  42 +++++--
 arch/x86/mm/hugetlbpage.c                     |   9 +-
 arch/x86/mm/mmap.c                            |   4 +-
 drivers/char/mem.c                            |   2 +-
 drivers/dax/device.c                          |   6 +-
 fs/hugetlbfs/inode.c                          |  11 +-
 fs/proc/inode.c                               |  15 +--
 fs/ramfs/file-mmu.c                           |   2 +-
 include/linux/huge_mm.h                       |  11 ++
 include/linux/mm.h                            |  12 +-
 include/linux/mm_types.h                      |   6 +-
 include/linux/sched/coredump.h                |   5 +-
 include/linux/sched/mm.h                      |  22 ++++
 io_uring/io_uring.c                           |   2 +-
 mm/debug.c                                    |   6 -
 mm/huge_memory.c                              |  26 +++--
 mm/mmap.c                                     | 106 +++++++++++++-----
 mm/shmem.c                                    |  11 +-
 mm/util.c                                     |   6 +-
 .../testing/selftests/x86/test_shadow_stack.c |  67 ++++++++++-
 36 files changed, 319 insertions(+), 177 deletions(-)