Message ID | 20201103175841.3495947-4-elver@google.com
---|---
State | New, archived
Series | KFENCE: A low-overhead sampling-based memory safety error detector
On Tue, Nov 3, 2020 at 6:59 PM Marco Elver <elver@google.com> wrote:
> Add architecture specific implementation details for KFENCE and enable
> KFENCE for the arm64 architecture. In particular, this implements the
> required interface in <asm/kfence.h>.
>
> KFENCE requires that attributes for pages from its memory pool can
> individually be set. Therefore, force the entire linear map to be mapped
> at page granularity. Doing so may result in extra memory allocated for
> page tables in case rodata=full is not set; however, currently
> CONFIG_RODATA_FULL_DEFAULT_ENABLED=y is the default, and the common case
> is therefore not affected by this change.
>
> Reviewed-by: Dmitry Vyukov <dvyukov@google.com>
> Co-developed-by: Alexander Potapenko <glider@google.com>
> Signed-off-by: Alexander Potapenko <glider@google.com>
> Signed-off-by: Marco Elver <elver@google.com>

Reviewed-by: Jann Horn <jannh@google.com>
On Tue, Nov 03, 2020 at 06:58:35PM +0100, Marco Elver wrote:
> Add architecture specific implementation details for KFENCE and enable
> KFENCE for the arm64 architecture. In particular, this implements the
> required interface in <asm/kfence.h>.
>
> KFENCE requires that attributes for pages from its memory pool can
> individually be set. Therefore, force the entire linear map to be mapped
> at page granularity. Doing so may result in extra memory allocated for
> page tables in case rodata=full is not set; however, currently
> CONFIG_RODATA_FULL_DEFAULT_ENABLED=y is the default, and the common case
> is therefore not affected by this change.
>
> Reviewed-by: Dmitry Vyukov <dvyukov@google.com>
> Co-developed-by: Alexander Potapenko <glider@google.com>
> Signed-off-by: Alexander Potapenko <glider@google.com>
> Signed-off-by: Marco Elver <elver@google.com>

Thanks for diligently handling all the review feedback. This looks good
to me now, so FWIW:

Reviewed-by: Mark Rutland <mark.rutland@arm.com>

There is one thing that I think we should improve as a subsequent
cleanup, but I don't think that should block this as-is.

> +#define KFENCE_SKIP_ARCH_FAULT_HANDLER "el1_sync"

IIUC, the core kfence code is using this to figure out where to trace
from when there's a fault taken on an access to a protected page.

It would be better if the arch code passed the exception's pt_regs into
the kfence fault handler, and the kfence trace began from there. That
would also allow for dumping the exception registers, which can help
with debugging (e.g. figuring out how the address was derived when it's
calculated from multiple source registers). That would also be a bit
more robust to changes in an architecture's exception handling code.

Thanks,
Mark.
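For illustration, here is a minimal sketch of the interface change Mark is suggesting. The two-argument `kfence_handle_page_fault()` and the helper name are assumptions made for the sketch; the series as posted only passes the address and relies on the `"el1_sync"` symbol name to trim the trace.

```c
#include <linux/kfence.h>
#include <linux/ptrace.h>

/*
 * Hypothetical sketch only: the arm64 fault path already has the
 * exception's pt_regs, so it could hand them to KFENCE and let the
 * report start from the faulting PC, instead of KFENCE unwinding the
 * current stack and skipping frames by the "el1_sync" symbol name.
 */
static bool try_kfence_fault(unsigned long addr, struct pt_regs *regs)
{
	/* Two-argument form is the suggested follow-up, not this series. */
	return kfence_handle_page_fault(addr, regs);
}
```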
On Wed, 4 Nov 2020 at 14:06, Mark Rutland <mark.rutland@arm.com> wrote:
> On Tue, Nov 03, 2020 at 06:58:35PM +0100, Marco Elver wrote:
> > Add architecture specific implementation details for KFENCE and enable
> > KFENCE for the arm64 architecture. In particular, this implements the
> > required interface in <asm/kfence.h>.
> >
> > KFENCE requires that attributes for pages from its memory pool can
> > individually be set. Therefore, force the entire linear map to be mapped
> > at page granularity. Doing so may result in extra memory allocated for
> > page tables in case rodata=full is not set; however, currently
> > CONFIG_RODATA_FULL_DEFAULT_ENABLED=y is the default, and the common case
> > is therefore not affected by this change.
> >
> > Reviewed-by: Dmitry Vyukov <dvyukov@google.com>
> > Co-developed-by: Alexander Potapenko <glider@google.com>
> > Signed-off-by: Alexander Potapenko <glider@google.com>
> > Signed-off-by: Marco Elver <elver@google.com>
>
> Thanks for diligently handling all the review feedback. This looks good
> to me now, so FWIW:
>
> Reviewed-by: Mark Rutland <mark.rutland@arm.com>

Thank you!

> There is one thing that I think we should improve as a subsequent
> cleanup, but I don't think that should block this as-is.
>
> > +#define KFENCE_SKIP_ARCH_FAULT_HANDLER "el1_sync"
>
> IIUC, the core kfence code is using this to figure out where to trace
> from when there's a fault taken on an access to a protected page.

Correct.

> It would be better if the arch code passed the exception's pt_regs into
> the kfence fault handler, and the kfence trace began from there. That
> would also allow for dumping the exception registers, which can help
> with debugging (e.g. figuring out how the address was derived when it's
> calculated from multiple source registers). That would also be a bit
> more robust to changes in an architecture's exception handling code.

Good idea, thanks. I guess there's no reason to not want to always skip
to instruction_pointer(regs)?

In which case I can prepare a patch to make this change. If this should
go into a v8, please let me know. But it'd be easier as a subsequent
patch as you say, given it'll be easier to review and these patches are
in -mm now.

Thanks,
-- Marco
On Wed, Nov 04, 2020 at 03:23:48PM +0100, Marco Elver wrote:
> On Wed, 4 Nov 2020 at 14:06, Mark Rutland <mark.rutland@arm.com> wrote:
> > On Tue, Nov 03, 2020 at 06:58:35PM +0100, Marco Elver wrote:
> > There is one thing that I think we should improve as a subsequent
> > cleanup, but I don't think that should block this as-is.
> >
> > > +#define KFENCE_SKIP_ARCH_FAULT_HANDLER "el1_sync"
> >
> > IIUC, the core kfence code is using this to figure out where to trace
> > from when there's a fault taken on an access to a protected page.
>
> Correct.
>
> > It would be better if the arch code passed the exception's pt_regs into
> > the kfence fault handler, and the kfence trace began from there. That
> > would also allow for dumping the exception registers, which can help
> > with debugging (e.g. figuring out how the address was derived when it's
> > calculated from multiple source registers). That would also be a bit
> > more robust to changes in an architecture's exception handling code.
>
> Good idea, thanks. I guess there's no reason to not want to always
> skip to instruction_pointer(regs)?

I don't think we need the exception handling gunk in the trace, but
note that you'd need to use stack_trace_save_regs(regs, ...) directly,
rather than using stack_trace_save() and skipping based on
instruction_pointer(regs). Otherwise, if the fault was somewhere in an
exception handler, and we invoked the same function on the path to the
kfence fault handler, we might cut the trace at the wrong point.

> In which case I can prepare a patch to make this change. If this
> should go into a v8, please let me know. But it'd be easier as a
> subsequent patch as you say, given it'll be easier to review and these
> patches are in -mm now.

I think it'd make more sense as a subsequent change, since it's liable
to need a cycle or two of review, and I don't think it should block the
rest of the series.

Thanks,
Mark.
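To make the distinction concrete, a rough sketch of the two unwinding approaches being compared follows. The buffer depth, the helper names, and the exact-PC match are illustrative assumptions, not KFENCE's actual reporting code; only `stack_trace_save()`, `stack_trace_save_regs()`, and `instruction_pointer()` are real kernel interfaces.

```c
#include <linux/stacktrace.h>
#include <linux/ptrace.h>

#define DEMO_STACK_DEPTH 64	/* illustrative depth only */

/*
 * Unwinding from the exception pt_regs starts the trace at the faulting
 * PC, so frames of the fault/exception-entry path never show up at all.
 */
static unsigned int trace_from_regs(struct pt_regs *regs,
				    unsigned long *entries)
{
	return stack_trace_save_regs(regs, entries, DEMO_STACK_DEPTH, 0);
}

/*
 * Unwinding the current stack and trimming at instruction_pointer(regs)
 * can cut at the wrong frame: if the same code also appears earlier on
 * the exception-handling path, the first match is not the faulting frame.
 */
static unsigned int trace_by_skipping(struct pt_regs *regs,
				      unsigned long *entries)
{
	unsigned int i, num = stack_trace_save(entries, DEMO_STACK_DEPTH, 0);

	for (i = 0; i < num; i++) {
		if (entries[i] == instruction_pointer(regs))
			break;	/* report would start from entries[i] */
	}
	return num;
}
```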
diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index 1d466addb078..e524c07c3eda 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -135,6 +135,7 @@ config ARM64
 	select HAVE_ARCH_JUMP_LABEL_RELATIVE
 	select HAVE_ARCH_KASAN if !(ARM64_16K_PAGES && ARM64_VA_BITS_48)
 	select HAVE_ARCH_KASAN_SW_TAGS if HAVE_ARCH_KASAN
+	select HAVE_ARCH_KFENCE
 	select HAVE_ARCH_KGDB
 	select HAVE_ARCH_MMAP_RND_BITS
 	select HAVE_ARCH_MMAP_RND_COMPAT_BITS if COMPAT
diff --git a/arch/arm64/include/asm/kfence.h b/arch/arm64/include/asm/kfence.h
new file mode 100644
index 000000000000..5ac0f599cc9a
--- /dev/null
+++ b/arch/arm64/include/asm/kfence.h
@@ -0,0 +1,19 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+
+#ifndef __ASM_KFENCE_H
+#define __ASM_KFENCE_H
+
+#include <asm/cacheflush.h>
+
+#define KFENCE_SKIP_ARCH_FAULT_HANDLER "el1_sync"
+
+static inline bool arch_kfence_init_pool(void) { return true; }
+
+static inline bool kfence_protect_page(unsigned long addr, bool protect)
+{
+	set_memory_valid(addr, 1, !protect);
+
+	return true;
+}
+
+#endif /* __ASM_KFENCE_H */
diff --git a/arch/arm64/mm/fault.c b/arch/arm64/mm/fault.c
index 1ee94002801f..2d60204b4ed2 100644
--- a/arch/arm64/mm/fault.c
+++ b/arch/arm64/mm/fault.c
@@ -10,6 +10,7 @@
 #include <linux/acpi.h>
 #include <linux/bitfield.h>
 #include <linux/extable.h>
+#include <linux/kfence.h>
 #include <linux/signal.h>
 #include <linux/mm.h>
 #include <linux/hardirq.h>
@@ -322,6 +323,9 @@ static void __do_kernel_fault(unsigned long addr, unsigned int esr,
 	} else if (addr < PAGE_SIZE) {
 		msg = "NULL pointer dereference";
 	} else {
+		if (kfence_handle_page_fault(addr))
+			return;
+
 		msg = "paging request";
 	}
diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
index 1c0f3e02f731..86be6d1a78ab 100644
--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
@@ -1449,7 +1449,12 @@ int arch_add_memory(int nid, u64 start, u64 size,
 {
 	int ret, flags = 0;
 
-	if (rodata_full || debug_pagealloc_enabled())
+	/*
+	 * KFENCE requires linear map to be mapped at page granularity, so that
+	 * it is possible to protect/unprotect single pages in the KFENCE pool.
+	 */
+	if (rodata_full || debug_pagealloc_enabled() ||
+	    IS_ENABLED(CONFIG_KFENCE))
 		flags = NO_BLOCK_MAPPINGS | NO_CONT_MAPPINGS;
 
 	__create_pgd_mapping(swapper_pg_dir, start, __phys_to_virt(start),
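As background for the arch_add_memory() hunk above: each object in the KFENCE pool sits in its own page, and the core toggles protection on that single page, which only works if the page has its own PTE in the linear map. A simplified sketch of how such per-page toggling would use the arch hook from this patch is shown below; the helper names are hypothetical and not the actual mm/kfence/core.c functions.

```c
#include <linux/kernel.h>
#include <linux/mm.h>
#include <asm/kfence.h>		/* kfence_protect_page() from this patch */

/*
 * Hypothetical helpers: protect or unprotect the single page containing
 * @addr. On arm64 this ends up in set_memory_valid(addr, 1, ...), which
 * is why the linear map is forced to page granularity
 * (NO_BLOCK_MAPPINGS | NO_CONT_MAPPINGS) when CONFIG_KFENCE is enabled.
 */
static bool kfence_protect_one(unsigned long addr)
{
	return kfence_protect_page(ALIGN_DOWN(addr, PAGE_SIZE), true);
}

static bool kfence_unprotect_one(unsigned long addr)
{
	return kfence_protect_page(ALIGN_DOWN(addr, PAGE_SIZE), false);
}
```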