Message ID: 20200929133814.2834621-4-elver@google.com (mailing list archive)
State: New, archived
Series: KFENCE: A low-overhead sampling-based memory safety error detector
On Tue, Sep 29, 2020 at 3:38 PM Marco Elver <elver@google.com> wrote:
> Add architecture specific implementation details for KFENCE and enable
> KFENCE for the arm64 architecture. In particular, this implements the
> required interface in <asm/kfence.h>. Currently, the arm64 version does
> not yet use a statically allocated memory pool, at the cost of a pointer
> load for each is_kfence_address().
[...]
> diff --git a/arch/arm64/include/asm/kfence.h b/arch/arm64/include/asm/kfence.h
[...]
> +static inline bool arch_kfence_initialize_pool(void)
> +{
> +	const unsigned int num_pages = ilog2(roundup_pow_of_two(KFENCE_POOL_SIZE / PAGE_SIZE));
> +	struct page *pages = alloc_pages(GFP_KERNEL, num_pages);
> +
> +	if (!pages)
> +		return false;
> +
> +	__kfence_pool = page_address(pages);
> +	return true;
> +}

If you're going to do "virt_to_page(meta->addr)->slab_cache = cache;"
on these pages in kfence_guarded_alloc(), and pass them into kfree(),
you'd better mark these pages as non-compound - something like
alloc_pages_exact() or split_page() may help. Otherwise, I think when
SLUB's kfree() does virt_to_head_page() right at the start, that will
return a pointer to the first page of the entire __kfence_pool, and
then when it loads page->slab_cache, it gets some random cache and
stuff blows up. Kinda surprising that you haven't run into that during
your testing, maybe I'm missing something...

Also, this kinda feels like it should be the "generic" version of
arch_kfence_initialize_pool() and live in mm/kfence/core.c ?
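A minimal sketch of the non-compound allocation Jann suggests, based on alloc_pages_exact() (illustration only; the body and error handling here are assumptions, not code from the series):

static inline bool arch_kfence_initialize_pool(void)
{
	/*
	 * Sketch: alloc_pages_exact() splits the high-order allocation into
	 * order-0 pages and frees the unused tail, so no page in the pool
	 * carries a compound head and virt_to_head_page() on a pool address
	 * resolves to that page itself rather than to the first page of the
	 * whole __kfence_pool.
	 */
	void *addr = alloc_pages_exact(KFENCE_POOL_SIZE, GFP_KERNEL);

	if (!addr)
		return false;

	__kfence_pool = addr;
	return true;
}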
On Fri, 2 Oct 2020 at 08:48, Jann Horn <jannh@google.com> wrote:
>
> On Tue, Sep 29, 2020 at 3:38 PM Marco Elver <elver@google.com> wrote:
> > Add architecture specific implementation details for KFENCE and enable
> > KFENCE for the arm64 architecture. In particular, this implements the
> > required interface in <asm/kfence.h>. Currently, the arm64 version does
> > not yet use a statically allocated memory pool, at the cost of a pointer
> > load for each is_kfence_address().
> [...]
> > diff --git a/arch/arm64/include/asm/kfence.h b/arch/arm64/include/asm/kfence.h
> [...]
> > +static inline bool arch_kfence_initialize_pool(void)
> > +{
> > +	const unsigned int num_pages = ilog2(roundup_pow_of_two(KFENCE_POOL_SIZE / PAGE_SIZE));
> > +	struct page *pages = alloc_pages(GFP_KERNEL, num_pages);
> > +
> > +	if (!pages)
> > +		return false;
> > +
> > +	__kfence_pool = page_address(pages);
> > +	return true;
> > +}
>
> If you're going to do "virt_to_page(meta->addr)->slab_cache = cache;"
> on these pages in kfence_guarded_alloc(), and pass them into kfree(),
> you'd better mark these pages as non-compound - something like
> alloc_pages_exact() or split_page() may help. Otherwise, I think when
> SLUB's kfree() does virt_to_head_page() right at the start, that will
> return a pointer to the first page of the entire __kfence_pool, and
> then when it loads page->slab_cache, it gets some random cache and
> stuff blows up. Kinda surprising that you haven't run into that during
> your testing, maybe I'm missing something...

I added a WARN_ON() check in kfence_initialize_pool() to check if our
pages are compound or not; they are not.

In slub.c, __GFP_COMP is passed to alloc_pages(), which causes them to
have a compound head I believe.

> Also, this kinda feels like it should be the "generic" version of
> arch_kfence_initialize_pool() and live in mm/kfence/core.c ?

Done for v5.

Thanks,
-- Marco
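The kind of sanity check Marco describes could look roughly like this (a sketch; the function name and placement are assumptions, the actual check used for the test may differ):

/*
 * Walk the freshly allocated pool and warn if any backing page has a
 * compound head; with plain GFP_KERNEL (no __GFP_COMP) none should.
 */
static void kfence_warn_if_pool_compound(void)
{
	unsigned long addr = (unsigned long)__kfence_pool;
	unsigned long end = addr + KFENCE_POOL_SIZE;

	for (; addr < end; addr += PAGE_SIZE)
		WARN_ON(PageCompound(virt_to_page((void *)addr)));
}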
On Fri, Oct 2, 2020 at 4:19 PM Marco Elver <elver@google.com> wrote:
>
> On Fri, 2 Oct 2020 at 08:48, Jann Horn <jannh@google.com> wrote:
> >
> > On Tue, Sep 29, 2020 at 3:38 PM Marco Elver <elver@google.com> wrote:
> > > Add architecture specific implementation details for KFENCE and enable
> > > KFENCE for the arm64 architecture. In particular, this implements the
> > > required interface in <asm/kfence.h>. Currently, the arm64 version does
> > > not yet use a statically allocated memory pool, at the cost of a pointer
> > > load for each is_kfence_address().
> > [...]
> > > diff --git a/arch/arm64/include/asm/kfence.h b/arch/arm64/include/asm/kfence.h
> > [...]
> > > +static inline bool arch_kfence_initialize_pool(void)
> > > +{
> > > +	const unsigned int num_pages = ilog2(roundup_pow_of_two(KFENCE_POOL_SIZE / PAGE_SIZE));
> > > +	struct page *pages = alloc_pages(GFP_KERNEL, num_pages);
> > > +
> > > +	if (!pages)
> > > +		return false;
> > > +
> > > +	__kfence_pool = page_address(pages);
> > > +	return true;
> > > +}
> >
> > If you're going to do "virt_to_page(meta->addr)->slab_cache = cache;"
> > on these pages in kfence_guarded_alloc(), and pass them into kfree(),
> > you'd better mark these pages as non-compound - something like
> > alloc_pages_exact() or split_page() may help. Otherwise, I think when
> > SLUB's kfree() does virt_to_head_page() right at the start, that will
> > return a pointer to the first page of the entire __kfence_pool, and
> > then when it loads page->slab_cache, it gets some random cache and
> > stuff blows up. Kinda surprising that you haven't run into that during
> > your testing, maybe I'm missing something...
>
> I added a WARN_ON() check in kfence_initialize_pool() to check if our
> pages are compound or not; they are not.
>
> In slub.c, __GFP_COMP is passed to alloc_pages(), which causes them to
> have a compound head I believe.

Aah, I mixed up high-order pages and compound pages. Sorry for the noise.
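As a standalone illustration of the distinction being cleared up here (not part of the patch): a high-order allocation only becomes a compound page when __GFP_COMP is passed, which SLUB does for its large allocations and the KFENCE pool allocation above does not.

#include <linux/gfp.h>
#include <linux/mm.h>

static void compound_vs_high_order_demo(void)
{
	struct page *plain = alloc_pages(GFP_KERNEL, 2);              /* order-2, no compound head */
	struct page *comp  = alloc_pages(GFP_KERNEL | __GFP_COMP, 2); /* order-2, compound */

	if (plain) {
		WARN_ON(PageCompound(plain));   /* tail pages carry no compound linkage */
		__free_pages(plain, 2);
	}
	if (comp) {
		WARN_ON(!PageCompound(comp));   /* head is marked, tails point back to it */
		__free_pages(comp, 2);
	}
}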
diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index 6d232837cbee..1acc6b2877c3 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -132,6 +132,7 @@ config ARM64
 	select HAVE_ARCH_JUMP_LABEL_RELATIVE
 	select HAVE_ARCH_KASAN if !(ARM64_16K_PAGES && ARM64_VA_BITS_48)
 	select HAVE_ARCH_KASAN_SW_TAGS if HAVE_ARCH_KASAN
+	select HAVE_ARCH_KFENCE if (!ARM64_16K_PAGES && !ARM64_64K_PAGES)
 	select HAVE_ARCH_KGDB
 	select HAVE_ARCH_MMAP_RND_BITS
 	select HAVE_ARCH_MMAP_RND_COMPAT_BITS if COMPAT
diff --git a/arch/arm64/include/asm/kfence.h b/arch/arm64/include/asm/kfence.h
new file mode 100644
index 000000000000..608dde80e5ca
--- /dev/null
+++ b/arch/arm64/include/asm/kfence.h
@@ -0,0 +1,39 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+
+#ifndef __ASM_KFENCE_H
+#define __ASM_KFENCE_H
+
+#include <linux/kfence.h>
+#include <linux/log2.h>
+#include <linux/mm.h>
+
+#include <asm/cacheflush.h>
+
+#define KFENCE_SKIP_ARCH_FAULT_HANDLER "el1_sync"
+
+/*
+ * FIXME: Support HAVE_ARCH_KFENCE_STATIC_POOL: Use the statically allocated
+ * __kfence_pool, to avoid the extra pointer load for is_kfence_address(). By
+ * default, however, we do not have struct pages for static allocations.
+ */
+
+static inline bool arch_kfence_initialize_pool(void)
+{
+	const unsigned int num_pages = ilog2(roundup_pow_of_two(KFENCE_POOL_SIZE / PAGE_SIZE));
+	struct page *pages = alloc_pages(GFP_KERNEL, num_pages);
+
+	if (!pages)
+		return false;
+
+	__kfence_pool = page_address(pages);
+	return true;
+}
+
+static inline bool kfence_protect_page(unsigned long addr, bool protect)
+{
+	set_memory_valid(addr, 1, !protect);
+
+	return true;
+}
+
+#endif /* __ASM_KFENCE_H */
diff --git a/arch/arm64/mm/fault.c b/arch/arm64/mm/fault.c
index f07333e86c2f..d5b72ecbeeea 100644
--- a/arch/arm64/mm/fault.c
+++ b/arch/arm64/mm/fault.c
@@ -10,6 +10,7 @@
 #include <linux/acpi.h>
 #include <linux/bitfield.h>
 #include <linux/extable.h>
+#include <linux/kfence.h>
 #include <linux/signal.h>
 #include <linux/mm.h>
 #include <linux/hardirq.h>
@@ -310,6 +311,9 @@ static void __do_kernel_fault(unsigned long addr, unsigned int esr,
 		     "Ignoring spurious kernel translation fault at virtual address %016lx\n", addr))
 		return;
 
+	if (kfence_handle_page_fault(addr))
+		return;
+
 	if (is_el1_permission_fault(addr, esr, regs)) {
 		if (esr & ESR_ELx_WNR)
 			msg = "write to read-only memory";
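For context on the "pointer load for each is_kfence_address()" mentioned in the commit message and in the FIXME above, the address check is roughly of this shape (approximate sketch; the exact definition lives in <linux/kfence.h> in the series and may differ):

/*
 * With a dynamically allocated pool, __kfence_pool must be loaded from
 * memory on every call before the range comparison; a statically
 * allocated pool would make the bounds link-time constants.
 */
static __always_inline bool is_kfence_address(const void *addr)
{
	return unlikely((unsigned long)((char *)addr - __kfence_pool) < KFENCE_POOL_SIZE &&
			__kfence_pool);
}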