Message ID | 808cc6516f47d5f5e811d2c237983767952f3743.1738686764.git.maciej.wieczor-retman@intel.com (mailing list archive)
---|---
State | Handled Elsewhere
Series | kasan: x86: arm64: risc-v: KASAN tag-based mode for x86

Context | Check | Description
---|---|---
bjorn/pre-ci_am | fail | Failed to apply series
On Tue, Feb 4, 2025 at 6:34 PM Maciej Wieczor-Retman
<maciej.wieczor-retman@intel.com> wrote:
>
> Tag-based KASAN (on arm64) works by generating a random 8-bit tag and
> putting it in both the top byte of the pointer (that points to the
> allocated memory) and into all bytes of shadow memory that correspond to
> the chunk of allocated regular memory. Each byte of shadow memory covers
> a 16 byte chunk of allocated memory - a value called KASAN granularity.
> This means that out-of-bounds memory accesses that happen inside the 16
> bytes can't be caught.
>
> The dense mode offers reducing the tag width from 8 to 4 bits and
> storing two tags in one byte of shadow memory - one in the upper 4 bits
> of the byte and one in the lower 4. This way one byte of shadow memory
> can cover 32 bytes of allocated memory while still keeping the "16 bytes
> per one tag" granularity. The lower 4 bits of each shadow byte map bytes
> of memory with offsets 0-15 and the upper 4 bits map offsets 16-31.
>
> Example:
> The example below shows how the shadow memory looks like after
> allocating 48 bytes of memory in both normal tag-based mode and the
> dense mode. The contents of shadow memory are overlaid onto address
> offsets that they relate to in the allocated kernel memory. Each cell
> | | symbolizes one byte of shadow memory.
>
> = The regular tag based mode:
> - Randomly generated 8-bit tag equals 0xAB.
> - 0xFE is the tag that symbolizes unallocated memory.
>
> Shadow memory contents:           | 0xAB | 0xAB | 0xAB | 0xFE |
> Shadow memory address offsets:    0      1      2      3      4
> Allocated memory address offsets: 0      16     32     48     64
>
> = The dense tag based mode:
> - Randomly generated 4-bit tag equals 0xC.
> - 0xE is the tag that symbolizes unallocated memory.
>
> Shadow memory contents:           |0xC 0xC |0xC 0xE |0xE 0xE |0xE 0xE |
> Shadow memory address offsets:    0        1        2        3        4
> Allocated memory address offsets: 0        32       64       96       128
>
> Add a new config option and defines that can override the standard
> system of one tag per one shadow byte.
>
> Add alternative version of the kasan_poison() that deals with tags not
> being aligned to byte size in shadow memory.
>
> Signed-off-by: Maciej Wieczor-Retman <maciej.wieczor-retman@intel.com>
> ---
>  include/linux/kasan.h | 18 ++++++++++++++++++
>  lib/Kconfig.kasan | 21 +++++++++++++++++++++
>  mm/kasan/kasan.h | 4 +---
>  mm/kasan/shadow.c | 33 ++++++++++++++++++++++++++++++---
>  4 files changed, 70 insertions(+), 6 deletions(-)
>
> diff --git a/include/linux/kasan.h b/include/linux/kasan.h
> index 03b440658817..ea0f5acd875b 100644
> --- a/include/linux/kasan.h
> +++ b/include/linux/kasan.h
> @@ -35,6 +35,24 @@ typedef unsigned int __bitwise kasan_vmalloc_flags_t;
>
>  /* Software KASAN implementations use shadow memory. */
>
> +#ifdef CONFIG_KASAN_SW_TAGS_DENSE
> +#define KASAN_GRANULE_SHIFT (KASAN_SHADOW_SCALE_SHIFT - 1)
> +#define KASAN_SHADOW_SCALE_SIZE (1UL << KASAN_SHADOW_SCALE_SHIFT)
> +static inline u8 kasan_dense_tag(u8 tag)
> +{
> +	return (tag << KASAN_TAG_WIDTH | tag);
> +}
> +#else
> +#define KASAN_GRANULE_SHIFT KASAN_SHADOW_SCALE_SHIFT
> +#define KASAN_SHADOW_SCALE_SIZE (1UL << KASAN_GRANULE_SHIFT)
> +static inline u8 kasan_dense_tag(u8 tag)
> +{
> +	return tag;
> +}
> +#endif
> +
> +#define KASAN_GRANULE_SIZE (1UL << KASAN_GRANULE_SHIFT)
> +

Is there a reason these definitions are added to
include/linux/kasan.h? At least within this patch, they are only used
within mm/kasan, so let's keep them in mm/kasan/kasan.h.

>  #ifdef CONFIG_KASAN_SW_TAGS
>  /* This matches KASAN_TAG_INVALID. */
>  #define KASAN_SHADOW_INIT 0xFE
> diff --git a/lib/Kconfig.kasan b/lib/Kconfig.kasan
> index 98016e137b7f..d08b4e9bf477 100644
> --- a/lib/Kconfig.kasan
> +++ b/lib/Kconfig.kasan
> @@ -19,6 +19,13 @@ config ARCH_DISABLE_KASAN_INLINE
>  	  Disables both inline and stack instrumentation. Selected by
>  	  architectures that do not support these instrumentation types.
>
> +config ARCH_HAS_KASAN_SW_TAGS_DENSE
> +	bool
> +	help
> +	  Enables option to compile tag-based KASAN with densely packed tags -
> +	  two 4-bit tags per one byte of shadow memory. Set on architectures
> +	  that have 4-bit tag macros.
> +
>  config CC_HAS_KASAN_GENERIC
>  	def_bool $(cc-option, -fsanitize=kernel-address)
>
> @@ -223,4 +230,18 @@ config KASAN_EXTRA_INFO
>  	  boot parameter, it will add 8 * stack_ring_size bytes of additional
>  	  memory consumption.
>
> +config KASAN_SW_TAGS_DENSE
> +	bool "Two 4-bit tags in one shadow memory byte"
> +	depends on KASAN_SW_TAGS
> +	depends on ARCH_HAS_KASAN_SW_TAGS_DENSE

I think this should also depend on KASAN_OUTLINE: Clang/GCC aren't
aware of the dense mode.

> +	help
> +	  Enables packing two tags into one shadow byte to half the memory usage
> +	  compared to normal tag-based mode.

But adds some performance impact?

> +
> +	  After setting this option, tag width macro is set to 4 and size macros
> +	  are adjusted based on used KASAN_SHADOW_SCALE_SHIFT.

I think this paragraph is an implementation detail and we can drop it.

> +
> +	  ARCH_HAS_KASAN_SW_TAGS_DENSE is needed for this option since the
> +	  special tag macros need to be properly set for 4-bit wide tags.
> +
>  endif # KASAN
> diff --git a/mm/kasan/kasan.h b/mm/kasan/kasan.h
> index 72da5ddcceaa..0e04c5e2c405 100644
> --- a/mm/kasan/kasan.h
> +++ b/mm/kasan/kasan.h
> @@ -128,9 +128,7 @@ static inline bool kasan_requires_meta(void)
>
>  #endif /* CONFIG_KASAN_GENERIC */
>
> -#if defined(CONFIG_KASAN_GENERIC) || defined(CONFIG_KASAN_SW_TAGS)
> -#define KASAN_GRANULE_SIZE (1UL << KASAN_SHADOW_SCALE_SHIFT)
> -#else
> +#ifdef CONFIG_KASAN_HW_TAGS
>  #include <asm/mte-kasan.h>
>  #define KASAN_GRANULE_SIZE MTE_GRANULE_SIZE
>  #endif
> diff --git a/mm/kasan/shadow.c b/mm/kasan/shadow.c
> index d6210ca48dda..368503f54b87 100644
> --- a/mm/kasan/shadow.c
> +++ b/mm/kasan/shadow.c
> @@ -123,7 +123,8 @@ EXPORT_SYMBOL(__hwasan_memcpy);
>
>  void kasan_poison(const void *addr, size_t size, u8 value, bool init)
>  {
> -	void *shadow_start, *shadow_end;
> +	u8 *shadow_start, *shadow_end, *shadow_start_aligned, *shadow_end_aligned, tag;
> +	u64 addr64, addr_start_aligned, addr_end_aligned;
>
>  	if (!kasan_arch_is_ready())
>  		return;
> @@ -134,16 +135,42 @@ void kasan_poison(const void *addr, size_t size, u8 value, bool init)
>  	 * addresses to this function.
>  	 */
>  	addr = kasan_reset_tag(addr);
> +	addr64 = (u64)addr;
>
> -	if (WARN_ON((unsigned long)addr & KASAN_GRANULE_MASK))
> +	if (WARN_ON(addr64 & KASAN_GRANULE_MASK))
>  		return;
>  	if (WARN_ON(size & KASAN_GRANULE_MASK))
>  		return;
>
>  	shadow_start = kasan_mem_to_shadow(addr);
>  	shadow_end = kasan_mem_to_shadow(addr + size);
> +	addr_start_aligned = round_up(addr64, KASAN_SHADOW_SCALE_SIZE);
> +	addr_end_aligned = round_down(addr64 + size, KASAN_SHADOW_SCALE_SIZE);
> +	shadow_start_aligned = kasan_mem_to_shadow((void *)addr_start_aligned);
> +	shadow_end_aligned = kasan_mem_to_shadow((void *)addr_end_aligned);
> +
> +	/* If size is empty just return. */
> +	if (!size)
> +		return;
>
> -	__memset(shadow_start, value, shadow_end - shadow_start);
> +	/* Memset the first unaligned tag in shadow memory. */
> +	if (addr64 % KASAN_SHADOW_SCALE_SIZE) {

So this is required, because KASAN_SHADOW_SCALE_SIZE is 32 but minimal
slab alignment is still KASAN_GRANULE_SIZE == 16... We should at least
hide this check under IS_ENABLED(KASAN_SW_TAGS_DENSE).

> +		tag = *shadow_start & KASAN_TAG_MASK;
> +		tag |= value << KASAN_TAG_WIDTH;
> +		*shadow_start = tag;
> +	}
> +
> +	/* Memset the middle aligned part in shadow memory. */
> +	tag = kasan_dense_tag(value);
> +	__memset(shadow_start_aligned, tag, shadow_end_aligned - shadow_start_aligned);
> +
> +	/* Memset the last unaligned tag in shadow memory. */
> +	if ((addr64 + size) % KASAN_SHADOW_SCALE_SIZE) {

Would it be possible to move this part to kasan_poison_last_granule()?
That function seems to be serving a similar purpose but for the
Generic mode.

It might also be cleaner to add a kasan_poison_first_granule() that
contains the if (addr64 % KASAN_SHADOW_SCALE_SIZE) check.

> +		tag = KASAN_TAG_MASK << KASAN_TAG_WIDTH;
> +		tag &= *shadow_end;
> +		tag |= value;
> +		*shadow_end = tag;
> +	}
>  }
>  EXPORT_SYMBOL_GPL(kasan_poison);
>
> --
> 2.47.1
>
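[For concreteness, a rough sketch of the split suggested above - the helper
names and shapes are hypothetical (the existing Generic-mode
kasan_poison_last_granule() takes different arguments), and the macros are the
ones introduced by this patch; this only illustrates the idea, it is not the
actual implementation:]

	static inline void kasan_poison_first_granule_dense(const void *addr, u8 value)
	{
		u8 *shadow = kasan_mem_to_shadow(addr);

		if (!IS_ENABLED(CONFIG_KASAN_SW_TAGS_DENSE))
			return;

		/*
		 * The region starts in the middle of a shadow byte: keep the
		 * low nibble (offsets 0-15) and overwrite only the high one.
		 */
		if ((u64)addr % KASAN_SHADOW_SCALE_SIZE)
			*shadow = (*shadow & KASAN_TAG_MASK) | (value << KASAN_TAG_WIDTH);
	}

	static inline void kasan_poison_last_granule_dense(const void *addr, size_t size, u8 value)
	{
		u8 *shadow = kasan_mem_to_shadow(addr + size);

		if (!IS_ENABLED(CONFIG_KASAN_SW_TAGS_DENSE))
			return;

		/*
		 * The region ends in the middle of a shadow byte: keep the
		 * high nibble (offsets 16-31) and overwrite only the low one.
		 */
		if (((u64)addr + size) % KASAN_SHADOW_SCALE_SIZE)
			*shadow = (*shadow & (KASAN_TAG_MASK << KASAN_TAG_WIDTH)) | value;
	}

[With helpers of this shape, kasan_poison() itself would only keep the aligned
__memset() over the middle of the region.]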
On 2025-02-06 at 00:43:46 +0100, Andrey Konovalov wrote:
>On Tue, Feb 4, 2025 at 6:34 PM Maciej Wieczor-Retman
><maciej.wieczor-retman@intel.com> wrote:
>>
>> Tag-based KASAN (on arm64) works by generating a random 8-bit tag and
>> putting it in both the top byte of the pointer (that points to the
>> allocated memory) and into all bytes of shadow memory that correspond to
>> the chunk of allocated regular memory. Each byte of shadow memory covers
>> a 16 byte chunk of allocated memory - a value called KASAN granularity.
>> This means that out-of-bounds memory accesses that happen inside the 16
>> bytes can't be caught.
>>
>> The dense mode offers reducing the tag width from 8 to 4 bits and
>> storing two tags in one byte of shadow memory - one in the upper 4 bits
>> of the byte and one in the lower 4. This way one byte of shadow memory
>> can cover 32 bytes of allocated memory while still keeping the "16 bytes
>> per one tag" granularity. The lower 4 bits of each shadow byte map bytes
>> of memory with offsets 0-15 and the upper 4 bits map offsets 16-31.
>>
>> Example:
>> The example below shows how the shadow memory looks like after
>> allocating 48 bytes of memory in both normal tag-based mode and the
>> dense mode. The contents of shadow memory are overlaid onto address
>> offsets that they relate to in the allocated kernel memory. Each cell
>> | | symbolizes one byte of shadow memory.
>>
>> = The regular tag based mode:
>> - Randomly generated 8-bit tag equals 0xAB.
>> - 0xFE is the tag that symbolizes unallocated memory.
>>
>> Shadow memory contents:           | 0xAB | 0xAB | 0xAB | 0xFE |
>> Shadow memory address offsets:    0      1      2      3      4
>> Allocated memory address offsets: 0      16     32     48     64
>>
>> = The dense tag based mode:
>> - Randomly generated 4-bit tag equals 0xC.
>> - 0xE is the tag that symbolizes unallocated memory.
>>
>> Shadow memory contents:           |0xC 0xC |0xC 0xE |0xE 0xE |0xE 0xE |
>> Shadow memory address offsets:    0        1        2        3        4
>> Allocated memory address offsets: 0        32       64       96       128
>>
>> Add a new config option and defines that can override the standard
>> system of one tag per one shadow byte.
>>
>> Add alternative version of the kasan_poison() that deals with tags not
>> being aligned to byte size in shadow memory.
>>
>> Signed-off-by: Maciej Wieczor-Retman <maciej.wieczor-retman@intel.com>
>> ---
>>  include/linux/kasan.h | 18 ++++++++++++++++++
>>  lib/Kconfig.kasan | 21 +++++++++++++++++++++
>>  mm/kasan/kasan.h | 4 +---
>>  mm/kasan/shadow.c | 33 ++++++++++++++++++++++++++++++---
>>  4 files changed, 70 insertions(+), 6 deletions(-)
>>
>> diff --git a/include/linux/kasan.h b/include/linux/kasan.h
>> index 03b440658817..ea0f5acd875b 100644
>> --- a/include/linux/kasan.h
>> +++ b/include/linux/kasan.h
>> @@ -35,6 +35,24 @@ typedef unsigned int __bitwise kasan_vmalloc_flags_t;
>>
>>  /* Software KASAN implementations use shadow memory. */
>>
>> +#ifdef CONFIG_KASAN_SW_TAGS_DENSE
>> +#define KASAN_GRANULE_SHIFT (KASAN_SHADOW_SCALE_SHIFT - 1)
>> +#define KASAN_SHADOW_SCALE_SIZE (1UL << KASAN_SHADOW_SCALE_SHIFT)
>> +static inline u8 kasan_dense_tag(u8 tag)
>> +{
>> +	return (tag << KASAN_TAG_WIDTH | tag);
>> +}
>> +#else
>> +#define KASAN_GRANULE_SHIFT KASAN_SHADOW_SCALE_SHIFT
>> +#define KASAN_SHADOW_SCALE_SIZE (1UL << KASAN_GRANULE_SHIFT)
>> +static inline u8 kasan_dense_tag(u8 tag)
>> +{
>> +	return tag;
>> +}
>> +#endif
>> +
>> +#define KASAN_GRANULE_SIZE (1UL << KASAN_GRANULE_SHIFT)
>> +
>
>Is there a reason these definitions are added to
>include/linux/kasan.h? At least within this patch, they are only used
>within mm/kasan, so let's keep them in mm/kasan/kasan.h.

Parts of the x86 arch use these later (minimal slab alignment, kasan shadow start
address) so I thought it was convenient to already have it in place here?

Since I'll be reordering patches I can just move these changes together.

>
>>  #ifdef CONFIG_KASAN_SW_TAGS
>>  /* This matches KASAN_TAG_INVALID. */
>>  #define KASAN_SHADOW_INIT 0xFE
>> diff --git a/lib/Kconfig.kasan b/lib/Kconfig.kasan
>> index 98016e137b7f..d08b4e9bf477 100644
>> --- a/lib/Kconfig.kasan
>> +++ b/lib/Kconfig.kasan
>> @@ -19,6 +19,13 @@ config ARCH_DISABLE_KASAN_INLINE
>>  	  Disables both inline and stack instrumentation. Selected by
>>  	  architectures that do not support these instrumentation types.
>>
>> +config ARCH_HAS_KASAN_SW_TAGS_DENSE
>> +	bool
>> +	help
>> +	  Enables option to compile tag-based KASAN with densely packed tags -
>> +	  two 4-bit tags per one byte of shadow memory. Set on architectures
>> +	  that have 4-bit tag macros.
>> +
>>  config CC_HAS_KASAN_GENERIC
>>  	def_bool $(cc-option, -fsanitize=kernel-address)
>>
>> @@ -223,4 +230,18 @@ config KASAN_EXTRA_INFO
>>  	  boot parameter, it will add 8 * stack_ring_size bytes of additional
>>  	  memory consumption.
>>
>> +config KASAN_SW_TAGS_DENSE
>> +	bool "Two 4-bit tags in one shadow memory byte"
>> +	depends on KASAN_SW_TAGS
>> +	depends on ARCH_HAS_KASAN_SW_TAGS_DENSE
>
>I think this should also depend on KASAN_OUTLINE: Clang/GCC aren't
>aware of the dense mode.

I wasn't sure I fully understood how inline/outline interacts with clang/gcc on
x86 (especially since I think some parts are still missing in x86 clang for
tag-based KASAN). So I understand that compiling with inline doesn't do
anything? If so, is it not doing anything because of missing compiler code or
something in the kernel?

>
>> +	help
>> +	  Enables packing two tags into one shadow byte to half the memory usage
>> +	  compared to normal tag-based mode.
>
>But adds some performance impact?

I tried to measure the performance impact of dense/non-dense but didn't see
much more than noise in my tests. But I'll mention that there is some small
performance impact due to more bit shifts.

>
>> +
>> +	  After setting this option, tag width macro is set to 4 and size macros
>> +	  are adjusted based on used KASAN_SHADOW_SCALE_SHIFT.
>
>I think this paragraph is an implementation detail and we can drop it.

Okay, will do.

>
>> +
>> +	  ARCH_HAS_KASAN_SW_TAGS_DENSE is needed for this option since the
>> +	  special tag macros need to be properly set for 4-bit wide tags.
>> +
>>  endif # KASAN
>> diff --git a/mm/kasan/kasan.h b/mm/kasan/kasan.h
>> index 72da5ddcceaa..0e04c5e2c405 100644
>> --- a/mm/kasan/kasan.h
>> +++ b/mm/kasan/kasan.h
>> @@ -128,9 +128,7 @@ static inline bool kasan_requires_meta(void)
>>
>>  #endif /* CONFIG_KASAN_GENERIC */
>>
>> -#if defined(CONFIG_KASAN_GENERIC) || defined(CONFIG_KASAN_SW_TAGS)
>> -#define KASAN_GRANULE_SIZE (1UL << KASAN_SHADOW_SCALE_SHIFT)
>> -#else
>> +#ifdef CONFIG_KASAN_HW_TAGS
>>  #include <asm/mte-kasan.h>
>>  #define KASAN_GRANULE_SIZE MTE_GRANULE_SIZE
>>  #endif
>> diff --git a/mm/kasan/shadow.c b/mm/kasan/shadow.c
>> index d6210ca48dda..368503f54b87 100644
>> --- a/mm/kasan/shadow.c
>> +++ b/mm/kasan/shadow.c
>> @@ -123,7 +123,8 @@ EXPORT_SYMBOL(__hwasan_memcpy);
>>
>>  void kasan_poison(const void *addr, size_t size, u8 value, bool init)
>>  {
>> -	void *shadow_start, *shadow_end;
>> +	u8 *shadow_start, *shadow_end, *shadow_start_aligned, *shadow_end_aligned, tag;
>> +	u64 addr64, addr_start_aligned, addr_end_aligned;
>>
>>  	if (!kasan_arch_is_ready())
>>  		return;
>> @@ -134,16 +135,42 @@ void kasan_poison(const void *addr, size_t size, u8 value, bool init)
>>  	 * addresses to this function.
>>  	 */
>>  	addr = kasan_reset_tag(addr);
>> +	addr64 = (u64)addr;
>>
>> -	if (WARN_ON((unsigned long)addr & KASAN_GRANULE_MASK))
>> +	if (WARN_ON(addr64 & KASAN_GRANULE_MASK))
>>  		return;
>>  	if (WARN_ON(size & KASAN_GRANULE_MASK))
>>  		return;
>>
>>  	shadow_start = kasan_mem_to_shadow(addr);
>>  	shadow_end = kasan_mem_to_shadow(addr + size);
>> +	addr_start_aligned = round_up(addr64, KASAN_SHADOW_SCALE_SIZE);
>> +	addr_end_aligned = round_down(addr64 + size, KASAN_SHADOW_SCALE_SIZE);
>> +	shadow_start_aligned = kasan_mem_to_shadow((void *)addr_start_aligned);
>> +	shadow_end_aligned = kasan_mem_to_shadow((void *)addr_end_aligned);
>> +
>> +	/* If size is empty just return. */
>> +	if (!size)
>> +		return;
>>
>> -	__memset(shadow_start, value, shadow_end - shadow_start);
>> +	/* Memset the first unaligned tag in shadow memory. */
>> +	if (addr64 % KASAN_SHADOW_SCALE_SIZE) {
>
>So this is required, because KASAN_SHADOW_SCALE_SIZE is 32 but minimal
>slab alignment is still KASAN_GRANULE_SIZE == 16... We should at least
>hide this check under IS_ENABLED(KASAN_SW_TAGS_DENSE).

...

>
>> +		tag = *shadow_start & KASAN_TAG_MASK;
>> +		tag |= value << KASAN_TAG_WIDTH;
>> +		*shadow_start = tag;
>> +	}
>> +
>> +	/* Memset the middle aligned part in shadow memory. */
>> +	tag = kasan_dense_tag(value);
>> +	__memset(shadow_start_aligned, tag, shadow_end_aligned - shadow_start_aligned);
>> +
>> +	/* Memset the last unaligned tag in shadow memory. */
>> +	if ((addr64 + size) % KASAN_SHADOW_SCALE_SIZE) {
>
>Would it be possible to move this part to kasan_poison_last_granule()?
>That function seems to be serving a similar purpose but for the
>Generic mode.
>
>It might also be cleaner to add a kasan_poison_first_granule() that
>contains the if (addr64 % KASAN_SHADOW_SCALE_SIZE) check.

... sure, I'll try to move these checks to kasan_poison_first/last_granule.

>
>> +		tag = KASAN_TAG_MASK << KASAN_TAG_WIDTH;
>> +		tag &= *shadow_end;
>> +		tag |= value;
>> +		*shadow_end = tag;
>> +	}
>>  }
>>  EXPORT_SYMBOL_GPL(kasan_poison);
>>
>> --
>> 2.47.1
>>
On Thu, Feb 6, 2025 at 1:58 PM Maciej Wieczor-Retman
<maciej.wieczor-retman@intel.com> wrote:
>
> >Is there a reason these definitions are added to
> >include/linux/kasan.h? At least within this patch, they are only used
> >within mm/kasan, so let's keep them in mm/kasan/kasan.h.
>
> Parts of x86 arch use these later (minimal slab alignment, kasan shadow start
> address) so I thought it was convenient to already have it in place here?

AFAICT, KASAN_SHADOW_START only relies on KASAN_SHADOW_SCALE_SHIFT,
which is defined in arch/x86/include/asm/kasan.h anyway. And
ARCH_SLAB_MINALIGN is defined in asm headers, so the definitions from
include/linux/kasan.h shouldn't be visible to it?

I think what we need to do is to define KASAN_GRANULE_SHIFT next to
KASAN_SHADOW_SCALE_SHIFT for x86 and then use it in mm/kasan/kasan.h
to define KASAN_GRANULE_SIZE for SW_TAGS. (Similar to arm64, where
ARCH_SLAB_MINALIGN depends on either KASAN_SHADOW_SCALE_SHIFT or
MTE_GRANULE_SIZE, both of which are defined in arm64 asm headers.)

Btw, I think ARCH_SLAB_MINALIGN needs to be defined in
include/asm/cache.h: at least all other architectures have it there.

> Since I'll be reordering patches I can just move these changes together.

Otherwise, if you need to expose something new in include/linux/kasan.h,
please do it together with the change that uses it. Or you can even put
it into a separate patch with an explanation of why it's required - at
least from the review perspective having separate smaller patches is
often better.

In general, if something doesn't need to get exposed to the rest of the
kernel, keep it in mm/kasan/kasan.h.

> >I think this should also depend on KASAN_OUTLINE: Clang/GCC aren't
> >aware of the dense mode.
>
> I wasn't sure I fully understood how inline/outline interacts with clang/gcc on
> x86 (especially since I think some parts are still missing in x86 clang for
> tag-based KASAN). So I understand that compiling with inline doesn't do
> anything? If so, is it not doing anything because of missing compiler code or
> something in the kernel?

With inline instrumentation, the compiler directly embeds the
instructions to calculate the shadow address and check the shadow
value. Since the compiler assumes that one shadow byte corresponds to
16 bytes of memory and not 32, the generated instructions won't be
compatible with the dense mode.

With outline instrumentation, the compiler just adds function calls and
thus all the shadow calculations are performed by the C code.

Or did the dense mode work for you with KASAN_INLINE enabled? I would
expect this not to work. Or maybe the inline instrumentation somehow
got auto-disabled...

> >Would it be possible to move this part to kasan_poison_last_granule()?
> >That function seems to be serving a similar purpose but for the
> >Generic mode.
> >
> >It might also be cleaner to add a kasan_poison_first_granule() that
> >contains the if (addr64 % KASAN_SHADOW_SCALE_SIZE) check.
> ...
> sure, I'll try to move these checks to kasan_poison_first/last_granule.

For kasan_poison_last_granule(), I think the change makes sense. For
kasan_poison_first_granule(), please check whether it gives any
readability benefit - if kasan_poison() is the only caller, maybe
adding another function is not worth it.
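[To make the inline/outline distinction above concrete, here is a deliberately
simplified model of the check that inline instrumentation bakes into every
tagged access. It is not the real generated code; the tag position (top byte)
and the use of KASAN_SHADOW_OFFSET are assumptions for illustration:]

	static bool inline_style_tag_check(const void *ptr)
	{
		/* The compiler hard-codes the 1:16 scale into the shadow lookup... */
		u8 *shadow = (u8 *)(((unsigned long)ptr >> 4) + KASAN_SHADOW_OFFSET);
		u8 ptr_tag = (u8)((unsigned long)ptr >> 56);

		/*
		 * ...and compares the pointer tag against the whole shadow byte.
		 * With two 4-bit tags packed per byte both steps are wrong for
		 * the dense mode: the scale would have to be 1:32 and only one
		 * nibble may be compared. Outline instrumentation avoids this
		 * because the compiler only emits calls (the
		 * __hwasan_loadN_noabort() style helpers) and the shadow math
		 * stays in KASAN's C code.
		 */
		return *shadow == ptr_tag;
	}

[So with KASAN_INLINE the compiler's checks and the dense shadow layout would
disagree, which is why KASAN_SW_TAGS_DENSE is being asked to depend on
KASAN_OUTLINE.]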
diff --git a/include/linux/kasan.h b/include/linux/kasan.h
index 03b440658817..ea0f5acd875b 100644
--- a/include/linux/kasan.h
+++ b/include/linux/kasan.h
@@ -35,6 +35,24 @@ typedef unsigned int __bitwise kasan_vmalloc_flags_t;
 
 /* Software KASAN implementations use shadow memory. */
 
+#ifdef CONFIG_KASAN_SW_TAGS_DENSE
+#define KASAN_GRANULE_SHIFT (KASAN_SHADOW_SCALE_SHIFT - 1)
+#define KASAN_SHADOW_SCALE_SIZE (1UL << KASAN_SHADOW_SCALE_SHIFT)
+static inline u8 kasan_dense_tag(u8 tag)
+{
+	return (tag << KASAN_TAG_WIDTH | tag);
+}
+#else
+#define KASAN_GRANULE_SHIFT KASAN_SHADOW_SCALE_SHIFT
+#define KASAN_SHADOW_SCALE_SIZE (1UL << KASAN_GRANULE_SHIFT)
+static inline u8 kasan_dense_tag(u8 tag)
+{
+	return tag;
+}
+#endif
+
+#define KASAN_GRANULE_SIZE (1UL << KASAN_GRANULE_SHIFT)
+
 #ifdef CONFIG_KASAN_SW_TAGS
 /* This matches KASAN_TAG_INVALID. */
 #define KASAN_SHADOW_INIT 0xFE
diff --git a/lib/Kconfig.kasan b/lib/Kconfig.kasan
index 98016e137b7f..d08b4e9bf477 100644
--- a/lib/Kconfig.kasan
+++ b/lib/Kconfig.kasan
@@ -19,6 +19,13 @@ config ARCH_DISABLE_KASAN_INLINE
 	  Disables both inline and stack instrumentation. Selected by
 	  architectures that do not support these instrumentation types.
 
+config ARCH_HAS_KASAN_SW_TAGS_DENSE
+	bool
+	help
+	  Enables option to compile tag-based KASAN with densely packed tags -
+	  two 4-bit tags per one byte of shadow memory. Set on architectures
+	  that have 4-bit tag macros.
+
 config CC_HAS_KASAN_GENERIC
 	def_bool $(cc-option, -fsanitize=kernel-address)
 
@@ -223,4 +230,18 @@ config KASAN_EXTRA_INFO
 	  boot parameter, it will add 8 * stack_ring_size bytes of additional
 	  memory consumption.
 
+config KASAN_SW_TAGS_DENSE
+	bool "Two 4-bit tags in one shadow memory byte"
+	depends on KASAN_SW_TAGS
+	depends on ARCH_HAS_KASAN_SW_TAGS_DENSE
+	help
+	  Enables packing two tags into one shadow byte to half the memory usage
+	  compared to normal tag-based mode.
+
+	  After setting this option, tag width macro is set to 4 and size macros
+	  are adjusted based on used KASAN_SHADOW_SCALE_SHIFT.
+
+	  ARCH_HAS_KASAN_SW_TAGS_DENSE is needed for this option since the
+	  special tag macros need to be properly set for 4-bit wide tags.
+
 endif # KASAN
diff --git a/mm/kasan/kasan.h b/mm/kasan/kasan.h
index 72da5ddcceaa..0e04c5e2c405 100644
--- a/mm/kasan/kasan.h
+++ b/mm/kasan/kasan.h
@@ -128,9 +128,7 @@ static inline bool kasan_requires_meta(void)
 
 #endif /* CONFIG_KASAN_GENERIC */
 
-#if defined(CONFIG_KASAN_GENERIC) || defined(CONFIG_KASAN_SW_TAGS)
-#define KASAN_GRANULE_SIZE (1UL << KASAN_SHADOW_SCALE_SHIFT)
-#else
+#ifdef CONFIG_KASAN_HW_TAGS
 #include <asm/mte-kasan.h>
 #define KASAN_GRANULE_SIZE MTE_GRANULE_SIZE
 #endif
diff --git a/mm/kasan/shadow.c b/mm/kasan/shadow.c
index d6210ca48dda..368503f54b87 100644
--- a/mm/kasan/shadow.c
+++ b/mm/kasan/shadow.c
@@ -123,7 +123,8 @@ EXPORT_SYMBOL(__hwasan_memcpy);
 
 void kasan_poison(const void *addr, size_t size, u8 value, bool init)
 {
-	void *shadow_start, *shadow_end;
+	u8 *shadow_start, *shadow_end, *shadow_start_aligned, *shadow_end_aligned, tag;
+	u64 addr64, addr_start_aligned, addr_end_aligned;
 
 	if (!kasan_arch_is_ready())
 		return;
@@ -134,16 +135,42 @@ void kasan_poison(const void *addr, size_t size, u8 value, bool init)
 	 * addresses to this function.
 	 */
 	addr = kasan_reset_tag(addr);
+	addr64 = (u64)addr;
 
-	if (WARN_ON((unsigned long)addr & KASAN_GRANULE_MASK))
+	if (WARN_ON(addr64 & KASAN_GRANULE_MASK))
 		return;
 	if (WARN_ON(size & KASAN_GRANULE_MASK))
 		return;
 
 	shadow_start = kasan_mem_to_shadow(addr);
 	shadow_end = kasan_mem_to_shadow(addr + size);
+	addr_start_aligned = round_up(addr64, KASAN_SHADOW_SCALE_SIZE);
+	addr_end_aligned = round_down(addr64 + size, KASAN_SHADOW_SCALE_SIZE);
+	shadow_start_aligned = kasan_mem_to_shadow((void *)addr_start_aligned);
+	shadow_end_aligned = kasan_mem_to_shadow((void *)addr_end_aligned);
+
+	/* If size is empty just return. */
+	if (!size)
+		return;
 
-	__memset(shadow_start, value, shadow_end - shadow_start);
+	/* Memset the first unaligned tag in shadow memory. */
+	if (addr64 % KASAN_SHADOW_SCALE_SIZE) {
+		tag = *shadow_start & KASAN_TAG_MASK;
+		tag |= value << KASAN_TAG_WIDTH;
+		*shadow_start = tag;
+	}
+
+	/* Memset the middle aligned part in shadow memory. */
+	tag = kasan_dense_tag(value);
+	__memset(shadow_start_aligned, tag, shadow_end_aligned - shadow_start_aligned);
+
+	/* Memset the last unaligned tag in shadow memory. */
+	if ((addr64 + size) % KASAN_SHADOW_SCALE_SIZE) {
+		tag = KASAN_TAG_MASK << KASAN_TAG_WIDTH;
+		tag &= *shadow_end;
+		tag |= value;
+		*shadow_end = tag;
+	}
 }
 EXPORT_SYMBOL_GPL(kasan_poison);
Tag-based KASAN (on arm64) works by generating a random 8-bit tag and
putting it in both the top byte of the pointer (that points to the
allocated memory) and into all bytes of shadow memory that correspond to
the chunk of allocated regular memory. Each byte of shadow memory covers
a 16 byte chunk of allocated memory - a value called the KASAN granularity.
This means that out-of-bounds memory accesses that happen inside those 16
bytes can't be caught.

The dense mode reduces the tag width from 8 to 4 bits and stores two
tags in one byte of shadow memory - one in the upper 4 bits of the byte
and one in the lower 4. This way one byte of shadow memory can cover 32
bytes of allocated memory while still keeping the "16 bytes per one tag"
granularity. The lower 4 bits of each shadow byte map bytes of memory
with offsets 0-15 and the upper 4 bits map offsets 16-31.

Example:
The example below shows what the shadow memory looks like after
allocating 48 bytes of memory in both the normal tag-based mode and the
dense mode. The contents of shadow memory are overlaid onto the address
offsets that they relate to in the allocated kernel memory. Each cell
| | symbolizes one byte of shadow memory.

= The regular tag-based mode:
- Randomly generated 8-bit tag equals 0xAB.
- 0xFE is the tag that symbolizes unallocated memory.

Shadow memory contents:           | 0xAB | 0xAB | 0xAB | 0xFE |
Shadow memory address offsets:    0      1      2      3      4
Allocated memory address offsets: 0      16     32     48     64

= The dense tag-based mode:
- Randomly generated 4-bit tag equals 0xC.
- 0xE is the tag that symbolizes unallocated memory.

Shadow memory contents:           |0xC 0xC |0xC 0xE |0xE 0xE |0xE 0xE |
Shadow memory address offsets:    0        1        2        3        4
Allocated memory address offsets: 0        32       64       96       128

Add a new config option and defines that can override the standard
scheme of one tag per shadow byte.

Add an alternative version of kasan_poison() that deals with tags not
being aligned to byte size in shadow memory.

Signed-off-by: Maciej Wieczor-Retman <maciej.wieczor-retman@intel.com>
---
 include/linux/kasan.h | 18 ++++++++++++++++++
 lib/Kconfig.kasan     | 21 +++++++++++++++++++++
 mm/kasan/kasan.h      |  4 +---
 mm/kasan/shadow.c     | 33 ++++++++++++++++++++++++++++++---
 4 files changed, 70 insertions(+), 6 deletions(-)
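[As a quick sanity check of the mapping described in the example above, here is
a small standalone userspace demo - assuming 16-byte granules and 32 bytes of
memory per shadow byte in the dense mode - that prints which shadow byte and
which nibble hold the tag for a given offset into an allocation:]

	#include <stdint.h>
	#include <stdio.h>

	#define DENSE_GRANULE_SHIFT	4	/* 16 bytes of memory per tag */
	#define DENSE_SCALE_SHIFT	5	/* 32 bytes of memory per shadow byte */

	int main(void)
	{
		/* Walk the 48-byte allocation from the example, plus the granule after it. */
		for (uint64_t off = 0; off <= 64; off += 16) {
			uint64_t shadow_byte = off >> DENSE_SCALE_SHIFT;
			/* Even granule within the byte -> low nibble, odd -> high nibble. */
			const char *nibble = ((off >> DENSE_GRANULE_SHIFT) & 1) ? "high" : "low";

			printf("offset %2llu: shadow byte %llu, %s nibble\n",
			       (unsigned long long)off,
			       (unsigned long long)shadow_byte, nibble);
		}
		return 0;
	}

[For the 48-byte allocation this reports offsets 0 and 16 in the low and high
nibbles of shadow byte 0 and offset 32 in the low nibble of shadow byte 1,
while offset 48 lands in the high nibble of byte 1 and stays at the
"unallocated" tag - matching the |0xC 0xC |0xC 0xE | layout shown above.]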