Message ID | 20231122231202.121277-1-andrey.konovalov@linux.dev (mailing list archive) |
---|---|
State | New |
Series | [mm] slub, kasan: improve interaction of KASAN and slub_debug poisoning |
On Thu, Nov 23, 2023 at 8:12 AM <andrey.konovalov@linux.dev> wrote:
>
> From: Andrey Konovalov <andreyknvl@google.com>
>
> When both KASAN and slub_debug are enabled, when a free object is being
> prepared in setup_object, slub_debug poisons the object data before KASAN
> initializes its per-object metadata.
>
> Right now, in setup_object, KASAN only initializes the alloc metadata,
> which is always stored outside of the object. slub_debug is aware of
> this and it skips poisoning and checking that memory area.
>
> However, with the following patch in this series, KASAN also starts
> initializing its free metadata in setup_object. As this metadata might be
> stored within the object, this initialization might overwrite the
> slub_debug poisoning. This leads to slub_debug reports.
>
> Thus, skip checking slub_debug poisoning of the object data area that
> overlaps with the in-object KASAN free metadata.
>
> Also make slub_debug poisoning of tail kmalloc redzones more precise when
> KASAN is enabled: slub_debug can still poison and check the tail kmalloc
> allocation area that comes after the KASAN free metadata.
>
> Signed-off-by: Andrey Konovalov <andreyknvl@google.com>

Thank you for looking at this quickly!
Unfortunately the problem isn't fixed yet with the patch.

I applied this on top of linux-next and built a kernel with the same config,
it is still stuck at boot.

[dmesg]
[ 0.000000] Linux version 6.7.0-rc2-next-20231122-00001-gfc1613c2f6f3 (hyeyoo@localhost.localdomain) (gcc (GCC) 11.33
[ 0.000000] Command line: console=ttyS0 root=/dev/sda1 nokaslr
[ 0.000000] RIP: 0010:setup_arch (arch/x86/kernel/setup.c:443 arch/x86/kernel/setup.c:665 arch/x86/kernel/setup.c:81
[ 0.000000] Code: b6 0a 08 00 48 89 c5 48 85 c0 0f 84 58 13 00 00 48 c1 e8 03 48 83 05 4e a9 66 00 01 80 3c 18 00 0f3

Code starting with the faulting instruction
===========================================
   0:	b6 0a                	mov    $0xa,%dh
   2:	08 00                	or     %al,(%rax)
   4:	48 89 c5             	mov    %rax,%rbp
   7:	48 85 c0             	test   %rax,%rax
   a:	0f 84 58 13 00 00    	je     0x1368
  10:	48 c1 e8 03          	shr    $0x3,%rax
  14:	48 83 05 4e a9 66 00 	addq   $0x1,0x66a94e(%rip)        # 0x66a96a
  1b:	01
  1c:	80 3c 18 00          	cmpb   $0x0,(%rax,%rbx,1)
  20:	f3                   	repz

[ 0.000000] RSP: 0000:ffffffff86207e00 EFLAGS: 00010046 ORIG_RAX: 0000000000000009
[ 0.000000] RAX: 1fffffffffe40069 RBX: dffffc0000000000 RCX: 1ffffffff1230a30
[ 0.000000] RDX: 0000000000000000 RSI: 0107d62120059000 RDI: ffffffff89185180
[ 0.000000] RBP: ffffffffff200348 R08: 8000000000000163 R09: 1ffffffff1230a28
[ 0.000000] R10: ffffffff89194150 R11: 0000000000000000 R12: 0000000000000010
[ 0.000000] R13: ffffffffff200354 R14: 0107d62120058348 R15: 0107d62120058348
[ 0.000000] FS: 0000000000000000(0000) GS:ffffffff88f75000(0000) knlGS:0000000000000000
[ 0.000000] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[ 0.000000] CR2: ffffffffff200348 CR3: 0000000009128000 CR4: 00000000000000b0
[ 0.000000] Call Trace:
[ 0.000000] <TASK>
[ 0.000000] ? show_regs (arch/x86/kernel/dumpstack.c:478)
[ 0.000000] ? early_fixup_exception (arch/x86/mm/extable.c:364)
[ 0.000000] ? do_early_exception (arch/x86/kernel/head64.c:423)
[ 0.000000] ? early_idt_handler_common (arch/x86/kernel/head_64.S:555)
[ 0.000000] ? setup_arch (arch/x86/kernel/setup.c:443 arch/x86/kernel/setup.c:665 arch/x86/kernel/setup.c:814)
[ 0.000000] ? __pfx_setup_arch (arch/x86/kernel/setup.c:728)
[ 0.000000] ? vprintk_default (kernel/printk/printk.c:2318)
[ 0.000000] ? vprintk (kernel/printk/printk_safe.c:45)
[ 0.000000] ? _printk (kernel/printk/printk.c:2328)
[ 0.000000] ? __pfx__printk (kernel/printk/printk.c:2323)
[ 0.000000] ? init_cgroup_root (kernel/cgroup/cgroup.c:2054)
[ 0.000000] ? cgroup_init_early (kernel/cgroup/cgroup.c:6077 (discriminator 13))
[ 0.000000] ? start_kernel (init/main.c:897 (discriminator 3))
[ 0.000000] ? x86_64_start_reservations (arch/x86/kernel/head64.c:543)
[ 0.000000] ? x86_64_start_kernel (arch/x86/kernel/head64.c:536)
[ 0.000000] ? secondary_startup_64_no_verify (arch/x86/kernel/head_64.S:432)
[ 0.000000] </TASK>
On Thu, Nov 23, 2023 at 1:39 AM Hyeonggon Yoo <42.hyeyoo@gmail.com> wrote:
>
> On Thu, Nov 23, 2023 at 8:12 AM <andrey.konovalov@linux.dev> wrote:
> >
> > From: Andrey Konovalov <andreyknvl@google.com>
> >
> > When both KASAN and slub_debug are enabled, when a free object is being
> > prepared in setup_object, slub_debug poisons the object data before KASAN
> > initializes its per-object metadata.
> >
> > Right now, in setup_object, KASAN only initializes the alloc metadata,
> > which is always stored outside of the object. slub_debug is aware of
> > this and it skips poisoning and checking that memory area.
> >
> > However, with the following patch in this series, KASAN also starts
> > initializing its free metadata in setup_object. As this metadata might be
> > stored within the object, this initialization might overwrite the
> > slub_debug poisoning. This leads to slub_debug reports.
> >
> > Thus, skip checking slub_debug poisoning of the object data area that
> > overlaps with the in-object KASAN free metadata.
> >
> > Also make slub_debug poisoning of tail kmalloc redzones more precise when
> > KASAN is enabled: slub_debug can still poison and check the tail kmalloc
> > allocation area that comes after the KASAN free metadata.
> >
> > Signed-off-by: Andrey Konovalov <andreyknvl@google.com>
>
> Thank you for looking at this quickly!
> Unfortunately the problem isn't fixed yet with the patch.
>
> I applied this on top of linux-next and built a kernel with the same config,
> it is still stuck at boot.

Ah, this is caused by a buggy version of "kasan: improve free meta storage
in Generic KASAN", which made its way into linux-next. Reverting that patch
should fix the issue. My patch that you bisected to exposes the buggy
behavior.

Thanks!
On Thu, Nov 23, 2023 at 11:31 AM Andrey Konovalov <andreyknvl@gmail.com> wrote:
>
> On Thu, Nov 23, 2023 at 1:39 AM Hyeonggon Yoo <42.hyeyoo@gmail.com> wrote:
> >
> > On Thu, Nov 23, 2023 at 8:12 AM <andrey.konovalov@linux.dev> wrote:
> > >
> > > From: Andrey Konovalov <andreyknvl@google.com>
> > >
> > > When both KASAN and slub_debug are enabled, when a free object is being
> > > prepared in setup_object, slub_debug poisons the object data before KASAN
> > > initializes its per-object metadata.
> > >
> > > Right now, in setup_object, KASAN only initializes the alloc metadata,
> > > which is always stored outside of the object. slub_debug is aware of
> > > this and it skips poisoning and checking that memory area.
> > >
> > > However, with the following patch in this series, KASAN also starts
> > > initializing its free metadata in setup_object. As this metadata might be
> > > stored within the object, this initialization might overwrite the
> > > slub_debug poisoning. This leads to slub_debug reports.
> > >
> > > Thus, skip checking slub_debug poisoning of the object data area that
> > > overlaps with the in-object KASAN free metadata.
> > >
> > > Also make slub_debug poisoning of tail kmalloc redzones more precise when
> > > KASAN is enabled: slub_debug can still poison and check the tail kmalloc
> > > allocation area that comes after the KASAN free metadata.
> > >
> > > Signed-off-by: Andrey Konovalov <andreyknvl@google.com>
> >
> > Thank you for looking at this quickly!
> > Unfortunately the problem isn't fixed yet with the patch.
> >
> > I applied this on top of linux-next and built a kernel with the same config,
> > it is still stuck at boot.
>
> Ah, this is caused by a buggy version of "kasan: improve free meta
> storage in Generic KASAN", which made its way into linux-next.
> Reverting that patch should fix the issue. My patch that you bisected
> to exposes the buggy behavior.

1. I reverted the commit "kasan: improve free meta storage in Generic KASAN",
on top of linux-next (next-20231122), and it is still stuck at boot.

2. I reverted the commit "kasan: improve free meta storage in Generic KASAN",
on top of linux-next (next-20231122),
_and_ applied this patch on top of it, now it boots fine!

--
Hyeonggon
On Thu, Nov 23, 2023 at 3:58 AM Hyeonggon Yoo <42.hyeyoo@gmail.com> wrote:
>
> 1. I reverted the commit "kasan: improve free meta storage in Generic KASAN",
> on top of linux-next (next-20231122), and it is still stuck at boot.

This is expected: the patch you bisected to still requires this fix that I posted.

> 2. I reverted the commit "kasan: improve free meta storage in Generic KASAN",
> on top of linux-next (next-20231122),
> _and_ applied this patch on top of it, now it boots fine!

Great! Thank you for testing!
Hi Andrey, On Thu, Nov 23, 2023 at 07:12:02AM +0800, andrey.konovalov@linux.dev wrote: > From: Andrey Konovalov <andreyknvl@google.com> > > When both KASAN and slub_debug are enabled, when a free object is being > prepared in setup_object, slub_debug poisons the object data before KASAN > initializes its per-object metadata. > > Right now, in setup_object, KASAN only initializes the alloc metadata, > which is always stored outside of the object. slub_debug is aware of > this and it skips poisoning and checking that memory area. > > However, with the following patch in this series, KASAN also starts > initializing its free medata in setup_object. As this metadata might be > stored within the object, this initialization might overwrite the > slub_debug poisoning. This leads to slub_debug reports. > > Thus, skip checking slub_debug poisoning of the object data area that > overlaps with the in-object KASAN free metadata. > > Also make slub_debug poisoning of tail kmalloc redzones more precise when > KASAN is enabled: slub_debug can still poison and check the tail kmalloc > allocation area that comes after the KASAN free metadata. > > Signed-off-by: Andrey Konovalov <andreyknvl@google.com> > > --- > > Andrew, please put this patch right before "kasan: use stack_depot_put > for Generic mode". > --- > mm/slub.c | 41 ++++++++++++++++++++++++++--------------- > 1 file changed, 26 insertions(+), 15 deletions(-) > > diff --git a/mm/slub.c b/mm/slub.c > index 63d281dfacdb..782bd8a6bd34 100644 > --- a/mm/slub.c > +++ b/mm/slub.c > @@ -870,20 +870,20 @@ static inline void set_orig_size(struct kmem_cache *s, > void *object, unsigned int orig_size) > { > void *p = kasan_reset_tag(object); > + unsigned int kasan_meta_size; > > if (!slub_debug_orig_size(s)) > return; > > -#ifdef CONFIG_KASAN_GENERIC > /* > - * KASAN could save its free meta data in object's data area at > - * offset 0, if the size is larger than 'orig_size', it will > - * overlap the data redzone in [orig_size+1, object_size], and > - * the check should be skipped. > + * KASAN can save its free meta data inside of the object at offset 0. > + * If this meta data size is larger than 'orig_size', it will overlap > + * the data redzone in [orig_size+1, object_size]. Thus, we adjust > + * 'orig_size' to be as at least as big as KASAN's meta data. > */ > - if (kasan_metadata_size(s, true) > orig_size) > - orig_size = s->object_size; > -#endif > + kasan_meta_size = kasan_metadata_size(s, true); > + if (kasan_meta_size > orig_size) > + orig_size = kasan_meta_size; 'orig_size' is to save the orignal request size for kmalloc object, and its main purpose is to detect the memory wastage of kmalloc objects, see commit 6edf2576a6cc "mm/slub: enable debugging memory wasting of kmalloc" Setting "orig_size = s->object_size" was to skip the wastage check and the redzone sanity check for this 'wasted space'. So it's better not to set 'kasan_meta_size' to orig_size. And from the below code, IIUC, the orig_size is not used in fixing the boot problem found by Hyeonggon? 
Thanks, Feng > > p += get_info_end(s); > p += sizeof(struct track) * 2; > @@ -1192,7 +1192,7 @@ static int check_object(struct kmem_cache *s, struct slab *slab, > { > u8 *p = object; > u8 *endobject = object + s->object_size; > - unsigned int orig_size; > + unsigned int orig_size, kasan_meta_size; > > if (s->flags & SLAB_RED_ZONE) { > if (!check_bytes_and_report(s, slab, object, "Left Redzone", > @@ -1222,12 +1222,23 @@ static int check_object(struct kmem_cache *s, struct slab *slab, > } > > if (s->flags & SLAB_POISON) { > - if (val != SLUB_RED_ACTIVE && (s->flags & __OBJECT_POISON) && > - (!check_bytes_and_report(s, slab, p, "Poison", p, > - POISON_FREE, s->object_size - 1) || > - !check_bytes_and_report(s, slab, p, "End Poison", > - p + s->object_size - 1, POISON_END, 1))) > - return 0; > + if (val != SLUB_RED_ACTIVE && (s->flags & __OBJECT_POISON)) { > + /* > + * KASAN can save its free meta data inside of the > + * object at offset 0. Thus, skip checking the part of > + * the redzone that overlaps with the meta data. > + */ > + kasan_meta_size = kasan_metadata_size(s, true); > + if (kasan_meta_size < s->object_size - 1 && > + !check_bytes_and_report(s, slab, p, "Poison", > + p + kasan_meta_size, POISON_FREE, > + s->object_size - kasan_meta_size - 1)) > + return 0; > + if (kasan_meta_size < s->object_size && > + !check_bytes_and_report(s, slab, p, "End Poison", > + p + s->object_size - 1, POISON_END, 1)) > + return 0; > + } > /* > * check_pad_bytes cleans up on its own. > */ > -- > 2.25.1 >
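To keep the orig_size discussion above concrete, here is a rough user-space sketch (not kernel code; the helper name kmalloc_waste and the sizes are made up for illustration) of the layout that slub_debug's orig_size tracking assumes and of the kmalloc wastage it lets the allocator report:

#include <stdio.h>

/*
 * Illustrative kmalloc object layout with slub_debug enabled
 * (out-of-object metadata such as the track structs is omitted):
 *
 *   [0, orig_size)            bytes the caller actually requested
 *   [orig_size, object_size)  data redzone; also the "wasted" bytes
 */
static unsigned int kmalloc_waste(unsigned int object_size,
				  unsigned int orig_size)
{
	/* Bytes of the object the caller never asked for. */
	return object_size - orig_size;
}

int main(void)
{
	/* e.g. a 24-byte request served from the kmalloc-32 cache */
	printf("waste: %u bytes\n", kmalloc_waste(32, 24));	/* prints 8 */
	return 0;
}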
On Thu, Nov 23, 2023 at 02:26:13PM +0800, Tang, Feng wrote: [...] > > -#ifdef CONFIG_KASAN_GENERIC > > /* > > - * KASAN could save its free meta data in object's data area at > > - * offset 0, if the size is larger than 'orig_size', it will > > - * overlap the data redzone in [orig_size+1, object_size], and > > - * the check should be skipped. > > + * KASAN can save its free meta data inside of the object at offset 0. > > + * If this meta data size is larger than 'orig_size', it will overlap > > + * the data redzone in [orig_size+1, object_size]. Thus, we adjust > > + * 'orig_size' to be as at least as big as KASAN's meta data. > > */ > > - if (kasan_metadata_size(s, true) > orig_size) > > - orig_size = s->object_size; > > -#endif > > + kasan_meta_size = kasan_metadata_size(s, true); > > + if (kasan_meta_size > orig_size) > > + orig_size = kasan_meta_size; > > 'orig_size' is to save the orignal request size for kmalloc object, > and its main purpose is to detect the memory wastage of kmalloc > objects, see commit 6edf2576a6cc "mm/slub: enable debugging memory > wasting of kmalloc" > > Setting "orig_size = s->object_size" was to skip the wastage check > and the redzone sanity check for this 'wasted space'. > > So it's better not to set 'kasan_meta_size' to orig_size. > > And from the below code, IIUC, the orig_size is not used in fixing > the boot problem found by Hyeonggon? I just tried Hyeonggon's reproducing method [1], and confirmed the below change of check_object() itself can fix the problem. [1]. https://lore.kernel.org/lkml/CAB=+i9RnOz0jDockOfw3oNageCUF5gmF+nzOzPpoTxtr7eqn7g@mail.gmail.com/ Thanks, Feng > > Thanks, > Feng > > > > > p += get_info_end(s); > > p += sizeof(struct track) * 2; > > @@ -1192,7 +1192,7 @@ static int check_object(struct kmem_cache *s, struct slab *slab, > > { > > u8 *p = object; > > u8 *endobject = object + s->object_size; > > - unsigned int orig_size; > > + unsigned int orig_size, kasan_meta_size; > > > > if (s->flags & SLAB_RED_ZONE) { > > if (!check_bytes_and_report(s, slab, object, "Left Redzone", > > @@ -1222,12 +1222,23 @@ static int check_object(struct kmem_cache *s, struct slab *slab, > > } > > > > if (s->flags & SLAB_POISON) { > > - if (val != SLUB_RED_ACTIVE && (s->flags & __OBJECT_POISON) && > > - (!check_bytes_and_report(s, slab, p, "Poison", p, > > - POISON_FREE, s->object_size - 1) || > > - !check_bytes_and_report(s, slab, p, "End Poison", > > - p + s->object_size - 1, POISON_END, 1))) > > - return 0; > > + if (val != SLUB_RED_ACTIVE && (s->flags & __OBJECT_POISON)) { > > + /* > > + * KASAN can save its free meta data inside of the > > + * object at offset 0. Thus, skip checking the part of > > + * the redzone that overlaps with the meta data. > > + */ > > + kasan_meta_size = kasan_metadata_size(s, true); > > + if (kasan_meta_size < s->object_size - 1 && > > + !check_bytes_and_report(s, slab, p, "Poison", > > + p + kasan_meta_size, POISON_FREE, > > + s->object_size - kasan_meta_size - 1)) > > + return 0; > > + if (kasan_meta_size < s->object_size && > > + !check_bytes_and_report(s, slab, p, "End Poison", > > + p + s->object_size - 1, POISON_END, 1)) > > + return 0; > > + } > > /* > > * check_pad_bytes cleans up on its own. > > */ > > -- > > 2.25.1 > >
On Thu, Nov 23, 2023 at 7:35 AM Feng Tang <feng.tang@intel.com> wrote: > Hi Feng, > > --- a/mm/slub.c > > +++ b/mm/slub.c > > @@ -870,20 +870,20 @@ static inline void set_orig_size(struct kmem_cache *s, > > void *object, unsigned int orig_size) > > { > > void *p = kasan_reset_tag(object); > > + unsigned int kasan_meta_size; > > > > if (!slub_debug_orig_size(s)) > > return; > > > > -#ifdef CONFIG_KASAN_GENERIC > > /* > > - * KASAN could save its free meta data in object's data area at > > - * offset 0, if the size is larger than 'orig_size', it will > > - * overlap the data redzone in [orig_size+1, object_size], and > > - * the check should be skipped. > > + * KASAN can save its free meta data inside of the object at offset 0. > > + * If this meta data size is larger than 'orig_size', it will overlap > > + * the data redzone in [orig_size+1, object_size]. Thus, we adjust > > + * 'orig_size' to be as at least as big as KASAN's meta data. > > */ > > - if (kasan_metadata_size(s, true) > orig_size) > > - orig_size = s->object_size; > > -#endif > > + kasan_meta_size = kasan_metadata_size(s, true); > > + if (kasan_meta_size > orig_size) > > + orig_size = kasan_meta_size; > > 'orig_size' is to save the orignal request size for kmalloc object, > and its main purpose is to detect the memory wastage of kmalloc > objects, see commit 6edf2576a6cc "mm/slub: enable debugging memory > wasting of kmalloc" > > Setting "orig_size = s->object_size" was to skip the wastage check > and the redzone sanity check for this 'wasted space'. Yes, I get that. The point of my change was to allow slub_debug detecting overwrites in the [kasan_meta_size, object_size) range when KASAN stores its free meta in the [0, kasan_meta_size) range. If orig_size is set to object_size, writes to that area will not be detected. I also thought that using kasan_meta_size instead of object_size for orig_size might give the reader better understanding of the memory layout. > So it's better not to set 'kasan_meta_size' to orig_size. I don't have a strong preference here: slub_debug and KASAN are not really meant to be used together anyway. So if you prefer, I can revert this change and keep using object_size as before. > And from the below code, IIUC, the orig_size is not used in fixing > the boot problem found by Hyeonggon? No, this is a just a partially-related clean up. It just seemed natural to include it into the fix, as it also touches the code around a kasan_metadata_size call. Thanks!
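A small worked example (plain C, not kernel code; the sizes are made up) of the set_orig_size() change Andrey describes above, comparing which part of the data redzone remains checkable before and after the patch when KASAN keeps its free metadata in-object:

#include <stdio.h>

int main(void)
{
	unsigned int object_size = 64, request = 8, kasan_meta_size = 16;

	/* Old code: give up on the whole data redzone. */
	unsigned int orig_old = kasan_meta_size > request ? object_size : request;
	/* New code: only exclude the bytes KASAN may overwrite. */
	unsigned int orig_new = kasan_meta_size > request ? kasan_meta_size : request;

	/* The data redzone check covers [orig_size, object_size). */
	printf("old: [%u, %u) checked\n", orig_old, object_size);	/* [64, 64): nothing */
	printf("new: [%u, %u) checked\n", orig_new, object_size);	/* [16, 64) */
	return 0;
}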
Hi Andrey, On Thu, Nov 23, 2023 at 05:12:08PM +0100, Andrey Konovalov wrote: > On Thu, Nov 23, 2023 at 7:35 AM Feng Tang <feng.tang@intel.com> wrote: > > > > Hi Feng, > > > > --- a/mm/slub.c > > > +++ b/mm/slub.c > > > @@ -870,20 +870,20 @@ static inline void set_orig_size(struct kmem_cache *s, > > > void *object, unsigned int orig_size) > > > { > > > void *p = kasan_reset_tag(object); > > > + unsigned int kasan_meta_size; > > > > > > if (!slub_debug_orig_size(s)) > > > return; > > > > > > -#ifdef CONFIG_KASAN_GENERIC > > > /* > > > - * KASAN could save its free meta data in object's data area at > > > - * offset 0, if the size is larger than 'orig_size', it will > > > - * overlap the data redzone in [orig_size+1, object_size], and > > > - * the check should be skipped. > > > + * KASAN can save its free meta data inside of the object at offset 0. > > > + * If this meta data size is larger than 'orig_size', it will overlap > > > + * the data redzone in [orig_size+1, object_size]. Thus, we adjust > > > + * 'orig_size' to be as at least as big as KASAN's meta data. > > > */ > > > - if (kasan_metadata_size(s, true) > orig_size) > > > - orig_size = s->object_size; > > > -#endif > > > + kasan_meta_size = kasan_metadata_size(s, true); > > > + if (kasan_meta_size > orig_size) > > > + orig_size = kasan_meta_size; > > > > 'orig_size' is to save the orignal request size for kmalloc object, > > and its main purpose is to detect the memory wastage of kmalloc > > objects, see commit 6edf2576a6cc "mm/slub: enable debugging memory > > wasting of kmalloc" > > > > Setting "orig_size = s->object_size" was to skip the wastage check > > and the redzone sanity check for this 'wasted space'. > > Yes, I get that. > > The point of my change was to allow slub_debug detecting overwrites in > the [kasan_meta_size, object_size) range when KASAN stores its free > meta in the [0, kasan_meta_size) range. If orig_size is set to > object_size, writes to that area will not be detected. I also thought > that using kasan_meta_size instead of object_size for orig_size might > give the reader better understanding of the memory layout. > > > So it's better not to set 'kasan_meta_size' to orig_size. > > I don't have a strong preference here: slub_debug and KASAN are not > really meant to be used together anyway. So if you prefer, I can > revert this change and keep using object_size as before. Thanks for the explanation! I got your point now. I'm fine with either way, as this change can help to enforce the redzone check for all kmalloc objects, while can make some debug wastage info less accurate. Thanks, Feng > > > And from the below code, IIUC, the orig_size is not used in fixing > > the boot problem found by Hyeonggon? > > No, this is a just a partially-related clean up. It just seemed > natural to include it into the fix, as it also touches the code around > a kasan_metadata_size call. > > Thanks!
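As a concrete illustration of that trade-off (sizes made up): for an 8-byte request served from kmalloc-64 with 16 bytes of in-object KASAN free metadata, the patch sets orig_size to 16 instead of 64, so the redzone in [16, 64) stays checkable, but the reported waste becomes 64 - 16 = 48 bytes rather than the true 64 - 8 = 56 bytes (previously the wastage and redzone checks for that object were skipped entirely).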
On 11/23/23 00:12, andrey.konovalov@linux.dev wrote: > From: Andrey Konovalov <andreyknvl@google.com> > > When both KASAN and slub_debug are enabled, when a free object is being > prepared in setup_object, slub_debug poisons the object data before KASAN > initializes its per-object metadata. > > Right now, in setup_object, KASAN only initializes the alloc metadata, > which is always stored outside of the object. slub_debug is aware of > this and it skips poisoning and checking that memory area. > > However, with the following patch in this series, KASAN also starts > initializing its free medata in setup_object. As this metadata might be > stored within the object, this initialization might overwrite the > slub_debug poisoning. This leads to slub_debug reports. > > Thus, skip checking slub_debug poisoning of the object data area that > overlaps with the in-object KASAN free metadata. > > Also make slub_debug poisoning of tail kmalloc redzones more precise when > KASAN is enabled: slub_debug can still poison and check the tail kmalloc > allocation area that comes after the KASAN free metadata. > > Signed-off-by: Andrey Konovalov <andreyknvl@google.com> Acked-by: Vlastimil Babka <vbabka@suse.cz> Thanks. > --- > > Andrew, please put this patch right before "kasan: use stack_depot_put > for Generic mode". > --- > mm/slub.c | 41 ++++++++++++++++++++++++++--------------- > 1 file changed, 26 insertions(+), 15 deletions(-) > > diff --git a/mm/slub.c b/mm/slub.c > index 63d281dfacdb..782bd8a6bd34 100644 > --- a/mm/slub.c > +++ b/mm/slub.c > @@ -870,20 +870,20 @@ static inline void set_orig_size(struct kmem_cache *s, > void *object, unsigned int orig_size) > { > void *p = kasan_reset_tag(object); > + unsigned int kasan_meta_size; > > if (!slub_debug_orig_size(s)) > return; > > -#ifdef CONFIG_KASAN_GENERIC > /* > - * KASAN could save its free meta data in object's data area at > - * offset 0, if the size is larger than 'orig_size', it will > - * overlap the data redzone in [orig_size+1, object_size], and > - * the check should be skipped. > + * KASAN can save its free meta data inside of the object at offset 0. > + * If this meta data size is larger than 'orig_size', it will overlap > + * the data redzone in [orig_size+1, object_size]. Thus, we adjust > + * 'orig_size' to be as at least as big as KASAN's meta data. > */ > - if (kasan_metadata_size(s, true) > orig_size) > - orig_size = s->object_size; > -#endif > + kasan_meta_size = kasan_metadata_size(s, true); > + if (kasan_meta_size > orig_size) > + orig_size = kasan_meta_size; > > p += get_info_end(s); > p += sizeof(struct track) * 2; > @@ -1192,7 +1192,7 @@ static int check_object(struct kmem_cache *s, struct slab *slab, > { > u8 *p = object; > u8 *endobject = object + s->object_size; > - unsigned int orig_size; > + unsigned int orig_size, kasan_meta_size; > > if (s->flags & SLAB_RED_ZONE) { > if (!check_bytes_and_report(s, slab, object, "Left Redzone", > @@ -1222,12 +1222,23 @@ static int check_object(struct kmem_cache *s, struct slab *slab, > } > > if (s->flags & SLAB_POISON) { > - if (val != SLUB_RED_ACTIVE && (s->flags & __OBJECT_POISON) && > - (!check_bytes_and_report(s, slab, p, "Poison", p, > - POISON_FREE, s->object_size - 1) || > - !check_bytes_and_report(s, slab, p, "End Poison", > - p + s->object_size - 1, POISON_END, 1))) > - return 0; > + if (val != SLUB_RED_ACTIVE && (s->flags & __OBJECT_POISON)) { > + /* > + * KASAN can save its free meta data inside of the > + * object at offset 0. 
Thus, skip checking the part of > + * the redzone that overlaps with the meta data. > + */ > + kasan_meta_size = kasan_metadata_size(s, true); > + if (kasan_meta_size < s->object_size - 1 && > + !check_bytes_and_report(s, slab, p, "Poison", > + p + kasan_meta_size, POISON_FREE, > + s->object_size - kasan_meta_size - 1)) > + return 0; > + if (kasan_meta_size < s->object_size && > + !check_bytes_and_report(s, slab, p, "End Poison", > + p + s->object_size - 1, POISON_END, 1)) > + return 0; > + } > /* > * check_pad_bytes cleans up on its own. > */
diff --git a/mm/slub.c b/mm/slub.c
index 63d281dfacdb..782bd8a6bd34 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -870,20 +870,20 @@ static inline void set_orig_size(struct kmem_cache *s,
 				void *object, unsigned int orig_size)
 {
 	void *p = kasan_reset_tag(object);
+	unsigned int kasan_meta_size;
 
 	if (!slub_debug_orig_size(s))
 		return;
 
-#ifdef CONFIG_KASAN_GENERIC
 	/*
-	 * KASAN could save its free meta data in object's data area at
-	 * offset 0, if the size is larger than 'orig_size', it will
-	 * overlap the data redzone in [orig_size+1, object_size], and
-	 * the check should be skipped.
+	 * KASAN can save its free meta data inside of the object at offset 0.
+	 * If this meta data size is larger than 'orig_size', it will overlap
+	 * the data redzone in [orig_size+1, object_size]. Thus, we adjust
+	 * 'orig_size' to be as at least as big as KASAN's meta data.
 	 */
-	if (kasan_metadata_size(s, true) > orig_size)
-		orig_size = s->object_size;
-#endif
+	kasan_meta_size = kasan_metadata_size(s, true);
+	if (kasan_meta_size > orig_size)
+		orig_size = kasan_meta_size;
 
 	p += get_info_end(s);
 	p += sizeof(struct track) * 2;
@@ -1192,7 +1192,7 @@ static int check_object(struct kmem_cache *s, struct slab *slab,
 {
 	u8 *p = object;
 	u8 *endobject = object + s->object_size;
-	unsigned int orig_size;
+	unsigned int orig_size, kasan_meta_size;
 
 	if (s->flags & SLAB_RED_ZONE) {
 		if (!check_bytes_and_report(s, slab, object, "Left Redzone",
@@ -1222,12 +1222,23 @@ static int check_object(struct kmem_cache *s, struct slab *slab,
 	}
 
 	if (s->flags & SLAB_POISON) {
-		if (val != SLUB_RED_ACTIVE && (s->flags & __OBJECT_POISON) &&
-			(!check_bytes_and_report(s, slab, p, "Poison", p,
-					POISON_FREE, s->object_size - 1) ||
-			 !check_bytes_and_report(s, slab, p, "End Poison",
-				p + s->object_size - 1, POISON_END, 1)))
-			return 0;
+		if (val != SLUB_RED_ACTIVE && (s->flags & __OBJECT_POISON)) {
+			/*
+			 * KASAN can save its free meta data inside of the
+			 * object at offset 0. Thus, skip checking the part of
+			 * the redzone that overlaps with the meta data.
+			 */
+			kasan_meta_size = kasan_metadata_size(s, true);
+			if (kasan_meta_size < s->object_size - 1 &&
+			    !check_bytes_and_report(s, slab, p, "Poison",
+					p + kasan_meta_size, POISON_FREE,
+					s->object_size - kasan_meta_size - 1))
+				return 0;
+			if (kasan_meta_size < s->object_size &&
+			    !check_bytes_and_report(s, slab, p, "End Poison",
+					p + s->object_size - 1, POISON_END, 1))
+				return 0;
+		}
 		/*
 		 * check_pad_bytes cleans up on its own.
 		 */
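For readers who want to experiment with the new logic in isolation, here is a minimal user-space model (an illustrative sketch, not the kernel implementation; the poison values follow include/linux/poison.h and the sizes are made up) of the free-object poison check that the last hunk installs in check_object():

#include <stdio.h>
#include <string.h>
#include <stdbool.h>

#define POISON_FREE 0x6b	/* poison for the data of a free object */
#define POISON_END  0xa5	/* poison for the last byte of an object */

/* Mimics the patched check: skip [0, kasan_meta_size), where KASAN may
 * have stored its free metadata, and verify the rest of the poisoning. */
static bool check_free_poison(const unsigned char *p, unsigned int object_size,
			      unsigned int kasan_meta_size)
{
	unsigned int i;

	for (i = kasan_meta_size; i + 1 < object_size; i++)
		if (p[i] != POISON_FREE)
			return false;

	if (kasan_meta_size < object_size && p[object_size - 1] != POISON_END)
		return false;

	return true;
}

int main(void)
{
	unsigned char obj[64];

	memset(obj, POISON_FREE, sizeof(obj));
	obj[63] = POISON_END;
	memset(obj, 0xaa, 16);	/* pretend KASAN wrote 16 bytes of free meta */

	printf("%s\n", check_free_poison(obj, 64, 16) ? "clean" : "corrupted");

	obj[40] = 0;		/* an overwrite past the KASAN metadata */
	printf("%s\n", check_free_poison(obj, 64, 16) ? "clean" : "corrupted");
	return 0;
}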