Message ID | 20230308022057.151078-1-haibo.li@mediatek.com (mailing list archive) |
---|---|
State | New, archived |
Series | kcsan:fix alignment_fault when read unaligned instrumented memory |
On Wed, 8 Mar 2023 at 03:21, 'Haibo Li' via kasan-dev
<kasan-dev@googlegroups.com> wrote:
>
> After enabling KCSAN on arm64 + linux-5.15, it reports an alignment
> fault when accessing an unaligned address.

Is this KCSAN's fault or the fault of the code being instrumented?
I.e. if you disable KCSAN, is there still an alignment fault reported?

Because as-is, I don't understand how the instrumentation alone will
cause an alignment fault, because for every normal memory access there
is a corresponding instrumented access - therefore, that'd suggest that
the real access was also unaligned. Note that the compiler inserts
instrumentation _before_ the actual access, so if there's a problem,
that problem will manifest inside KCSAN.

Can you provide more information about what's going on (type of access,
size of access, etc.)?

> Here is the oops log:
> "
> Trying to unpack rootfs image as initramfs...
> Unable to handle kernel paging request at virtual address
> ffffff802a0d8d71
> Mem abort info:
>   ESR = 0x96000021
>   EC = 0x25: DABT (current EL), IL = 32 bits
>   SET = 0, FnV = 0
>   EA = 0, S1PTW = 0
>   FSC = 0x21: alignment fault
> Data abort info:
>   ISV = 0, ISS = 0x00000021
>   CM = 0, WnR = 0
> swapper pgtable: 4k pages, 39-bit VAs, pgdp=0000000028352000
> [ffffff802a0d8d71] pgd=180000005fbf9003, p4d=180000005fbf9003,
> pud=180000005fbf9003, pmd=180000005fbe8003, pte=006800002a0d8707
> Internal error: Oops: 96000021 [#1] PREEMPT SMP
> Modules linked in:
> CPU: 2 PID: 45 Comm: kworker/u8:2 Not tainted
> 5.15.78-android13-8-g63561175bbda-dirty #1
> ...
> pc : kcsan_setup_watchpoint+0x26c/0x6bc
> lr : kcsan_setup_watchpoint+0x88/0x6bc
> sp : ffffffc00ab4b7f0
> x29: ffffffc00ab4b800 x28: ffffff80294fe588 x27: 0000000000000001
> x26: 0000000000000019 x25: 0000000000000001 x24: ffffff80294fdb80
> x23: 0000000000000000 x22: ffffffc00a70fb68 x21: ffffff802a0d8d71
> x20: 0000000000000002 x19: 0000000000000000 x18: ffffffc00a9bd060
> x17: 0000000000000001 x16: 0000000000000000 x15: ffffffc00a59f000
> x14: 0000000000000001 x13: 0000000000000000 x12: ffffffc00a70faa0
> x11: 00000000aaaaaaab x10: 0000000000000054 x9 : ffffffc00839adf8
> x8 : ffffffc009b4cf00 x7 : 0000000000000000 x6 : 0000000000000007
> x5 : 0000000000000000 x4 : 0000000000000000 x3 : ffffffc00a70fb70
> x2 : 0005ff802a0d8d71 x1 : 0000000000000000 x0 : 0000000000000000
> Call trace:
>  kcsan_setup_watchpoint+0x26c/0x6bc
>  __tsan_read2+0x1f0/0x234
>  inflate_fast+0x498/0x750

^^ is it possible that an access in "inflate_fast" is unaligned?

>  zlib_inflate+0x1304/0x2384
>  __gunzip+0x3a0/0x45c
>  gunzip+0x20/0x30
>  unpack_to_rootfs+0x2a8/0x3fc
>  do_populate_rootfs+0xe8/0x11c
>  async_run_entry_fn+0x58/0x1bc
>  process_one_work+0x3ec/0x738
>  worker_thread+0x4c4/0x838
>  kthread+0x20c/0x258
>  ret_from_fork+0x10/0x20
> Code: b8bfc2a8 2a0803f7 14000007 d503249f (78bfc2a8)
> ---[ end trace 613a943cb0a572b6 ]---
> "
>
> After checking linux 6.3-rc1 on QEMU arm64, it is still possible for
> read_instrumented_memory() to read an unaligned address (QEMU does not
> emulate the alignment fault).
>
> To fix the alignment fault, and to read the value of the instrumented
> memory more efficiently, bypass unaligned accesses in
> read_instrumented_memory().
>
> Signed-off-by: Haibo Li <haibo.li@mediatek.com>
> ---
>  kernel/kcsan/core.c | 5 +++++
>  1 file changed, 5 insertions(+)
>
> diff --git a/kernel/kcsan/core.c b/kernel/kcsan/core.c
> index 54d077e1a2dc..88e75d7d85d2 100644
> --- a/kernel/kcsan/core.c
> +++ b/kernel/kcsan/core.c
> @@ -337,6 +337,11 @@ static void delay_access(int type)
>   */
>  static __always_inline u64 read_instrumented_memory(const volatile void *ptr, size_t size)
>  {
> +	bool aligned_read = (size == 1) || IS_ALIGNED((unsigned long)ptr, size);

(size==1) check is redundant because IS_ALIGNED(.., 1) should always
return true.

And this will also penalize other architectures which can do unaligned
accesses. So this check probably wants to be guarded by
"IS_ENABLED(CONFIG_ARM64)" or something.

> +	if (!aligned_read)
> +		return 0;
> +
>  	switch (size) {
>  	case 1: return READ_ONCE(*(const u8 *)ptr);
>  	case 2: return READ_ONCE(*(const u16 *)ptr);
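[For reference, the kernel defines IS_ALIGNED() (include/linux/align.h)
as a power-of-two mask test, which is why the (size == 1) check is
redundant - a minimal sketch of the semantics:

	#define IS_ALIGNED(x, a)	(((x) & ((typeof(x))(a) - 1)) == 0)

	IS_ALIGNED(0xffffff802a0d8d71UL, 1);	/* mask is 0: always true */
	IS_ALIGNED(0xffffff802a0d8d71UL, 2);	/* false: bit 0 is set */

With a == 1 the mask (a - 1) is zero, so the test is true for any x.]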
> On Wed, 8 Mar 2023 at 03:21, 'Haibo Li' via kasan-dev
> <kasan-dev@googlegroups.com> wrote:
> >
> > After enabling KCSAN on arm64 + linux-5.15, it reports an alignment
> > fault when accessing an unaligned address.
>
> Is this KCSAN's fault or the fault of the code being instrumented?
> I.e. if you disable KCSAN, is there still an alignment fault reported?

If KCSAN is disabled, the alignment fault does not occur.

> Because as-is, I don't understand how the instrumentation alone will cause an
> alignment fault, because for every normal memory access there is a
> corresponding instrumented access - therefore, that'd suggest that the real
> access was also unaligned. Note that the compiler inserts instrumentation
> _before_ the actual access, so if there's a problem, that problem will manifest
> inside KCSAN.
>
> Can you provide more information about what's going on (type of access, size
> of access, etc.)?

Here is the source code at inflate_fast+0x498 (lib/zlib_inflate/inffast.c:258):

	if (dist > 2) {
		unsigned short *sfrom;

		sfrom = (unsigned short *)(from);
		loops = len >> 1;
		do {
			if (IS_ENABLED(CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS))
>				*sout++ = *sfrom++;

It reads 2 bytes from sfrom, and sfrom is unaligned.

> > Here is the oops log:
> > "
> > Trying to unpack rootfs image as initramfs...
> > Unable to handle kernel paging request at virtual address
> > ffffff802a0d8d71
> > Mem abort info:
> >   ESR = 0x96000021
> >   EC = 0x25: DABT (current EL), IL = 32 bits
> >   SET = 0, FnV = 0
> >   EA = 0, S1PTW = 0
> >   FSC = 0x21: alignment fault
> > Data abort info:
> >   ISV = 0, ISS = 0x00000021
> >   CM = 0, WnR = 0
> > swapper pgtable: 4k pages, 39-bit VAs, pgdp=0000000028352000
> > [ffffff802a0d8d71] pgd=180000005fbf9003, p4d=180000005fbf9003,
> > pud=180000005fbf9003, pmd=180000005fbe8003, pte=006800002a0d8707
> > Internal error: Oops: 96000021 [#1] PREEMPT SMP
> > Modules linked in:
> > CPU: 2 PID: 45 Comm: kworker/u8:2 Not tainted
> > 5.15.78-android13-8-g63561175bbda-dirty #1
> > ...
> > pc : kcsan_setup_watchpoint+0x26c/0x6bc
> > lr : kcsan_setup_watchpoint+0x88/0x6bc
> > sp : ffffffc00ab4b7f0
> > x29: ffffffc00ab4b800 x28: ffffff80294fe588 x27: 0000000000000001
> > x26: 0000000000000019 x25: 0000000000000001 x24: ffffff80294fdb80
> > x23: 0000000000000000 x22: ffffffc00a70fb68 x21: ffffff802a0d8d71
> > x20: 0000000000000002 x19: 0000000000000000 x18: ffffffc00a9bd060
> > x17: 0000000000000001 x16: 0000000000000000 x15: ffffffc00a59f000
> > x14: 0000000000000001 x13: 0000000000000000 x12: ffffffc00a70faa0
> > x11: 00000000aaaaaaab x10: 0000000000000054 x9 : ffffffc00839adf8
> > x8 : ffffffc009b4cf00 x7 : 0000000000000000 x6 : 0000000000000007
> > x5 : 0000000000000000 x4 : 0000000000000000 x3 : ffffffc00a70fb70
> > x2 : 0005ff802a0d8d71 x1 : 0000000000000000 x0 : 0000000000000000
> > Call trace:
> >  kcsan_setup_watchpoint+0x26c/0x6bc
> >  __tsan_read2+0x1f0/0x234
> >  inflate_fast+0x498/0x750
>
> ^^ is it possible that an access in "inflate_fast" is unaligned?

Here is the instruction at inflate_fast+0x498:

ffffffc008948980 <inflate_fast>:
...
ffffffc008948e10: e0 03 1c aa	mov	x0, x28
ffffffc008948e14: 06 3a e9 97	bl	0xffffffc00839762c <__tsan_unaligned_read2>
ffffffc008948e18: e0 03 17 aa	mov	x0, x23
>ffffffc008948e1c: 9a 27 40 78	ldrh	w26, [x28], #2

And the instruction at kcsan_setup_watchpoint+0x26c:

ffffffc00839ab90 <kcsan_setup_watchpoint>:
...
>ffffffc00839adfc: a8 fe df 48	ldarh	w8, [x21]

The instructions differ: READ_ONCE() uses ldarh, which requires that the
access address is aligned.
As the Arm ARM (ARMv8) says:

"Load-Acquire, Load-AcquirePC and Store-Release, other than Load-Acquire
Exclusive Pair and Store-Release-Exclusive Pair, access only a single
data element. This access is single-copy atomic. The address of the data
object must be aligned to the size of the data element being accessed,
otherwise the access generates an Alignment fault."

ldrh, by contrast, accepts unaligned addresses. That is why everything
is fine when KCSAN is disabled.

> > zlib_inflate+0x1304/0x2384
> > __gunzip+0x3a0/0x45c
> > gunzip+0x20/0x30
> > unpack_to_rootfs+0x2a8/0x3fc
> > do_populate_rootfs+0xe8/0x11c
> > async_run_entry_fn+0x58/0x1bc
> > process_one_work+0x3ec/0x738
> > worker_thread+0x4c4/0x838
> > kthread+0x20c/0x258
> > ret_from_fork+0x10/0x20
> > Code: b8bfc2a8 2a0803f7 14000007 d503249f (78bfc2a8)
> > ---[ end trace 613a943cb0a572b6 ]---
> > "
> >
> > After checking linux 6.3-rc1 on QEMU arm64, it is still possible for
> > read_instrumented_memory() to read an unaligned address (QEMU does
> > not emulate the alignment fault).
> >
> > To fix the alignment fault, and to read the value of the instrumented
> > memory more efficiently, bypass unaligned accesses in
> > read_instrumented_memory().
> >
> > Signed-off-by: Haibo Li <haibo.li@mediatek.com>
> > ---
> >  kernel/kcsan/core.c | 5 +++++
> >  1 file changed, 5 insertions(+)
> >
> > diff --git a/kernel/kcsan/core.c b/kernel/kcsan/core.c
> > index 54d077e1a2dc..88e75d7d85d2 100644
> > --- a/kernel/kcsan/core.c
> > +++ b/kernel/kcsan/core.c
> > @@ -337,6 +337,11 @@ static void delay_access(int type)
> >   */
> >  static __always_inline u64 read_instrumented_memory(const volatile void *ptr, size_t size)
> >  {
> > +	bool aligned_read = (size == 1) || IS_ALIGNED((unsigned long)ptr, size);
>
> (size==1) check is redundant because IS_ALIGNED(.., 1) should always
> return true.

Thanks. I will drop it.

> And this will also penalize other architectures which can do unaligned
> accesses. So this check probably wants to be guarded by
> "IS_ENABLED(CONFIG_ARM64)" or something.

Agreed. Is it acceptable to use IS_ENABLED(CONFIG_ARM64) here?

> > +	if (!aligned_read)
> > +		return 0;
> > +
> >  	switch (size) {
> >  	case 1: return READ_ONCE(*(const u8 *)ptr);
> >  	case 2: return READ_ONCE(*(const u16 *)ptr);
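[The difference between the two loads is easy to reproduce outside the
kernel. A minimal user-space sketch of the two flavours compared above,
using GCC/Clang atomic builtins rather than the kernel's READ_ONCE()
(an assumption for illustration; the exact instruction selection depends
on compiler and -march):

	#include <stdint.h>

	/* Typically compiles to a plain ldrh on arm64, which tolerates
	 * unaligned addresses. */
	uint16_t plain_load(const volatile uint16_t *p)
	{
		return *p;
	}

	/* Typically compiles to ldarh on arm64, which generates an
	 * alignment fault on unaligned addresses, per the Arm ARM quote
	 * above. */
	uint16_t acquire_load(const uint16_t *p)
	{
		return __atomic_load_n(p, __ATOMIC_ACQUIRE);
	}

Compiling both with -O2 for aarch64 and inspecting the disassembly shows
the ldrh/ldarh split reported in the analysis above.]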
On Wed, Mar 08, 2023 at 05:41PM +0800, Haibo Li wrote:
[...]
> > > x2 : 0005ff802a0d8d71 x1 : 0000000000000000 x0 : 0000000000000000
> > > Call trace:
> > >  kcsan_setup_watchpoint+0x26c/0x6bc
> > >  __tsan_read2+0x1f0/0x234
> > >  inflate_fast+0x498/0x750
> >
> > ^^ is it possible that an access in "inflate_fast" is unaligned?
> Here is the instruction at inflate_fast+0x498:
> ffffffc008948980 <inflate_fast>:
> ...
> ffffffc008948e10: e0 03 1c aa	mov	x0, x28
> ffffffc008948e14: 06 3a e9 97	bl	0xffffffc00839762c <__tsan_unaligned_read2>
> ffffffc008948e18: e0 03 17 aa	mov	x0, x23
> >ffffffc008948e1c: 9a 27 40 78	ldrh	w26, [x28], #2
>
> And the instruction at kcsan_setup_watchpoint+0x26c:
> ffffffc00839ab90 <kcsan_setup_watchpoint>:
> ...
> >ffffffc00839adfc: a8 fe df 48	ldarh	w8, [x21]
>
> The instructions differ: READ_ONCE() uses ldarh, which requires that
> the access address is aligned.
> As the Arm ARM (ARMv8) says:
> "
> Load-Acquire, Load-AcquirePC and Store-Release, other than Load-Acquire
> Exclusive Pair and Store-Release-Exclusive Pair, access only a single
> data element. This access is single-copy atomic. The address of the
> data object must be aligned to the size of the data element being
> accessed, otherwise the access generates an Alignment fault."
>
> ldrh, by contrast, accepts unaligned addresses.
> That is why everything is fine when KCSAN is disabled.

I understand now what's going on, thanks for the analysis.

Can you test the below patch, I think it is the correct solution for
this - compared to your approach of opting out unaligned accesses, with
the below there is no loss of functionality.

Thanks,
-- Marco

------ >8 ------

From 889e9d5ce61592a18c90a9c57495337d5827bbc2 Mon Sep 17 00:00:00 2001
From: Marco Elver <elver@google.com>
Date: Wed, 8 Mar 2023 11:21:06 +0100
Subject: [PATCH] kcsan: Avoid READ_ONCE() in read_instrumented_memory()

Haibo Li reported:

| Unable to handle kernel paging request at virtual address
| ffffff802a0d8d71
| Mem abort info:
|   ESR = 0x96000021
|   EC = 0x25: DABT (current EL), IL = 32 bits
|   SET = 0, FnV = 0
|   EA = 0, S1PTW = 0
|   FSC = 0x21: alignment fault
| Data abort info:
|   ISV = 0, ISS = 0x00000021
|   CM = 0, WnR = 0
| swapper pgtable: 4k pages, 39-bit VAs, pgdp=0000000028352000
| [ffffff802a0d8d71] pgd=180000005fbf9003, p4d=180000005fbf9003,
| pud=180000005fbf9003, pmd=180000005fbe8003, pte=006800002a0d8707
| Internal error: Oops: 96000021 [#1] PREEMPT SMP
| Modules linked in:
| CPU: 2 PID: 45 Comm: kworker/u8:2 Not tainted
| 5.15.78-android13-8-g63561175bbda-dirty #1
| ...
| pc : kcsan_setup_watchpoint+0x26c/0x6bc
| lr : kcsan_setup_watchpoint+0x88/0x6bc
| sp : ffffffc00ab4b7f0
| x29: ffffffc00ab4b800 x28: ffffff80294fe588 x27: 0000000000000001
| x26: 0000000000000019 x25: 0000000000000001 x24: ffffff80294fdb80
| x23: 0000000000000000 x22: ffffffc00a70fb68 x21: ffffff802a0d8d71
| x20: 0000000000000002 x19: 0000000000000000 x18: ffffffc00a9bd060
| x17: 0000000000000001 x16: 0000000000000000 x15: ffffffc00a59f000
| x14: 0000000000000001 x13: 0000000000000000 x12: ffffffc00a70faa0
| x11: 00000000aaaaaaab x10: 0000000000000054 x9 : ffffffc00839adf8
| x8 : ffffffc009b4cf00 x7 : 0000000000000000 x6 : 0000000000000007
| x5 : 0000000000000000 x4 : 0000000000000000 x3 : ffffffc00a70fb70
| x2 : 0005ff802a0d8d71 x1 : 0000000000000000 x0 : 0000000000000000
| Call trace:
|  kcsan_setup_watchpoint+0x26c/0x6bc
|  __tsan_read2+0x1f0/0x234
|  inflate_fast+0x498/0x750
|  zlib_inflate+0x1304/0x2384
|  __gunzip+0x3a0/0x45c
|  gunzip+0x20/0x30
|  unpack_to_rootfs+0x2a8/0x3fc
|  do_populate_rootfs+0xe8/0x11c
|  async_run_entry_fn+0x58/0x1bc
|  process_one_work+0x3ec/0x738
|  worker_thread+0x4c4/0x838
|  kthread+0x20c/0x258
|  ret_from_fork+0x10/0x20
| Code: b8bfc2a8 2a0803f7 14000007 d503249f (78bfc2a8)
| ---[ end trace 613a943cb0a572b6 ]---

The reason for this is that on certain arm64 configurations, since
e35123d83ee3 ("arm64: lto: Strengthen READ_ONCE() to acquire when
CONFIG_LTO=y"), READ_ONCE() may be promoted to a full atomic acquire
instruction which cannot be used on unaligned addresses.

Fix it by avoiding READ_ONCE() in read_instrumented_memory(), and simply
forcing the compiler to do the required access by casting to the
appropriate volatile type. In terms of generated code this currently
only affects architectures that do not use the default READ_ONCE()
implementation.

The only downside is that we are not guaranteed atomicity of the access
itself, although on most architectures a plain load up to machine word
size should still be atomic (a fact the default READ_ONCE() still relies
on itself).

Reported-by: Haibo Li <haibo.li@mediatek.com>
Cc: <stable@vger.kernel.org>
Signed-off-by: Marco Elver <elver@google.com>
---
 kernel/kcsan/core.c | 17 +++++++++++++----
 1 file changed, 13 insertions(+), 4 deletions(-)

diff --git a/kernel/kcsan/core.c b/kernel/kcsan/core.c
index 54d077e1a2dc..5a60cc52adc0 100644
--- a/kernel/kcsan/core.c
+++ b/kernel/kcsan/core.c
@@ -337,11 +337,20 @@ static void delay_access(int type)
  */
 static __always_inline u64 read_instrumented_memory(const volatile void *ptr, size_t size)
 {
+	/*
+	 * In the below we don't necessarily need the read of the location to
+	 * be atomic, and we don't use READ_ONCE(), since all we need for race
+	 * detection is to observe 2 different values.
+	 *
+	 * Furthermore, on certain architectures (such as arm64), READ_ONCE()
+	 * may turn into more complex instructions than a plain load that cannot
+	 * do unaligned accesses.
+	 */
 	switch (size) {
-	case 1: return READ_ONCE(*(const u8 *)ptr);
-	case 2: return READ_ONCE(*(const u16 *)ptr);
-	case 4: return READ_ONCE(*(const u32 *)ptr);
-	case 8: return READ_ONCE(*(const u64 *)ptr);
+	case 1: return *(const volatile u8 *)ptr;
+	case 2: return *(const volatile u16 *)ptr;
+	case 4: return *(const volatile u32 *)ptr;
+	case 8: return *(const volatile u64 *)ptr;
 	default: return 0; /* Ignore; we do not diff the values. */
 	}
 }
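[For context on why atomicity is not required here: the value returned
by read_instrumented_memory() is only compared against a second snapshot
taken after a delay. A simplified sketch of the logic in
kcsan_setup_watchpoint() (not the verbatim kernel code, which also
handles watchpoint bookkeeping):

	u64 old, new;

	old = read_instrumented_memory(ptr, size);	/* first snapshot */
	delay_access(type);				/* window for a racing write */
	new = read_instrumented_memory(ptr, size);	/* second snapshot */

	if (old != new)
		value_change = KCSAN_VALUE_CHANGE_TRUE;	/* observed a race */

Any two differing values suffice to flag a race, even if a torn read
produced one of them.]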
> On Wed, Mar 08, 2023 at 05:41PM +0800, Haibo Li wrote:
> [...]
> > > > x2 : 0005ff802a0d8d71 x1 : 0000000000000000 x0 : 0000000000000000
> > > > Call trace:
> > > >  kcsan_setup_watchpoint+0x26c/0x6bc
> > > >  __tsan_read2+0x1f0/0x234
> > > >  inflate_fast+0x498/0x750
> > >
> > > ^^ is it possible that an access in "inflate_fast" is unaligned?
> > Here is the instruction at inflate_fast+0x498:
> > ffffffc008948980 <inflate_fast>:
> > ...
> > ffffffc008948e10: e0 03 1c aa	mov	x0, x28
> > ffffffc008948e14: 06 3a e9 97	bl	0xffffffc00839762c <__tsan_unaligned_read2>
> > ffffffc008948e18: e0 03 17 aa	mov	x0, x23
> > >ffffffc008948e1c: 9a 27 40 78	ldrh	w26, [x28], #2
> >
> > And the instruction at kcsan_setup_watchpoint+0x26c:
> > ffffffc00839ab90 <kcsan_setup_watchpoint>:
> > ...
> > >ffffffc00839adfc: a8 fe df 48	ldarh	w8, [x21]
> >
> > The instructions differ: READ_ONCE() uses ldarh, which requires that
> > the access address is aligned.
> > As the Arm ARM (ARMv8) says:
> > "
> > Load-Acquire, Load-AcquirePC and Store-Release, other than
> > Load-Acquire Exclusive Pair and Store-Release-Exclusive Pair, access
> > only a single data element. This access is single-copy atomic. The
> > address of the data object must be aligned to the size of the data
> > element being accessed, otherwise the access generates an Alignment
> > fault."
> >
> > ldrh, by contrast, accepts unaligned addresses.
> > That is why everything is fine when KCSAN is disabled.
>
> I understand now what's going on, thanks for the analysis.
>
> Can you test the below patch, I think it is the correct solution for
> this - compared to your approach of opting out unaligned accesses, with
> the below there is no loss of functionality.
>
> Thanks,
> -- Marco

The below patch works well on linux-5.15+arm64.

> ------ >8 ------
>
> From 889e9d5ce61592a18c90a9c57495337d5827bbc2 Mon Sep 17 00:00:00 2001
> From: Marco Elver <elver@google.com>
> Date: Wed, 8 Mar 2023 11:21:06 +0100
> Subject: [PATCH] kcsan: Avoid READ_ONCE() in read_instrumented_memory()
>
> Haibo Li reported:
>
> | Unable to handle kernel paging request at virtual address
> | ffffff802a0d8d71
> | Mem abort info:
> |   ESR = 0x96000021
> |   EC = 0x25: DABT (current EL), IL = 32 bits
> |   SET = 0, FnV = 0
> |   EA = 0, S1PTW = 0
> |   FSC = 0x21: alignment fault
> | Data abort info:
> |   ISV = 0, ISS = 0x00000021
> |   CM = 0, WnR = 0
> | swapper pgtable: 4k pages, 39-bit VAs, pgdp=0000000028352000
> | [ffffff802a0d8d71] pgd=180000005fbf9003, p4d=180000005fbf9003,
> | pud=180000005fbf9003, pmd=180000005fbe8003, pte=006800002a0d8707
> | Internal error: Oops: 96000021 [#1] PREEMPT SMP
> | Modules linked in:
> | CPU: 2 PID: 45 Comm: kworker/u8:2 Not tainted
> | 5.15.78-android13-8-g63561175bbda-dirty #1
> | ...
> | pc : kcsan_setup_watchpoint+0x26c/0x6bc
> | lr : kcsan_setup_watchpoint+0x88/0x6bc
> | sp : ffffffc00ab4b7f0
> | x29: ffffffc00ab4b800 x28: ffffff80294fe588 x27: 0000000000000001
> | x26: 0000000000000019 x25: 0000000000000001 x24: ffffff80294fdb80
> | x23: 0000000000000000 x22: ffffffc00a70fb68 x21: ffffff802a0d8d71
> | x20: 0000000000000002 x19: 0000000000000000 x18: ffffffc00a9bd060
> | x17: 0000000000000001 x16: 0000000000000000 x15: ffffffc00a59f000
> | x14: 0000000000000001 x13: 0000000000000000 x12: ffffffc00a70faa0
> | x11: 00000000aaaaaaab x10: 0000000000000054 x9 : ffffffc00839adf8
> | x8 : ffffffc009b4cf00 x7 : 0000000000000000 x6 : 0000000000000007
> | x5 : 0000000000000000 x4 : 0000000000000000 x3 : ffffffc00a70fb70
> | x2 : 0005ff802a0d8d71 x1 : 0000000000000000 x0 : 0000000000000000
> | Call trace:
> |  kcsan_setup_watchpoint+0x26c/0x6bc
> |  __tsan_read2+0x1f0/0x234
> |  inflate_fast+0x498/0x750
> |  zlib_inflate+0x1304/0x2384
> |  __gunzip+0x3a0/0x45c
> |  gunzip+0x20/0x30
> |  unpack_to_rootfs+0x2a8/0x3fc
> |  do_populate_rootfs+0xe8/0x11c
> |  async_run_entry_fn+0x58/0x1bc
> |  process_one_work+0x3ec/0x738
> |  worker_thread+0x4c4/0x838
> |  kthread+0x20c/0x258
> |  ret_from_fork+0x10/0x20
> | Code: b8bfc2a8 2a0803f7 14000007 d503249f (78bfc2a8)
> | ---[ end trace 613a943cb0a572b6 ]---
>
> The reason for this is that on certain arm64 configurations, since
> e35123d83ee3 ("arm64: lto: Strengthen READ_ONCE() to acquire when
> CONFIG_LTO=y"), READ_ONCE() may be promoted to a full atomic acquire
> instruction which cannot be used on unaligned addresses.
>
> Fix it by avoiding READ_ONCE() in read_instrumented_memory(), and simply
> forcing the compiler to do the required access by casting to the
> appropriate volatile type. In terms of generated code this currently
> only affects architectures that do not use the default READ_ONCE()
> implementation.
>
> The only downside is that we are not guaranteed atomicity of the access
> itself, although on most architectures a plain load up to machine word
> size should still be atomic (a fact the default READ_ONCE() still relies
> on itself).
>
> Reported-by: Haibo Li <haibo.li@mediatek.com>
> Cc: <stable@vger.kernel.org>
> Signed-off-by: Marco Elver <elver@google.com>
> ---
>  kernel/kcsan/core.c | 17 +++++++++++++----
>  1 file changed, 13 insertions(+), 4 deletions(-)
>
> diff --git a/kernel/kcsan/core.c b/kernel/kcsan/core.c
> index 54d077e1a2dc..5a60cc52adc0 100644
> --- a/kernel/kcsan/core.c
> +++ b/kernel/kcsan/core.c
> @@ -337,11 +337,20 @@ static void delay_access(int type)
>   */
>  static __always_inline u64 read_instrumented_memory(const volatile void *ptr, size_t size)
>  {
> +	/*
> +	 * In the below we don't necessarily need the read of the location to
> +	 * be atomic, and we don't use READ_ONCE(), since all we need for race
> +	 * detection is to observe 2 different values.
> +	 *
> +	 * Furthermore, on certain architectures (such as arm64), READ_ONCE()
> +	 * may turn into more complex instructions than a plain load that cannot
> +	 * do unaligned accesses.
> +	 */
>  	switch (size) {
> -	case 1: return READ_ONCE(*(const u8 *)ptr);
> -	case 2: return READ_ONCE(*(const u16 *)ptr);
> -	case 4: return READ_ONCE(*(const u32 *)ptr);
> -	case 8: return READ_ONCE(*(const u64 *)ptr);
> +	case 1: return *(const volatile u8 *)ptr;
> +	case 2: return *(const volatile u16 *)ptr;
> +	case 4: return *(const volatile u32 *)ptr;
> +	case 8: return *(const volatile u64 *)ptr;
>  	default: return 0; /* Ignore; we do not diff the values. */
>  	}
>  }
> --
> 2.40.0.rc0.216.gc4246ad0f0-goog
On Thu, 9 Mar 2023 at 01:58, 'Haibo Li' via kasan-dev
<kasan-dev@googlegroups.com> wrote:
[...]
>
> The below patch works well on linux-5.15+arm64.

Thank you, glad to hear - may I add your Tested-by?
> On Thu, 9 Mar 2023 at 01:58, 'Haibo Li' via kasan-dev
> <kasan-dev@googlegroups.com> wrote:
> [...]
> >
> > The below patch works well on linux-5.15+arm64.
>
> Thank you, glad to hear - may I add your Tested-by?

Sure. Appreciated.
diff --git a/kernel/kcsan/core.c b/kernel/kcsan/core.c
index 54d077e1a2dc..88e75d7d85d2 100644
--- a/kernel/kcsan/core.c
+++ b/kernel/kcsan/core.c
@@ -337,6 +337,11 @@ static void delay_access(int type)
  */
 static __always_inline u64 read_instrumented_memory(const volatile void *ptr, size_t size)
 {
+	bool aligned_read = (size == 1) || IS_ALIGNED((unsigned long)ptr, size);
+
+	if (!aligned_read)
+		return 0;
+
 	switch (size) {
 	case 1: return READ_ONCE(*(const u8 *)ptr);
 	case 2: return READ_ONCE(*(const u16 *)ptr);
After enabling KCSAN on arm64 + linux-5.15, it reports an alignment
fault when accessing an unaligned address.

Here is the oops log:
"
Trying to unpack rootfs image as initramfs...
Unable to handle kernel paging request at virtual address ffffff802a0d8d71
Mem abort info:
  ESR = 0x96000021
  EC = 0x25: DABT (current EL), IL = 32 bits
  SET = 0, FnV = 0
  EA = 0, S1PTW = 0
  FSC = 0x21: alignment fault
Data abort info:
  ISV = 0, ISS = 0x00000021
  CM = 0, WnR = 0
swapper pgtable: 4k pages, 39-bit VAs, pgdp=0000000028352000
[ffffff802a0d8d71] pgd=180000005fbf9003, p4d=180000005fbf9003,
pud=180000005fbf9003, pmd=180000005fbe8003, pte=006800002a0d8707
Internal error: Oops: 96000021 [#1] PREEMPT SMP
Modules linked in:
CPU: 2 PID: 45 Comm: kworker/u8:2 Not tainted
5.15.78-android13-8-g63561175bbda-dirty #1
...
pc : kcsan_setup_watchpoint+0x26c/0x6bc
lr : kcsan_setup_watchpoint+0x88/0x6bc
sp : ffffffc00ab4b7f0
x29: ffffffc00ab4b800 x28: ffffff80294fe588 x27: 0000000000000001
x26: 0000000000000019 x25: 0000000000000001 x24: ffffff80294fdb80
x23: 0000000000000000 x22: ffffffc00a70fb68 x21: ffffff802a0d8d71
x20: 0000000000000002 x19: 0000000000000000 x18: ffffffc00a9bd060
x17: 0000000000000001 x16: 0000000000000000 x15: ffffffc00a59f000
x14: 0000000000000001 x13: 0000000000000000 x12: ffffffc00a70faa0
x11: 00000000aaaaaaab x10: 0000000000000054 x9 : ffffffc00839adf8
x8 : ffffffc009b4cf00 x7 : 0000000000000000 x6 : 0000000000000007
x5 : 0000000000000000 x4 : 0000000000000000 x3 : ffffffc00a70fb70
x2 : 0005ff802a0d8d71 x1 : 0000000000000000 x0 : 0000000000000000
Call trace:
 kcsan_setup_watchpoint+0x26c/0x6bc
 __tsan_read2+0x1f0/0x234
 inflate_fast+0x498/0x750
 zlib_inflate+0x1304/0x2384
 __gunzip+0x3a0/0x45c
 gunzip+0x20/0x30
 unpack_to_rootfs+0x2a8/0x3fc
 do_populate_rootfs+0xe8/0x11c
 async_run_entry_fn+0x58/0x1bc
 process_one_work+0x3ec/0x738
 worker_thread+0x4c4/0x838
 kthread+0x20c/0x258
 ret_from_fork+0x10/0x20
Code: b8bfc2a8 2a0803f7 14000007 d503249f (78bfc2a8)
---[ end trace 613a943cb0a572b6 ]---
"

After checking linux 6.3-rc1 on QEMU arm64, it is still possible for
read_instrumented_memory() to read an unaligned address (QEMU does not
emulate the alignment fault).

To fix the alignment fault, and to read the value of the instrumented
memory more efficiently, bypass unaligned accesses in
read_instrumented_memory().

Signed-off-by: Haibo Li <haibo.li@mediatek.com>
---
 kernel/kcsan/core.c | 5 +++++
 1 file changed, 5 insertions(+)
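[For comparison with the fix that was ultimately applied upthread, the
IS_ENABLED(CONFIG_ARM64)-guarded variant discussed in the review is a
hypothetical sketch only, superseded by Marco's removal of READ_ONCE();
it would have looked roughly like:

	/* Hypothetical variant: skip unaligned reads only where
	 * READ_ONCE() may be promoted to an acquire load (e.g. arm64
	 * with CONFIG_LTO=y), at the cost of missing value changes at
	 * those addresses. */
	if (IS_ENABLED(CONFIG_ARM64) && !IS_ALIGNED((unsigned long)ptr, size))
		return 0;

The applied fix avoids this loss of race-detection coverage by using
plain volatile loads instead.]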