| Message ID | 678f31f8-5890-47fa-972e-df966aeb783d@I-love.SAKURA.ne.jp (mailing list archive) |
|---|---|
| State | New |
| Series | x86: disable non-instrumented version of copy_page when KMSAN is enabled |
diff --git a/arch/x86/include/asm/page_64.h b/arch/x86/include/asm/page_64.h
index cc6b8e087192..f13bba3a9dab 100644
--- a/arch/x86/include/asm/page_64.h
+++ b/arch/x86/include/asm/page_64.h
@@ -58,7 +58,16 @@ static inline void clear_page(void *page)
 		       : "cc", "memory", "rax", "rcx");
 }
 
+#ifdef CONFIG_KMSAN
+/* Use of non-instrumented assembly version confuses KMSAN. */
+void *memcpy(void *to, const void *from, __kernel_size_t len);
+static inline void copy_page(void *to, void *from)
+{
+	memcpy(to, from, PAGE_SIZE);
+}
+#else
 void copy_page(void *to, void *from);
+#endif
 
 #ifdef CONFIG_X86_5LEVEL
 /*
I found that commit afb2d666d025 ("zsmalloc: use copy_page for full page
copy") caused a KMSAN warning. We need to fall back to the instrumented
version when KMSAN is enabled.

[ 50.030627][ T2974] BUG: KMSAN: use-after-free in obj_malloc+0x6cc/0x7b0
[ 50.165956][ T2974] Uninit was stored to memory at:
[ 50.170819][ T2974]  obj_malloc+0x70a/0x7b0
[ 50.328931][ T2974] Uninit was created at:
[ 50.341845][ T2974]  free_unref_page_prepare+0x130/0xfc0

Since the destination page likely already holds a previously written value
(i.e. KMSAN should consider the page already initialized), globally
enforcing an instrumented version whenever KMSAN is enabled might be
questionable. But since finding out why KMSAN considers a value
uninitialized is difficult (developers tend to choose the optimized
version without KMSAN in mind), let's choose the human-friendly version.
That is, since arch/x86/include/asm/page_32.h implements copy_page()
using memcpy(), let arch/x86/include/asm/page_64.h also implement
copy_page() using memcpy() when KMSAN is enabled.

Signed-off-by: Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp>
---
 arch/x86/include/asm/page_64.h | 9 +++++++++
 1 file changed, 9 insertions(+)