From patchwork Wed Jan 15 06:37:10 2020
X-Patchwork-Submitter: Daniel Axtens
X-Patchwork-Id: 11333433
From: Daniel Axtens
To: linux-kernel@vger.kernel.org, linux-mm@kvack.org,
    kasan-dev@googlegroups.com
Cc: linux-s390@vger.kernel.org, linux-xtensa@linux-xtensa.org,
    x86@kernel.org, Daniel Micay, Alexander Potapenko, Dmitry Vyukov,
    Andrey Ryabinin, linuxppc-dev@lists.ozlabs.org,
    linux-arm-kernel@lists.infradead.org, Daniel Axtens
Subject: [PATCH 2/2] string.h: fix incompatibility between FORTIFY_SOURCE and KASAN
Date: Wed, 15 Jan 2020 17:37:10 +1100
Message-Id: <20200115063710.15796-3-dja@axtens.net>
In-Reply-To: <20200115063710.15796-1-dja@axtens.net>
References: <20200115063710.15796-1-dja@axtens.net>
X-Mailer: git-send-email 2.20.1

The memcmp KASAN self-test fails on a kernel with both KASAN and
FORTIFY_SOURCE.

When FORTIFY_SOURCE is on, a number of functions are replaced with
fortified versions, which attempt to check the sizes of the operands.
However, these functions often directly invoke __builtin_foo() once
they have performed the fortify check. Using __builtins may bypass
KASAN checks if the compiler decides to inline its own implementation
as a sequence of instructions, rather than emit a function call that
goes out to a KASAN-instrumented implementation.

Why is only memcmp affected?
============================

Of the string and string-like functions that kasan_test tests, only
memcmp is replaced by an inline sequence of instructions in my testing
on x86 with gcc version 9.2.1 20191008 (Ubuntu 9.2.1-9ubuntu2).

I believe this is due to compiler heuristics.
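To make the bypass concrete, here is a minimal sketch of the failing
shape as things stand before this patch (illustrative only: the names
and sizes are mine, not the actual test_kasan.c code):

#include <linux/slab.h>
#include <linux/string.h>

static noinline void memcmp_bypass_sketch(void)
{
	char other[32] = { 0 };
	char *ptr = kmalloc(24, GFP_KERNEL | __GFP_ZERO);

	if (!ptr)
		return;

	/*
	 * Reads one byte past the 24-byte allocation.
	 * __builtin_object_size() can't see the heap object's size, so
	 * the fortify check passes and the call falls through to
	 * __builtin_memcmp. If gcc expands that builtin as inline
	 * compare instructions, no call ever reaches the
	 * KASAN-instrumented memcmp and the out-of-bounds read goes
	 * unreported.
	 */
	memcmp(ptr, other, 25);

	kfree(ptr);
}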
As an example of those heuristics: if I annotate kmalloc calls with the
alloc_size attribute (and disable some fortify compile-time checking!),
the compiler will replace every memset except the one in
kmalloc_uaf_memset with inline instructions. (I have some WIP patches
to add this annotation.)

Does this affect other functions in string.h?
=============================================

Yes. Anything that uses __builtin_* rather than __real_* could be
affected. This looks like:

 - strncpy
 - strcat
 - strlen
 - strlcpy maybe, under some circumstances?
 - strncat under some circumstances
 - memset
 - memcpy
 - memmove
 - memcmp (as noted)
 - memchr
 - strcpy

Whether a function call is emitted always depends on the compiler. Most
bugs should get caught by FORTIFY_SOURCE, but the missed memcmp test
shows that this is not always the case.

Isn't FORTIFY_SOURCE disabled with KASAN?
=========================================

The string headers on all arches supporting KASAN disable fortify with
kasan, but only when address sanitisation is _also_ disabled. For
example, from x86:

#if defined(CONFIG_KASAN) && !defined(__SANITIZE_ADDRESS__)

/*
 * For files that are not instrumented (e.g. mm/slub.c) we
 * should use not instrumented version of mem* functions.
 */

#define memcpy(dst, src, len) __memcpy(dst, src, len)
#define memmove(dst, src, len) __memmove(dst, src, len)
#define memset(s, c, n) __memset(s, c, n)

#ifndef __NO_FORTIFY
#define __NO_FORTIFY /* FORTIFY_SOURCE uses __builtin_memcpy, etc. */
#endif

#endif

This comes from commit 6974f0c4555e ("include/linux/string.h: add the
option of fortified string.h functions"), and doesn't work when KASAN
is enabled and the file is supposed to be sanitised - as with
test_kasan.c.

I'm pretty sure this is backwards: we shouldn't be using
__builtin_memcpy when we have a KASAN-instrumented file, but we can use
__builtin_* - and in many cases drop all fortification - in files where
we don't have instrumentation.

What is correct behaviour?
==========================

Firstly, there is some overlap between fortification and KASAN: both
provide some level of _runtime_ checking. Only fortify provides
compile-time checking.

KASAN and fortify can pick up different things at runtime:

 - Some fortify functions, notably the string functions, could easily
   be modified to consider sub-object sizes (e.g. members within a
   struct), and I have some WIP patches to do this. KASAN cannot detect
   these because it cannot insert poison between members of a struct.

 - KASAN can detect many over-reads/over-writes when the sizes of both
   operands are unknown, which fortify cannot.

So there are a couple of options:

 1) Flip the test: disable fortify in sanitised files and enable it in
    unsanitised files. This at least stops us missing KASAN checking,
    but we lose the fortify checking.

 2) Make the fortify code always call out to real versions. Do this
    only for KASAN, for fear of losing the inlining opportunities we
    get from __builtin_*. (We can't use kasan_check_{read,write}:
    because the fortify functions are _extern inline_, you can't
    include _static_ inline functions without a compiler warning.
    kasan_check_{read,write} are static inline, so we can't use them
    even when they would otherwise be suitable.)

Take approach 2 and call out to real versions when KASAN is enabled.
Use __underlying_foo to distinguish from __real_foo: __real_foo always
refers to the kernel's implementation of foo, while __underlying_foo
could be either the kernel implementation or the __builtin_foo
implementation.
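Spelled out with memcmp (a sketch only: __real_memcmp is shown purely
for symmetry and isn't actually declared anywhere; the real
__underlying_* declarations are in the include/linux/string.h hunk
below):

/* __real_foo: always the kernel's out-of-line implementation. */
extern int __real_memcmp(const void *p, const void *q,
			 __kernel_size_t size) __RENAME(memcmp);

/*
 * __underlying_foo: whatever the fortified wrapper defers to after its
 * checks - the instrumented kernel implementation when KASAN is on,
 * otherwise the builtin, which the compiler is free to inline.
 */
#ifdef CONFIG_KASAN
extern int __underlying_memcmp(const void *p, const void *q,
			       __kernel_size_t size) __RENAME(memcmp);
#else
#define __underlying_memcmp __builtin_memcmp
#endif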
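As an aside, the kasan_check_{read,write} restriction mentioned in
option 2 boils down to the following (a sketch with hypothetical names,
under the gnu89 inline semantics the kernel builds with):

/* Stand-in for the static inline kasan_check_read(). */
static inline void check_sketch(const void *p, __kernel_size_t size) { }

extern inline __attribute__((gnu_inline))
void *memset_sketch(void *p, int c, __kernel_size_t size)
{
	/*
	 * gcc warns here: "'check_sketch' is static but used in inline
	 * function 'memset_sketch' which is not static" - the extern
	 * inline body may be inlined into any translation unit, where
	 * the internal-linkage symbol need not exist.
	 */
	check_sketch(p, size);
	return __builtin_memset(p, c, size);
}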
Remove all the attempted disablement code in arch string headers. This
makes all the tests succeed with FORTIFY_SOURCE enabled.

Cc: Daniel Micay
Cc: Andrey Ryabinin
Cc: Alexander Potapenko
Cc: Dmitry Vyukov
Fixes: 6974f0c4555e ("include/linux/string.h: add the option of fortified string.h functions")
Signed-off-by: Daniel Axtens
Reported-by: kbuild test robot

---

Dmitry, this might cause a few new syzkaller splats - I first picked it
up building from a syzkaller config. Or it might not; it just depends
on what gets replaced with an inline sequence of instructions.

checkpatch complains about some over-long lines; happy to change the
format if anyone has better ideas for how to lay it out.
---
 arch/arm64/include/asm/string.h   |  4 ---
 arch/powerpc/include/asm/string.h |  4 ---
 arch/s390/include/asm/string.h    |  4 ---
 arch/x86/include/asm/string_64.h  |  4 ---
 arch/xtensa/include/asm/string.h  |  3 --
 include/linux/string.h            | 49 +++++++++++++++++++++++--------
 6 files changed, 37 insertions(+), 31 deletions(-)

diff --git a/arch/arm64/include/asm/string.h b/arch/arm64/include/asm/string.h
index b31e8e87a0db..eafb2c4771fc 100644
--- a/arch/arm64/include/asm/string.h
+++ b/arch/arm64/include/asm/string.h
@@ -59,10 +59,6 @@ void memcpy_flushcache(void *dst, const void *src, size_t cnt);
 #define memmove(dst, src, len) __memmove(dst, src, len)
 #define memset(s, c, n) __memset(s, c, n)
 
-#ifndef __NO_FORTIFY
-#define __NO_FORTIFY /* FORTIFY_SOURCE uses __builtin_memcpy, etc. */
-#endif
-
 #endif
 
 #endif
diff --git a/arch/powerpc/include/asm/string.h b/arch/powerpc/include/asm/string.h
index b72692702f35..952c5934596b 100644
--- a/arch/powerpc/include/asm/string.h
+++ b/arch/powerpc/include/asm/string.h
@@ -43,10 +43,6 @@ void *__memmove(void *to, const void *from, __kernel_size_t n);
 #define memmove(dst, src, len) __memmove(dst, src, len)
 #define memset(s, c, n) __memset(s, c, n)
 
-#ifndef __NO_FORTIFY
-#define __NO_FORTIFY /* FORTIFY_SOURCE uses __builtin_memcpy, etc. */
-#endif
-
 #endif
 
 #ifdef CONFIG_PPC64
diff --git a/arch/s390/include/asm/string.h b/arch/s390/include/asm/string.h
index 4c0690fc5167..e0b66d8c89a1 100644
--- a/arch/s390/include/asm/string.h
+++ b/arch/s390/include/asm/string.h
@@ -75,10 +75,6 @@ extern void *__memmove(void *dest, const void *src, size_t n);
 
 #define __no_sanitize_prefix_strfunc(x) __##x
 
-#ifndef __NO_FORTIFY
-#define __NO_FORTIFY /* FORTIFY_SOURCE uses __builtin_memcpy, etc. */
-#endif
-
 #else
 #define __no_sanitize_prefix_strfunc(x) x
 #endif /* defined(CONFIG_KASAN) && !defined(__SANITIZE_ADDRESS__) */
diff --git a/arch/x86/include/asm/string_64.h b/arch/x86/include/asm/string_64.h
index 75314c3dbe47..ec63d11e1f04 100644
--- a/arch/x86/include/asm/string_64.h
+++ b/arch/x86/include/asm/string_64.h
@@ -76,10 +76,6 @@ int strcmp(const char *cs, const char *ct);
 #define memmove(dst, src, len) __memmove(dst, src, len)
 #define memset(s, c, n) __memset(s, c, n)
 
-#ifndef __NO_FORTIFY
-#define __NO_FORTIFY /* FORTIFY_SOURCE uses __builtin_memcpy, etc. */
-#endif
-
 #endif
 
 #define __HAVE_ARCH_MEMCPY_MCSAFE 1
diff --git a/arch/xtensa/include/asm/string.h b/arch/xtensa/include/asm/string.h
index 89b51a0c752f..8cf04c5a33fb 100644
--- a/arch/xtensa/include/asm/string.h
+++ b/arch/xtensa/include/asm/string.h
@@ -132,9 +132,6 @@ extern void *__memmove(void *__dest, __const__ void *__src, size_t __n);
 #define memmove(dst, src, len) __memmove(dst, src, len)
 #define memset(s, c, n) __memset(s, c, n)
 
-#ifndef __NO_FORTIFY
-#define __NO_FORTIFY /* FORTIFY_SOURCE uses __builtin_memcpy, etc. */
-#endif
 #endif
 
 #endif /* _XTENSA_STRING_H */
diff --git a/include/linux/string.h b/include/linux/string.h
index 3b8e8b12dd37..4364c106355e 100644
--- a/include/linux/string.h
+++ b/include/linux/string.h
@@ -317,6 +317,31 @@ void __read_overflow3(void) __compiletime_error("detected read beyond size of ob
 void __write_overflow(void) __compiletime_error("detected write beyond size of object passed as 1st parameter");
 
 #if !defined(__NO_FORTIFY) && defined(__OPTIMIZE__) && defined(CONFIG_FORTIFY_SOURCE)
+
+#ifdef CONFIG_KASAN
+extern void *__underlying_memchr(const void *p, int c, __kernel_size_t size) __RENAME(memchr);
+extern int __underlying_memcmp(const void *p, const void *q, __kernel_size_t size) __RENAME(memcmp);
+extern void *__underlying_memcpy(void *p, const void *q, __kernel_size_t size) __RENAME(memcpy);
+extern void *__underlying_memmove(void *p, const void *q, __kernel_size_t size) __RENAME(memmove);
+extern void *__underlying_memset(void *p, int c, __kernel_size_t size) __RENAME(memset);
+extern char *__underlying_strcat(char *p, const char *q) __RENAME(strcat);
+extern char *__underlying_strcpy(char *p, const char *q) __RENAME(strcpy);
+extern __kernel_size_t __underlying_strlen(const char *p) __RENAME(strlen);
+extern char *__underlying_strncat(char *p, const char *q, __kernel_size_t count) __RENAME(strncat);
+extern char *__underlying_strncpy(char *p, const char *q, __kernel_size_t size) __RENAME(strncpy);
+#else
+#define __underlying_memchr __builtin_memchr
+#define __underlying_memcmp __builtin_memcmp
+#define __underlying_memcpy __builtin_memcpy
+#define __underlying_memmove __builtin_memmove
+#define __underlying_memset __builtin_memset
+#define __underlying_strcat __builtin_strcat
+#define __underlying_strcpy __builtin_strcpy
+#define __underlying_strlen __builtin_strlen
+#define __underlying_strncat __builtin_strncat
+#define __underlying_strncpy __builtin_strncpy
+#endif
+
 __FORTIFY_INLINE char *strncpy(char *p, const char *q, __kernel_size_t size)
 {
 	size_t p_size = __builtin_object_size(p, 0);
@@ -324,14 +349,14 @@ __FORTIFY_INLINE char *strncpy(char *p, const char *q, __kernel_size_t size)
 		__write_overflow();
 	if (p_size < size)
 		fortify_panic(__func__);
-	return __builtin_strncpy(p, q, size);
+	return __underlying_strncpy(p, q, size);
 }
 
 __FORTIFY_INLINE char *strcat(char *p, const char *q)
 {
 	size_t p_size = __builtin_object_size(p, 0);
 	if (p_size == (size_t)-1)
-		return __builtin_strcat(p, q);
+		return __underlying_strcat(p, q);
 	if (strlcat(p, q, p_size) >= p_size)
 		fortify_panic(__func__);
 	return p;
@@ -345,7 +370,7 @@ __FORTIFY_INLINE __kernel_size_t strlen(const char *p)
 	/* Work around gcc excess stack consumption issue */
 	if (p_size == (size_t)-1 ||
 	    (__builtin_constant_p(p[p_size - 1]) && p[p_size - 1] == '\0'))
-		return __builtin_strlen(p);
+		return __underlying_strlen(p);
 	ret = strnlen(p, p_size);
 	if (p_size <= ret)
 		fortify_panic(__func__);
@@ -378,7 +403,7 @@ __FORTIFY_INLINE size_t strlcpy(char *p, const char *q, size_t size)
 			__write_overflow();
 		if (len >= p_size)
 			fortify_panic(__func__);
-		__builtin_memcpy(p, q, len);
+		__underlying_memcpy(p, q, len);
 		p[len] = '\0';
 	}
 	return ret;
@@ -391,12 +416,12 @@ __FORTIFY_INLINE char *strncat(char *p, const char *q, __kernel_size_t count)
 	size_t p_size = __builtin_object_size(p, 0);
 	size_t q_size = __builtin_object_size(q, 0);
 	if (p_size == (size_t)-1 && q_size == (size_t)-1)
-		return __builtin_strncat(p, q, count);
+		return __underlying_strncat(p, q, count);
 	p_len = strlen(p);
 	copy_len = strnlen(q, count);
 	if (p_size < p_len + copy_len + 1)
 		fortify_panic(__func__);
-	__builtin_memcpy(p + p_len, q, copy_len);
+	__underlying_memcpy(p + p_len, q, copy_len);
 	p[p_len + copy_len] = '\0';
 	return p;
 }
 
@@ -408,7 +433,7 @@ __FORTIFY_INLINE void *memset(void *p, int c, __kernel_size_t size)
 		__write_overflow();
 	if (p_size < size)
 		fortify_panic(__func__);
-	return __builtin_memset(p, c, size);
+	return __underlying_memset(p, c, size);
 }
 
 __FORTIFY_INLINE void *memcpy(void *p, const void *q, __kernel_size_t size)
@@ -423,7 +448,7 @@ __FORTIFY_INLINE void *memcpy(void *p, const void *q, __kernel_size_t size)
 	}
 	if (p_size < size || q_size < size)
 		fortify_panic(__func__);
-	return __builtin_memcpy(p, q, size);
+	return __underlying_memcpy(p, q, size);
 }
 
 __FORTIFY_INLINE void *memmove(void *p, const void *q, __kernel_size_t size)
@@ -438,7 +463,7 @@ __FORTIFY_INLINE void *memmove(void *p, const void *q, __kernel_size_t size)
 	}
 	if (p_size < size || q_size < size)
 		fortify_panic(__func__);
-	return __builtin_memmove(p, q, size);
+	return __underlying_memmove(p, q, size);
 }
 
 extern void *__real_memscan(void *, int, __kernel_size_t) __RENAME(memscan);
@@ -464,7 +489,7 @@ __FORTIFY_INLINE int memcmp(const void *p, const void *q, __kernel_size_t size)
 	}
 	if (p_size < size || q_size < size)
 		fortify_panic(__func__);
-	return __builtin_memcmp(p, q, size);
+	return __underlying_memcmp(p, q, size);
 }
 
 __FORTIFY_INLINE void *memchr(const void *p, int c, __kernel_size_t size)
@@ -474,7 +499,7 @@ __FORTIFY_INLINE void *memchr(const void *p, int c, __kernel_size_t size)
 		__read_overflow();
 	if (p_size < size)
 		fortify_panic(__func__);
-	return __builtin_memchr(p, c, size);
+	return __underlying_memchr(p, c, size);
 }
 
 void *__real_memchr_inv(const void *s, int c, size_t n) __RENAME(memchr_inv);
@@ -505,7 +530,7 @@ __FORTIFY_INLINE char *strcpy(char *p, const char *q)
 	size_t p_size = __builtin_object_size(p, 0);
 	size_t q_size = __builtin_object_size(q, 0);
 	if (p_size == (size_t)-1 && q_size == (size_t)-1)
-		return __builtin_strcpy(p, q);
+		return __underlying_strcpy(p, q);
 	memcpy(p, q, strlen(q) + 1);
 	return p;
 }