From patchwork Wed Oct 16 08:39:52 2019
X-Patchwork-Submitter: Marco Elver
X-Patchwork-Id: 11192625
Date: Wed, 16 Oct 2019 10:39:52 +0200
In-Reply-To: <20191016083959.186860-1-elver@google.com>
Message-Id: <20191016083959.186860-2-elver@google.com>
References: <20191016083959.186860-1-elver@google.com>
X-Mailer: git-send-email 2.23.0.700.g56cf767bdb-goog
Subject: [PATCH 1/8] kcsan: Add Kernel Concurrency Sanitizer infrastructure
From: Marco Elver <elver@google.com>
To: elver@google.com
Cc: akiyks@gmail.com, stern@rowland.harvard.edu, glider@google.com, parri.andrea@gmail.com, andreyknvl@google.com, luto@kernel.org, ard.biesheuvel@linaro.org, arnd@arndb.de, boqun.feng@gmail.com, bp@alien8.de, dja@axtens.net, dlustig@nvidia.com, dave.hansen@linux.intel.com, dhowells@redhat.com, dvyukov@google.com, hpa@zytor.com, mingo@redhat.com, j.alglave@ucl.ac.uk, joel@joelfernandes.org, corbet@lwn.net, jpoimboe@redhat.com, luc.maranget@inria.fr, mark.rutland@arm.com, npiggin@gmail.com, paulmck@linux.ibm.com, peterz@infradead.org, tglx@linutronix.de, will@kernel.org, kasan-dev@googlegroups.com, linux-arch@vger.kernel.org, linux-doc@vger.kernel.org, linux-efi@vger.kernel.org, linux-kbuild@vger.kernel.org, linux-kernel@vger.kernel.org, linux-mm@kvack.org, x86@kernel.org

Kernel Concurrency Sanitizer (KCSAN) is a dynamic data-race detector for
kernel space. KCSAN is a sampling watchpoint-based data-race detector. See the
included Documentation/dev-tools/kcsan.rst for more details.

This patch adds basic infrastructure, but does not yet enable KCSAN for any
architecture.
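For illustration, the class of bug KCSAN is designed to find looks like the
following (hypothetical example; the names are made up and the snippet is not
part of this patch):

	int shared;				/* plain, unannotated */

	void thread1(void) { shared = 42; }	/* plain write ... */
	int  thread2(void) { return shared; }	/* ... racing with a plain read */

If one thread's instrumented access executes while the watchpoint set up for
the other's is active, KCSAN reports the pair as a data-race.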
Signed-off-by: Marco Elver <elver@google.com>
---
 Documentation/dev-tools/kcsan.rst | 202 +++++++++++++
 MAINTAINERS | 11 +
 Makefile | 3 +-
 include/linux/compiler-clang.h | 9 +
 include/linux/compiler-gcc.h | 7 +
 include/linux/compiler.h | 35 ++-
 include/linux/kcsan-checks.h | 116 ++++++++
 include/linux/kcsan.h | 85 ++++++
 include/linux/sched.h | 7 +
 init/init_task.c | 6 +
 init/main.c | 2 +
 kernel/Makefile | 1 +
 kernel/kcsan/Makefile | 14 +
 kernel/kcsan/atomic.c | 21 ++
 kernel/kcsan/core.c | 458 ++++++++++++++++++++++++++++++
 kernel/kcsan/debugfs.c | 225 +++++++++++++++
 kernel/kcsan/encoding.h | 94 ++++++
 kernel/kcsan/kcsan.c | 81 ++++++
 kernel/kcsan/kcsan.h | 140 +++++++++
 kernel/kcsan/report.c | 307 ++++++++++++++++++++
 kernel/kcsan/test.c | 117 ++++++++
 lib/Kconfig.debug | 2 +
 lib/Kconfig.kcsan | 88 ++++++
 lib/Makefile | 3 +
 scripts/Makefile.kcsan | 6 +
 scripts/Makefile.lib | 10 +
 26 files changed, 2041 insertions(+), 9 deletions(-)
 create mode 100644 Documentation/dev-tools/kcsan.rst
 create mode 100644 include/linux/kcsan-checks.h
 create mode 100644 include/linux/kcsan.h
 create mode 100644 kernel/kcsan/Makefile
 create mode 100644 kernel/kcsan/atomic.c
 create mode 100644 kernel/kcsan/core.c
 create mode 100644 kernel/kcsan/debugfs.c
 create mode 100644 kernel/kcsan/encoding.h
 create mode 100644 kernel/kcsan/kcsan.c
 create mode 100644 kernel/kcsan/kcsan.h
 create mode 100644 kernel/kcsan/report.c
 create mode 100644 kernel/kcsan/test.c
 create mode 100644 lib/Kconfig.kcsan
 create mode 100644 scripts/Makefile.kcsan

diff --git a/Documentation/dev-tools/kcsan.rst b/Documentation/dev-tools/kcsan.rst
new file mode 100644
index 000000000000..5b46cc5593c3
--- /dev/null
+++ b/Documentation/dev-tools/kcsan.rst
@@ -0,0 +1,202 @@
+The Kernel Concurrency Sanitizer (KCSAN)
+========================================
+
+Overview
+--------
+
+*Kernel Concurrency Sanitizer (KCSAN)* is a dynamic data-race detector for
+kernel space. KCSAN is a sampling watchpoint-based data-race detector -- this
+is unlike Kernel Thread Sanitizer (KTSAN), which is a happens-before data-race
+detector. Key priorities in KCSAN's design are lack of false positives,
+scalability, and simplicity. More details can be found in `Implementation
+Details`_.
+
+KCSAN uses compile-time instrumentation to instrument memory accesses. KCSAN is
+supported in both GCC and Clang. With GCC it requires version 7.3.0 or later.
+With Clang it requires version 7.0.0 or later.
+
+Usage
+-----
+
+To enable KCSAN, configure the kernel with::
+
+  CONFIG_KCSAN = y
+
+KCSAN provides several other configuration options to customize behaviour (see
+their respective help text for more info).
+
+debugfs
+~~~~~~~
+
+* The file ``/sys/kernel/debug/kcsan`` can be read to get stats.
+
+* KCSAN can be turned on or off by writing ``on`` or ``off`` to
+  ``/sys/kernel/debug/kcsan``.
+
+* Writing ``!some_func_name`` to ``/sys/kernel/debug/kcsan`` adds
+  ``some_func_name`` to the report filter list, which (by default) blacklists
+  reporting data-races where one of the top stackframes is a function in the
+  list.
+
+* Writing either ``blacklist`` or ``whitelist`` to ``/sys/kernel/debug/kcsan``
+  changes the report filtering behaviour. For example, the blacklist feature
+  can be used to silence frequently occurring data-races; the whitelist feature
+  can help with reproduction and testing of fixes.
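+
+For example, silencing a frequently reported function might look as follows
+(illustrative session; ``kernfs_refresh_inode`` stands in for any noisy
+function)::
+
+  echo off > /sys/kernel/debug/kcsan
+  echo '!kernfs_refresh_inode' > /sys/kernel/debug/kcsan
+  echo on > /sys/kernel/debug/kcsan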
+ +Error reports +~~~~~~~~~~~~~ + +A typical data-race report looks like this:: + + ================================================================== + BUG: KCSAN: data-race in generic_permission / kernfs_refresh_inode + + write to 0xffff8fee4c40700c of 4 bytes by task 175 on cpu 4: + kernfs_refresh_inode+0x70/0x170 + kernfs_iop_permission+0x4f/0x90 + inode_permission+0x190/0x200 + link_path_walk.part.0+0x503/0x8e0 + path_lookupat.isra.0+0x69/0x4d0 + filename_lookup+0x136/0x280 + user_path_at_empty+0x47/0x60 + vfs_statx+0x9b/0x130 + __do_sys_newlstat+0x50/0xb0 + __x64_sys_newlstat+0x37/0x50 + do_syscall_64+0x85/0x260 + entry_SYSCALL_64_after_hwframe+0x44/0xa9 + + read to 0xffff8fee4c40700c of 4 bytes by task 166 on cpu 6: + generic_permission+0x5b/0x2a0 + kernfs_iop_permission+0x66/0x90 + inode_permission+0x190/0x200 + link_path_walk.part.0+0x503/0x8e0 + path_lookupat.isra.0+0x69/0x4d0 + filename_lookup+0x136/0x280 + user_path_at_empty+0x47/0x60 + do_faccessat+0x11a/0x390 + __x64_sys_access+0x3c/0x50 + do_syscall_64+0x85/0x260 + entry_SYSCALL_64_after_hwframe+0x44/0xa9 + + Reported by Kernel Concurrency Sanitizer on: + CPU: 6 PID: 166 Comm: systemd-journal Not tainted 5.3.0-rc7+ #1 + Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.12.0-1 04/01/2014 + ================================================================== + +The header of the report provides a short summary of the functions involved in +the race. It is followed by the access types and stack traces of the 2 threads +involved in the data-race. + +The other less common type of data-race report looks like this:: + + ================================================================== + BUG: KCSAN: racing read in e1000_clean_rx_irq+0x551/0xb10 + + race at unknown origin, with read to 0xffff933db8a2ae6c of 1 bytes by interrupt on cpu 0: + e1000_clean_rx_irq+0x551/0xb10 + e1000_clean+0x533/0xda0 + net_rx_action+0x329/0x900 + __do_softirq+0xdb/0x2db + irq_exit+0x9b/0xa0 + do_IRQ+0x9c/0xf0 + ret_from_intr+0x0/0x18 + default_idle+0x3f/0x220 + arch_cpu_idle+0x21/0x30 + do_idle+0x1df/0x230 + cpu_startup_entry+0x14/0x20 + rest_init+0xc5/0xcb + arch_call_rest_init+0x13/0x2b + start_kernel+0x6db/0x700 + + Reported by Kernel Concurrency Sanitizer on: + CPU: 0 PID: 0 Comm: swapper/0 Not tainted 5.3.0-rc7+ #2 + Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.12.0-1 04/01/2014 + ================================================================== + +This report is generated where it was not possible to determine the other +racing thread, but a race was inferred due to the data-value of the watched +memory location having changed. These can occur either due to missing +instrumentation or e.g. DMA accesses. + +Data-Races +---------- + +Informally, two operations *conflict* if they access the same memory location, +and at least one of them is a write operation. In an execution, two memory +operations from different threads form a **data-race** if they *conflict*, at +least one of them is a *plain access* (non-atomic), and they are *unordered* in +the "happens-before" order according to the `LKMM +<../../tools/memory-model/Documentation/explanation.txt>`_. + +Relationship with the Linux Kernel Memory Model (LKMM) +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +The LKMM defines the propagation and ordering rules of various memory +operations, which gives developers the ability to reason about concurrent code. 
+Ultimately this allows developers to determine the possible executions of
+concurrent code, and whether that code is free from data-races.
+
+KCSAN is aware of *atomic* accesses (``READ_ONCE``, ``WRITE_ONCE``,
+``atomic_*``, etc.), but is oblivious of any ordering guarantees. In other
+words, KCSAN assumes that as long as a plain access is not observed to race
+with another conflicting access, memory operations are correctly ordered.
+
+This means that KCSAN will not report *potential* data-races due to missing
+memory ordering. If, however, missing memory ordering (that is observable with
+a particular compiler and architecture) leads to an observable data-race (e.g.
+entering a critical section erroneously), KCSAN would report the resulting
+data-race.
+
+Implementation Details
+----------------------
+
+The general approach is inspired by `DataCollider
+`_.
+Unlike DataCollider, KCSAN does not use hardware watchpoints, but instead
+relies on compiler instrumentation. Watchpoints are implemented using an
+efficient encoding that stores access type, size, and address in a long; the
+benefits of using "soft watchpoints" are portability and greater flexibility in
+limiting which accesses trigger a watchpoint.
+
+More specifically, KCSAN requires instrumenting plain (unmarked, non-atomic)
+memory operations; for each instrumented plain access:
+
+1. Check if a matching watchpoint exists; if yes, and at least one access is a
+   write, then we encountered a racing access.
+
+2. Periodically, if no matching watchpoint exists, set up a watchpoint and
+   stall for some delay.
+
+3. Also check the data value before the delay, and re-check the data value
+   after the delay; if the values mismatch, we infer a race of unknown origin.
+
+To detect data-races between plain and atomic memory operations, KCSAN also
+annotates atomic accesses, but only to check if a watchpoint exists
+(``kcsan_check_atomic(..)``); i.e. KCSAN never sets up a watchpoint on atomic
+accesses.
+
+Key Properties
+~~~~~~~~~~~~~~
+
+1. **Performance Overhead:** KCSAN's runtime is minimal, and does not require
+   locking shared state for each access. This results in significantly better
+   performance in comparison with KTSAN.
+
+2. **Memory Overhead:** No shadow memory is required. The current
+   implementation uses a small array of longs to encode watchpoint information,
+   which is negligible.
+
+3. **Memory Ordering:** KCSAN is *not* aware of the LKMM's ordering rules. This
+   may result in missed data-races (false negatives), compared to a
+   happens-before data-race detector such as KTSAN.
+
+4. **Accuracy:** Imprecise, since it uses a sampling strategy.
+
+5. **Annotation Overheads:** Minimal annotation is required outside the KCSAN
+   runtime. With a happens-before data-race detector, any omission leads to
+   false positives, which is especially important in the context of the kernel,
+   which includes numerous custom synchronization mechanisms. With KCSAN, as a
+   result, maintenance overheads are minimal as the kernel evolves.
+
+6. **Detects Racy Writes from Devices:** Due to checking data values upon
+   setting up watchpoints, racy writes from devices can also be detected.
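+
+As a rough sketch (simplified; the real instrumentation entry points are the
+``__tsan_read``/``__tsan_write`` functions of the matching access size), the
+check emitted for each instrumented plain access reduces to::
+
+  if (__kcsan_check_watchpoint(ptr, size, is_write))
+          __kcsan_setup_watchpoint(ptr, size, is_write); /* only if sampled */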
diff --git a/MAINTAINERS b/MAINTAINERS index 0154674cbad3..71f7fb625490 100644 --- a/MAINTAINERS +++ b/MAINTAINERS @@ -8847,6 +8847,17 @@ F: Documentation/kbuild/kconfig* F: scripts/kconfig/ F: scripts/Kconfig.include +KCSAN +M: Marco Elver +R: Dmitry Vyukov +L: kasan-dev@googlegroups.com +S: Maintained +F: Documentation/dev-tools/kcsan.rst +F: include/linux/kcsan*.h +F: kernel/kcsan/ +F: lib/Kconfig.kcsan +F: scripts/Makefile.kcsan + KDUMP M: Dave Young M: Baoquan He diff --git a/Makefile b/Makefile index ffd7a912fc46..ad4729176252 100644 --- a/Makefile +++ b/Makefile @@ -478,7 +478,7 @@ export KBUILD_HOSTCXXFLAGS KBUILD_HOSTLDFLAGS KBUILD_HOSTLDLIBS LDFLAGS_MODULE export KBUILD_CPPFLAGS NOSTDINC_FLAGS LINUXINCLUDE OBJCOPYFLAGS KBUILD_LDFLAGS export KBUILD_CFLAGS CFLAGS_KERNEL CFLAGS_MODULE -export CFLAGS_KASAN CFLAGS_KASAN_NOSANITIZE CFLAGS_UBSAN +export CFLAGS_KASAN CFLAGS_KASAN_NOSANITIZE CFLAGS_UBSAN CFLAGS_KCSAN export KBUILD_AFLAGS AFLAGS_KERNEL AFLAGS_MODULE export KBUILD_AFLAGS_MODULE KBUILD_CFLAGS_MODULE KBUILD_LDFLAGS_MODULE export KBUILD_AFLAGS_KERNEL KBUILD_CFLAGS_KERNEL @@ -900,6 +900,7 @@ endif include scripts/Makefile.kasan include scripts/Makefile.extrawarn include scripts/Makefile.ubsan +include scripts/Makefile.kcsan # Add user supplied CPPFLAGS, AFLAGS and CFLAGS as the last assignments KBUILD_CPPFLAGS += $(KCPPFLAGS) diff --git a/include/linux/compiler-clang.h b/include/linux/compiler-clang.h index 333a6695a918..a213eb55e725 100644 --- a/include/linux/compiler-clang.h +++ b/include/linux/compiler-clang.h @@ -24,6 +24,15 @@ #define __no_sanitize_address #endif +#if __has_feature(thread_sanitizer) +/* emulate gcc's __SANITIZE_THREAD__ flag */ +#define __SANITIZE_THREAD__ +#define __no_sanitize_thread \ + __attribute__((no_sanitize("thread"))) +#else +#define __no_sanitize_thread +#endif + /* * Not all versions of clang implement the the type-generic versions * of the builtin overflow checkers. 
Fortunately, clang implements diff --git a/include/linux/compiler-gcc.h b/include/linux/compiler-gcc.h index d7ee4c6bad48..de105ca29282 100644 --- a/include/linux/compiler-gcc.h +++ b/include/linux/compiler-gcc.h @@ -145,6 +145,13 @@ #define __no_sanitize_address #endif +#if __has_attribute(__no_sanitize_thread__) && defined(__SANITIZE_THREAD__) +#define __no_sanitize_thread \ + __attribute__((__noinline__)) __attribute__((no_sanitize_thread)) +#else +#define __no_sanitize_thread +#endif + #if GCC_VERSION >= 50100 #define COMPILER_HAS_GENERIC_BUILTIN_OVERFLOW 1 #endif diff --git a/include/linux/compiler.h b/include/linux/compiler.h index 5e88e7e33abe..0a7467477f84 100644 --- a/include/linux/compiler.h +++ b/include/linux/compiler.h @@ -178,6 +178,7 @@ void ftrace_likely_update(struct ftrace_likely_data *f, int val, #endif #include +#include #define __READ_ONCE_SIZE \ ({ \ @@ -193,12 +194,6 @@ void ftrace_likely_update(struct ftrace_likely_data *f, int val, } \ }) -static __always_inline -void __read_once_size(const volatile void *p, void *res, int size) -{ - __READ_ONCE_SIZE; -} - #ifdef CONFIG_KASAN /* * We can't declare function 'inline' because __no_sanitize_address confilcts @@ -211,14 +206,38 @@ void __read_once_size(const volatile void *p, void *res, int size) # define __no_kasan_or_inline __always_inline #endif -static __no_kasan_or_inline +#ifdef CONFIG_KCSAN +# define __no_kcsan_or_inline __no_sanitize_thread notrace __maybe_unused +#else +# define __no_kcsan_or_inline __always_inline +#endif + +#if defined(CONFIG_KASAN) || defined(CONFIG_KCSAN) +/* Avoid any instrumentation or inline. */ +#define __no_sanitize_or_inline \ + __no_sanitize_address __no_sanitize_thread notrace __maybe_unused +#else +#define __no_sanitize_or_inline __always_inline +#endif + +static __no_kcsan_or_inline +void __read_once_size(const volatile void *p, void *res, int size) +{ + kcsan_check_atomic((const void *)p, size, false); + __READ_ONCE_SIZE; +} + +static __no_sanitize_or_inline void __read_once_size_nocheck(const volatile void *p, void *res, int size) { __READ_ONCE_SIZE; } -static __always_inline void __write_once_size(volatile void *p, void *res, int size) +static __no_kcsan_or_inline +void __write_once_size(volatile void *p, void *res, int size) { + kcsan_check_atomic((const void *)p, size, true); + switch (size) { case 1: *(volatile __u8 *)p = *(__u8 *)res; break; case 2: *(volatile __u16 *)p = *(__u16 *)res; break; diff --git a/include/linux/kcsan-checks.h b/include/linux/kcsan-checks.h new file mode 100644 index 000000000000..bee619b66e1c --- /dev/null +++ b/include/linux/kcsan-checks.h @@ -0,0 +1,116 @@ +/* SPDX-License-Identifier: GPL-2.0 */ + +#ifndef _LINUX_KCSAN_CHECKS_H +#define _LINUX_KCSAN_CHECKS_H + +#include + +/* + * __kcsan_*: Always available when KCSAN is enabled. This may be used + * even in compilation units that selectively disable KCSAN, but must use KCSAN + * to validate access to an address. Never use these in header files! + */ +#ifdef CONFIG_KCSAN +/** + * __kcsan_check_watchpoint - check if a watchpoint exists + * + * Returns true if no race was detected, and we may then proceed to set up a + * watchpoint after. Returns false if either KCSAN is disabled or a race was + * encountered, and we may not set up a watchpoint after. + * + * @ptr address of access + * @size size of access + * @is_write is access a write + * @return true if no race was detected, false otherwise. 
+ */ +bool __kcsan_check_watchpoint(const volatile void *ptr, size_t size, + bool is_write); + +/** + * __kcsan_setup_watchpoint - set up watchpoint and report data-races + * + * Sets up a watchpoint (if sampled), and if a racing access was observed, + * reports the data-race. + * + * @ptr address of access + * @size size of access + * @is_write is access a write + */ +void __kcsan_setup_watchpoint(const volatile void *ptr, size_t size, + bool is_write); +#else +static inline bool __kcsan_check_watchpoint(const volatile void *ptr, + size_t size, bool is_write) +{ + return true; +} +static inline void __kcsan_setup_watchpoint(const volatile void *ptr, + size_t size, bool is_write) +{ +} +#endif + +/* + * kcsan_*: Only available when the particular compilation unit has KCSAN + * instrumentation enabled. May be used in header files. + */ +#ifdef __SANITIZE_THREAD__ +#define kcsan_check_watchpoint __kcsan_check_watchpoint +#define kcsan_setup_watchpoint __kcsan_setup_watchpoint +#else +static inline bool kcsan_check_watchpoint(const volatile void *ptr, size_t size, + bool is_write) +{ + return true; +} +static inline void kcsan_setup_watchpoint(const volatile void *ptr, size_t size, + bool is_write) +{ +} +#endif + +/** + * __kcsan_check_access - check regular access for data-races + * + * Full access that checks watchpoint and sets up a watchpoint if this access is + * sampled. + * + * @ptr address of access + * @size size of access + * @is_write is access a write + */ +#define __kcsan_check_access(ptr, size, is_write) \ + do { \ + if (__kcsan_check_watchpoint(ptr, size, is_write) && \ + !(IS_ENABLED(CONFIG_KCSAN_PLAIN_WRITE_PRETEND_ONCE) && \ + is_write)) \ + __kcsan_setup_watchpoint(ptr, size, is_write); \ + } while (0) +/** + * kcsan_check_access - check regular access for data-races + * + * @ptr address of access + * @size size of access + * @is_write is access a write + */ +#define kcsan_check_access(ptr, size, is_write) \ + do { \ + if (kcsan_check_watchpoint(ptr, size, is_write) && \ + !(IS_ENABLED(CONFIG_KCSAN_PLAIN_WRITE_PRETEND_ONCE) && \ + is_write)) \ + kcsan_setup_watchpoint(ptr, size, is_write); \ + } while (0) + +/* + * Check for atomic accesses: if atomics are not ignored, this simply aliases to + * kcsan_check_watchpoint, otherwise becomes a no-op. + */ +#ifdef CONFIG_KCSAN_IGNORE_ATOMICS +#define kcsan_check_atomic(...) \ + do { \ + } while (0) +#else +#define kcsan_check_atomic kcsan_check_watchpoint +#endif + +#endif /* _LINUX_KCSAN_CHECKS_H */ diff --git a/include/linux/kcsan.h b/include/linux/kcsan.h new file mode 100644 index 000000000000..18c660628376 --- /dev/null +++ b/include/linux/kcsan.h @@ -0,0 +1,85 @@ +/* SPDX-License-Identifier: GPL-2.0 */ + +#ifndef _LINUX_KCSAN_H +#define _LINUX_KCSAN_H + +#include +#include + +#ifdef CONFIG_KCSAN + +/** + * kcsan_init - initialize KCSAN runtime + */ +void kcsan_init(void); + +/** + * kcsan_disable_current - disable KCSAN for the current context + * + * Supports nesting. + */ +void kcsan_disable_current(void); + +/** + * kcsan_enable_current - re-enable KCSAN for the current context + * + * Supports nesting. + */ +void kcsan_enable_current(void); + +/** + * kcsan_begin_atomic - use to denote an atomic region + * + * Accesses within the atomic region may appear to race with other accesses but + * should be considered atomic. 
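+ * One example is a seqlock reader critical section, where racing with the
+ * writer is expected and is resolved by retrying the read.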
+ * + * @nest true if regions may be nested, or false for flat region + */ +void kcsan_begin_atomic(bool nest); + +/** + * kcsan_end_atomic - end atomic region + * + * @nest must match argument to kcsan_begin_atomic(). + */ +void kcsan_end_atomic(bool nest); + +/** + * kcsan_atomic_next - consider following accesses as atomic + * + * Force treating the next n memory accesses for the current context as atomic + * operations. + * + * @n number of following memory accesses to treat as atomic. + */ +void kcsan_atomic_next(int n); + +#else /* CONFIG_KCSAN */ + +static inline void kcsan_init(void) +{ +} + +static inline void kcsan_disable_current(void) +{ +} + +static inline void kcsan_enable_current(void) +{ +} + +static inline void kcsan_begin_atomic(bool nest) +{ +} + +static inline void kcsan_end_atomic(bool nest) +{ +} + +static inline void kcsan_atomic_next(int n) +{ +} + +#endif /* CONFIG_KCSAN */ + +#endif /* _LINUX_KCSAN_H */ diff --git a/include/linux/sched.h b/include/linux/sched.h index 2c2e56bd8913..34a1d9310304 100644 --- a/include/linux/sched.h +++ b/include/linux/sched.h @@ -1171,6 +1171,13 @@ struct task_struct { #ifdef CONFIG_KASAN unsigned int kasan_depth; #endif +#ifdef CONFIG_KCSAN + /* See comments at kernel/kcsan/core.c: struct cpu_state. */ + int kcsan_disable; + int kcsan_atomic_next; + int kcsan_atomic_region; + bool kcsan_atomic_region_flat; +#endif #ifdef CONFIG_FUNCTION_GRAPH_TRACER /* Index of current stored address in ret_stack: */ diff --git a/init/init_task.c b/init/init_task.c index 9e5cbe5eab7b..f98fc4c9f635 100644 --- a/init/init_task.c +++ b/init/init_task.c @@ -161,6 +161,12 @@ struct task_struct init_task #ifdef CONFIG_KASAN .kasan_depth = 1, #endif +#ifdef CONFIG_KCSAN + .kcsan_disable = 1, + .kcsan_atomic_next = 0, + .kcsan_atomic_region = 0, + .kcsan_atomic_region_flat = 0, +#endif #ifdef CONFIG_TRACE_IRQFLAGS .softirqs_enabled = 1, #endif diff --git a/init/main.c b/init/main.c index 91f6ebb30ef0..4d814de017ee 100644 --- a/init/main.c +++ b/init/main.c @@ -93,6 +93,7 @@ #include #include #include +#include #include #include @@ -779,6 +780,7 @@ asmlinkage __visible void __init start_kernel(void) acpi_subsystem_init(); arch_post_acpi_subsys_init(); sfi_init_late(); + kcsan_init(); /* Do the rest non-__init'ed, we're now alive */ arch_call_rest_init(); diff --git a/kernel/Makefile b/kernel/Makefile index daad787fb795..74ab46e2ebd1 100644 --- a/kernel/Makefile +++ b/kernel/Makefile @@ -102,6 +102,7 @@ obj-$(CONFIG_TRACEPOINTS) += trace/ obj-$(CONFIG_IRQ_WORK) += irq_work.o obj-$(CONFIG_CPU_PM) += cpu_pm.o obj-$(CONFIG_BPF) += bpf/ +obj-$(CONFIG_KCSAN) += kcsan/ obj-$(CONFIG_PERF_EVENTS) += events/ diff --git a/kernel/kcsan/Makefile b/kernel/kcsan/Makefile new file mode 100644 index 000000000000..c25f07062d26 --- /dev/null +++ b/kernel/kcsan/Makefile @@ -0,0 +1,14 @@ +# SPDX-License-Identifier: GPL-2.0 +KCSAN_SANITIZE := n +KCOV_INSTRUMENT := n + +CFLAGS_REMOVE_kcsan.o = $(CC_FLAGS_FTRACE) +CFLAGS_REMOVE_core.o = $(CC_FLAGS_FTRACE) +CFLAGS_REMOVE_atomic.o = $(CC_FLAGS_FTRACE) + +CFLAGS_kcsan.o = $(call cc-option, -fno-conserve-stack -fno-stack-protector) +CFLAGS_core.o = $(call cc-option, -fno-conserve-stack -fno-stack-protector) +CFLAGS_atomic.o = $(call cc-option, -fno-conserve-stack -fno-stack-protector) + +obj-y := kcsan.o core.o atomic.o debugfs.o report.o +obj-$(CONFIG_KCSAN_SELFTEST) += test.o diff --git a/kernel/kcsan/atomic.c b/kernel/kcsan/atomic.c new file mode 100644 index 000000000000..dd44f7d9e491 --- /dev/null +++ b/kernel/kcsan/atomic.c @@ 
-0,0 +1,21 @@ +// SPDX-License-Identifier: GPL-2.0 + +#include + +#include "kcsan.h" + +/* + * List all volatile globals that have been observed in races, to suppress + * data-race reports between accesses to these variables. + * + * For now, we assume that volatile accesses of globals are as strong as atomic + * accesses (READ_ONCE, WRITE_ONCE cast to volatile). The situation is still not + * entirely clear, as on some architectures (Alpha) READ_ONCE/WRITE_ONCE do more + * than cast to volatile. Eventually, we hope to be able to remove this + * function. + */ +bool kcsan_is_atomic(const volatile void *ptr) +{ + /* only jiffies for now */ + return ptr == &jiffies; +} diff --git a/kernel/kcsan/core.c b/kernel/kcsan/core.c new file mode 100644 index 000000000000..e8c3823bf7c4 --- /dev/null +++ b/kernel/kcsan/core.c @@ -0,0 +1,458 @@ +// SPDX-License-Identifier: GPL-2.0 + +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include + +#include "kcsan.h" +#include "encoding.h" + +/* + * Helper macros to iterate slots, starting from address slot itself, followed + * by the right and left slots. + */ +#define CHECK_NUM_SLOTS (1 + 2 * KCSAN_CHECK_ADJACENT) +#define SLOT_IDX(slot, i) \ + ((slot + (((i + KCSAN_CHECK_ADJACENT) % CHECK_NUM_SLOTS) - \ + KCSAN_CHECK_ADJACENT)) % \ + KCSAN_NUM_WATCHPOINTS) + +bool kcsan_enabled; + +/* + * Per-CPU state that should be used instead of 'current' if we are not in a + * task. + */ +struct cpu_state { + int disable; /* disable counter */ + int atomic_next; /* number of following atomic ops */ + + /* + * We use separate variables to store if we are in a nestable or flat + * atomic region. This helps make sure that an atomic region with + * nesting support is not suddenly aborted when a flat region is + * contained within. Effectively this allows supporting nesting flat + * atomic regions within an outer nestable atomic region. Support for + * this is required as there are cases where a seqlock reader critical + * section (flat atomic region) is contained within a seqlock writer + * critical section (nestable atomic region), and the "mismatching + * kcsan_end_atomic()" warning would trigger otherwise. + */ + int atomic_region; + bool atomic_region_flat; +}; +static DEFINE_PER_CPU(struct cpu_state, this_state) = { + .disable = 0, + .atomic_next = 0, + .atomic_region = 0, + .atomic_region_flat = 0, +}; + +/* + * Watchpoints, with each entry encoded as defined in encoding.h: in order to be + * able to safely update and access a watchpoint without introducing locking + * overhead, we encode each watchpoint as a single atomic long. The initial + * zero-initialized state matches INVALID_WATCHPOINT. + */ +static atomic_long_t watchpoints[KCSAN_NUM_WATCHPOINTS]; + +/* + * Instructions skipped counter; see should_watch(). 
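+ * Incremented on each candidate plain access; a watchpoint is only set up
+ * once every CONFIG_KCSAN_WATCH_SKIP_INST such accesses per CPU.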
+ */ +static DEFINE_PER_CPU(unsigned long, kcsan_skip); + +static inline atomic_long_t *find_watchpoint(unsigned long addr, size_t size, + bool expect_write, + long *encoded_watchpoint) +{ + const int slot = watchpoint_slot(addr); + const unsigned long addr_masked = addr & WATCHPOINT_ADDR_MASK; + atomic_long_t *watchpoint; + unsigned long wp_addr_masked; + size_t wp_size; + bool is_write; + int i; + + for (i = 0; i < CHECK_NUM_SLOTS; ++i) { + watchpoint = &watchpoints[SLOT_IDX(slot, i)]; + *encoded_watchpoint = atomic_long_read(watchpoint); + if (!decode_watchpoint(*encoded_watchpoint, &wp_addr_masked, + &wp_size, &is_write)) + continue; + + if (expect_write && !is_write) + continue; + + /* Check if the watchpoint matches the access. */ + if (matching_access(wp_addr_masked, wp_size, addr_masked, size)) + return watchpoint; + } + + return NULL; +} + +static inline atomic_long_t *insert_watchpoint(unsigned long addr, size_t size, + bool is_write) +{ + const int slot = watchpoint_slot(addr); + const long encoded_watchpoint = encode_watchpoint(addr, size, is_write); + atomic_long_t *watchpoint; + int i; + + for (i = 0; i < CHECK_NUM_SLOTS; ++i) { + long expect_val = INVALID_WATCHPOINT; + + /* Try to acquire this slot. */ + watchpoint = &watchpoints[SLOT_IDX(slot, i)]; + if (atomic_long_try_cmpxchg_relaxed(watchpoint, &expect_val, + encoded_watchpoint)) + return watchpoint; + } + + return NULL; +} + +/* + * Return true if watchpoint was successfully consumed, false otherwise. + * + * This may return false if: + * + * 1. another thread already consumed the watchpoint; + * 2. the thread that set up the watchpoint already removed it; + * 3. the watchpoint was removed and then re-used. + */ +static inline bool try_consume_watchpoint(atomic_long_t *watchpoint, + long encoded_watchpoint) +{ + return atomic_long_try_cmpxchg_relaxed(watchpoint, &encoded_watchpoint, + CONSUMED_WATCHPOINT); +} + +/* + * Return true if watchpoint was not touched, false if consumed. + */ +static inline bool remove_watchpoint(atomic_long_t *watchpoint) +{ + return atomic_long_xchg_relaxed(watchpoint, INVALID_WATCHPOINT) != + CONSUMED_WATCHPOINT; +} + +static inline bool is_atomic(const volatile void *ptr) +{ + if (in_task()) { + if (unlikely(current->kcsan_atomic_next > 0)) { + --current->kcsan_atomic_next; + return true; + } + if (unlikely(current->kcsan_atomic_region > 0 || + current->kcsan_atomic_region_flat)) + return true; + } else { /* interrupt */ + if (unlikely(this_cpu_read(this_state.atomic_next) > 0)) { + this_cpu_dec(this_state.atomic_next); + return true; + } + if (unlikely(this_cpu_read(this_state.atomic_region) > 0 || + this_cpu_read(this_state.atomic_region_flat))) + return true; + } + + return kcsan_is_atomic(ptr); +} + +static inline bool should_watch(const volatile void *ptr) +{ + /* + * Never set up watchpoints when memory operations are atomic. + * + * We need to check this first, because: 1) atomics should not count + * towards skipped instructions below, and 2) to actually decrement + * kcsan_atomic_next for each atomic. + */ + if (is_atomic(ptr)) + return false; + + /* + * We use a per-CPU counter, to avoid excessive contention; there is + * still enough non-determinism for the precise instructions that end up + * being watched to be mostly unpredictable. Using a PRNG like + * prandom_u32() turned out to be too slow. + */ + return (this_cpu_inc_return(kcsan_skip) % + CONFIG_KCSAN_WATCH_SKIP_INST) == 0; +} + +static inline bool is_enabled(void) +{ + return READ_ONCE(kcsan_enabled) && + (in_task() ? 
current->kcsan_disable : + this_cpu_read(this_state.disable)) == 0; +} + +static inline unsigned int get_delay(void) +{ + unsigned int max_delay = in_task() ? CONFIG_KCSAN_UDELAY_MAX_TASK : + CONFIG_KCSAN_UDELAY_MAX_INTERRUPT; + return IS_ENABLED(CONFIG_KCSAN_DELAY_RANDOMIZE) ? + ((prandom_u32() % max_delay) + 1) : + max_delay; +} + +/* === Public interface ===================================================== */ + +void __init kcsan_init(void) +{ + BUG_ON(!in_task()); + + kcsan_debugfs_init(); + kcsan_enable_current(); +#ifdef CONFIG_KCSAN_EARLY_ENABLE + /* + * We are in the init task, and no other tasks should be running. + */ + WRITE_ONCE(kcsan_enabled, true); +#endif +} + +/* === Exported interface =================================================== */ + +void kcsan_disable_current(void) +{ + if (in_task()) + ++current->kcsan_disable; + else + this_cpu_inc(this_state.disable); +} +EXPORT_SYMBOL(kcsan_disable_current); + +void kcsan_enable_current(void) +{ + int prev = in_task() ? current->kcsan_disable-- : + (this_cpu_dec_return(this_state.disable) + 1); + if (prev == 0) { + kcsan_disable_current(); /* restore to 0 */ + kcsan_disable_current(); + WARN(1, "mismatching %s", __func__); + kcsan_enable_current(); + } +} +EXPORT_SYMBOL(kcsan_enable_current); + +void kcsan_begin_atomic(bool nest) +{ + if (nest) { + if (in_task()) + ++current->kcsan_atomic_region; + else + this_cpu_inc(this_state.atomic_region); + } else { + if (in_task()) + current->kcsan_atomic_region_flat = true; + else + this_cpu_write(this_state.atomic_region_flat, true); + } +} +EXPORT_SYMBOL(kcsan_begin_atomic); + +void kcsan_end_atomic(bool nest) +{ + if (nest) { + int prev = + in_task() ? + current->kcsan_atomic_region-- : + (this_cpu_dec_return(this_state.atomic_region) + + 1); + if (prev == 0) { + kcsan_begin_atomic(true); /* restore to 0 */ + kcsan_disable_current(); + WARN(1, "mismatching %s", __func__); + kcsan_enable_current(); + } + } else { + if (in_task()) + current->kcsan_atomic_region_flat = false; + else + this_cpu_write(this_state.atomic_region_flat, false); + } +} +EXPORT_SYMBOL(kcsan_end_atomic); + +void kcsan_atomic_next(int n) +{ + if (in_task()) + current->kcsan_atomic_next = n; + else + this_cpu_write(this_state.atomic_next, n); +} +EXPORT_SYMBOL(kcsan_atomic_next); + +bool __kcsan_check_watchpoint(const volatile void *ptr, size_t size, + bool is_write) +{ + atomic_long_t *watchpoint; + long encoded_watchpoint; + unsigned long flags; + enum kcsan_report_type report_type; + + if (unlikely(!is_enabled())) + return false; + + watchpoint = find_watchpoint((unsigned long)ptr, size, !is_write, + &encoded_watchpoint); + if (watchpoint == NULL) + return true; + + flags = user_access_save(); + if (!try_consume_watchpoint(watchpoint, encoded_watchpoint)) { + /* + * The other thread may not print any diagnostics, as it has + * already removed the watchpoint, or another thread consumed + * the watchpoint before this thread. + */ + kcsan_counter_inc(kcsan_counter_report_races); + report_type = kcsan_report_race_check_race; + } else { + report_type = kcsan_report_race_check; + } + + /* Encountered a data-race. 
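+	 * For the kcsan_report_race_check_race type, kcsan_report() below is
+	 * a no-op; only the counters are updated.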
*/ + kcsan_counter_inc(kcsan_counter_data_races); + kcsan_report(ptr, size, is_write, raw_smp_processor_id(), report_type); + + user_access_restore(flags); + return false; +} +EXPORT_SYMBOL(__kcsan_check_watchpoint); + +void __kcsan_setup_watchpoint(const volatile void *ptr, size_t size, + bool is_write) +{ + atomic_long_t *watchpoint; + union { + u8 _1; + u16 _2; + u32 _4; + u64 _8; + } expect_value; + bool is_expected = true; + unsigned long ua_flags = user_access_save(); + unsigned long irq_flags; + + if (!should_watch(ptr)) + goto out; + + if (!check_encodable((unsigned long)ptr, size)) { + kcsan_counter_inc(kcsan_counter_unencodable_accesses); + goto out; + } + + /* + * Disable interrupts & preemptions, to ignore races due to accesses in + * threads running on the same CPU. + */ + local_irq_save(irq_flags); + preempt_disable(); + + watchpoint = insert_watchpoint((unsigned long)ptr, size, is_write); + if (watchpoint == NULL) { + /* + * Out of capacity: the size of `watchpoints`, and the frequency + * with which `should_watch()` returns true should be tweaked so + * that this case happens very rarely. + */ + kcsan_counter_inc(kcsan_counter_no_capacity); + goto out_unlock; + } + + kcsan_counter_inc(kcsan_counter_setup_watchpoints); + kcsan_counter_inc(kcsan_counter_used_watchpoints); + + /* + * Read the current value, to later check and infer a race if the data + * was modified via a non-instrumented access, e.g. from a device. + */ + switch (size) { + case 1: + expect_value._1 = READ_ONCE(*(const u8 *)ptr); + break; + case 2: + expect_value._2 = READ_ONCE(*(const u16 *)ptr); + break; + case 4: + expect_value._4 = READ_ONCE(*(const u32 *)ptr); + break; + case 8: + expect_value._8 = READ_ONCE(*(const u64 *)ptr); + break; + default: + break; /* ignore; we do not diff the values */ + } + +#ifdef CONFIG_KCSAN_DEBUG + kcsan_disable_current(); + pr_err("KCSAN: watching %s, size: %zu, addr: %px [slot: %d, encoded: %lx]\n", + is_write ? "write" : "read", size, ptr, + watchpoint_slot((unsigned long)ptr), + encode_watchpoint((unsigned long)ptr, size, is_write)); + kcsan_enable_current(); +#endif + + /* + * Delay this thread, to increase probability of observing a racy + * conflicting access. + */ + udelay(get_delay()); + + /* + * Re-read value, and check if it is as expected; if not, we infer a + * racy access. + */ + switch (size) { + case 1: + is_expected = expect_value._1 == READ_ONCE(*(const u8 *)ptr); + break; + case 2: + is_expected = expect_value._2 == READ_ONCE(*(const u16 *)ptr); + break; + case 4: + is_expected = expect_value._4 == READ_ONCE(*(const u32 *)ptr); + break; + case 8: + is_expected = expect_value._8 == READ_ONCE(*(const u64 *)ptr); + break; + default: + break; /* ignore; we do not diff the values */ + } + + /* Check if this access raced with another. */ + if (!remove_watchpoint(watchpoint)) { + /* + * No need to increment 'race' counter, as the racing thread + * already did. + */ + kcsan_report(ptr, size, is_write, smp_processor_id(), + kcsan_report_race_setup); + } else if (!is_expected) { + /* Inferring a race, since the value should not have changed. 
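+		 * This catches modifications by uninstrumented code, e.g.
+		 * writes from devices (DMA) or from code built without KCSAN
+		 * instrumentation.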
*/ + kcsan_counter_inc(kcsan_counter_races_unknown_origin); +#ifdef CONFIG_KCSAN_REPORT_RACE_UNKNOWN_ORIGIN + kcsan_report(ptr, size, is_write, smp_processor_id(), + kcsan_report_race_unknown_origin); +#endif + } + + kcsan_counter_dec(kcsan_counter_used_watchpoints); +out_unlock: + preempt_enable(); + local_irq_restore(irq_flags); +out: + user_access_restore(ua_flags); +} +EXPORT_SYMBOL(__kcsan_setup_watchpoint); diff --git a/kernel/kcsan/debugfs.c b/kernel/kcsan/debugfs.c new file mode 100644 index 000000000000..6ddcbd185f3a --- /dev/null +++ b/kernel/kcsan/debugfs.c @@ -0,0 +1,225 @@ +// SPDX-License-Identifier: GPL-2.0 + +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include + +#include "kcsan.h" + +/* + * Statistics counters. + */ +static atomic_long_t counters[kcsan_counter_count]; + +/* + * Addresses for filtering functions from reporting. This list can be used as a + * whitelist or blacklist. + */ +static struct { + unsigned long *addrs; /* array of addresses */ + size_t size; /* current size */ + int used; /* number of elements used */ + bool sorted; /* if elements are sorted */ + bool whitelist; /* if list is a blacklist or whitelist */ +} report_filterlist = { + .addrs = NULL, + .size = 8, /* small initial size */ + .used = 0, + .sorted = false, + .whitelist = false, /* default is blacklist */ +}; +static DEFINE_SPINLOCK(report_filterlist_lock); + +static const char *counter_to_name(enum kcsan_counter_id id) +{ + switch (id) { + case kcsan_counter_used_watchpoints: + return "used_watchpoints"; + case kcsan_counter_setup_watchpoints: + return "setup_watchpoints"; + case kcsan_counter_data_races: + return "data_races"; + case kcsan_counter_no_capacity: + return "no_capacity"; + case kcsan_counter_report_races: + return "report_races"; + case kcsan_counter_races_unknown_origin: + return "races_unknown_origin"; + case kcsan_counter_unencodable_accesses: + return "unencodable_accesses"; + case kcsan_counter_encoding_false_positives: + return "encoding_false_positives"; + case kcsan_counter_count: + BUG(); + } + return NULL; +} + +void kcsan_counter_inc(enum kcsan_counter_id id) +{ + atomic_long_inc(&counters[id]); +} + +void kcsan_counter_dec(enum kcsan_counter_id id) +{ + atomic_long_dec(&counters[id]); +} + +static int cmp_filterlist_addrs(const void *rhs, const void *lhs) +{ + const unsigned long a = *(const unsigned long *)rhs; + const unsigned long b = *(const unsigned long *)lhs; + + return a < b ? -1 : a == b ? 0 : 1; +} + +bool kcsan_skip_report(unsigned long func_addr) +{ + unsigned long symbolsize, offset; + unsigned long flags; + bool ret = false; + + if (!kallsyms_lookup_size_offset(func_addr, &symbolsize, &offset)) + return false; + func_addr -= offset; /* get function start */ + + spin_lock_irqsave(&report_filterlist_lock, flags); + if (report_filterlist.used == 0) + goto out; + + /* Sort array if it is unsorted, and then do a binary search. 
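+	 * Sorting is deferred until here rather than done on insertion, since
+	 * insert_report_filterlist() clears the sorted flag on every update.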
*/ + if (!report_filterlist.sorted) { + sort(report_filterlist.addrs, report_filterlist.used, + sizeof(unsigned long), cmp_filterlist_addrs, NULL); + report_filterlist.sorted = true; + } + ret = !!bsearch(&func_addr, report_filterlist.addrs, + report_filterlist.used, sizeof(unsigned long), + cmp_filterlist_addrs); + if (report_filterlist.whitelist) + ret = !ret; + +out: + spin_unlock_irqrestore(&report_filterlist_lock, flags); + return ret; +} + +static void set_report_filterlist_whitelist(bool whitelist) +{ + unsigned long flags; + + spin_lock_irqsave(&report_filterlist_lock, flags); + report_filterlist.whitelist = whitelist; + spin_unlock_irqrestore(&report_filterlist_lock, flags); +} + +static void insert_report_filterlist(const char *func) +{ + unsigned long flags; + unsigned long addr = kallsyms_lookup_name(func); + + if (!addr) { + pr_err("KCSAN: could not find function: '%s'\n", func); + return; + } + + spin_lock_irqsave(&report_filterlist_lock, flags); + + if (report_filterlist.addrs == NULL) + report_filterlist.addrs = /* initial allocation */ + kvmalloc_array(report_filterlist.size, + sizeof(unsigned long), GFP_KERNEL); + else if (report_filterlist.used == report_filterlist.size) { + /* resize filterlist */ + unsigned long *new_addrs; + + report_filterlist.size *= 2; + new_addrs = kvmalloc_array(report_filterlist.size, + sizeof(unsigned long), GFP_KERNEL); + memcpy(new_addrs, report_filterlist.addrs, + report_filterlist.used * sizeof(unsigned long)); + kvfree(report_filterlist.addrs); + report_filterlist.addrs = new_addrs; + } + + /* Note: deduplicating should be done in userspace. */ + report_filterlist.addrs[report_filterlist.used++] = + kallsyms_lookup_name(func); + report_filterlist.sorted = false; + + spin_unlock_irqrestore(&report_filterlist_lock, flags); +} + +static int show_info(struct seq_file *file, void *v) +{ + int i; + unsigned long flags; + + /* show stats */ + seq_printf(file, "enabled: %i\n", READ_ONCE(kcsan_enabled)); + for (i = 0; i < kcsan_counter_count; ++i) + seq_printf(file, "%s: %ld\n", counter_to_name(i), + atomic_long_read(&counters[i])); + + /* show filter functions, and filter type */ + spin_lock_irqsave(&report_filterlist_lock, flags); + seq_printf(file, "\n%s functions: %s\n", + report_filterlist.whitelist ? "whitelisted" : "blacklisted", + report_filterlist.used == 0 ? "none" : ""); + for (i = 0; i < report_filterlist.used; ++i) + seq_printf(file, " %ps\n", (void *)report_filterlist.addrs[i]); + spin_unlock_irqrestore(&report_filterlist_lock, flags); + + return 0; +} + +static int debugfs_open(struct inode *inode, struct file *file) +{ + return single_open(file, show_info, NULL); +} + +static ssize_t debugfs_write(struct file *file, const char __user *buf, + size_t count, loff_t *off) +{ + char kbuf[KSYM_NAME_LEN]; + char *arg; + int read_len = count < (sizeof(kbuf) - 1) ? 
count : (sizeof(kbuf) - 1);
+
+	if (copy_from_user(kbuf, buf, read_len))
+		return -EINVAL;
+	kbuf[read_len] = '\0';
+	arg = strstrip(kbuf);
+
+	if (!strncmp(arg, "on", sizeof("on") - 1))
+		WRITE_ONCE(kcsan_enabled, true);
+	else if (!strncmp(arg, "off", sizeof("off") - 1))
+		WRITE_ONCE(kcsan_enabled, false);
+	else if (!strncmp(arg, "whitelist", sizeof("whitelist") - 1))
+		set_report_filterlist_whitelist(true);
+	else if (!strncmp(arg, "blacklist", sizeof("blacklist") - 1))
+		set_report_filterlist_whitelist(false);
+	else if (arg[0] == '!')
+		insert_report_filterlist(&arg[1]);
+	else
+		return -EINVAL;
+
+	return count;
+}
+
+static const struct file_operations debugfs_ops = { .read = seq_read,
+						    .open = debugfs_open,
+						    .write = debugfs_write,
+						    .release = single_release };
+
+void __init kcsan_debugfs_init(void)
+{
+	debugfs_create_file("kcsan", 0644, NULL, NULL, &debugfs_ops);
+}
diff --git a/kernel/kcsan/encoding.h b/kernel/kcsan/encoding.h
new file mode 100644
index 000000000000..8f9b1ce0e59f
--- /dev/null
+++ b/kernel/kcsan/encoding.h
@@ -0,0 +1,94 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+
+#ifndef _MM_KCSAN_ENCODING_H
+#define _MM_KCSAN_ENCODING_H
+
+#include
+#include
+#include
+
+#include "kcsan.h"
+
+#define SLOT_RANGE PAGE_SIZE
+#define INVALID_WATCHPOINT 0
+#define CONSUMED_WATCHPOINT 1
+
+/*
+ * The maximum useful size of accesses for which we set up watchpoints is the
+ * max range of slots we check on an access.
+ */
+#define MAX_ENCODABLE_SIZE (SLOT_RANGE * (1 + KCSAN_CHECK_ADJACENT))
+
+/*
+ * Number of bits we use to store size info.
+ */
+#define WATCHPOINT_SIZE_BITS bits_per(MAX_ENCODABLE_SIZE)
+/*
+ * This encoding for addresses discards the upper bits (1 bit for is-write +
+ * WATCHPOINT_SIZE_BITS); however, most 64-bit architectures do not use the
+ * full 64-bit address space. Also, in order for a false positive to be
+ * observable, two things need to happen:
+ *
+ * 1. different addresses but with the same encoded address race;
+ * 2. and both map onto the same watchpoint slots;
+ *
+ * Both these are assumed to be very unlikely. However, in case it still
+ * happens, the report logic will filter out the false positive (see report.c).
+ */
+#define WATCHPOINT_ADDR_BITS (BITS_PER_LONG - 1 - WATCHPOINT_SIZE_BITS)
+
+/*
+ * Masks to set/retrieve the encoded data.
+ */
+#define WATCHPOINT_WRITE_MASK BIT(BITS_PER_LONG - 1)
+#define WATCHPOINT_SIZE_MASK \
+	GENMASK(BITS_PER_LONG - 2, BITS_PER_LONG - 2 - WATCHPOINT_SIZE_BITS)
+#define WATCHPOINT_ADDR_MASK \
+	GENMASK(BITS_PER_LONG - 3 - WATCHPOINT_SIZE_BITS, 0)
+
+static inline bool check_encodable(unsigned long addr, size_t size)
+{
+	return size <= MAX_ENCODABLE_SIZE;
+}
+
+static inline long encode_watchpoint(unsigned long addr, size_t size,
+				     bool is_write)
+{
+	return (long)((is_write ? WATCHPOINT_WRITE_MASK : 0) |
+		      (size << WATCHPOINT_ADDR_BITS) |
+		      (addr & WATCHPOINT_ADDR_MASK));
+}
+
+static inline bool decode_watchpoint(long watchpoint,
+				     unsigned long *addr_masked, size_t *size,
+				     bool *is_write)
+{
+	if (watchpoint == INVALID_WATCHPOINT ||
+	    watchpoint == CONSUMED_WATCHPOINT)
+		return false;
+
+	*addr_masked = (unsigned long)watchpoint & WATCHPOINT_ADDR_MASK;
+	*size = ((unsigned long)watchpoint & WATCHPOINT_SIZE_MASK) >>
+		WATCHPOINT_ADDR_BITS;
+	*is_write = !!((unsigned long)watchpoint & WATCHPOINT_WRITE_MASK);
+
+	return true;
+}
+
+/*
+ * Return watchpoint slot for an address.
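+ * Consecutive SLOT_RANGE-sized (here: PAGE_SIZE) address ranges map to
+ * consecutive slots, wrapping around at KCSAN_NUM_WATCHPOINTS.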
+ */
+static inline int watchpoint_slot(unsigned long addr)
+{
+	return (addr / PAGE_SIZE) % KCSAN_NUM_WATCHPOINTS;
+}
+
+static inline bool matching_access(unsigned long addr1, size_t size1,
+				   unsigned long addr2, size_t size2)
+{
+	unsigned long end_range1 = addr1 + size1 - 1;
+	unsigned long end_range2 = addr2 + size2 - 1;
+
+	return addr1 <= end_range2 && addr2 <= end_range1;
+}
+
+#endif /* _MM_KCSAN_ENCODING_H */
diff --git a/kernel/kcsan/kcsan.c b/kernel/kcsan/kcsan.c
new file mode 100644
index 000000000000..ce13e0b38ba2
--- /dev/null
+++ b/kernel/kcsan/kcsan.c
@@ -0,0 +1,81 @@
+// SPDX-License-Identifier: GPL-2.0
+
+/*
+ * The Kernel Concurrency Sanitizer (KCSAN) infrastructure. For more info
+ * please see Documentation/dev-tools/kcsan.rst.
+ */
+
+#include
+
+#include "kcsan.h"
+
+/*
+ * Concurrency Sanitizer uses the same instrumentation as Thread Sanitizer.
+ */
+
+#define DEFINE_TSAN_READ_WRITE(size) \
+	void __tsan_read##size(void *ptr) \
+	{ \
+		__kcsan_check_access(ptr, size, false); \
+	} \
+	EXPORT_SYMBOL(__tsan_read##size); \
+	void __tsan_write##size(void *ptr) \
+	{ \
+		__kcsan_check_access(ptr, size, true); \
+	} \
+	EXPORT_SYMBOL(__tsan_write##size)
+
+DEFINE_TSAN_READ_WRITE(1);
+DEFINE_TSAN_READ_WRITE(2);
+DEFINE_TSAN_READ_WRITE(4);
+DEFINE_TSAN_READ_WRITE(8);
+DEFINE_TSAN_READ_WRITE(16);
+
+/*
+ * Not all supported compiler versions distinguish aligned/unaligned accesses,
+ * but e.g. recent versions of Clang do.
+ */
+#define DEFINE_TSAN_UNALIGNED_READ_WRITE(size) \
+	void __tsan_unaligned_read##size(void *ptr) \
+	{ \
+		__kcsan_check_access(ptr, size, false); \
+	} \
+	EXPORT_SYMBOL(__tsan_unaligned_read##size); \
+	void __tsan_unaligned_write##size(void *ptr) \
+	{ \
+		__kcsan_check_access(ptr, size, true); \
+	} \
+	EXPORT_SYMBOL(__tsan_unaligned_write##size)
+
+DEFINE_TSAN_UNALIGNED_READ_WRITE(2);
+DEFINE_TSAN_UNALIGNED_READ_WRITE(4);
+DEFINE_TSAN_UNALIGNED_READ_WRITE(8);
+DEFINE_TSAN_UNALIGNED_READ_WRITE(16);
+
+void __tsan_read_range(void *ptr, size_t size)
+{
+	__kcsan_check_access(ptr, size, false);
+}
+EXPORT_SYMBOL(__tsan_read_range);
+
+void __tsan_write_range(void *ptr, size_t size)
+{
+	__kcsan_check_access(ptr, size, true);
+}
+EXPORT_SYMBOL(__tsan_write_range);
+
+/*
+ * The below are not required by KCSAN, but can still be emitted by the
+ * compiler.
+ */
+void __tsan_func_entry(void *call_pc)
+{
+}
+EXPORT_SYMBOL(__tsan_func_entry);
+void __tsan_func_exit(void)
+{
+}
+EXPORT_SYMBOL(__tsan_func_exit);
+void __tsan_init(void)
+{
+}
+EXPORT_SYMBOL(__tsan_init);
diff --git a/kernel/kcsan/kcsan.h b/kernel/kcsan/kcsan.h
new file mode 100644
index 000000000000..429479b3041d
--- /dev/null
+++ b/kernel/kcsan/kcsan.h
@@ -0,0 +1,140 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+
+#ifndef _MM_KCSAN_KCSAN_H
+#define _MM_KCSAN_KCSAN_H
+
+#include
+
+/*
+ * Total number of watchpoints. An address range maps into a specific slot as
+ * specified in `encoding.h`. Although a larger number of watchpoints may not
+ * even be usable due to limited thread count, a larger value will improve
+ * performance due to reducing cache-line contention.
+ */
+#define KCSAN_NUM_WATCHPOINTS 64
+
+/*
+ * The number of adjacent watchpoints to check; the purpose is twofold:
+ *
+ * 1. if the address slot is already occupied, check if any adjacent slots are
+ *    free;
+ * 2. accesses that straddle a slot boundary due to size that exceeds a
+ *    slot's range may check adjacent slots if any watchpoint matches.
+ * + * Note that accesses with very large size may still miss a watchpoint; however, + * given this should be rare, this is a reasonable trade-off to make, since this + * will avoid: + * + * 1. excessive contention between watchpoint checks and setup; + * 2. larger number of simultaneous watchpoints without sacrificing + * performance. + */ +#define KCSAN_CHECK_ADJACENT 1 + +/* + * Globally enable and disable KCSAN. + */ +extern bool kcsan_enabled; + +/* + * Helper that returns true if access to ptr should be considered as an atomic + * access, even though it is not explicitly atomic. + */ +bool kcsan_is_atomic(const volatile void *ptr); + +/* + * Initialize debugfs file. + */ +void kcsan_debugfs_init(void); + +enum kcsan_counter_id { + /* + * Number of watchpoints currently in use. + */ + kcsan_counter_used_watchpoints, + + /* + * Total number of watchpoints set up. + */ + kcsan_counter_setup_watchpoints, + + /* + * Total number of data-races. + */ + kcsan_counter_data_races, + + /* + * Number of times no watchpoints were available. + */ + kcsan_counter_no_capacity, + + /* + * A thread checking a watchpoint raced with another checking thread; + * only one will be reported. + */ + kcsan_counter_report_races, + + /* + * Observed data value change, but writer thread unknown. + */ + kcsan_counter_races_unknown_origin, + + /* + * The access cannot be encoded to a valid watchpoint. + */ + kcsan_counter_unencodable_accesses, + + /* + * Watchpoint encoding caused a watchpoint to fire on mismatching + * accesses. + */ + kcsan_counter_encoding_false_positives, + + kcsan_counter_count, /* number of counters */ +}; + +/* + * Increment/decrement counter with given id; avoid calling these in fast-path. + */ +void kcsan_counter_inc(enum kcsan_counter_id id); +void kcsan_counter_dec(enum kcsan_counter_id id); + +/* + * Returns true if data-races in the function symbol that maps to addr (offsets + * are ignored) should *not* be reported. + */ +bool kcsan_skip_report(unsigned long func_addr); + +enum kcsan_report_type { + /* + * The thread that set up the watchpoint and briefly stalled was + * signalled that another thread triggered the watchpoint, and thus a + * race was encountered. + */ + kcsan_report_race_setup, + + /* + * A thread encountered a watchpoint for the access, therefore a race + * was encountered. + */ + kcsan_report_race_check, + + /* + * A thread encountered a watchpoint for the access, but the other + * racing thread can no longer be signaled that a race occurred. + */ + kcsan_report_race_check_race, + + /* + * No other thread was observed to race with the access, but the data + * value before and after the stall differs. + */ + kcsan_report_race_unknown_origin, +}; +/* + * Print a race report from thread that encountered the race. + */ +void kcsan_report(const volatile void *ptr, size_t size, bool is_write, + int cpu_id, enum kcsan_report_type type); + +#endif /* _MM_KCSAN_KCSAN_H */ diff --git a/kernel/kcsan/report.c b/kernel/kcsan/report.c new file mode 100644 index 000000000000..1a0f34b623bf --- /dev/null +++ b/kernel/kcsan/report.c @@ -0,0 +1,307 @@ +// SPDX-License-Identifier: GPL-2.0 + +#include +#include +#include +#include +#include +#include + +#include "kcsan.h" +#include "encoding.h" + +/* + * Max. number of stack entries to show in the report. + */ +#define NUM_STACK_ENTRIES 16 + +/* + * Other thread info: communicated from other racing thread to thread that set + * up the watchpoint, which then prints the complete report atomically. 
Only + * need one struct, as all threads should to be serialized regardless to print + * the reports, with reporting being in the slow-path. + */ +static struct { + const volatile void *ptr; + size_t size; + bool is_write; + int task_pid; + int cpu_id; + unsigned long stack_entries[NUM_STACK_ENTRIES]; + int num_stack_entries; +} other_info = { .ptr = NULL }; + +static DEFINE_SPINLOCK(other_info_lock); +static DEFINE_SPINLOCK(report_lock); + +static bool set_or_lock_other_info(unsigned long *flags, + const volatile void *ptr, size_t size, + bool is_write, int cpu_id, + enum kcsan_report_type type) +{ + if (type != kcsan_report_race_check && type != kcsan_report_race_setup) + return true; + + for (;;) { + spin_lock_irqsave(&other_info_lock, *flags); + + switch (type) { + case kcsan_report_race_check: + if (other_info.ptr != NULL) { + /* still in use, retry */ + break; + } + other_info.ptr = ptr; + other_info.size = size; + other_info.is_write = is_write; + other_info.task_pid = + in_task() ? task_pid_nr(current) : -1; + other_info.cpu_id = cpu_id; + other_info.num_stack_entries = stack_trace_save( + other_info.stack_entries, NUM_STACK_ENTRIES, 1); + /* + * other_info may now be consumed by thread we raced + * with. + */ + spin_unlock_irqrestore(&other_info_lock, *flags); + return false; + + case kcsan_report_race_setup: + if (other_info.ptr == NULL) + break; /* no data available yet, retry */ + + /* + * First check if matching based on how watchpoint was + * encoded. + */ + if (!matching_access((unsigned long)other_info.ptr & + WATCHPOINT_ADDR_MASK, + other_info.size, + (unsigned long)ptr & + WATCHPOINT_ADDR_MASK, + size)) + break; /* mismatching access, retry */ + + if (!matching_access((unsigned long)other_info.ptr, + other_info.size, + (unsigned long)ptr, size)) { + /* + * If the actual accesses to not match, this was + * a false positive due to watchpoint encoding. + */ + other_info.ptr = NULL; /* mark for reuse */ + kcsan_counter_inc( + kcsan_counter_encoding_false_positives); + spin_unlock_irqrestore(&other_info_lock, + *flags); + return false; + } + + /* + * Matching access: keep other_info locked, as this + * thread uses it to print the full report; unlocked in + * end_report. + */ + return true; + + default: + BUG(); + } + + spin_unlock_irqrestore(&other_info_lock, *flags); + } +} + +static void start_report(unsigned long *flags, enum kcsan_report_type type) +{ + switch (type) { + case kcsan_report_race_setup: + /* irqsaved already via other_info_lock */ + spin_lock(&report_lock); + break; + + case kcsan_report_race_unknown_origin: + spin_lock_irqsave(&report_lock, *flags); + break; + + default: + BUG(); + } +} + +static void end_report(unsigned long *flags, enum kcsan_report_type type) +{ + switch (type) { + case kcsan_report_race_setup: + other_info.ptr = NULL; /* mark for reuse */ + spin_unlock(&report_lock); + spin_unlock_irqrestore(&other_info_lock, *flags); + break; + + case kcsan_report_race_unknown_origin: + spin_unlock_irqrestore(&report_lock, *flags); + break; + + default: + BUG(); + } +} + +static const char *get_access_type(bool is_write) +{ + return is_write ? "write" : "read"; +} + +/* Return thread description: in task or interrupt. */ +static const char *get_thread_desc(int task_id) +{ + if (task_id != -1) { + static char buf[32]; /* safe: protected by report_lock */ + + snprintf(buf, sizeof(buf), "task %i", task_id); + return buf; + } + return in_nmi() ? "NMI" : "interrupt"; +} + +/* Helper to skip KCSAN-related functions in stack-trace. 
+/* Helper to skip KCSAN-related functions in the stack-trace. */
+static int get_stack_skipnr(unsigned long stack_entries[], int num_entries)
+{
+	char buf[64];
+	int skip = 0;
+
+	for (; skip < num_entries; ++skip) {
+		snprintf(buf, sizeof(buf), "%ps", (void *)stack_entries[skip]);
+		if (!strnstr(buf, "csan_", sizeof(buf)) &&
+		    !strnstr(buf, "tsan_", sizeof(buf)) &&
+		    !strnstr(buf, "_once_size", sizeof(buf))) {
+			break;
+		}
+	}
+	return skip;
+}
+
+/* Compares the symbolized strings of addr1 and addr2. */
+static int sym_strcmp(void *addr1, void *addr2)
+{
+	char buf1[64];
+	char buf2[64];
+
+	snprintf(buf1, sizeof(buf1), "%pS", addr1);
+	snprintf(buf2, sizeof(buf2), "%pS", addr2);
+	return strncmp(buf1, buf2, sizeof(buf1));
+}
+
+/*
+ * Returns true if a report was generated, false otherwise.
+ */
+static bool print_summary(const volatile void *ptr, size_t size, bool is_write,
+			  int cpu_id, enum kcsan_report_type type)
+{
+	unsigned long stack_entries[NUM_STACK_ENTRIES] = { 0 };
+	int num_stack_entries =
+		stack_trace_save(stack_entries, NUM_STACK_ENTRIES, 1);
+	int skipnr = get_stack_skipnr(stack_entries, num_stack_entries);
+	int other_skipnr;
+
+	/* Check if the top stackframe is in a blacklisted function. */
+	if (kcsan_skip_report(stack_entries[skipnr]))
+		return false;
+	if (type == kcsan_report_race_setup) {
+		other_skipnr = get_stack_skipnr(other_info.stack_entries,
+						other_info.num_stack_entries);
+		if (kcsan_skip_report(other_info.stack_entries[other_skipnr]))
+			return false;
+	}
+
+	/* Print report header. */
+	pr_err("==================================================================\n");
+	switch (type) {
+	case kcsan_report_race_setup: {
+		void *this_fn = (void *)stack_entries[skipnr];
+		void *other_fn = (void *)other_info.stack_entries[other_skipnr];
+		int cmp;
+
+		/*
+		 * Order functions lexicographically for consistent bug titles.
+		 * Do not print the offsets of functions, to keep titles short.
+		 */
+		cmp = sym_strcmp(other_fn, this_fn);
+		pr_err("BUG: KCSAN: data-race in %ps / %ps\n",
+		       cmp < 0 ? other_fn : this_fn,
+		       cmp < 0 ? this_fn : other_fn);
+	} break;
+
+	case kcsan_report_race_unknown_origin:
+		pr_err("BUG: KCSAN: racing %s in %pS\n",
+		       get_access_type(is_write),
+		       (void *)stack_entries[skipnr]);
+		break;
+
+	default:
+		BUG();
+	}
+
+	pr_err("\n");
+
+	/* Print information about the racing accesses. */
+	switch (type) {
+	case kcsan_report_race_setup:
+		pr_err("%s to 0x%px of %zu bytes by %s on cpu %i:\n",
+		       get_access_type(other_info.is_write), other_info.ptr,
+		       other_info.size, get_thread_desc(other_info.task_pid),
+		       other_info.cpu_id);
+
+		/* Print the other thread's stack trace. */
+		stack_trace_print(other_info.stack_entries + other_skipnr,
+				  other_info.num_stack_entries - other_skipnr,
+				  0);
+
+		pr_err("\n");
+		pr_err("%s to 0x%px of %zu bytes by %s on cpu %i:\n",
+		       get_access_type(is_write), ptr, size,
+		       get_thread_desc(in_task() ? task_pid_nr(current) : -1),
+		       cpu_id);
+		break;
+
+	case kcsan_report_race_unknown_origin:
+		pr_err("race at unknown origin, with %s to 0x%px of %zu bytes by %s on cpu %i:\n",
+		       get_access_type(is_write), ptr, size,
+		       get_thread_desc(in_task() ? task_pid_nr(current) : -1),
+		       cpu_id);
+		break;
+
+	default:
+		BUG();
+	}
+	/* Print stack trace of this thread. */
+	stack_trace_print(stack_entries + skipnr, num_stack_entries - skipnr,
+			  0);
+
+	/* Print report footer. */
+	pr_err("\n");
+	pr_err("Reported by Kernel Concurrency Sanitizer on:\n");
+	dump_stack_print_info(KERN_DEFAULT);
+	pr_err("==================================================================\n");
+
+	return true;
+}
+
+void kcsan_report(const volatile void *ptr, size_t size, bool is_write,
+		  int cpu_id, enum kcsan_report_type type)
+{
+	unsigned long flags = 0;
+
+	if (type == kcsan_report_race_check_race)
+		return;
+
+	kcsan_disable_current();
+	if (set_or_lock_other_info(&flags, ptr, size, is_write, cpu_id, type)) {
+		start_report(&flags, type);
+		if (print_summary(ptr, size, is_write, cpu_id, type) &&
+		    panic_on_warn)
+			panic("panic_on_warn set ...\n");
+		end_report(&flags, type);
+	}
+	kcsan_enable_current();
+}
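
To illustrate what this report path produces, consider a minimal example of
the kind of code KCSAN targets (illustrative only, not part of this patch):
two threads make plain, unsynchronized accesses to the same variable.

/* Illustrative example of a data race KCSAN is designed to catch. */
static int shared_counter;	/* plain int: no lock, no READ_ONCE/WRITE_ONCE */

static int example_writer(void *unused)
{
	shared_counter++;	/* plain read-modify-write */
	return 0;
}

static int example_reader(void *unused)
{
	return shared_counter;	/* plain concurrent read */
}

If the reader trips a watchpoint set up for the writer's access,
print_summary() above emits a report titled along the lines of
"BUG: KCSAN: data-race in example_reader / example_writer", followed by the
stack traces of both threads.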
"write" : "read", verif_size, + verif_masked_addr); + return false; + } + } + + return true; +} + +static bool test_matching_access(void) +{ + if (WARN_ON(!matching_access(10, 1, 10, 1))) + return false; + if (WARN_ON(!matching_access(10, 2, 11, 1))) + return false; + if (WARN_ON(!matching_access(10, 1, 9, 2))) + return false; + if (WARN_ON(matching_access(10, 1, 11, 1))) + return false; + if (WARN_ON(matching_access(9, 1, 10, 1))) + return false; + return true; +} + +static int __init kcsan_selftest(void) +{ + int passed = 0; + int total = 0; + +#define RUN_TEST(do_test) \ + do { \ + ++total; \ + if (do_test()) \ + ++passed; \ + else \ + pr_err("KCSAN selftest: " #do_test " failed"); \ + } while (0) + + RUN_TEST(test_requires); + RUN_TEST(test_encode_decode); + RUN_TEST(test_matching_access); + + pr_info("KCSAN selftest: %d/%d tests passed\n", passed, total); + if (passed != total) + panic("KCSAN selftests failed"); + return 0; +} +postcore_initcall(kcsan_selftest); diff --git a/lib/Kconfig.debug b/lib/Kconfig.debug index 93d97f9b0157..35accd1d93de 100644 --- a/lib/Kconfig.debug +++ b/lib/Kconfig.debug @@ -2086,6 +2086,8 @@ source "lib/Kconfig.kgdb" source "lib/Kconfig.ubsan" +source "lib/Kconfig.kcsan" + config ARCH_HAS_DEVMEM_IS_ALLOWED bool diff --git a/lib/Kconfig.kcsan b/lib/Kconfig.kcsan new file mode 100644 index 000000000000..b532d0d98f7a --- /dev/null +++ b/lib/Kconfig.kcsan @@ -0,0 +1,88 @@ +# SPDX-License-Identifier: GPL-2.0-only + +config HAVE_ARCH_KCSAN + bool + +menuconfig KCSAN + bool "KCSAN: watchpoint-based dynamic data-race detector" + depends on HAVE_ARCH_KCSAN && !KASAN && STACKTRACE + default n + help + Kernel Concurrency Sanitizer is a dynamic data-race detector, which + uses a watchpoint-based sampling approach to detect races. + +if KCSAN + +config KCSAN_SELFTEST + bool "KCSAN: perform short selftests on boot" + default y + help + Run KCSAN selftests on boot. On test failure, causes kernel to panic. + +config KCSAN_EARLY_ENABLE + bool "KCSAN: early enable" + default y + help + If KCSAN should be enabled globally as soon as possible. KCSAN can + later be enabled/disabled via debugfs. + +config KCSAN_UDELAY_MAX_TASK + int "KCSAN: maximum delay in microseconds (for tasks)" + default 80 + help + For tasks, the max. microsecond delay after setting up a watchpoint. + +config KCSAN_UDELAY_MAX_INTERRUPT + int "KCSAN: maximum delay in microseconds (for interrupts)" + default 20 + help + For interrupts, the max. microsecond delay after setting up a watchpoint. + +config KCSAN_DELAY_RANDOMIZE + bool "KCSAN: randomize delays" + default y + help + If delays should be randomized; if false, the chosen delay is simply + the maximum values defined above. + +config KCSAN_WATCH_SKIP_INST + int "KCSAN: watchpoint instruction skip" + default 2000 + help + The number of per-CPU memory operations to skip watching, before + another watchpoint is set up; in other words, 1 in + KCSAN_WATCH_SKIP_INST per-CPU memory operations are used to set up a + watchpoint. A smaller value results in more aggressive race + detection, whereas a larger value improves system performance at the + cost of missing some races. + +config KCSAN_REPORT_RACE_UNKNOWN_ORIGIN + bool "KCSAN: report races of unknown origin" + default y + help + If KCSAN should report races where only one access is known, and the + conflicting access is of unknown origin. This type of race is + reported if it was only possible to infer a race due to a data-value + change while an access is being delayed on a watchpoint. 
+
+config KCSAN_IGNORE_ATOMICS
+	bool "KCSAN: do not instrument atomic accesses"
+	default n
+	help
+	  If enabled, atomic accesses are never instrumented. As a result,
+	  data-races where one access is atomic and the other is a plain
+	  access are not reported.
+
+config KCSAN_PLAIN_WRITE_PRETEND_ONCE
+	bool "KCSAN: pretend plain writes are WRITE_ONCE"
+	default n
+	help
+	  This option makes KCSAN pretend that all plain writes are
+	  WRITE_ONCE. It should only be used to prune initial data-races
+	  found in existing code.
+
+config KCSAN_DEBUG
+	bool "Debugging of KCSAN internals"
+	default n
+
+endif # KCSAN
diff --git a/lib/Makefile b/lib/Makefile
index c5892807e06f..778ab704e3ad 100644
--- a/lib/Makefile
+++ b/lib/Makefile
@@ -24,6 +24,9 @@ KASAN_SANITIZE_string.o := n
 CFLAGS_string.o := $(call cc-option, -fno-stack-protector)
 endif
 
+# Used by KCSAN while enabled, avoid recursion.
+KCSAN_SANITIZE_random32.o := n
+
 lib-y := ctype.o string.o vsprintf.o cmdline.o \
 	 rbtree.o radix-tree.o timerqueue.o xarray.o \
 	 idr.o extable.o \
diff --git a/scripts/Makefile.kcsan b/scripts/Makefile.kcsan
new file mode 100644
index 000000000000..caf1111a28ae
--- /dev/null
+++ b/scripts/Makefile.kcsan
@@ -0,0 +1,6 @@
+# SPDX-License-Identifier: GPL-2.0
+ifdef CONFIG_KCSAN
+
+CFLAGS_KCSAN := -fsanitize=thread
+
+endif # CONFIG_KCSAN
diff --git a/scripts/Makefile.lib b/scripts/Makefile.lib
index 179d55af5852..0e78abab7d83 100644
--- a/scripts/Makefile.lib
+++ b/scripts/Makefile.lib
@@ -152,6 +152,16 @@ _c_flags += $(if $(patsubst n%,, \
 		$(CFLAGS_KCOV))
 endif
 
+#
+# Enable ConcurrencySanitizer flags for the kernel, except for files or
+# directories we don't want to check (depends on the variables
+# KCSAN_SANITIZE_obj.o and KCSAN_SANITIZE).
+#
+ifeq ($(CONFIG_KCSAN),y)
+_c_flags += $(if $(patsubst n%,, \
+	$(KCSAN_SANITIZE_$(basetarget).o)$(KCSAN_SANITIZE)y), \
+	$(CFLAGS_KCSAN))
+endif
+
 # $(srctree)/$(src) for including checkin headers from generated source files
 # $(objtree)/$(obj) for including generated headers from checkin source files
 ifeq ($(KBUILD_EXTMOD),)
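
Taken together, the Kconfig knobs above drive KCSAN's sampling heuristic. As
a rough sketch of the intended behaviour (illustrative pseudo-implementation,
not this patch's actual code; function and parameter names are made up):

#include <linux/delay.h>
#include <linux/preempt.h>
#include <linux/random.h>

/*
 * Sketch: roughly 1 in watch_skip checked accesses sets up a watchpoint
 * (cf. KCSAN_WATCH_SKIP_INST); the watcher then stalls up to a maximum
 * delay (cf. KCSAN_UDELAY_MAX_TASK/_INTERRUPT), optionally randomized
 * (cf. KCSAN_DELAY_RANDOMIZE), hoping a racing access fires meanwhile.
 */
static bool kcsan_should_watch_sketch(unsigned long *skip_count,
				      unsigned long watch_skip, bool randomize)
{
	if (*skip_count > 0) {
		--*skip_count;
		return false;	/* fast path: do not watch this access */
	}
	/* Re-arm the counter; randomize to avoid sampling in lock-step. */
	*skip_count = randomize ? prandom_u32() % watch_skip : watch_skip;
	return true;		/* slow path: set up a watchpoint and stall */
}

static void kcsan_stall_sketch(bool randomize)
{
	unsigned int max_delay = in_task() ? 80 : 20;	/* the defaults above */

	udelay(randomize ? prandom_u32() % max_delay + 1 : max_delay);
}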
From patchwork Wed Oct 16 08:39:53 2019
X-Patchwork-Submitter: Marco Elver
X-Patchwork-Id: 11192631
Date: Wed, 16 Oct 2019 10:39:53 +0200
In-Reply-To: <20191016083959.186860-1-elver@google.com>
Message-Id: <20191016083959.186860-3-elver@google.com>
Subject: [PATCH 2/8] objtool, kcsan: Add KCSAN runtime
 functions to whitelist
From: Marco Elver
To: elver@google.com

This patch adds the KCSAN runtime functions to objtool's uaccess whitelist
(uaccess_safe_builtin).

Signed-off-by: Marco Elver
---
 tools/objtool/check.c | 17 +++++++++++++++++
 1 file changed, 17 insertions(+)

diff --git a/tools/objtool/check.c b/tools/objtool/check.c
index 044c9a3cb247..d1acc867b43c 100644
--- a/tools/objtool/check.c
+++ b/tools/objtool/check.c
@@ -466,6 +466,23 @@ static const char *uaccess_safe_builtin[] = {
 	"__asan_report_store4_noabort",
 	"__asan_report_store8_noabort",
 	"__asan_report_store16_noabort",
+	/* KCSAN */
+	"__kcsan_check_watchpoint",
+	"__kcsan_setup_watchpoint",
+	/* KCSAN/TSAN out-of-line */
+	"__tsan_func_entry",
+	"__tsan_func_exit",
+	"__tsan_read_range",
+	"__tsan_read1",
+	"__tsan_read2",
+	"__tsan_read4",
+	"__tsan_read8",
+	"__tsan_read16",
+	"__tsan_write1",
+	"__tsan_write2",
+	"__tsan_write4",
+	"__tsan_write8",
+	"__tsan_write16",
 	/* KCOV */
 	"write_comp_data",
 	"__sanitizer_cov_trace_pc",
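
For context, an illustration (mine, not from the patch) of why these symbols
need whitelisting: with -fsanitize=thread, even a plain kernel-memory load
inside a user-access region is compiled into an out-of-line runtime call,
which objtool would otherwise flag.

/*
 * Illustration only. The plain load of *kval below is instrumented as an
 * out-of-line call (e.g. __tsan_read4()), even between user_access_begin()
 * and user_access_end(); without the whitelist additions, objtool would
 * emit a warning along the lines of
 * "call to __tsan_read4() with UACCESS enabled" for code like this.
 */
static int example_uaccess(int __user *uptr, const int *kval)
{
	if (!user_access_begin(uptr, sizeof(*uptr)))
		return -EFAULT;
	unsafe_put_user(*kval, uptr, efault);	/* *kval: instrumented load */
	user_access_end();
	return 0;
efault:
	user_access_end();
	return -EFAULT;
}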
From patchwork Wed Oct 16 08:39:54 2019
X-Patchwork-Submitter: Marco Elver
X-Patchwork-Id: 11192633
Date: Wed, 16 Oct 2019 10:39:54 +0200
In-Reply-To: <20191016083959.186860-1-elver@google.com>
Message-Id: <20191016083959.186860-4-elver@google.com>
Subject: [PATCH 3/8] build, kcsan: Add KCSAN build exceptions
From: Marco Elver
To: elver@google.com

This blacklists several compilation units from KCSAN. See the respective
inline comments for the reasoning.

Signed-off-by: Marco Elver
---
 kernel/Makefile       | 5 +++++
 kernel/sched/Makefile | 6 ++++++
 mm/Makefile           | 8 ++++++++
 3 files changed, 19 insertions(+)

diff --git a/kernel/Makefile b/kernel/Makefile
index 74ab46e2ebd1..4a597a68b8bc 100644
--- a/kernel/Makefile
+++ b/kernel/Makefile
@@ -23,6 +23,9 @@ endif
 # Prevents flicker of uninteresting __do_softirq()/__local_bh_disable_ip()
 # in coverage traces.
 KCOV_INSTRUMENT_softirq.o := n
+# Avoid KCSAN instrumentation in softirq ("No shared variables, all the data
+# are CPU local" => assume no data-races), to reduce overhead in interrupts.
+KCSAN_SANITIZE_softirq.o = n
 # These are called from save_stack_trace() on slub debug path,
 # and produce insane amounts of uninteresting coverage.
 KCOV_INSTRUMENT_module.o := n
@@ -30,6 +33,7 @@ KCOV_INSTRUMENT_extable.o := n
 # Don't self-instrument.
 KCOV_INSTRUMENT_kcov.o := n
 KASAN_SANITIZE_kcov.o := n
+KCSAN_SANITIZE_kcov.o := n
 CFLAGS_kcov.o := $(call cc-option, -fno-conserve-stack -fno-stack-protector)
 
 # cond_syscall is currently not LTO compatible
@@ -118,6 +122,7 @@ obj-$(CONFIG_RSEQ) += rseq.o
 
 obj-$(CONFIG_GCC_PLUGIN_STACKLEAK) += stackleak.o
 KASAN_SANITIZE_stackleak.o := n
+KCSAN_SANITIZE_stackleak.o := n
 KCOV_INSTRUMENT_stackleak.o := n
 
 $(obj)/configs.o: $(obj)/config_data.gz
diff --git a/kernel/sched/Makefile b/kernel/sched/Makefile
index 21fb5a5662b5..e9307a9c54e7 100644
--- a/kernel/sched/Makefile
+++ b/kernel/sched/Makefile
@@ -7,6 +7,12 @@ endif
 # that is not a function of syscall inputs. E.g. involuntary context switches.
 KCOV_INSTRUMENT := n
 
+# There are numerous data-races here, most of them due to plain accesses.
+# Reporting them would make it even harder for syzbot to find reproducers,
+# because these bugs trigger without specific input. Disabled by default,
+# but this should be re-enabled eventually.
+KCSAN_SANITIZE := n
+
 ifneq ($(CONFIG_SCHED_OMIT_FRAME_POINTER),y)
 # According to Alan Modra, the -fno-omit-frame-pointer is
 # needed for x86 only. Why this used to be enabled for all architectures is beyond
diff --git a/mm/Makefile b/mm/Makefile
index d996846697ef..33ea0154dd2d 100644
--- a/mm/Makefile
+++ b/mm/Makefile
@@ -7,6 +7,14 @@ KASAN_SANITIZE_slab_common.o := n
 KASAN_SANITIZE_slab.o := n
 KASAN_SANITIZE_slub.o := n
 
+# These produce frequent data-race reports: most of them are due to races on
+# the same word but accesses to different bits of that word. Re-enable KCSAN
+# for these when we have more consensus on what to do about them.
+KCSAN_SANITIZE_slab_common.o := n
+KCSAN_SANITIZE_slab.o := n
+KCSAN_SANITIZE_slub.o := n
+KCSAN_SANITIZE_page_alloc.o := n
+
 # These files are disabled because they produce non-interesting and/or
 # flaky coverage that is not a function of syscall inputs. E.g. slab is out of
 # free pages, or a task is migrated between nodes.
From patchwork Wed Oct 16 08:39:55 2019
X-Patchwork-Submitter: Marco Elver
X-Patchwork-Id: 11192635
Date: Wed, 16 Oct 2019 10:39:55 +0200
In-Reply-To: <20191016083959.186860-1-elver@google.com>
Message-Id: <20191016083959.186860-5-elver@google.com>
Subject: [PATCH 4/8] seqlock, kcsan: Add annotations for KCSAN
From: Marco Elver
To: elver@google.com

Since seqlocks in the Linux kernel do not require the use of marked atomic
accesses in critical sections, we teach KCSAN to assume such accesses are
atomic. KCSAN currently also pretends that writes to `sequence` are atomic,
although plain writes are used (their corresponding reads are READ_ONCE).

Further, to avoid false positives in the absence of a clear ending of a
seqlock reader critical section (only when using the raw interface), KCSAN
assumes that a fixed number of accesses after the start of a seqlock critical
section are atomic.
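
Concretely, the reader pattern these annotations target looks like this
(illustrative example, not part of the diff below):

#include <linux/seqlock.h>

/* Illustration only: the common seqlock reader these annotations target. */
static DEFINE_SEQLOCK(example_lock);
static int example_a, example_b;

static void example_read_pair(int *a, int *b)
{
	unsigned int seq;

	do {
		seq = read_seqbegin(&example_lock);
		/*
		 * Plain reads: with this patch, KCSAN treats accesses inside
		 * the reader critical section as atomic, instead of
		 * reporting them as racing with the writer.
		 */
		*a = example_a;
		*b = example_b;
	} while (read_seqretry(&example_lock, seq));
}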
Signed-off-by: Marco Elver
---
 include/linux/seqlock.h | 44 +++++++++++++++++++++++++++++++++++++----
 1 file changed, 40 insertions(+), 4 deletions(-)

diff --git a/include/linux/seqlock.h b/include/linux/seqlock.h
index bcf4cf26b8c8..1e425831a7ed 100644
--- a/include/linux/seqlock.h
+++ b/include/linux/seqlock.h
@@ -37,8 +37,24 @@
 #include <linux/preempt.h>
 #include <linux/lockdep.h>
 #include <linux/compiler.h>
+#include <linux/kcsan.h>
 #include <asm/processor.h>
 
+/*
+ * The seqlock interface does not prescribe a precise sequence of read
+ * begin/retry/end. For readers, typically there is a call to
+ * read_seqcount_begin() and read_seqcount_retry(), however, there are more
+ * esoteric cases which do not follow this pattern.
+ *
+ * As a consequence, we take the following best-effort approach for *raw* usage
+ * of seqlocks under KCSAN: upon beginning a seq-reader critical section,
+ * pessimistically mark the next KCSAN_SEQLOCK_REGION_MAX memory accesses as
+ * atomics; if there is a matching read_seqcount_retry() call, no following
+ * memory operations are considered atomic. Non-raw usage of seqlocks is not
+ * affected.
+ */
+#define KCSAN_SEQLOCK_REGION_MAX 1000
+
 /*
  * Version using sequence counter only.
  * This can be used when code has its own mutex protecting the
@@ -115,6 +131,7 @@ static inline unsigned __read_seqcount_begin(const seqcount_t *s)
 		cpu_relax();
 		goto repeat;
 	}
+	kcsan_atomic_next(KCSAN_SEQLOCK_REGION_MAX);
 	return ret;
 }
 
@@ -131,6 +148,7 @@ static inline unsigned raw_read_seqcount(const seqcount_t *s)
 {
 	unsigned ret = READ_ONCE(s->sequence);
 	smp_rmb();
+	kcsan_atomic_next(KCSAN_SEQLOCK_REGION_MAX);
 	return ret;
 }
 
@@ -183,6 +201,7 @@ static inline unsigned raw_seqcount_begin(const seqcount_t *s)
 {
 	unsigned ret = READ_ONCE(s->sequence);
 	smp_rmb();
+	kcsan_atomic_next(KCSAN_SEQLOCK_REGION_MAX);
 	return ret & ~1;
 }
 
@@ -202,7 +221,8 @@ static inline unsigned raw_seqcount_begin(const seqcount_t *s)
  */
 static inline int __read_seqcount_retry(const seqcount_t *s, unsigned start)
 {
-	return unlikely(s->sequence != start);
+	kcsan_atomic_next(0);
+	return unlikely(READ_ONCE(s->sequence) != start);
 }
 
 /**
@@ -225,6 +245,7 @@ static inline int read_seqcount_retry(const seqcount_t *s, unsigned start)
 
 static inline void raw_write_seqcount_begin(seqcount_t *s)
 {
+	kcsan_begin_atomic(true);
 	s->sequence++;
 	smp_wmb();
 }
@@ -233,6 +254,7 @@ static inline void raw_write_seqcount_end(seqcount_t *s)
 {
 	smp_wmb();
 	s->sequence++;
+	kcsan_end_atomic(true);
 }
 
 /**
@@ -262,18 +284,20 @@ static inline void raw_write_seqcount_end(seqcount_t *s)
  *
  *      void write(void)
  *      {
- *              Y = true;
+ *              WRITE_ONCE(Y, true);
 *
 *              raw_write_seqcount_barrier(seq);
 *
- *              X = false;
+ *              WRITE_ONCE(X, false);
 *      }
 */
 static inline void raw_write_seqcount_barrier(seqcount_t *s)
 {
+	kcsan_begin_atomic(true);
 	s->sequence++;
 	smp_wmb();
 	s->sequence++;
+	kcsan_end_atomic(true);
 }
 
 static inline int raw_read_seqcount_latch(seqcount_t *s)
@@ -398,7 +422,9 @@ static inline void write_seqcount_end(seqcount_t *s)
 static inline void write_seqcount_invalidate(seqcount_t *s)
 {
 	smp_wmb();
+	kcsan_begin_atomic(true);
 	s->sequence+=2;
+	kcsan_end_atomic(true);
 }
 
 typedef struct {
@@ -430,11 +456,21 @@ typedef struct {
  */
 static inline unsigned read_seqbegin(const seqlock_t *sl)
 {
-	return read_seqcount_begin(&sl->seqcount);
+	unsigned ret = read_seqcount_begin(&sl->seqcount);
+
+	kcsan_atomic_next(0); /* non-raw usage, assume closing read_seqretry() */
+	kcsan_begin_atomic(false);
+	return ret;
 }
 
 static inline unsigned read_seqretry(const seqlock_t *sl, unsigned start)
 {
+	/*
+	 * Assume not nested: read_seqretry() may be called multiple times
+	 * when completing a read critical section.
+	 */
+	kcsan_end_atomic(false);
+
 	return read_seqcount_retry(&sl->seqcount, start);
 }
 
From patchwork Wed Oct 16 08:39:56 2019
X-Patchwork-Submitter: Marco Elver
X-Patchwork-Id: 11192641
Date: Wed, 16 Oct 2019 10:39:56 +0200
In-Reply-To: <20191016083959.186860-1-elver@google.com>
Message-Id: <20191016083959.186860-6-elver@google.com>
Subject: [PATCH 5/8] seqlock: Require WRITE_ONCE surrounding raw_seqcount_barrier
From: Marco Elver
To: elver@google.com

This patch proposes to require marked atomic accesses surrounding
raw_write_seqcount_barrier. We reason that otherwise there is no way to
guarantee propagation nor atomicity of the writes before/after the barrier
[1]. For example, if the compiler tears stores either before or after the
barrier, readers may observe a partial value; and because readers are unaware
that writes are going on (the writes are not in a seq-writer critical
section), they will complete the seq-reader critical section while having
observed some partial state.

[1] https://lwn.net/Articles/793253/

This came up when designing and implementing KCSAN, because KCSAN would flag
these accesses as data-races. After careful analysis, our reasoning as above
led us to conclude that the best thing to do is to propose an amendment to
the raw_seqcount_barrier usage.
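
A minimal sketch of the failure mode being guarded against (illustration
mine; the variable naming follows the raw_write_seqcount_barrier() comment
updated below):

#include <linux/seqlock.h>

/*
 * Illustration only. Without WRITE_ONCE(), the compiler is free to tear the
 * store below (e.g. into two 32-bit stores on a 32-bit machine); a reader
 * running between the two halves observes a partial value and -- since there
 * is no surrounding seq-writer critical section -- is never told to retry.
 */
static u64 shared_val;
static seqcount_t example_seq;

static void writer_torn(void)
{
	shared_val = 0x1122334455667788ULL;	/* may be torn */
	raw_write_seqcount_barrier(&example_seq);
}

static void writer_marked(void)
{
	/* Marked: the compiler may not tear this store. */
	WRITE_ONCE(shared_val, 0x1122334455667788ULL);
	raw_write_seqcount_barrier(&example_seq);
}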
Signed-off-by: Marco Elver
---
 include/linux/seqlock.h | 7 +++++++
 1 file changed, 7 insertions(+)

diff --git a/include/linux/seqlock.h b/include/linux/seqlock.h
index 1e425831a7ed..5d50aad53b47 100644
--- a/include/linux/seqlock.h
+++ b/include/linux/seqlock.h
@@ -265,6 +265,13 @@ static inline void raw_write_seqcount_end(seqcount_t *s)
  * usual consistency guarantee. It is one wmb cheaper, because we can
  * collapse the two back-to-back wmb()s.
  *
+ * Note that writes surrounding the barrier should be declared atomic (e.g.
+ * via WRITE_ONCE): (a) to ensure the writes become visible to other threads
+ * atomically, avoiding compiler optimizations; (b) to document which writes
+ * are meant to propagate to the reader critical section. This is necessary
+ * because neither the writes before nor after the barrier are enclosed in a
+ * seq-writer critical section that would ensure readers see ongoing writes.
+ *
  * seqcount_t seq;
  * bool X = true, Y = false;
  *
From patchwork Wed Oct 16 08:39:57 2019
X-Patchwork-Submitter: Marco Elver
X-Patchwork-Id: 11192643
Date: Wed, 16 Oct 2019 10:39:57 +0200
In-Reply-To: <20191016083959.186860-1-elver@google.com>
Message-Id: <20191016083959.186860-7-elver@google.com>
Subject: [PATCH 6/8] asm-generic, kcsan: Add KCSAN instrumentation for bitops
From: Marco Elver
To: elver@google.com

Add explicit KCSAN checks for bitops.

Signed-off-by: Marco Elver
---
 include/asm-generic/bitops-instrumented.h | 18 ++++++++++++++++++
 1 file changed, 18 insertions(+)

diff --git a/include/asm-generic/bitops-instrumented.h b/include/asm-generic/bitops-instrumented.h
index ddd1c6d9d8db..5767debd4b52 100644
--- a/include/asm-generic/bitops-instrumented.h
+++ b/include/asm-generic/bitops-instrumented.h
@@ -12,6 +12,7 @@
 #define _ASM_GENERIC_BITOPS_INSTRUMENTED_H
 
 #include <linux/kasan-checks.h>
+#include <linux/kcsan-checks.h>
 
 /**
  * set_bit - Atomically set a bit in memory
@@ -26,6 +27,7 @@ static inline void set_bit(long nr, volatile unsigned long *addr)
 {
 	kasan_check_write(addr + BIT_WORD(nr), sizeof(long));
+	kcsan_check_atomic(addr + BIT_WORD(nr), sizeof(long), true);
 	arch_set_bit(nr, addr);
 }
 
@@ -41,6 +43,7 @@ static inline void set_bit(long nr, volatile unsigned long *addr)
 static inline void __set_bit(long nr, volatile unsigned long *addr)
 {
 	kasan_check_write(addr + BIT_WORD(nr), sizeof(long));
+	kcsan_check_access(addr + BIT_WORD(nr), sizeof(long), true);
 	arch___set_bit(nr, addr);
 }
 
@@ -54,6 +57,7 @@ static inline void __set_bit(long nr, volatile unsigned long *addr)
 static inline void clear_bit(long nr, volatile unsigned long *addr)
 {
 	kasan_check_write(addr + BIT_WORD(nr), sizeof(long));
+	kcsan_check_atomic(addr + BIT_WORD(nr), sizeof(long), true);
 	arch_clear_bit(nr, addr);
 }
 
@@ -69,6 +73,7 @@ static inline void clear_bit(long nr, volatile unsigned long *addr)
 static inline void __clear_bit(long nr, volatile unsigned long *addr)
 {
 	kasan_check_write(addr + BIT_WORD(nr), sizeof(long));
+	kcsan_check_access(addr + BIT_WORD(nr), sizeof(long), true);
 	arch___clear_bit(nr, addr);
 }
 
@@ -82,6 +87,7 @@ static inline void __clear_bit(long nr, volatile unsigned long *addr)
 static inline void clear_bit_unlock(long nr, volatile unsigned long *addr)
 {
 	kasan_check_write(addr + BIT_WORD(nr), sizeof(long));
+	kcsan_check_atomic(addr + BIT_WORD(nr), sizeof(long), true);
 	arch_clear_bit_unlock(nr, addr);
 }
 
@@ -97,6 +103,7 @@ static inline void clear_bit_unlock(long nr, volatile unsigned long *addr)
 static inline void __clear_bit_unlock(long nr, volatile unsigned long *addr)
 {
 	kasan_check_write(addr + BIT_WORD(nr), sizeof(long));
+	kcsan_check_access(addr + BIT_WORD(nr), sizeof(long), true);
 	arch___clear_bit_unlock(nr, addr);
 }
 
@@ -113,6 +120,7 @@ static inline void __clear_bit_unlock(long nr, volatile unsigned long *addr)
 static inline void change_bit(long nr, volatile unsigned long *addr)
 {
 	kasan_check_write(addr + BIT_WORD(nr), sizeof(long));
+	kcsan_check_atomic(addr + BIT_WORD(nr), sizeof(long), true);
 	arch_change_bit(nr, addr);
 }
 
@@ -128,6 +136,7 @@ static inline void change_bit(long nr, volatile unsigned long *addr)
 static inline void __change_bit(long nr, volatile unsigned long *addr)
 {
 	kasan_check_write(addr + BIT_WORD(nr), sizeof(long));
+	kcsan_check_access(addr + BIT_WORD(nr), sizeof(long), true);
 	arch___change_bit(nr, addr);
 }
 
@@ -141,6 +150,7 @@ static inline void __change_bit(long nr, volatile unsigned long *addr)
 static inline bool test_and_set_bit(long nr, volatile unsigned long *addr)
 {
 	kasan_check_write(addr + BIT_WORD(nr), sizeof(long));
+	kcsan_check_atomic(addr + BIT_WORD(nr), sizeof(long), true);
 	return arch_test_and_set_bit(nr, addr);
 }
 
@@ -155,6 +165,7 @@ static inline bool test_and_set_bit(long nr, volatile unsigned long *addr)
 static inline bool __test_and_set_bit(long nr, volatile unsigned long *addr)
 {
 	kasan_check_write(addr + BIT_WORD(nr), sizeof(long));
+	kcsan_check_access(addr + BIT_WORD(nr), sizeof(long), true);
 	return arch___test_and_set_bit(nr, addr);
 }
 
@@ -170,6 +181,7 @@ static inline bool __test_and_set_bit(long nr, volatile unsigned long *addr)
 static inline bool test_and_set_bit_lock(long nr, volatile unsigned long *addr)
 {
 	kasan_check_write(addr + BIT_WORD(nr), sizeof(long));
+	kcsan_check_atomic(addr + BIT_WORD(nr), sizeof(long), true);
 	return arch_test_and_set_bit_lock(nr, addr);
 }
 
@@ -183,6 +195,7 @@ static inline bool test_and_set_bit_lock(long nr, volatile unsigned long *addr)
 static inline bool test_and_clear_bit(long nr, volatile unsigned long *addr)
 {
 	kasan_check_write(addr + BIT_WORD(nr), sizeof(long));
+	kcsan_check_atomic(addr + BIT_WORD(nr), sizeof(long), true);
 	return arch_test_and_clear_bit(nr, addr);
 }
 
@@ -197,6 +210,7 @@ static inline bool test_and_clear_bit(long nr, volatile unsigned long *addr)
 static inline bool __test_and_clear_bit(long nr, volatile unsigned long *addr)
 {
 	kasan_check_write(addr + BIT_WORD(nr), sizeof(long));
+	kcsan_check_access(addr + BIT_WORD(nr), sizeof(long), true);
 	return arch___test_and_clear_bit(nr, addr);
 }
 
@@ -210,6 +224,7 @@ static inline bool __test_and_clear_bit(long nr, volatile unsigned long *addr)
 static inline bool test_and_change_bit(long nr, volatile unsigned long *addr)
 {
 	kasan_check_write(addr + BIT_WORD(nr), sizeof(long));
+	kcsan_check_atomic(addr + BIT_WORD(nr), sizeof(long), true);
 	return arch_test_and_change_bit(nr, addr);
 }
 
@@ -224,6 +239,7 @@ static inline bool test_and_change_bit(long nr, volatile unsigned long *addr)
 static inline bool __test_and_change_bit(long nr, volatile unsigned long *addr)
 {
 	kasan_check_write(addr + BIT_WORD(nr), sizeof(long));
+	kcsan_check_access(addr + BIT_WORD(nr), sizeof(long), true);
 	return arch___test_and_change_bit(nr, addr);
 }
 
@@ -235,6 +251,7 @@ static inline bool __test_and_change_bit(long nr, volatile unsigned long *addr)
 static inline bool test_bit(long nr, const volatile unsigned long *addr)
 {
 	kasan_check_read(addr + BIT_WORD(nr), sizeof(long));
+	kcsan_check_atomic(addr + BIT_WORD(nr), sizeof(long), false);
 	return arch_test_bit(nr, addr);
 }
 
@@ -254,6 +271,7 @@ static inline bool clear_bit_unlock_is_negative_byte(long nr,
 						     volatile unsigned long *addr)
 {
 	kasan_check_write(addr + BIT_WORD(nr), sizeof(long));
+	kcsan_check_atomic(addr + BIT_WORD(nr), sizeof(long), true);
 	return arch_clear_bit_unlock_is_negative_byte(nr, addr);
 }
 
 /* Let everybody know we have it. */
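
To make the asymmetry encoded above concrete (illustration mine, not part of
the diff): the atomic bitops are checked with kcsan_check_atomic(), so under
the default config they are not flagged against other marked accesses, while
the double-underscore variants are plain read-modify-writes checked with
kcsan_check_access(), so a concurrent writer makes them a data race.

/* Illustration only: atomic vs. plain bitop under KCSAN. */
static unsigned long example_flags;

static void mark_ready_atomic(void)
{
	set_bit(0, &example_flags);	/* atomic: may race with other bitops */
}

static void mark_ready_plain(void)
{
	__set_bit(0, &example_flags);	/* plain: racy unless serialized */
}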
*/
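For reference, the convention the hunks above implement, as a minimal C sketch (kcsan_check_atomic() and kcsan_check_access() are the helpers added by the KCSAN infrastructure patch earlier in this series; the last argument flags a write):

	/* Atomic bitop: concurrent accesses are expected, so the access is
	 * reported to KCSAN as atomic rather than as a plain access. */
	static inline void set_bit(long nr, volatile unsigned long *addr)
	{
		kasan_check_write(addr + BIT_WORD(nr), sizeof(long));
		kcsan_check_atomic(addr + BIT_WORD(nr), sizeof(long), true);
		arch_set_bit(nr, addr);
	}

	/* Non-atomic __ variant: a plain access, so a racing access to the
	 * same word is a data race that KCSAN should report. */
	static inline void __set_bit(long nr, volatile unsigned long *addr)
	{
		kasan_check_write(addr + BIT_WORD(nr), sizeof(long));
		kcsan_check_access(addr + BIT_WORD(nr), sizeof(long), true);
		arch___set_bit(nr, addr);
	}

(test_bit() is the one read in the set: it pairs kasan_check_read() with kcsan_check_atomic(..., false).)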
From patchwork Wed Oct 16 08:39:58 2019
X-Patchwork-Submitter: Marco Elver
X-Patchwork-Id: 11192645
Date: Wed, 16 Oct 2019 10:39:58 +0200
In-Reply-To: <20191016083959.186860-1-elver@google.com>
Message-Id: <20191016083959.186860-8-elver@google.com>
References: <20191016083959.186860-1-elver@google.com>
Subject: [PATCH 7/8] locking/atomics, kcsan: Add KCSAN instrumentation
From: Marco Elver

This adds KCSAN instrumentation to atomic-instrumented.h: every generated wrapper now reports its target address to KCSAN as an atomic access (reads with is_write=false, everything else with is_write=true), and gen-atomic-instrumented.sh is updated accordingly so the header remains auto-generatable. A sketch of the wrapper pattern follows.
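Every wrapper follows the same check-then-defer shape; as a minimal sketch of two representative cases, lifted from the pattern in the diff below rather than from the verbatim generated header:

	static inline int atomic_add_return(int i, atomic_t *v)
	{
		kasan_check_write(v, sizeof(*v));		/* KASAN: address validity */
		kcsan_check_atomic(v, sizeof(*v), true);	/* KCSAN: atomic write to v */
		return arch_atomic_add_return(i, v);		/* defer to the arch primitive */
	}

	/* try_cmpxchg may also write *old on failure, so both locations are checked: */
	static inline bool atomic_try_cmpxchg(atomic_t *v, int *old, int new)
	{
		kasan_check_write(v, sizeof(*v));
		kcsan_check_atomic(v, sizeof(*v), true);
		kasan_check_write(old, sizeof(*old));
		kcsan_check_atomic(old, sizeof(*old), true);
		return arch_atomic_try_cmpxchg(v, old, new);
	}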
Signed-off-by: Marco Elver --- include/asm-generic/atomic-instrumented.h | 192 +++++++++++++++++++++- scripts/atomic/gen-atomic-instrumented.sh | 9 +- 2 files changed, 199 insertions(+), 2 deletions(-) diff --git a/include/asm-generic/atomic-instrumented.h b/include/asm-generic/atomic-instrumented.h index e8730c6b9fe2..9e487febc610 100644 --- a/include/asm-generic/atomic-instrumented.h +++ b/include/asm-generic/atomic-instrumented.h @@ -19,11 +19,13 @@ #include <linux/build_bug.h> #include <linux/kasan-checks.h> +#include <linux/kcsan-checks.h> static inline int atomic_read(const atomic_t *v) { kasan_check_read(v, sizeof(*v)); + kcsan_check_atomic(v, sizeof(*v), false); return arch_atomic_read(v); } #define atomic_read atomic_read @@ -33,6 +35,7 @@ static inline int atomic_read_acquire(const atomic_t *v) { kasan_check_read(v, sizeof(*v)); + kcsan_check_atomic(v, sizeof(*v), false); return arch_atomic_read_acquire(v); } #define atomic_read_acquire atomic_read_acquire @@ -42,6 +45,7 @@ static inline void atomic_set(atomic_t *v, int i) { kasan_check_write(v, sizeof(*v)); + kcsan_check_atomic(v, sizeof(*v), true); arch_atomic_set(v, i); } #define atomic_set atomic_set @@ -51,6 +55,7 @@ static inline void atomic_set_release(atomic_t *v, int i) { kasan_check_write(v, sizeof(*v)); + kcsan_check_atomic(v, sizeof(*v), true); arch_atomic_set_release(v, i); } #define atomic_set_release atomic_set_release @@ -60,6 +65,7 @@ static inline void atomic_add(int i, atomic_t *v) { kasan_check_write(v, sizeof(*v)); + kcsan_check_atomic(v, sizeof(*v), true); arch_atomic_add(i, v); } #define atomic_add atomic_add @@ -69,6 +75,7 @@ static inline int atomic_add_return(int i, atomic_t *v) { kasan_check_write(v, sizeof(*v)); + kcsan_check_atomic(v, sizeof(*v), true); return arch_atomic_add_return(i, v); } #define atomic_add_return atomic_add_return @@ -79,6 +86,7 @@ static inline int atomic_add_return_acquire(int i, atomic_t *v) { kasan_check_write(v, sizeof(*v)); + kcsan_check_atomic(v, sizeof(*v), true); return arch_atomic_add_return_acquire(i, v); } #define atomic_add_return_acquire atomic_add_return_acquire @@ -89,6 +97,7 @@ static inline int atomic_add_return_release(int i, atomic_t *v) { kasan_check_write(v, sizeof(*v)); + kcsan_check_atomic(v, sizeof(*v), true); return arch_atomic_add_return_release(i, v); } #define atomic_add_return_release atomic_add_return_release @@ -99,6 +108,7 @@ static inline int atomic_add_return_relaxed(int i, atomic_t *v) { kasan_check_write(v, sizeof(*v)); + kcsan_check_atomic(v, sizeof(*v), true); return arch_atomic_add_return_relaxed(i, v); } #define atomic_add_return_relaxed atomic_add_return_relaxed @@ -109,6 +119,7 @@ static inline int atomic_fetch_add(int i, atomic_t *v) { kasan_check_write(v, sizeof(*v)); + kcsan_check_atomic(v, sizeof(*v), true); return arch_atomic_fetch_add(i, v); } #define atomic_fetch_add atomic_fetch_add @@ -119,6 +130,7 @@ static inline int atomic_fetch_add_acquire(int i, atomic_t *v) { kasan_check_write(v, sizeof(*v)); + kcsan_check_atomic(v, sizeof(*v), true); return arch_atomic_fetch_add_acquire(i, v); } #define atomic_fetch_add_acquire atomic_fetch_add_acquire @@ -129,6 +141,7 @@ static inline int atomic_fetch_add_release(int i, atomic_t *v) { kasan_check_write(v, sizeof(*v)); + kcsan_check_atomic(v, sizeof(*v), true); return arch_atomic_fetch_add_release(i, v); } #define atomic_fetch_add_release atomic_fetch_add_release @@ -139,6 +152,7 @@ static inline int atomic_fetch_add_relaxed(int i, atomic_t *v) { kasan_check_write(v, sizeof(*v)); + kcsan_check_atomic(v, sizeof(*v), true); return
arch_atomic_fetch_add_relaxed(i, v); } #define atomic_fetch_add_relaxed atomic_fetch_add_relaxed @@ -148,6 +162,7 @@ static inline void atomic_sub(int i, atomic_t *v) { kasan_check_write(v, sizeof(*v)); + kcsan_check_atomic(v, sizeof(*v), true); arch_atomic_sub(i, v); } #define atomic_sub atomic_sub @@ -157,6 +172,7 @@ static inline int atomic_sub_return(int i, atomic_t *v) { kasan_check_write(v, sizeof(*v)); + kcsan_check_atomic(v, sizeof(*v), true); return arch_atomic_sub_return(i, v); } #define atomic_sub_return atomic_sub_return @@ -167,6 +183,7 @@ static inline int atomic_sub_return_acquire(int i, atomic_t *v) { kasan_check_write(v, sizeof(*v)); + kcsan_check_atomic(v, sizeof(*v), true); return arch_atomic_sub_return_acquire(i, v); } #define atomic_sub_return_acquire atomic_sub_return_acquire @@ -177,6 +194,7 @@ static inline int atomic_sub_return_release(int i, atomic_t *v) { kasan_check_write(v, sizeof(*v)); + kcsan_check_atomic(v, sizeof(*v), true); return arch_atomic_sub_return_release(i, v); } #define atomic_sub_return_release atomic_sub_return_release @@ -187,6 +205,7 @@ static inline int atomic_sub_return_relaxed(int i, atomic_t *v) { kasan_check_write(v, sizeof(*v)); + kcsan_check_atomic(v, sizeof(*v), true); return arch_atomic_sub_return_relaxed(i, v); } #define atomic_sub_return_relaxed atomic_sub_return_relaxed @@ -197,6 +216,7 @@ static inline int atomic_fetch_sub(int i, atomic_t *v) { kasan_check_write(v, sizeof(*v)); + kcsan_check_atomic(v, sizeof(*v), true); return arch_atomic_fetch_sub(i, v); } #define atomic_fetch_sub atomic_fetch_sub @@ -207,6 +227,7 @@ static inline int atomic_fetch_sub_acquire(int i, atomic_t *v) { kasan_check_write(v, sizeof(*v)); + kcsan_check_atomic(v, sizeof(*v), true); return arch_atomic_fetch_sub_acquire(i, v); } #define atomic_fetch_sub_acquire atomic_fetch_sub_acquire @@ -217,6 +238,7 @@ static inline int atomic_fetch_sub_release(int i, atomic_t *v) { kasan_check_write(v, sizeof(*v)); + kcsan_check_atomic(v, sizeof(*v), true); return arch_atomic_fetch_sub_release(i, v); } #define atomic_fetch_sub_release atomic_fetch_sub_release @@ -227,6 +249,7 @@ static inline int atomic_fetch_sub_relaxed(int i, atomic_t *v) { kasan_check_write(v, sizeof(*v)); + kcsan_check_atomic(v, sizeof(*v), true); return arch_atomic_fetch_sub_relaxed(i, v); } #define atomic_fetch_sub_relaxed atomic_fetch_sub_relaxed @@ -237,6 +260,7 @@ static inline void atomic_inc(atomic_t *v) { kasan_check_write(v, sizeof(*v)); + kcsan_check_atomic(v, sizeof(*v), true); arch_atomic_inc(v); } #define atomic_inc atomic_inc @@ -247,6 +271,7 @@ static inline int atomic_inc_return(atomic_t *v) { kasan_check_write(v, sizeof(*v)); + kcsan_check_atomic(v, sizeof(*v), true); return arch_atomic_inc_return(v); } #define atomic_inc_return atomic_inc_return @@ -257,6 +282,7 @@ static inline int atomic_inc_return_acquire(atomic_t *v) { kasan_check_write(v, sizeof(*v)); + kcsan_check_atomic(v, sizeof(*v), true); return arch_atomic_inc_return_acquire(v); } #define atomic_inc_return_acquire atomic_inc_return_acquire @@ -267,6 +293,7 @@ static inline int atomic_inc_return_release(atomic_t *v) { kasan_check_write(v, sizeof(*v)); + kcsan_check_atomic(v, sizeof(*v), true); return arch_atomic_inc_return_release(v); } #define atomic_inc_return_release atomic_inc_return_release @@ -277,6 +304,7 @@ static inline int atomic_inc_return_relaxed(atomic_t *v) { kasan_check_write(v, sizeof(*v)); + kcsan_check_atomic(v, sizeof(*v), true); return arch_atomic_inc_return_relaxed(v); } #define 
atomic_inc_return_relaxed atomic_inc_return_relaxed @@ -287,6 +315,7 @@ static inline int atomic_fetch_inc(atomic_t *v) { kasan_check_write(v, sizeof(*v)); + kcsan_check_atomic(v, sizeof(*v), true); return arch_atomic_fetch_inc(v); } #define atomic_fetch_inc atomic_fetch_inc @@ -297,6 +326,7 @@ static inline int atomic_fetch_inc_acquire(atomic_t *v) { kasan_check_write(v, sizeof(*v)); + kcsan_check_atomic(v, sizeof(*v), true); return arch_atomic_fetch_inc_acquire(v); } #define atomic_fetch_inc_acquire atomic_fetch_inc_acquire @@ -307,6 +337,7 @@ static inline int atomic_fetch_inc_release(atomic_t *v) { kasan_check_write(v, sizeof(*v)); + kcsan_check_atomic(v, sizeof(*v), true); return arch_atomic_fetch_inc_release(v); } #define atomic_fetch_inc_release atomic_fetch_inc_release @@ -317,6 +348,7 @@ static inline int atomic_fetch_inc_relaxed(atomic_t *v) { kasan_check_write(v, sizeof(*v)); + kcsan_check_atomic(v, sizeof(*v), true); return arch_atomic_fetch_inc_relaxed(v); } #define atomic_fetch_inc_relaxed atomic_fetch_inc_relaxed @@ -327,6 +359,7 @@ static inline void atomic_dec(atomic_t *v) { kasan_check_write(v, sizeof(*v)); + kcsan_check_atomic(v, sizeof(*v), true); arch_atomic_dec(v); } #define atomic_dec atomic_dec @@ -337,6 +370,7 @@ static inline int atomic_dec_return(atomic_t *v) { kasan_check_write(v, sizeof(*v)); + kcsan_check_atomic(v, sizeof(*v), true); return arch_atomic_dec_return(v); } #define atomic_dec_return atomic_dec_return @@ -347,6 +381,7 @@ static inline int atomic_dec_return_acquire(atomic_t *v) { kasan_check_write(v, sizeof(*v)); + kcsan_check_atomic(v, sizeof(*v), true); return arch_atomic_dec_return_acquire(v); } #define atomic_dec_return_acquire atomic_dec_return_acquire @@ -357,6 +392,7 @@ static inline int atomic_dec_return_release(atomic_t *v) { kasan_check_write(v, sizeof(*v)); + kcsan_check_atomic(v, sizeof(*v), true); return arch_atomic_dec_return_release(v); } #define atomic_dec_return_release atomic_dec_return_release @@ -367,6 +403,7 @@ static inline int atomic_dec_return_relaxed(atomic_t *v) { kasan_check_write(v, sizeof(*v)); + kcsan_check_atomic(v, sizeof(*v), true); return arch_atomic_dec_return_relaxed(v); } #define atomic_dec_return_relaxed atomic_dec_return_relaxed @@ -377,6 +414,7 @@ static inline int atomic_fetch_dec(atomic_t *v) { kasan_check_write(v, sizeof(*v)); + kcsan_check_atomic(v, sizeof(*v), true); return arch_atomic_fetch_dec(v); } #define atomic_fetch_dec atomic_fetch_dec @@ -387,6 +425,7 @@ static inline int atomic_fetch_dec_acquire(atomic_t *v) { kasan_check_write(v, sizeof(*v)); + kcsan_check_atomic(v, sizeof(*v), true); return arch_atomic_fetch_dec_acquire(v); } #define atomic_fetch_dec_acquire atomic_fetch_dec_acquire @@ -397,6 +436,7 @@ static inline int atomic_fetch_dec_release(atomic_t *v) { kasan_check_write(v, sizeof(*v)); + kcsan_check_atomic(v, sizeof(*v), true); return arch_atomic_fetch_dec_release(v); } #define atomic_fetch_dec_release atomic_fetch_dec_release @@ -407,6 +447,7 @@ static inline int atomic_fetch_dec_relaxed(atomic_t *v) { kasan_check_write(v, sizeof(*v)); + kcsan_check_atomic(v, sizeof(*v), true); return arch_atomic_fetch_dec_relaxed(v); } #define atomic_fetch_dec_relaxed atomic_fetch_dec_relaxed @@ -416,6 +457,7 @@ static inline void atomic_and(int i, atomic_t *v) { kasan_check_write(v, sizeof(*v)); + kcsan_check_atomic(v, sizeof(*v), true); arch_atomic_and(i, v); } #define atomic_and atomic_and @@ -425,6 +467,7 @@ static inline int atomic_fetch_and(int i, atomic_t *v) { kasan_check_write(v, sizeof(*v)); + 
kcsan_check_atomic(v, sizeof(*v), true); return arch_atomic_fetch_and(i, v); } #define atomic_fetch_and atomic_fetch_and @@ -435,6 +478,7 @@ static inline int atomic_fetch_and_acquire(int i, atomic_t *v) { kasan_check_write(v, sizeof(*v)); + kcsan_check_atomic(v, sizeof(*v), true); return arch_atomic_fetch_and_acquire(i, v); } #define atomic_fetch_and_acquire atomic_fetch_and_acquire @@ -445,6 +489,7 @@ static inline int atomic_fetch_and_release(int i, atomic_t *v) { kasan_check_write(v, sizeof(*v)); + kcsan_check_atomic(v, sizeof(*v), true); return arch_atomic_fetch_and_release(i, v); } #define atomic_fetch_and_release atomic_fetch_and_release @@ -455,6 +500,7 @@ static inline int atomic_fetch_and_relaxed(int i, atomic_t *v) { kasan_check_write(v, sizeof(*v)); + kcsan_check_atomic(v, sizeof(*v), true); return arch_atomic_fetch_and_relaxed(i, v); } #define atomic_fetch_and_relaxed atomic_fetch_and_relaxed @@ -465,6 +511,7 @@ static inline void atomic_andnot(int i, atomic_t *v) { kasan_check_write(v, sizeof(*v)); + kcsan_check_atomic(v, sizeof(*v), true); arch_atomic_andnot(i, v); } #define atomic_andnot atomic_andnot @@ -475,6 +522,7 @@ static inline int atomic_fetch_andnot(int i, atomic_t *v) { kasan_check_write(v, sizeof(*v)); + kcsan_check_atomic(v, sizeof(*v), true); return arch_atomic_fetch_andnot(i, v); } #define atomic_fetch_andnot atomic_fetch_andnot @@ -485,6 +533,7 @@ static inline int atomic_fetch_andnot_acquire(int i, atomic_t *v) { kasan_check_write(v, sizeof(*v)); + kcsan_check_atomic(v, sizeof(*v), true); return arch_atomic_fetch_andnot_acquire(i, v); } #define atomic_fetch_andnot_acquire atomic_fetch_andnot_acquire @@ -495,6 +544,7 @@ static inline int atomic_fetch_andnot_release(int i, atomic_t *v) { kasan_check_write(v, sizeof(*v)); + kcsan_check_atomic(v, sizeof(*v), true); return arch_atomic_fetch_andnot_release(i, v); } #define atomic_fetch_andnot_release atomic_fetch_andnot_release @@ -505,6 +555,7 @@ static inline int atomic_fetch_andnot_relaxed(int i, atomic_t *v) { kasan_check_write(v, sizeof(*v)); + kcsan_check_atomic(v, sizeof(*v), true); return arch_atomic_fetch_andnot_relaxed(i, v); } #define atomic_fetch_andnot_relaxed atomic_fetch_andnot_relaxed @@ -514,6 +565,7 @@ static inline void atomic_or(int i, atomic_t *v) { kasan_check_write(v, sizeof(*v)); + kcsan_check_atomic(v, sizeof(*v), true); arch_atomic_or(i, v); } #define atomic_or atomic_or @@ -523,6 +575,7 @@ static inline int atomic_fetch_or(int i, atomic_t *v) { kasan_check_write(v, sizeof(*v)); + kcsan_check_atomic(v, sizeof(*v), true); return arch_atomic_fetch_or(i, v); } #define atomic_fetch_or atomic_fetch_or @@ -533,6 +586,7 @@ static inline int atomic_fetch_or_acquire(int i, atomic_t *v) { kasan_check_write(v, sizeof(*v)); + kcsan_check_atomic(v, sizeof(*v), true); return arch_atomic_fetch_or_acquire(i, v); } #define atomic_fetch_or_acquire atomic_fetch_or_acquire @@ -543,6 +597,7 @@ static inline int atomic_fetch_or_release(int i, atomic_t *v) { kasan_check_write(v, sizeof(*v)); + kcsan_check_atomic(v, sizeof(*v), true); return arch_atomic_fetch_or_release(i, v); } #define atomic_fetch_or_release atomic_fetch_or_release @@ -553,6 +608,7 @@ static inline int atomic_fetch_or_relaxed(int i, atomic_t *v) { kasan_check_write(v, sizeof(*v)); + kcsan_check_atomic(v, sizeof(*v), true); return arch_atomic_fetch_or_relaxed(i, v); } #define atomic_fetch_or_relaxed atomic_fetch_or_relaxed @@ -562,6 +618,7 @@ static inline void atomic_xor(int i, atomic_t *v) { kasan_check_write(v, sizeof(*v)); + 
kcsan_check_atomic(v, sizeof(*v), true); arch_atomic_xor(i, v); } #define atomic_xor atomic_xor @@ -571,6 +628,7 @@ static inline int atomic_fetch_xor(int i, atomic_t *v) { kasan_check_write(v, sizeof(*v)); + kcsan_check_atomic(v, sizeof(*v), true); return arch_atomic_fetch_xor(i, v); } #define atomic_fetch_xor atomic_fetch_xor @@ -581,6 +639,7 @@ static inline int atomic_fetch_xor_acquire(int i, atomic_t *v) { kasan_check_write(v, sizeof(*v)); + kcsan_check_atomic(v, sizeof(*v), true); return arch_atomic_fetch_xor_acquire(i, v); } #define atomic_fetch_xor_acquire atomic_fetch_xor_acquire @@ -591,6 +650,7 @@ static inline int atomic_fetch_xor_release(int i, atomic_t *v) { kasan_check_write(v, sizeof(*v)); + kcsan_check_atomic(v, sizeof(*v), true); return arch_atomic_fetch_xor_release(i, v); } #define atomic_fetch_xor_release atomic_fetch_xor_release @@ -601,6 +661,7 @@ static inline int atomic_fetch_xor_relaxed(int i, atomic_t *v) { kasan_check_write(v, sizeof(*v)); + kcsan_check_atomic(v, sizeof(*v), true); return arch_atomic_fetch_xor_relaxed(i, v); } #define atomic_fetch_xor_relaxed atomic_fetch_xor_relaxed @@ -611,6 +672,7 @@ static inline int atomic_xchg(atomic_t *v, int i) { kasan_check_write(v, sizeof(*v)); + kcsan_check_atomic(v, sizeof(*v), true); return arch_atomic_xchg(v, i); } #define atomic_xchg atomic_xchg @@ -621,6 +683,7 @@ static inline int atomic_xchg_acquire(atomic_t *v, int i) { kasan_check_write(v, sizeof(*v)); + kcsan_check_atomic(v, sizeof(*v), true); return arch_atomic_xchg_acquire(v, i); } #define atomic_xchg_acquire atomic_xchg_acquire @@ -631,6 +694,7 @@ static inline int atomic_xchg_release(atomic_t *v, int i) { kasan_check_write(v, sizeof(*v)); + kcsan_check_atomic(v, sizeof(*v), true); return arch_atomic_xchg_release(v, i); } #define atomic_xchg_release atomic_xchg_release @@ -641,6 +705,7 @@ static inline int atomic_xchg_relaxed(atomic_t *v, int i) { kasan_check_write(v, sizeof(*v)); + kcsan_check_atomic(v, sizeof(*v), true); return arch_atomic_xchg_relaxed(v, i); } #define atomic_xchg_relaxed atomic_xchg_relaxed @@ -651,6 +716,7 @@ static inline int atomic_cmpxchg(atomic_t *v, int old, int new) { kasan_check_write(v, sizeof(*v)); + kcsan_check_atomic(v, sizeof(*v), true); return arch_atomic_cmpxchg(v, old, new); } #define atomic_cmpxchg atomic_cmpxchg @@ -661,6 +727,7 @@ static inline int atomic_cmpxchg_acquire(atomic_t *v, int old, int new) { kasan_check_write(v, sizeof(*v)); + kcsan_check_atomic(v, sizeof(*v), true); return arch_atomic_cmpxchg_acquire(v, old, new); } #define atomic_cmpxchg_acquire atomic_cmpxchg_acquire @@ -671,6 +738,7 @@ static inline int atomic_cmpxchg_release(atomic_t *v, int old, int new) { kasan_check_write(v, sizeof(*v)); + kcsan_check_atomic(v, sizeof(*v), true); return arch_atomic_cmpxchg_release(v, old, new); } #define atomic_cmpxchg_release atomic_cmpxchg_release @@ -681,6 +749,7 @@ static inline int atomic_cmpxchg_relaxed(atomic_t *v, int old, int new) { kasan_check_write(v, sizeof(*v)); + kcsan_check_atomic(v, sizeof(*v), true); return arch_atomic_cmpxchg_relaxed(v, old, new); } #define atomic_cmpxchg_relaxed atomic_cmpxchg_relaxed @@ -691,7 +760,9 @@ static inline bool atomic_try_cmpxchg(atomic_t *v, int *old, int new) { kasan_check_write(v, sizeof(*v)); + kcsan_check_atomic(v, sizeof(*v), true); kasan_check_write(old, sizeof(*old)); + kcsan_check_atomic(old, sizeof(*old), true); return arch_atomic_try_cmpxchg(v, old, new); } #define atomic_try_cmpxchg atomic_try_cmpxchg @@ -702,7 +773,9 @@ static inline bool 
atomic_try_cmpxchg_acquire(atomic_t *v, int *old, int new) { kasan_check_write(v, sizeof(*v)); + kcsan_check_atomic(v, sizeof(*v), true); kasan_check_write(old, sizeof(*old)); + kcsan_check_atomic(old, sizeof(*old), true); return arch_atomic_try_cmpxchg_acquire(v, old, new); } #define atomic_try_cmpxchg_acquire atomic_try_cmpxchg_acquire @@ -713,7 +786,9 @@ static inline bool atomic_try_cmpxchg_release(atomic_t *v, int *old, int new) { kasan_check_write(v, sizeof(*v)); + kcsan_check_atomic(v, sizeof(*v), true); kasan_check_write(old, sizeof(*old)); + kcsan_check_atomic(old, sizeof(*old), true); return arch_atomic_try_cmpxchg_release(v, old, new); } #define atomic_try_cmpxchg_release atomic_try_cmpxchg_release @@ -724,7 +799,9 @@ static inline bool atomic_try_cmpxchg_relaxed(atomic_t *v, int *old, int new) { kasan_check_write(v, sizeof(*v)); + kcsan_check_atomic(v, sizeof(*v), true); kasan_check_write(old, sizeof(*old)); + kcsan_check_atomic(old, sizeof(*old), true); return arch_atomic_try_cmpxchg_relaxed(v, old, new); } #define atomic_try_cmpxchg_relaxed atomic_try_cmpxchg_relaxed @@ -735,6 +812,7 @@ static inline bool atomic_sub_and_test(int i, atomic_t *v) { kasan_check_write(v, sizeof(*v)); + kcsan_check_atomic(v, sizeof(*v), true); return arch_atomic_sub_and_test(i, v); } #define atomic_sub_and_test atomic_sub_and_test @@ -745,6 +823,7 @@ static inline bool atomic_dec_and_test(atomic_t *v) { kasan_check_write(v, sizeof(*v)); + kcsan_check_atomic(v, sizeof(*v), true); return arch_atomic_dec_and_test(v); } #define atomic_dec_and_test atomic_dec_and_test @@ -755,6 +834,7 @@ static inline bool atomic_inc_and_test(atomic_t *v) { kasan_check_write(v, sizeof(*v)); + kcsan_check_atomic(v, sizeof(*v), true); return arch_atomic_inc_and_test(v); } #define atomic_inc_and_test atomic_inc_and_test @@ -765,6 +845,7 @@ static inline bool atomic_add_negative(int i, atomic_t *v) { kasan_check_write(v, sizeof(*v)); + kcsan_check_atomic(v, sizeof(*v), true); return arch_atomic_add_negative(i, v); } #define atomic_add_negative atomic_add_negative @@ -775,6 +856,7 @@ static inline int atomic_fetch_add_unless(atomic_t *v, int a, int u) { kasan_check_write(v, sizeof(*v)); + kcsan_check_atomic(v, sizeof(*v), true); return arch_atomic_fetch_add_unless(v, a, u); } #define atomic_fetch_add_unless atomic_fetch_add_unless @@ -785,6 +867,7 @@ static inline bool atomic_add_unless(atomic_t *v, int a, int u) { kasan_check_write(v, sizeof(*v)); + kcsan_check_atomic(v, sizeof(*v), true); return arch_atomic_add_unless(v, a, u); } #define atomic_add_unless atomic_add_unless @@ -795,6 +878,7 @@ static inline bool atomic_inc_not_zero(atomic_t *v) { kasan_check_write(v, sizeof(*v)); + kcsan_check_atomic(v, sizeof(*v), true); return arch_atomic_inc_not_zero(v); } #define atomic_inc_not_zero atomic_inc_not_zero @@ -805,6 +889,7 @@ static inline bool atomic_inc_unless_negative(atomic_t *v) { kasan_check_write(v, sizeof(*v)); + kcsan_check_atomic(v, sizeof(*v), true); return arch_atomic_inc_unless_negative(v); } #define atomic_inc_unless_negative atomic_inc_unless_negative @@ -815,6 +900,7 @@ static inline bool atomic_dec_unless_positive(atomic_t *v) { kasan_check_write(v, sizeof(*v)); + kcsan_check_atomic(v, sizeof(*v), true); return arch_atomic_dec_unless_positive(v); } #define atomic_dec_unless_positive atomic_dec_unless_positive @@ -825,6 +911,7 @@ static inline int atomic_dec_if_positive(atomic_t *v) { kasan_check_write(v, sizeof(*v)); + kcsan_check_atomic(v, sizeof(*v), true); return arch_atomic_dec_if_positive(v); } 
#define atomic_dec_if_positive atomic_dec_if_positive @@ -834,6 +921,7 @@ static inline s64 atomic64_read(const atomic64_t *v) { kasan_check_read(v, sizeof(*v)); + kcsan_check_atomic(v, sizeof(*v), false); return arch_atomic64_read(v); } #define atomic64_read atomic64_read @@ -843,6 +931,7 @@ static inline s64 atomic64_read_acquire(const atomic64_t *v) { kasan_check_read(v, sizeof(*v)); + kcsan_check_atomic(v, sizeof(*v), false); return arch_atomic64_read_acquire(v); } #define atomic64_read_acquire atomic64_read_acquire @@ -852,6 +941,7 @@ static inline void atomic64_set(atomic64_t *v, s64 i) { kasan_check_write(v, sizeof(*v)); + kcsan_check_atomic(v, sizeof(*v), true); arch_atomic64_set(v, i); } #define atomic64_set atomic64_set @@ -861,6 +951,7 @@ static inline void atomic64_set_release(atomic64_t *v, s64 i) { kasan_check_write(v, sizeof(*v)); + kcsan_check_atomic(v, sizeof(*v), true); arch_atomic64_set_release(v, i); } #define atomic64_set_release atomic64_set_release @@ -870,6 +961,7 @@ static inline void atomic64_add(s64 i, atomic64_t *v) { kasan_check_write(v, sizeof(*v)); + kcsan_check_atomic(v, sizeof(*v), true); arch_atomic64_add(i, v); } #define atomic64_add atomic64_add @@ -879,6 +971,7 @@ static inline s64 atomic64_add_return(s64 i, atomic64_t *v) { kasan_check_write(v, sizeof(*v)); + kcsan_check_atomic(v, sizeof(*v), true); return arch_atomic64_add_return(i, v); } #define atomic64_add_return atomic64_add_return @@ -889,6 +982,7 @@ static inline s64 atomic64_add_return_acquire(s64 i, atomic64_t *v) { kasan_check_write(v, sizeof(*v)); + kcsan_check_atomic(v, sizeof(*v), true); return arch_atomic64_add_return_acquire(i, v); } #define atomic64_add_return_acquire atomic64_add_return_acquire @@ -899,6 +993,7 @@ static inline s64 atomic64_add_return_release(s64 i, atomic64_t *v) { kasan_check_write(v, sizeof(*v)); + kcsan_check_atomic(v, sizeof(*v), true); return arch_atomic64_add_return_release(i, v); } #define atomic64_add_return_release atomic64_add_return_release @@ -909,6 +1004,7 @@ static inline s64 atomic64_add_return_relaxed(s64 i, atomic64_t *v) { kasan_check_write(v, sizeof(*v)); + kcsan_check_atomic(v, sizeof(*v), true); return arch_atomic64_add_return_relaxed(i, v); } #define atomic64_add_return_relaxed atomic64_add_return_relaxed @@ -919,6 +1015,7 @@ static inline s64 atomic64_fetch_add(s64 i, atomic64_t *v) { kasan_check_write(v, sizeof(*v)); + kcsan_check_atomic(v, sizeof(*v), true); return arch_atomic64_fetch_add(i, v); } #define atomic64_fetch_add atomic64_fetch_add @@ -929,6 +1026,7 @@ static inline s64 atomic64_fetch_add_acquire(s64 i, atomic64_t *v) { kasan_check_write(v, sizeof(*v)); + kcsan_check_atomic(v, sizeof(*v), true); return arch_atomic64_fetch_add_acquire(i, v); } #define atomic64_fetch_add_acquire atomic64_fetch_add_acquire @@ -939,6 +1037,7 @@ static inline s64 atomic64_fetch_add_release(s64 i, atomic64_t *v) { kasan_check_write(v, sizeof(*v)); + kcsan_check_atomic(v, sizeof(*v), true); return arch_atomic64_fetch_add_release(i, v); } #define atomic64_fetch_add_release atomic64_fetch_add_release @@ -949,6 +1048,7 @@ static inline s64 atomic64_fetch_add_relaxed(s64 i, atomic64_t *v) { kasan_check_write(v, sizeof(*v)); + kcsan_check_atomic(v, sizeof(*v), true); return arch_atomic64_fetch_add_relaxed(i, v); } #define atomic64_fetch_add_relaxed atomic64_fetch_add_relaxed @@ -958,6 +1058,7 @@ static inline void atomic64_sub(s64 i, atomic64_t *v) { kasan_check_write(v, sizeof(*v)); + kcsan_check_atomic(v, sizeof(*v), true); arch_atomic64_sub(i, v); } #define 
atomic64_sub atomic64_sub @@ -967,6 +1068,7 @@ static inline s64 atomic64_sub_return(s64 i, atomic64_t *v) { kasan_check_write(v, sizeof(*v)); + kcsan_check_atomic(v, sizeof(*v), true); return arch_atomic64_sub_return(i, v); } #define atomic64_sub_return atomic64_sub_return @@ -977,6 +1079,7 @@ static inline s64 atomic64_sub_return_acquire(s64 i, atomic64_t *v) { kasan_check_write(v, sizeof(*v)); + kcsan_check_atomic(v, sizeof(*v), true); return arch_atomic64_sub_return_acquire(i, v); } #define atomic64_sub_return_acquire atomic64_sub_return_acquire @@ -987,6 +1090,7 @@ static inline s64 atomic64_sub_return_release(s64 i, atomic64_t *v) { kasan_check_write(v, sizeof(*v)); + kcsan_check_atomic(v, sizeof(*v), true); return arch_atomic64_sub_return_release(i, v); } #define atomic64_sub_return_release atomic64_sub_return_release @@ -997,6 +1101,7 @@ static inline s64 atomic64_sub_return_relaxed(s64 i, atomic64_t *v) { kasan_check_write(v, sizeof(*v)); + kcsan_check_atomic(v, sizeof(*v), true); return arch_atomic64_sub_return_relaxed(i, v); } #define atomic64_sub_return_relaxed atomic64_sub_return_relaxed @@ -1007,6 +1112,7 @@ static inline s64 atomic64_fetch_sub(s64 i, atomic64_t *v) { kasan_check_write(v, sizeof(*v)); + kcsan_check_atomic(v, sizeof(*v), true); return arch_atomic64_fetch_sub(i, v); } #define atomic64_fetch_sub atomic64_fetch_sub @@ -1017,6 +1123,7 @@ static inline s64 atomic64_fetch_sub_acquire(s64 i, atomic64_t *v) { kasan_check_write(v, sizeof(*v)); + kcsan_check_atomic(v, sizeof(*v), true); return arch_atomic64_fetch_sub_acquire(i, v); } #define atomic64_fetch_sub_acquire atomic64_fetch_sub_acquire @@ -1027,6 +1134,7 @@ static inline s64 atomic64_fetch_sub_release(s64 i, atomic64_t *v) { kasan_check_write(v, sizeof(*v)); + kcsan_check_atomic(v, sizeof(*v), true); return arch_atomic64_fetch_sub_release(i, v); } #define atomic64_fetch_sub_release atomic64_fetch_sub_release @@ -1037,6 +1145,7 @@ static inline s64 atomic64_fetch_sub_relaxed(s64 i, atomic64_t *v) { kasan_check_write(v, sizeof(*v)); + kcsan_check_atomic(v, sizeof(*v), true); return arch_atomic64_fetch_sub_relaxed(i, v); } #define atomic64_fetch_sub_relaxed atomic64_fetch_sub_relaxed @@ -1047,6 +1156,7 @@ static inline void atomic64_inc(atomic64_t *v) { kasan_check_write(v, sizeof(*v)); + kcsan_check_atomic(v, sizeof(*v), true); arch_atomic64_inc(v); } #define atomic64_inc atomic64_inc @@ -1057,6 +1167,7 @@ static inline s64 atomic64_inc_return(atomic64_t *v) { kasan_check_write(v, sizeof(*v)); + kcsan_check_atomic(v, sizeof(*v), true); return arch_atomic64_inc_return(v); } #define atomic64_inc_return atomic64_inc_return @@ -1067,6 +1178,7 @@ static inline s64 atomic64_inc_return_acquire(atomic64_t *v) { kasan_check_write(v, sizeof(*v)); + kcsan_check_atomic(v, sizeof(*v), true); return arch_atomic64_inc_return_acquire(v); } #define atomic64_inc_return_acquire atomic64_inc_return_acquire @@ -1077,6 +1189,7 @@ static inline s64 atomic64_inc_return_release(atomic64_t *v) { kasan_check_write(v, sizeof(*v)); + kcsan_check_atomic(v, sizeof(*v), true); return arch_atomic64_inc_return_release(v); } #define atomic64_inc_return_release atomic64_inc_return_release @@ -1087,6 +1200,7 @@ static inline s64 atomic64_inc_return_relaxed(atomic64_t *v) { kasan_check_write(v, sizeof(*v)); + kcsan_check_atomic(v, sizeof(*v), true); return arch_atomic64_inc_return_relaxed(v); } #define atomic64_inc_return_relaxed atomic64_inc_return_relaxed @@ -1097,6 +1211,7 @@ static inline s64 atomic64_fetch_inc(atomic64_t *v) { 
kasan_check_write(v, sizeof(*v)); + kcsan_check_atomic(v, sizeof(*v), true); return arch_atomic64_fetch_inc(v); } #define atomic64_fetch_inc atomic64_fetch_inc @@ -1107,6 +1222,7 @@ static inline s64 atomic64_fetch_inc_acquire(atomic64_t *v) { kasan_check_write(v, sizeof(*v)); + kcsan_check_atomic(v, sizeof(*v), true); return arch_atomic64_fetch_inc_acquire(v); } #define atomic64_fetch_inc_acquire atomic64_fetch_inc_acquire @@ -1117,6 +1233,7 @@ static inline s64 atomic64_fetch_inc_release(atomic64_t *v) { kasan_check_write(v, sizeof(*v)); + kcsan_check_atomic(v, sizeof(*v), true); return arch_atomic64_fetch_inc_release(v); } #define atomic64_fetch_inc_release atomic64_fetch_inc_release @@ -1127,6 +1244,7 @@ static inline s64 atomic64_fetch_inc_relaxed(atomic64_t *v) { kasan_check_write(v, sizeof(*v)); + kcsan_check_atomic(v, sizeof(*v), true); return arch_atomic64_fetch_inc_relaxed(v); } #define atomic64_fetch_inc_relaxed atomic64_fetch_inc_relaxed @@ -1137,6 +1255,7 @@ static inline void atomic64_dec(atomic64_t *v) { kasan_check_write(v, sizeof(*v)); + kcsan_check_atomic(v, sizeof(*v), true); arch_atomic64_dec(v); } #define atomic64_dec atomic64_dec @@ -1147,6 +1266,7 @@ static inline s64 atomic64_dec_return(atomic64_t *v) { kasan_check_write(v, sizeof(*v)); + kcsan_check_atomic(v, sizeof(*v), true); return arch_atomic64_dec_return(v); } #define atomic64_dec_return atomic64_dec_return @@ -1157,6 +1277,7 @@ static inline s64 atomic64_dec_return_acquire(atomic64_t *v) { kasan_check_write(v, sizeof(*v)); + kcsan_check_atomic(v, sizeof(*v), true); return arch_atomic64_dec_return_acquire(v); } #define atomic64_dec_return_acquire atomic64_dec_return_acquire @@ -1167,6 +1288,7 @@ static inline s64 atomic64_dec_return_release(atomic64_t *v) { kasan_check_write(v, sizeof(*v)); + kcsan_check_atomic(v, sizeof(*v), true); return arch_atomic64_dec_return_release(v); } #define atomic64_dec_return_release atomic64_dec_return_release @@ -1177,6 +1299,7 @@ static inline s64 atomic64_dec_return_relaxed(atomic64_t *v) { kasan_check_write(v, sizeof(*v)); + kcsan_check_atomic(v, sizeof(*v), true); return arch_atomic64_dec_return_relaxed(v); } #define atomic64_dec_return_relaxed atomic64_dec_return_relaxed @@ -1187,6 +1310,7 @@ static inline s64 atomic64_fetch_dec(atomic64_t *v) { kasan_check_write(v, sizeof(*v)); + kcsan_check_atomic(v, sizeof(*v), true); return arch_atomic64_fetch_dec(v); } #define atomic64_fetch_dec atomic64_fetch_dec @@ -1197,6 +1321,7 @@ static inline s64 atomic64_fetch_dec_acquire(atomic64_t *v) { kasan_check_write(v, sizeof(*v)); + kcsan_check_atomic(v, sizeof(*v), true); return arch_atomic64_fetch_dec_acquire(v); } #define atomic64_fetch_dec_acquire atomic64_fetch_dec_acquire @@ -1207,6 +1332,7 @@ static inline s64 atomic64_fetch_dec_release(atomic64_t *v) { kasan_check_write(v, sizeof(*v)); + kcsan_check_atomic(v, sizeof(*v), true); return arch_atomic64_fetch_dec_release(v); } #define atomic64_fetch_dec_release atomic64_fetch_dec_release @@ -1217,6 +1343,7 @@ static inline s64 atomic64_fetch_dec_relaxed(atomic64_t *v) { kasan_check_write(v, sizeof(*v)); + kcsan_check_atomic(v, sizeof(*v), true); return arch_atomic64_fetch_dec_relaxed(v); } #define atomic64_fetch_dec_relaxed atomic64_fetch_dec_relaxed @@ -1226,6 +1353,7 @@ static inline void atomic64_and(s64 i, atomic64_t *v) { kasan_check_write(v, sizeof(*v)); + kcsan_check_atomic(v, sizeof(*v), true); arch_atomic64_and(i, v); } #define atomic64_and atomic64_and @@ -1235,6 +1363,7 @@ static inline s64 atomic64_fetch_and(s64 i, 
atomic64_t *v) { kasan_check_write(v, sizeof(*v)); + kcsan_check_atomic(v, sizeof(*v), true); return arch_atomic64_fetch_and(i, v); } #define atomic64_fetch_and atomic64_fetch_and @@ -1245,6 +1374,7 @@ static inline s64 atomic64_fetch_and_acquire(s64 i, atomic64_t *v) { kasan_check_write(v, sizeof(*v)); + kcsan_check_atomic(v, sizeof(*v), true); return arch_atomic64_fetch_and_acquire(i, v); } #define atomic64_fetch_and_acquire atomic64_fetch_and_acquire @@ -1255,6 +1385,7 @@ static inline s64 atomic64_fetch_and_release(s64 i, atomic64_t *v) { kasan_check_write(v, sizeof(*v)); + kcsan_check_atomic(v, sizeof(*v), true); return arch_atomic64_fetch_and_release(i, v); } #define atomic64_fetch_and_release atomic64_fetch_and_release @@ -1265,6 +1396,7 @@ static inline s64 atomic64_fetch_and_relaxed(s64 i, atomic64_t *v) { kasan_check_write(v, sizeof(*v)); + kcsan_check_atomic(v, sizeof(*v), true); return arch_atomic64_fetch_and_relaxed(i, v); } #define atomic64_fetch_and_relaxed atomic64_fetch_and_relaxed @@ -1275,6 +1407,7 @@ static inline void atomic64_andnot(s64 i, atomic64_t *v) { kasan_check_write(v, sizeof(*v)); + kcsan_check_atomic(v, sizeof(*v), true); arch_atomic64_andnot(i, v); } #define atomic64_andnot atomic64_andnot @@ -1285,6 +1418,7 @@ static inline s64 atomic64_fetch_andnot(s64 i, atomic64_t *v) { kasan_check_write(v, sizeof(*v)); + kcsan_check_atomic(v, sizeof(*v), true); return arch_atomic64_fetch_andnot(i, v); } #define atomic64_fetch_andnot atomic64_fetch_andnot @@ -1295,6 +1429,7 @@ static inline s64 atomic64_fetch_andnot_acquire(s64 i, atomic64_t *v) { kasan_check_write(v, sizeof(*v)); + kcsan_check_atomic(v, sizeof(*v), true); return arch_atomic64_fetch_andnot_acquire(i, v); } #define atomic64_fetch_andnot_acquire atomic64_fetch_andnot_acquire @@ -1305,6 +1440,7 @@ static inline s64 atomic64_fetch_andnot_release(s64 i, atomic64_t *v) { kasan_check_write(v, sizeof(*v)); + kcsan_check_atomic(v, sizeof(*v), true); return arch_atomic64_fetch_andnot_release(i, v); } #define atomic64_fetch_andnot_release atomic64_fetch_andnot_release @@ -1315,6 +1451,7 @@ static inline s64 atomic64_fetch_andnot_relaxed(s64 i, atomic64_t *v) { kasan_check_write(v, sizeof(*v)); + kcsan_check_atomic(v, sizeof(*v), true); return arch_atomic64_fetch_andnot_relaxed(i, v); } #define atomic64_fetch_andnot_relaxed atomic64_fetch_andnot_relaxed @@ -1324,6 +1461,7 @@ static inline void atomic64_or(s64 i, atomic64_t *v) { kasan_check_write(v, sizeof(*v)); + kcsan_check_atomic(v, sizeof(*v), true); arch_atomic64_or(i, v); } #define atomic64_or atomic64_or @@ -1333,6 +1471,7 @@ static inline s64 atomic64_fetch_or(s64 i, atomic64_t *v) { kasan_check_write(v, sizeof(*v)); + kcsan_check_atomic(v, sizeof(*v), true); return arch_atomic64_fetch_or(i, v); } #define atomic64_fetch_or atomic64_fetch_or @@ -1343,6 +1482,7 @@ static inline s64 atomic64_fetch_or_acquire(s64 i, atomic64_t *v) { kasan_check_write(v, sizeof(*v)); + kcsan_check_atomic(v, sizeof(*v), true); return arch_atomic64_fetch_or_acquire(i, v); } #define atomic64_fetch_or_acquire atomic64_fetch_or_acquire @@ -1353,6 +1493,7 @@ static inline s64 atomic64_fetch_or_release(s64 i, atomic64_t *v) { kasan_check_write(v, sizeof(*v)); + kcsan_check_atomic(v, sizeof(*v), true); return arch_atomic64_fetch_or_release(i, v); } #define atomic64_fetch_or_release atomic64_fetch_or_release @@ -1363,6 +1504,7 @@ static inline s64 atomic64_fetch_or_relaxed(s64 i, atomic64_t *v) { kasan_check_write(v, sizeof(*v)); + kcsan_check_atomic(v, sizeof(*v), true); return 
arch_atomic64_fetch_or_relaxed(i, v); } #define atomic64_fetch_or_relaxed atomic64_fetch_or_relaxed @@ -1372,6 +1514,7 @@ static inline void atomic64_xor(s64 i, atomic64_t *v) { kasan_check_write(v, sizeof(*v)); + kcsan_check_atomic(v, sizeof(*v), true); arch_atomic64_xor(i, v); } #define atomic64_xor atomic64_xor @@ -1381,6 +1524,7 @@ static inline s64 atomic64_fetch_xor(s64 i, atomic64_t *v) { kasan_check_write(v, sizeof(*v)); + kcsan_check_atomic(v, sizeof(*v), true); return arch_atomic64_fetch_xor(i, v); } #define atomic64_fetch_xor atomic64_fetch_xor @@ -1391,6 +1535,7 @@ static inline s64 atomic64_fetch_xor_acquire(s64 i, atomic64_t *v) { kasan_check_write(v, sizeof(*v)); + kcsan_check_atomic(v, sizeof(*v), true); return arch_atomic64_fetch_xor_acquire(i, v); } #define atomic64_fetch_xor_acquire atomic64_fetch_xor_acquire @@ -1401,6 +1546,7 @@ static inline s64 atomic64_fetch_xor_release(s64 i, atomic64_t *v) { kasan_check_write(v, sizeof(*v)); + kcsan_check_atomic(v, sizeof(*v), true); return arch_atomic64_fetch_xor_release(i, v); } #define atomic64_fetch_xor_release atomic64_fetch_xor_release @@ -1411,6 +1557,7 @@ static inline s64 atomic64_fetch_xor_relaxed(s64 i, atomic64_t *v) { kasan_check_write(v, sizeof(*v)); + kcsan_check_atomic(v, sizeof(*v), true); return arch_atomic64_fetch_xor_relaxed(i, v); } #define atomic64_fetch_xor_relaxed atomic64_fetch_xor_relaxed @@ -1421,6 +1568,7 @@ static inline s64 atomic64_xchg(atomic64_t *v, s64 i) { kasan_check_write(v, sizeof(*v)); + kcsan_check_atomic(v, sizeof(*v), true); return arch_atomic64_xchg(v, i); } #define atomic64_xchg atomic64_xchg @@ -1431,6 +1579,7 @@ static inline s64 atomic64_xchg_acquire(atomic64_t *v, s64 i) { kasan_check_write(v, sizeof(*v)); + kcsan_check_atomic(v, sizeof(*v), true); return arch_atomic64_xchg_acquire(v, i); } #define atomic64_xchg_acquire atomic64_xchg_acquire @@ -1441,6 +1590,7 @@ static inline s64 atomic64_xchg_release(atomic64_t *v, s64 i) { kasan_check_write(v, sizeof(*v)); + kcsan_check_atomic(v, sizeof(*v), true); return arch_atomic64_xchg_release(v, i); } #define atomic64_xchg_release atomic64_xchg_release @@ -1451,6 +1601,7 @@ static inline s64 atomic64_xchg_relaxed(atomic64_t *v, s64 i) { kasan_check_write(v, sizeof(*v)); + kcsan_check_atomic(v, sizeof(*v), true); return arch_atomic64_xchg_relaxed(v, i); } #define atomic64_xchg_relaxed atomic64_xchg_relaxed @@ -1461,6 +1612,7 @@ static inline s64 atomic64_cmpxchg(atomic64_t *v, s64 old, s64 new) { kasan_check_write(v, sizeof(*v)); + kcsan_check_atomic(v, sizeof(*v), true); return arch_atomic64_cmpxchg(v, old, new); } #define atomic64_cmpxchg atomic64_cmpxchg @@ -1471,6 +1623,7 @@ static inline s64 atomic64_cmpxchg_acquire(atomic64_t *v, s64 old, s64 new) { kasan_check_write(v, sizeof(*v)); + kcsan_check_atomic(v, sizeof(*v), true); return arch_atomic64_cmpxchg_acquire(v, old, new); } #define atomic64_cmpxchg_acquire atomic64_cmpxchg_acquire @@ -1481,6 +1634,7 @@ static inline s64 atomic64_cmpxchg_release(atomic64_t *v, s64 old, s64 new) { kasan_check_write(v, sizeof(*v)); + kcsan_check_atomic(v, sizeof(*v), true); return arch_atomic64_cmpxchg_release(v, old, new); } #define atomic64_cmpxchg_release atomic64_cmpxchg_release @@ -1491,6 +1645,7 @@ static inline s64 atomic64_cmpxchg_relaxed(atomic64_t *v, s64 old, s64 new) { kasan_check_write(v, sizeof(*v)); + kcsan_check_atomic(v, sizeof(*v), true); return arch_atomic64_cmpxchg_relaxed(v, old, new); } #define atomic64_cmpxchg_relaxed atomic64_cmpxchg_relaxed @@ -1501,7 +1656,9 @@ static inline 
bool atomic64_try_cmpxchg(atomic64_t *v, s64 *old, s64 new) { kasan_check_write(v, sizeof(*v)); + kcsan_check_atomic(v, sizeof(*v), true); kasan_check_write(old, sizeof(*old)); + kcsan_check_atomic(old, sizeof(*old), true); return arch_atomic64_try_cmpxchg(v, old, new); } #define atomic64_try_cmpxchg atomic64_try_cmpxchg @@ -1512,7 +1669,9 @@ static inline bool atomic64_try_cmpxchg_acquire(atomic64_t *v, s64 *old, s64 new) { kasan_check_write(v, sizeof(*v)); + kcsan_check_atomic(v, sizeof(*v), true); kasan_check_write(old, sizeof(*old)); + kcsan_check_atomic(old, sizeof(*old), true); return arch_atomic64_try_cmpxchg_acquire(v, old, new); } #define atomic64_try_cmpxchg_acquire atomic64_try_cmpxchg_acquire @@ -1523,7 +1682,9 @@ static inline bool atomic64_try_cmpxchg_release(atomic64_t *v, s64 *old, s64 new) { kasan_check_write(v, sizeof(*v)); + kcsan_check_atomic(v, sizeof(*v), true); kasan_check_write(old, sizeof(*old)); + kcsan_check_atomic(old, sizeof(*old), true); return arch_atomic64_try_cmpxchg_release(v, old, new); } #define atomic64_try_cmpxchg_release atomic64_try_cmpxchg_release @@ -1534,7 +1695,9 @@ static inline bool atomic64_try_cmpxchg_relaxed(atomic64_t *v, s64 *old, s64 new) { kasan_check_write(v, sizeof(*v)); + kcsan_check_atomic(v, sizeof(*v), true); kasan_check_write(old, sizeof(*old)); + kcsan_check_atomic(old, sizeof(*old), true); return arch_atomic64_try_cmpxchg_relaxed(v, old, new); } #define atomic64_try_cmpxchg_relaxed atomic64_try_cmpxchg_relaxed @@ -1545,6 +1708,7 @@ static inline bool atomic64_sub_and_test(s64 i, atomic64_t *v) { kasan_check_write(v, sizeof(*v)); + kcsan_check_atomic(v, sizeof(*v), true); return arch_atomic64_sub_and_test(i, v); } #define atomic64_sub_and_test atomic64_sub_and_test @@ -1555,6 +1719,7 @@ static inline bool atomic64_dec_and_test(atomic64_t *v) { kasan_check_write(v, sizeof(*v)); + kcsan_check_atomic(v, sizeof(*v), true); return arch_atomic64_dec_and_test(v); } #define atomic64_dec_and_test atomic64_dec_and_test @@ -1565,6 +1730,7 @@ static inline bool atomic64_inc_and_test(atomic64_t *v) { kasan_check_write(v, sizeof(*v)); + kcsan_check_atomic(v, sizeof(*v), true); return arch_atomic64_inc_and_test(v); } #define atomic64_inc_and_test atomic64_inc_and_test @@ -1575,6 +1741,7 @@ static inline bool atomic64_add_negative(s64 i, atomic64_t *v) { kasan_check_write(v, sizeof(*v)); + kcsan_check_atomic(v, sizeof(*v), true); return arch_atomic64_add_negative(i, v); } #define atomic64_add_negative atomic64_add_negative @@ -1585,6 +1752,7 @@ static inline s64 atomic64_fetch_add_unless(atomic64_t *v, s64 a, s64 u) { kasan_check_write(v, sizeof(*v)); + kcsan_check_atomic(v, sizeof(*v), true); return arch_atomic64_fetch_add_unless(v, a, u); } #define atomic64_fetch_add_unless atomic64_fetch_add_unless @@ -1595,6 +1763,7 @@ static inline bool atomic64_add_unless(atomic64_t *v, s64 a, s64 u) { kasan_check_write(v, sizeof(*v)); + kcsan_check_atomic(v, sizeof(*v), true); return arch_atomic64_add_unless(v, a, u); } #define atomic64_add_unless atomic64_add_unless @@ -1605,6 +1774,7 @@ static inline bool atomic64_inc_not_zero(atomic64_t *v) { kasan_check_write(v, sizeof(*v)); + kcsan_check_atomic(v, sizeof(*v), true); return arch_atomic64_inc_not_zero(v); } #define atomic64_inc_not_zero atomic64_inc_not_zero @@ -1615,6 +1785,7 @@ static inline bool atomic64_inc_unless_negative(atomic64_t *v) { kasan_check_write(v, sizeof(*v)); + kcsan_check_atomic(v, sizeof(*v), true); return arch_atomic64_inc_unless_negative(v); } #define atomic64_inc_unless_negative 
atomic64_inc_unless_negative @@ -1625,6 +1796,7 @@ static inline bool atomic64_dec_unless_positive(atomic64_t *v) { kasan_check_write(v, sizeof(*v)); + kcsan_check_atomic(v, sizeof(*v), true); return arch_atomic64_dec_unless_positive(v); } #define atomic64_dec_unless_positive atomic64_dec_unless_positive @@ -1635,6 +1807,7 @@ static inline s64 atomic64_dec_if_positive(atomic64_t *v) { kasan_check_write(v, sizeof(*v)); + kcsan_check_atomic(v, sizeof(*v), true); return arch_atomic64_dec_if_positive(v); } #define atomic64_dec_if_positive atomic64_dec_if_positive @@ -1645,6 +1818,7 @@ atomic64_dec_if_positive(atomic64_t *v) ({ \ typeof(ptr) __ai_ptr = (ptr); \ kasan_check_write(__ai_ptr, sizeof(*__ai_ptr)); \ + kcsan_check_atomic(__ai_ptr, sizeof(*__ai_ptr), true); \ arch_xchg(__ai_ptr, __VA_ARGS__); \ }) #endif @@ -1654,6 +1828,7 @@ atomic64_dec_if_positive(atomic64_t *v) ({ \ typeof(ptr) __ai_ptr = (ptr); \ kasan_check_write(__ai_ptr, sizeof(*__ai_ptr)); \ + kcsan_check_atomic(__ai_ptr, sizeof(*__ai_ptr), true); \ arch_xchg_acquire(__ai_ptr, __VA_ARGS__); \ }) #endif @@ -1663,6 +1838,7 @@ atomic64_dec_if_positive(atomic64_t *v) ({ \ typeof(ptr) __ai_ptr = (ptr); \ kasan_check_write(__ai_ptr, sizeof(*__ai_ptr)); \ + kcsan_check_atomic(__ai_ptr, sizeof(*__ai_ptr), true); \ arch_xchg_release(__ai_ptr, __VA_ARGS__); \ }) #endif @@ -1672,6 +1848,7 @@ atomic64_dec_if_positive(atomic64_t *v) ({ \ typeof(ptr) __ai_ptr = (ptr); \ kasan_check_write(__ai_ptr, sizeof(*__ai_ptr)); \ + kcsan_check_atomic(__ai_ptr, sizeof(*__ai_ptr), true); \ arch_xchg_relaxed(__ai_ptr, __VA_ARGS__); \ }) #endif @@ -1681,6 +1858,7 @@ atomic64_dec_if_positive(atomic64_t *v) ({ \ typeof(ptr) __ai_ptr = (ptr); \ kasan_check_write(__ai_ptr, sizeof(*__ai_ptr)); \ + kcsan_check_atomic(__ai_ptr, sizeof(*__ai_ptr), true); \ arch_cmpxchg(__ai_ptr, __VA_ARGS__); \ }) #endif @@ -1690,6 +1868,7 @@ atomic64_dec_if_positive(atomic64_t *v) ({ \ typeof(ptr) __ai_ptr = (ptr); \ kasan_check_write(__ai_ptr, sizeof(*__ai_ptr)); \ + kcsan_check_atomic(__ai_ptr, sizeof(*__ai_ptr), true); \ arch_cmpxchg_acquire(__ai_ptr, __VA_ARGS__); \ }) #endif @@ -1699,6 +1878,7 @@ atomic64_dec_if_positive(atomic64_t *v) ({ \ typeof(ptr) __ai_ptr = (ptr); \ kasan_check_write(__ai_ptr, sizeof(*__ai_ptr)); \ + kcsan_check_atomic(__ai_ptr, sizeof(*__ai_ptr), true); \ arch_cmpxchg_release(__ai_ptr, __VA_ARGS__); \ }) #endif @@ -1708,6 +1888,7 @@ atomic64_dec_if_positive(atomic64_t *v) ({ \ typeof(ptr) __ai_ptr = (ptr); \ kasan_check_write(__ai_ptr, sizeof(*__ai_ptr)); \ + kcsan_check_atomic(__ai_ptr, sizeof(*__ai_ptr), true); \ arch_cmpxchg_relaxed(__ai_ptr, __VA_ARGS__); \ }) #endif @@ -1717,6 +1898,7 @@ atomic64_dec_if_positive(atomic64_t *v) ({ \ typeof(ptr) __ai_ptr = (ptr); \ kasan_check_write(__ai_ptr, sizeof(*__ai_ptr)); \ + kcsan_check_atomic(__ai_ptr, sizeof(*__ai_ptr), true); \ arch_cmpxchg64(__ai_ptr, __VA_ARGS__); \ }) #endif @@ -1726,6 +1908,7 @@ atomic64_dec_if_positive(atomic64_t *v) ({ \ typeof(ptr) __ai_ptr = (ptr); \ kasan_check_write(__ai_ptr, sizeof(*__ai_ptr)); \ + kcsan_check_atomic(__ai_ptr, sizeof(*__ai_ptr), true); \ arch_cmpxchg64_acquire(__ai_ptr, __VA_ARGS__); \ }) #endif @@ -1735,6 +1918,7 @@ atomic64_dec_if_positive(atomic64_t *v) ({ \ typeof(ptr) __ai_ptr = (ptr); \ kasan_check_write(__ai_ptr, sizeof(*__ai_ptr)); \ + kcsan_check_atomic(__ai_ptr, sizeof(*__ai_ptr), true); \ arch_cmpxchg64_release(__ai_ptr, __VA_ARGS__); \ }) #endif @@ -1744,6 +1928,7 @@ atomic64_dec_if_positive(atomic64_t *v) ({ \ typeof(ptr) __ai_ptr = (ptr); \ 
kasan_check_write(__ai_ptr, sizeof(*__ai_ptr)); \ + kcsan_check_atomic(__ai_ptr, sizeof(*__ai_ptr), true); \ arch_cmpxchg64_relaxed(__ai_ptr, __VA_ARGS__); \ }) #endif @@ -1752,6 +1937,7 @@ atomic64_dec_if_positive(atomic64_t *v) ({ \ typeof(ptr) __ai_ptr = (ptr); \ kasan_check_write(__ai_ptr, sizeof(*__ai_ptr)); \ + kcsan_check_atomic(__ai_ptr, sizeof(*__ai_ptr), true); \ arch_cmpxchg_local(__ai_ptr, __VA_ARGS__); \ }) @@ -1759,6 +1945,7 @@ atomic64_dec_if_positive(atomic64_t *v) ({ \ typeof(ptr) __ai_ptr = (ptr); \ kasan_check_write(__ai_ptr, sizeof(*__ai_ptr)); \ + kcsan_check_atomic(__ai_ptr, sizeof(*__ai_ptr), true); \ arch_cmpxchg64_local(__ai_ptr, __VA_ARGS__); \ }) @@ -1766,6 +1953,7 @@ atomic64_dec_if_positive(atomic64_t *v) ({ \ typeof(ptr) __ai_ptr = (ptr); \ kasan_check_write(__ai_ptr, sizeof(*__ai_ptr)); \ + kcsan_check_atomic(__ai_ptr, sizeof(*__ai_ptr), true); \ arch_sync_cmpxchg(__ai_ptr, __VA_ARGS__); \ }) @@ -1773,6 +1961,7 @@ atomic64_dec_if_positive(atomic64_t *v) ({ \ typeof(ptr) __ai_ptr = (ptr); \ kasan_check_write(__ai_ptr, 2 * sizeof(*__ai_ptr)); \ + kcsan_check_atomic(__ai_ptr, 2 * sizeof(*__ai_ptr), true); \ arch_cmpxchg_double(__ai_ptr, __VA_ARGS__); \ }) @@ -1781,8 +1970,9 @@ atomic64_dec_if_positive(atomic64_t *v) ({ \ typeof(ptr) __ai_ptr = (ptr); \ kasan_check_write(__ai_ptr, 2 * sizeof(*__ai_ptr)); \ + kcsan_check_atomic(__ai_ptr, 2 * sizeof(*__ai_ptr), true); \ arch_cmpxchg_double_local(__ai_ptr, __VA_ARGS__); \ }) #endif /* _ASM_GENERIC_ATOMIC_INSTRUMENTED_H */ -// b29b625d5de9280f680e42c7be859b55b15e5f6a +// 09d5dce9b60c034fcc1edcf5c84a6bbf71988d9c diff --git a/scripts/atomic/gen-atomic-instrumented.sh b/scripts/atomic/gen-atomic-instrumented.sh index e09812372b17..c0553743a6f4 100755 --- a/scripts/atomic/gen-atomic-instrumented.sh +++ b/scripts/atomic/gen-atomic-instrumented.sh @@ -12,15 +12,20 @@ gen_param_check() local type="${arg%%:*}" local name="$(gen_param_name "${arg}")" local rw="write" + local is_write="true" case "${type#c}" in i) return;; esac # We don't write to constant parameters - [ ${type#c} != ${type} ] && rw="read" + if [ ${type#c} != ${type} ]; then + rw="read" + is_write="false" + fi printf "\tkasan_check_${rw}(${name}, sizeof(*${name}));\n" + printf "\tkcsan_check_atomic(${name}, sizeof(*${name}), ${is_write});\n" } #gen_param_check(arg...) 
@@ -108,6 +113,7 @@ cat << EOF #include <linux/kasan-checks.h> +#include <linux/kcsan-checks.h> EOF
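With the gen_param_check() change above, each non-constant pointer parameter now yields a KASAN/KCSAN check pair in the generated header; for example, for atomic_add(int i, atomic_t *v) the script emits (illustrative output):

	kasan_check_write(v, sizeof(*v));
	kcsan_check_atomic(v, sizeof(*v), true);

while a const-qualified parameter, as in atomic_read(const atomic_t *v), gets kasan_check_read() and is_write=false.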
Date: Wed, 16 Oct 2019 10:39:59 +0200
In-Reply-To: <20191016083959.186860-1-elver@google.com>
Message-Id: <20191016083959.186860-9-elver@google.com>
References: <20191016083959.186860-1-elver@google.com>
Subject: [PATCH 8/8] x86, kcsan: Enable KCSAN for x86
From: Marco Elver <elver@google.com>
To: elver@google.com

This patch enables KCSAN for x86 and updates the build rules so that KCSAN does not
instrument several compilation units that are incompatible with it.
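Background for those exclusions: with CONFIG_KCSAN, the compiler instruments plain memory
accesses with calls into the KCSAN runtime, using the -fsanitize=thread instrumentation
ABI. A minimal sketch of what an instrumented 4-byte load conceptually lowers to (the
__tsan_read4 hook name comes from that ABI, which the runtime added earlier in this
series implements; the function here is purely illustrative):

	int read_flag(const int *p)
	{
		/* Call inserted by the compiler; the KCSAN runtime may set up
		 * a watchpoint for this address or check an existing one. */
		__tsan_read4((void *)p);
		return *p;
	}

Code that runs before the runtime is initialized, or outside the normal kernel context
(early boot, the decompressor, vDSO, realmode, purgatory, the EFI stub), cannot make such
calls, hence the KCSAN_SANITIZE := n rules in the Makefiles below.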
Signed-off-by: Marco Elver <elver@google.com>
---
 arch/x86/Kconfig                      | 1 +
 arch/x86/boot/Makefile                | 1 +
 arch/x86/boot/compressed/Makefile     | 1 +
 arch/x86/entry/vdso/Makefile          | 1 +
 arch/x86/include/asm/bitops.h         | 2 +-
 arch/x86/kernel/Makefile              | 6 ++++++
 arch/x86/kernel/cpu/Makefile          | 3 +++
 arch/x86/lib/Makefile                 | 2 ++
 arch/x86/mm/Makefile                  | 3 +++
 arch/x86/purgatory/Makefile           | 1 +
 arch/x86/realmode/Makefile            | 1 +
 arch/x86/realmode/rm/Makefile         | 1 +
 drivers/firmware/efi/libstub/Makefile | 1 +
 13 files changed, 23 insertions(+), 1 deletion(-)

diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index d6e1faa28c58..81859be4a005 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -226,6 +226,7 @@ config X86
 	select VIRT_TO_BUS
 	select X86_FEATURE_NAMES		if PROC_FS
 	select PROC_PID_ARCH_STATUS		if PROC_FS
+	select HAVE_ARCH_KCSAN			if X86_64

 config INSTRUCTION_DECODER
 	def_bool y

diff --git a/arch/x86/boot/Makefile b/arch/x86/boot/Makefile
index e2839b5c246c..2f9e928acae6 100644
--- a/arch/x86/boot/Makefile
+++ b/arch/x86/boot/Makefile
@@ -10,6 +10,7 @@
 #

 KASAN_SANITIZE			:= n
+KCSAN_SANITIZE			:= n
 OBJECT_FILES_NON_STANDARD	:= y

 # Kernel does not boot with kcov instrumentation here.

diff --git a/arch/x86/boot/compressed/Makefile b/arch/x86/boot/compressed/Makefile
index 6b84afdd7538..0921689f7c70 100644
--- a/arch/x86/boot/compressed/Makefile
+++ b/arch/x86/boot/compressed/Makefile
@@ -18,6 +18,7 @@
 #	compressed vmlinux.bin.all + u32 size of vmlinux.bin.all

 KASAN_SANITIZE			:= n
+KCSAN_SANITIZE			:= n
 OBJECT_FILES_NON_STANDARD	:= y

 # Prevents link failures: __sanitizer_cov_trace_pc() is not linked in.

diff --git a/arch/x86/entry/vdso/Makefile b/arch/x86/entry/vdso/Makefile
index 0f2154106d01..d2cd34d2ac4e 100644
--- a/arch/x86/entry/vdso/Makefile
+++ b/arch/x86/entry/vdso/Makefile
@@ -12,6 +12,7 @@ include $(srctree)/lib/vdso/Makefile
 KBUILD_CFLAGS += $(DISABLE_LTO)
 KASAN_SANITIZE			:= n
 UBSAN_SANITIZE			:= n
+KCSAN_SANITIZE			:= n
 OBJECT_FILES_NON_STANDARD	:= y

 # Prevents link failures: __sanitizer_cov_trace_pc() is not linked in.
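The bitops.h hunk below depends on __no_kcsan_or_inline, introduced by the infrastructure
patch earlier in this series. A sketch of its definition there, reproduced for context
(treat the exact attribute list as an assumption; the series itself is authoritative):

	#ifdef __SANITIZE_THREAD__
	/* With KCSAN enabled: do not force inlining, so the
	 * no-instrumentation attributes apply to the function body. */
	# define __no_kcsan_or_inline __no_sanitize_thread notrace __maybe_unused
	#else
	/* Without KCSAN: behaves exactly like __always_inline. */
	# define __no_kcsan_or_inline __always_inline
	#endif

Marking constant_test_bit() this way lets it run uninstrumented, so the intentionally
unsynchronized test_bit() reads that are common for status flags do not flood the
report log.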
diff --git a/arch/x86/include/asm/bitops.h b/arch/x86/include/asm/bitops.h
index 7d1f6a49bfae..a36d900960e4 100644
--- a/arch/x86/include/asm/bitops.h
+++ b/arch/x86/include/asm/bitops.h
@@ -201,7 +201,7 @@ arch_test_and_change_bit(long nr, volatile unsigned long *addr)
 	return GEN_BINARY_RMWcc(LOCK_PREFIX __ASM_SIZE(btc), *addr, c, "Ir", nr);
 }

-static __always_inline bool constant_test_bit(long nr, const volatile unsigned long *addr)
+static __no_kcsan_or_inline bool constant_test_bit(long nr, const volatile unsigned long *addr)
 {
 	return ((1UL << (nr & (BITS_PER_LONG-1))) &
 		(addr[nr >> _BITOPS_LONG_SHIFT])) != 0;

diff --git a/arch/x86/kernel/Makefile b/arch/x86/kernel/Makefile
index 3578ad248bc9..adccbbfa47e4 100644
--- a/arch/x86/kernel/Makefile
+++ b/arch/x86/kernel/Makefile
@@ -28,6 +28,12 @@ KASAN_SANITIZE_dumpstack_$(BITS).o := n
 KASAN_SANITIZE_stacktrace.o := n
 KASAN_SANITIZE_paravirt.o := n

+KCSAN_SANITIZE_head$(BITS).o := n
+KCSAN_SANITIZE_dumpstack.o := n
+KCSAN_SANITIZE_dumpstack_$(BITS).o := n
+KCSAN_SANITIZE_stacktrace.o := n
+KCSAN_SANITIZE_paravirt.o := n
+
 OBJECT_FILES_NON_STANDARD_relocate_kernel_$(BITS).o := y
 OBJECT_FILES_NON_STANDARD_test_nx.o := y
 OBJECT_FILES_NON_STANDARD_paravirt_patch.o := y

diff --git a/arch/x86/kernel/cpu/Makefile b/arch/x86/kernel/cpu/Makefile
index d7a1e5a9331c..7651c4f37e5e 100644
--- a/arch/x86/kernel/cpu/Makefile
+++ b/arch/x86/kernel/cpu/Makefile
@@ -3,6 +3,9 @@
 # Makefile for x86-compatible CPU details, features and quirks
 #

+KCSAN_SANITIZE_common.o = n
+KCSAN_SANITIZE_perf_event.o = n
+
 # Don't trace early stages of a secondary CPU boot
 ifdef CONFIG_FUNCTION_TRACER
 CFLAGS_REMOVE_common.o = -pg

diff --git a/arch/x86/lib/Makefile b/arch/x86/lib/Makefile
index 5246db42de45..4e4b74f525f2 100644
--- a/arch/x86/lib/Makefile
+++ b/arch/x86/lib/Makefile
@@ -5,11 +5,13 @@

 # Produces uninteresting flaky coverage.
 KCOV_INSTRUMENT_delay.o	:= n
+KCSAN_SANITIZE_delay.o		:= n

 # Early boot use of cmdline; don't instrument it
 ifdef CONFIG_AMD_MEM_ENCRYPT
 KCOV_INSTRUMENT_cmdline.o	:= n
 KASAN_SANITIZE_cmdline.o	:= n
+KCSAN_SANITIZE_cmdline.o	:= n

 ifdef CONFIG_FUNCTION_TRACER
 CFLAGS_REMOVE_cmdline.o		= -pg

diff --git a/arch/x86/mm/Makefile b/arch/x86/mm/Makefile
index 84373dc9b341..ee871602f96a 100644
--- a/arch/x86/mm/Makefile
+++ b/arch/x86/mm/Makefile
@@ -7,6 +7,9 @@ KCOV_INSTRUMENT_mem_encrypt_identity.o	:= n
 KASAN_SANITIZE_mem_encrypt.o		:= n
 KASAN_SANITIZE_mem_encrypt_identity.o	:= n

+KCSAN_SANITIZE_mem_encrypt.o		:= n
+KCSAN_SANITIZE_mem_encrypt_identity.o	:= n
+
 ifdef CONFIG_FUNCTION_TRACER
 CFLAGS_REMOVE_mem_encrypt.o		= -pg
 CFLAGS_REMOVE_mem_encrypt_identity.o	= -pg

diff --git a/arch/x86/purgatory/Makefile b/arch/x86/purgatory/Makefile
index fb4ee5444379..72060744f34f 100644
--- a/arch/x86/purgatory/Makefile
+++ b/arch/x86/purgatory/Makefile
@@ -18,6 +18,7 @@ LDFLAGS_purgatory.ro := -e purgatory_start -r --no-undefined -nostdlib -z nodefa
 targets += purgatory.ro

 KASAN_SANITIZE	:= n
+KCSAN_SANITIZE	:= n
 KCOV_INSTRUMENT := n

 # These are adjustments to the compiler flags used for objects that

diff --git a/arch/x86/realmode/Makefile b/arch/x86/realmode/Makefile
index 682c895753d9..4fc7ce2534dd 100644
--- a/arch/x86/realmode/Makefile
+++ b/arch/x86/realmode/Makefile
@@ -7,6 +7,7 @@
 #
 #
 KASAN_SANITIZE			:= n
+KCSAN_SANITIZE			:= n
 OBJECT_FILES_NON_STANDARD	:= y

 subdir- := rm

diff --git a/arch/x86/realmode/rm/Makefile b/arch/x86/realmode/rm/Makefile
index f60501a384f9..6f7fbe9dfda6 100644
--- a/arch/x86/realmode/rm/Makefile
+++ b/arch/x86/realmode/rm/Makefile
@@ -7,6 +7,7 @@
 #
 #
 KASAN_SANITIZE			:= n
+KCSAN_SANITIZE			:= n
 OBJECT_FILES_NON_STANDARD	:= y

 # Prevents link failures: __sanitizer_cov_trace_pc() is not linked in.

diff --git a/drivers/firmware/efi/libstub/Makefile b/drivers/firmware/efi/libstub/Makefile
index 0460c7581220..a56981286623 100644
--- a/drivers/firmware/efi/libstub/Makefile
+++ b/drivers/firmware/efi/libstub/Makefile
@@ -32,6 +32,7 @@ KBUILD_CFLAGS			:= $(cflags-y) -DDISABLE_BRANCH_PROFILING \

 GCOV_PROFILE			:= n
 KASAN_SANITIZE			:= n
+KCSAN_SANITIZE			:= n
 UBSAN_SANITIZE			:= n
 OBJECT_FILES_NON_STANDARD	:= y