From patchwork Tue Oct 5 10:58:56 2021
X-Patchwork-Submitter: Marco Elver
X-Patchwork-Id: 12536183
Date: Tue, 5 Oct 2021 12:58:56 +0200
From: Marco Elver <elver@google.com>
To: elver@google.com, "Paul E. McKenney"
Cc: Alexander Potapenko, Boqun Feng, Borislav Petkov, Dmitry Vyukov,
	Ingo Molnar, Josh Poimboeuf, Mark Rutland, Peter Zijlstra,
	Thomas Gleixner, Waiman Long, Will Deacon, kasan-dev@googlegroups.com,
	linux-arch@vger.kernel.org, linux-doc@vger.kernel.org,
	linux-kbuild@vger.kernel.org, linux-kernel@vger.kernel.org,
	linux-mm@kvack.org, x86@kernel.org
Subject: [PATCH -rcu/kcsan 14/23] locking/barriers, kcsan: Add instrumentation for barriers
Message-Id: <20211005105905.1994700-15-elver@google.com>
In-Reply-To: <20211005105905.1994700-1-elver@google.com>
References: <20211005105905.1994700-1-elver@google.com>
X-Mailer: git-send-email 2.33.0.800.g4c38ced690-goog

Adds the required KCSAN instrumentation for barriers if CONFIG_SMP.
KCSAN supports modeling the effects of:

	smp_mb()
	smp_rmb()
	smp_wmb()
	smp_store_release()

Signed-off-by: Marco Elver <elver@google.com>
---
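
Not part of the patch proper: a quick illustration of what this
instrumentation buys. With CONFIG_KCSAN_WEAK_MEMORY, KCSAN simulates
reordering of plain accesses; once the barriers below call kcsan_mb()
and friends, that simulated reordering is constrained exactly where
real reordering is. The snippet is a made-up message-passing example;
'data', 'ready', producer() and consumer() are invented names:

	static int data;
	static int ready;

	static void producer(void)
	{
		data = 42;		/* plain store */
		smp_wmb();		/* now also calls kcsan_wmb() */
		WRITE_ONCE(ready, 1);
	}

	static void consumer(void)
	{
		while (!READ_ONCE(ready))
			cpu_relax();
		smp_rmb();		/* now also calls kcsan_rmb() */
		BUG_ON(data != 42);	/* ordered by the barrier pair */
	}

If either barrier were dropped, KCSAN's modeling could reorder the
plain accesses to 'data' and report the resulting data race; before
this patch, the barriers were simply invisible to KCSAN.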
 include/asm-generic/barrier.h | 29 +++++++++++++++--------------
 include/linux/spinlock.h      |  2 +-
 2 files changed, 16 insertions(+), 15 deletions(-)

diff --git a/include/asm-generic/barrier.h b/include/asm-generic/barrier.h
index 640f09479bdf..27a9c9edfef6 100644
--- a/include/asm-generic/barrier.h
+++ b/include/asm-generic/barrier.h
@@ -14,6 +14,7 @@
 #ifndef __ASSEMBLY__
 
 #include <linux/compiler.h>
+#include <linux/kcsan-checks.h>
 #include <asm/rwonce.h>
 
 #ifndef nop
@@ -62,15 +63,15 @@
 #ifdef CONFIG_SMP
 
 #ifndef smp_mb
-#define smp_mb()	__smp_mb()
+#define smp_mb()	do { kcsan_mb(); __smp_mb(); } while (0)
 #endif
 
 #ifndef smp_rmb
-#define smp_rmb()	__smp_rmb()
+#define smp_rmb()	do { kcsan_rmb(); __smp_rmb(); } while (0)
 #endif
 
 #ifndef smp_wmb
-#define smp_wmb()	__smp_wmb()
+#define smp_wmb()	do { kcsan_wmb(); __smp_wmb(); } while (0)
 #endif
 
 #else	/* !CONFIG_SMP */
@@ -123,19 +124,19 @@ do {								\
 #ifdef CONFIG_SMP
 
 #ifndef smp_store_mb
-#define smp_store_mb(var, value)  __smp_store_mb(var, value)
+#define smp_store_mb(var, value) do { kcsan_mb(); __smp_store_mb(var, value); } while (0)
 #endif
 
 #ifndef smp_mb__before_atomic
-#define smp_mb__before_atomic()	__smp_mb__before_atomic()
+#define smp_mb__before_atomic() do { kcsan_mb(); __smp_mb__before_atomic(); } while (0)
 #endif
 
 #ifndef smp_mb__after_atomic
-#define smp_mb__after_atomic()	__smp_mb__after_atomic()
+#define smp_mb__after_atomic() do { kcsan_mb(); __smp_mb__after_atomic(); } while (0)
 #endif
 
 #ifndef smp_store_release
-#define smp_store_release(p, v) __smp_store_release(p, v)
+#define smp_store_release(p, v) do { kcsan_release(); __smp_store_release(p, v); } while (0)
 #endif
 
 #ifndef smp_load_acquire
@@ -178,13 +179,13 @@ do {								\
 #endif	/* CONFIG_SMP */
 
 /* Barriers for virtual machine guests when talking to an SMP host */
-#define virt_mb() __smp_mb()
-#define virt_rmb() __smp_rmb()
-#define virt_wmb() __smp_wmb()
-#define virt_store_mb(var, value) __smp_store_mb(var, value)
-#define virt_mb__before_atomic() __smp_mb__before_atomic()
-#define virt_mb__after_atomic() __smp_mb__after_atomic()
-#define virt_store_release(p, v) __smp_store_release(p, v)
+#define virt_mb() do { kcsan_mb(); __smp_mb(); } while (0)
+#define virt_rmb() do { kcsan_rmb(); __smp_rmb(); } while (0)
+#define virt_wmb() do { kcsan_wmb(); __smp_wmb(); } while (0)
+#define virt_store_mb(var, value) do { kcsan_mb(); __smp_store_mb(var, value); } while (0)
+#define virt_mb__before_atomic() do { kcsan_mb(); __smp_mb__before_atomic(); } while (0)
+#define virt_mb__after_atomic() do { kcsan_mb(); __smp_mb__after_atomic(); } while (0)
+#define virt_store_release(p, v) do { kcsan_release(); __smp_store_release(p, v); } while (0)
 #define virt_load_acquire(p) __smp_load_acquire(p)
 
 /**
diff --git a/include/linux/spinlock.h b/include/linux/spinlock.h
index 45310ea1b1d7..f6d69808b929 100644
--- a/include/linux/spinlock.h
+++ b/include/linux/spinlock.h
@@ -172,7 +172,7 @@ do {								\
  * Architectures that can implement ACQUIRE better need to take care.
  */
 #ifndef smp_mb__after_spinlock
-#define smp_mb__after_spinlock()	do { } while (0)
+#define smp_mb__after_spinlock()	kcsan_mb()
 #endif
 
 #ifdef CONFIG_DEBUG_SPINLOCK
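
One note on the spinlock.h hunk, again illustration rather than part
of the patch: on architectures whose spin_lock() already implies full
ordering, smp_mb__after_spinlock() emits no instructions, which is why
the generic fallback was an empty do-while. It must still expand to
kcsan_mb(), though, so that KCSAN models the full barrier callers are
entitled to rely on. A sketch with invented names ('wakeup_lock',
'need_wakeup', waker(), and the hypothetical helper do_wakeup()):

	static DEFINE_SPINLOCK(wakeup_lock);
	static int need_wakeup;

	static void waker(void)
	{
		spin_lock(&wakeup_lock);
		/*
		 * Promote the lock ACQUIRE to a full barrier. At
		 * runtime this may be a no-op; for KCSAN it now
		 * expands to kcsan_mb(), matching the guarantee.
		 */
		smp_mb__after_spinlock();
		if (READ_ONCE(need_wakeup))
			do_wakeup();	/* hypothetical helper */
		spin_unlock(&wakeup_lock);
	}

Without the kcsan_mb() expansion, KCSAN would not know the full
ordering applies here and could report false positives on accesses
the barrier in fact orders.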