From patchwork Thu Feb 6 18:09:55 2025
X-Patchwork-Submitter: Marco Elver
X-Patchwork-Id: 13963480
Date: Thu, 6 Feb 2025 19:09:55 +0100
In-Reply-To: <20250206181711.1902989-1-elver@google.com>
References: <20250206181711.1902989-1-elver@google.com>
Message-ID: <20250206181711.1902989-2-elver@google.com>
Subject: [PATCH RFC 01/24] compiler_types: Move lock checking attributes to
 compiler-capability-analysis.h
From: Marco Elver
To: elver@google.com
Cc: "Paul E. McKenney", Alexander Potapenko, Bart Van Assche, Bill Wendling,
 Boqun Feng, Dmitry Vyukov, Frederic Weisbecker, Greg Kroah-Hartman,
 Ingo Molnar, Jann Horn, Joel Fernandes, Jonathan Corbet, Josh Triplett,
 Justin Stitt, Kees Cook, Mark Rutland, Mathieu Desnoyers, Miguel Ojeda,
 Nathan Chancellor, Neeraj Upadhyay, Nick Desaulniers, Peter Zijlstra,
 Steven Rostedt, Thomas Gleixner, Uladzislau Rezki, Waiman Long,
 Will Deacon, kasan-dev@googlegroups.com, linux-kernel@vger.kernel.org,
 llvm@lists.linux.dev, rcu@vger.kernel.org, linux-crypto@vger.kernel.org

The conditional definition of lock checking macros and attributes is
about to become more complex. Factor them out into their own header for
better readability, and to make it obvious which features are supported
by which mode (currently only Sparse). This is the first step towards
generalizing the infrastructure to "capability analysis".

No functional change intended.

Signed-off-by: Marco Elver
---
 include/linux/compiler-capability-analysis.h | 32 ++++++++++++++++++++
 include/linux/compiler_types.h               | 18 ++---------
 2 files changed, 34 insertions(+), 16 deletions(-)
 create mode 100644 include/linux/compiler-capability-analysis.h

diff --git a/include/linux/compiler-capability-analysis.h b/include/linux/compiler-capability-analysis.h
new file mode 100644
index 000000000000..7546ddb83f86
--- /dev/null
+++ b/include/linux/compiler-capability-analysis.h
@@ -0,0 +1,32 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Macros and attributes for compiler-based static capability analysis.
+ */
+
+#ifndef _LINUX_COMPILER_CAPABILITY_ANALYSIS_H
+#define _LINUX_COMPILER_CAPABILITY_ANALYSIS_H
+
+#ifdef __CHECKER__
+
+/* Sparse context/lock checking support. */
+# define __must_hold(x)		__attribute__((context(x,1,1)))
+# define __acquires(x)		__attribute__((context(x,0,1)))
+# define __cond_acquires(x)	__attribute__((context(x,0,-1)))
+# define __releases(x)		__attribute__((context(x,1,0)))
+# define __acquire(x)		__context__(x,1)
+# define __release(x)		__context__(x,-1)
+# define __cond_lock(x, c)	((c) ? ({ __acquire(x); 1; }) : 0)
+
+#else /* !__CHECKER__ */
+
+# define __must_hold(x)
+# define __acquires(x)
+# define __cond_acquires(x)
+# define __releases(x)
+# define __acquire(x)		(void)0
+# define __release(x)		(void)0
+# define __cond_lock(x, c)	(c)
+
+#endif /* __CHECKER__ */
+
+#endif /* _LINUX_COMPILER_CAPABILITY_ANALYSIS_H */
diff --git a/include/linux/compiler_types.h b/include/linux/compiler_types.h
index 981cc3d7e3aa..4a458e41293c 100644
--- a/include/linux/compiler_types.h
+++ b/include/linux/compiler_types.h
@@ -24,6 +24,8 @@
 # define BTF_TYPE_TAG(value) /* nothing */
 #endif

+#include <linux/compiler-capability-analysis.h>
+
 /* sparse defines __CHECKER__; see Documentation/dev-tools/sparse.rst */
 #ifdef __CHECKER__
 /* address spaces */
@@ -34,14 +36,6 @@
 # define __rcu		__attribute__((noderef, address_space(__rcu)))
 static inline void __chk_user_ptr(const volatile void __user *ptr) { }
 static inline void __chk_io_ptr(const volatile void __iomem *ptr) { }
-/* context/locking */
-# define __must_hold(x)	__attribute__((context(x,1,1)))
-# define __acquires(x)	__attribute__((context(x,0,1)))
-# define __cond_acquires(x)	__attribute__((context(x,0,-1)))
-# define __releases(x)	__attribute__((context(x,1,0)))
-# define __acquire(x)	__context__(x,1)
-# define __release(x)	__context__(x,-1)
-# define __cond_lock(x,c)	((c) ? ({ __acquire(x); 1; }) : 0)
 /* other */
 # define __force	__attribute__((force))
 # define __nocast	__attribute__((nocast))
@@ -62,14 +56,6 @@ static inline void __chk_io_ptr(const volatile void __iomem *ptr) { }
 # define __chk_user_ptr(x) (void)0
 # define __chk_io_ptr(x) (void)0
-/* context/locking */
-# define __must_hold(x)
-# define __acquires(x)
-# define __cond_acquires(x)
-# define __releases(x)
-# define __acquire(x)	(void)0
-# define __release(x)	(void)0
-# define __cond_lock(x,c) (c)
 /* other */
 # define __force
 # define __nocast
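
[ Illustration (not part of the series): the Sparse annotations moved here
  are typically applied to a locking API roughly as in the following sketch;
  the my_mutex type and its functions are hypothetical:

    struct my_mutex { int locked; };

    static void my_lock(struct my_mutex *m) __acquires(m)
    {
            __acquire(m);           /* account for the entered context */
            m->locked = 1;
    }

    static void my_unlock(struct my_mutex *m) __releases(m)
    {
            m->locked = 0;
            __release(m);           /* account for the exited context */
    }

    /* Sparse warns if a caller does not hold the context for 'm'. */
    static void my_locked_op(struct my_mutex *m) __must_hold(m)
    {
    }
]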
From patchwork Thu Feb 6 18:09:56 2025
X-Patchwork-Submitter: Marco Elver
X-Patchwork-Id: 13963481
Date: Thu, 6 Feb 2025 19:09:56 +0100
In-Reply-To: <20250206181711.1902989-1-elver@google.com>
References: <20250206181711.1902989-1-elver@google.com>
Message-ID: <20250206181711.1902989-3-elver@google.com>
Subject: [PATCH RFC 02/24] compiler-capability-analysis: Rename __cond_lock()
 to __cond_acquire()
From: Marco Elver
To: elver@google.com
Cc: "Paul E. McKenney", Alexander Potapenko, Bart Van Assche, Bill Wendling,
 Boqun Feng, Dmitry Vyukov, Frederic Weisbecker, Greg Kroah-Hartman,
 Ingo Molnar, Jann Horn, Joel Fernandes, Jonathan Corbet, Josh Triplett,
 Justin Stitt, Kees Cook, Mark Rutland, Mathieu Desnoyers, Miguel Ojeda,
 Nathan Chancellor, Neeraj Upadhyay, Nick Desaulniers, Peter Zijlstra,
 Steven Rostedt, Thomas Gleixner, Uladzislau Rezki, Waiman Long,
 Will Deacon, kasan-dev@googlegroups.com, linux-kernel@vger.kernel.org,
 llvm@lists.linux.dev, rcu@vger.kernel.org, linux-crypto@vger.kernel.org
McKenney" , Alexander Potapenko , Bart Van Assche , Bill Wendling , Boqun Feng , Dmitry Vyukov , Frederic Weisbecker , Greg Kroah-Hartman , Ingo Molnar , Jann Horn , Joel Fernandes , Jonathan Corbet , Josh Triplett , Justin Stitt , Kees Cook , Mark Rutland , Mathieu Desnoyers , Miguel Ojeda , Nathan Chancellor , Neeraj Upadhyay , Nick Desaulniers , Peter Zijlstra , Steven Rostedt , Thomas Gleixner , Uladzislau Rezki , Waiman Long , Will Deacon , kasan-dev@googlegroups.com, linux-kernel@vger.kernel.org, llvm@lists.linux.dev, rcu@vger.kernel.org, linux-crypto@vger.kernel.org Just like the pairing of attribute __acquires() with a matching function-like macro __acquire(), the attribute __cond_acquires() should have a matching function-like macro __cond_acquire(). To be consistent, rename __cond_lock() to __cond_acquire(). Signed-off-by: Marco Elver --- drivers/net/wireless/intel/iwlwifi/iwl-trans.h | 2 +- drivers/net/wireless/intel/iwlwifi/pcie/internal.h | 2 +- include/linux/compiler-capability-analysis.h | 4 ++-- include/linux/mm.h | 6 +++--- include/linux/rwlock.h | 4 ++-- include/linux/rwlock_rt.h | 4 ++-- include/linux/sched/signal.h | 2 +- include/linux/spinlock.h | 12 ++++++------ include/linux/spinlock_rt.h | 6 +++--- kernel/time/posix-timers.c | 2 +- tools/include/linux/compiler_types.h | 4 ++-- 11 files changed, 24 insertions(+), 24 deletions(-) diff --git a/drivers/net/wireless/intel/iwlwifi/iwl-trans.h b/drivers/net/wireless/intel/iwlwifi/iwl-trans.h index f6234065dbdd..560a5a899d1f 100644 --- a/drivers/net/wireless/intel/iwlwifi/iwl-trans.h +++ b/drivers/net/wireless/intel/iwlwifi/iwl-trans.h @@ -1136,7 +1136,7 @@ void iwl_trans_set_bits_mask(struct iwl_trans *trans, u32 reg, bool _iwl_trans_grab_nic_access(struct iwl_trans *trans); #define iwl_trans_grab_nic_access(trans) \ - __cond_lock(nic_access, \ + __cond_acquire(nic_access, \ likely(_iwl_trans_grab_nic_access(trans))) void __releases(nic_access) diff --git a/drivers/net/wireless/intel/iwlwifi/pcie/internal.h b/drivers/net/wireless/intel/iwlwifi/pcie/internal.h index 856b7e9f717d..a1becf833dc5 100644 --- a/drivers/net/wireless/intel/iwlwifi/pcie/internal.h +++ b/drivers/net/wireless/intel/iwlwifi/pcie/internal.h @@ -560,7 +560,7 @@ void iwl_trans_pcie_free_pnvm_dram_regions(struct iwl_dram_regions *dram_regions bool __iwl_trans_pcie_grab_nic_access(struct iwl_trans *trans); #define _iwl_trans_pcie_grab_nic_access(trans) \ - __cond_lock(nic_access_nobh, \ + __cond_acquire(nic_access_nobh, \ likely(__iwl_trans_pcie_grab_nic_access(trans))) void iwl_trans_pcie_check_product_reset_status(struct pci_dev *pdev); diff --git a/include/linux/compiler-capability-analysis.h b/include/linux/compiler-capability-analysis.h index 7546ddb83f86..dfed4e7e6ab8 100644 --- a/include/linux/compiler-capability-analysis.h +++ b/include/linux/compiler-capability-analysis.h @@ -15,7 +15,7 @@ # define __releases(x) __attribute__((context(x,1,0))) # define __acquire(x) __context__(x,1) # define __release(x) __context__(x,-1) -# define __cond_lock(x, c) ((c) ? ({ __acquire(x); 1; }) : 0) +# define __cond_acquire(x, c) ((c) ? 
({ __acquire(x); 1; }) : 0) #else /* !__CHECKER__ */ @@ -25,7 +25,7 @@ # define __releases(x) # define __acquire(x) (void)0 # define __release(x) (void)0 -# define __cond_lock(x, c) (c) +# define __cond_acquire(x, c) (c) #endif /* __CHECKER__ */ diff --git a/include/linux/mm.h b/include/linux/mm.h index 7b1068ddcbb7..a2365f4d6826 100644 --- a/include/linux/mm.h +++ b/include/linux/mm.h @@ -2738,7 +2738,7 @@ static inline pte_t *get_locked_pte(struct mm_struct *mm, unsigned long addr, spinlock_t **ptl) { pte_t *ptep; - __cond_lock(*ptl, ptep = __get_locked_pte(mm, addr, ptl)); + __cond_acquire(*ptl, ptep = __get_locked_pte(mm, addr, ptl)); return ptep; } @@ -3029,7 +3029,7 @@ static inline pte_t *__pte_offset_map(pmd_t *pmd, unsigned long addr, { pte_t *pte; - __cond_lock(RCU, pte = ___pte_offset_map(pmd, addr, pmdvalp)); + __cond_acquire(RCU, pte = ___pte_offset_map(pmd, addr, pmdvalp)); return pte; } static inline pte_t *pte_offset_map(pmd_t *pmd, unsigned long addr) @@ -3044,7 +3044,7 @@ static inline pte_t *pte_offset_map_lock(struct mm_struct *mm, pmd_t *pmd, { pte_t *pte; - __cond_lock(RCU, __cond_lock(*ptlp, + __cond_acquire(RCU, __cond_acquire(*ptlp, pte = __pte_offset_map_lock(mm, pmd, addr, ptlp))); return pte; } diff --git a/include/linux/rwlock.h b/include/linux/rwlock.h index 5b87c6f4a243..58c346947aa2 100644 --- a/include/linux/rwlock.h +++ b/include/linux/rwlock.h @@ -49,8 +49,8 @@ do { \ * regardless of whether CONFIG_SMP or CONFIG_PREEMPT are set. The various * methods are defined as nops in the case they are not required. */ -#define read_trylock(lock) __cond_lock(lock, _raw_read_trylock(lock)) -#define write_trylock(lock) __cond_lock(lock, _raw_write_trylock(lock)) +#define read_trylock(lock) __cond_acquire(lock, _raw_read_trylock(lock)) +#define write_trylock(lock) __cond_acquire(lock, _raw_write_trylock(lock)) #define write_lock(lock) _raw_write_lock(lock) #define read_lock(lock) _raw_read_lock(lock) diff --git a/include/linux/rwlock_rt.h b/include/linux/rwlock_rt.h index 7d81fc6918ee..5320b4b66405 100644 --- a/include/linux/rwlock_rt.h +++ b/include/linux/rwlock_rt.h @@ -55,7 +55,7 @@ static __always_inline void read_lock_irq(rwlock_t *rwlock) flags = 0; \ } while (0) -#define read_trylock(lock) __cond_lock(lock, rt_read_trylock(lock)) +#define read_trylock(lock) __cond_acquire(lock, rt_read_trylock(lock)) static __always_inline void read_unlock(rwlock_t *rwlock) { @@ -111,7 +111,7 @@ static __always_inline void write_lock_irq(rwlock_t *rwlock) flags = 0; \ } while (0) -#define write_trylock(lock) __cond_lock(lock, rt_write_trylock(lock)) +#define write_trylock(lock) __cond_acquire(lock, rt_write_trylock(lock)) #define write_trylock_irqsave(lock, flags) \ ({ \ diff --git a/include/linux/sched/signal.h b/include/linux/sched/signal.h index d5d03d919df8..3304cce4b1bf 100644 --- a/include/linux/sched/signal.h +++ b/include/linux/sched/signal.h @@ -741,7 +741,7 @@ static inline struct sighand_struct *lock_task_sighand(struct task_struct *task, struct sighand_struct *ret; ret = __lock_task_sighand(task, flags); - (void)__cond_lock(&task->sighand->siglock, ret); + (void)__cond_acquire(&task->sighand->siglock, ret); return ret; } diff --git a/include/linux/spinlock.h b/include/linux/spinlock.h index 63dd8cf3c3c2..678e6f0679a1 100644 --- a/include/linux/spinlock.h +++ b/include/linux/spinlock.h @@ -212,7 +212,7 @@ static inline void do_raw_spin_unlock(raw_spinlock_t *lock) __releases(lock) * various methods are defined as nops in the case they are not * required. 
*/ -#define raw_spin_trylock(lock) __cond_lock(lock, _raw_spin_trylock(lock)) +#define raw_spin_trylock(lock) __cond_acquire(lock, _raw_spin_trylock(lock)) #define raw_spin_lock(lock) _raw_spin_lock(lock) @@ -284,7 +284,7 @@ static inline void do_raw_spin_unlock(raw_spinlock_t *lock) __releases(lock) #define raw_spin_unlock_bh(lock) _raw_spin_unlock_bh(lock) #define raw_spin_trylock_bh(lock) \ - __cond_lock(lock, _raw_spin_trylock_bh(lock)) + __cond_acquire(lock, _raw_spin_trylock_bh(lock)) #define raw_spin_trylock_irq(lock) \ ({ \ @@ -499,21 +499,21 @@ static inline int rwlock_needbreak(rwlock_t *lock) */ extern int _atomic_dec_and_lock(atomic_t *atomic, spinlock_t *lock); #define atomic_dec_and_lock(atomic, lock) \ - __cond_lock(lock, _atomic_dec_and_lock(atomic, lock)) + __cond_acquire(lock, _atomic_dec_and_lock(atomic, lock)) extern int _atomic_dec_and_lock_irqsave(atomic_t *atomic, spinlock_t *lock, unsigned long *flags); #define atomic_dec_and_lock_irqsave(atomic, lock, flags) \ - __cond_lock(lock, _atomic_dec_and_lock_irqsave(atomic, lock, &(flags))) + __cond_acquire(lock, _atomic_dec_and_lock_irqsave(atomic, lock, &(flags))) extern int _atomic_dec_and_raw_lock(atomic_t *atomic, raw_spinlock_t *lock); #define atomic_dec_and_raw_lock(atomic, lock) \ - __cond_lock(lock, _atomic_dec_and_raw_lock(atomic, lock)) + __cond_acquire(lock, _atomic_dec_and_raw_lock(atomic, lock)) extern int _atomic_dec_and_raw_lock_irqsave(atomic_t *atomic, raw_spinlock_t *lock, unsigned long *flags); #define atomic_dec_and_raw_lock_irqsave(atomic, lock, flags) \ - __cond_lock(lock, _atomic_dec_and_raw_lock_irqsave(atomic, lock, &(flags))) + __cond_acquire(lock, _atomic_dec_and_raw_lock_irqsave(atomic, lock, &(flags))) int __alloc_bucket_spinlocks(spinlock_t **locks, unsigned int *lock_mask, size_t max_size, unsigned int cpu_mult, diff --git a/include/linux/spinlock_rt.h b/include/linux/spinlock_rt.h index f6499c37157d..eaad4dd2baac 100644 --- a/include/linux/spinlock_rt.h +++ b/include/linux/spinlock_rt.h @@ -123,13 +123,13 @@ static __always_inline void spin_unlock_irqrestore(spinlock_t *lock, } #define spin_trylock(lock) \ - __cond_lock(lock, rt_spin_trylock(lock)) + __cond_acquire(lock, rt_spin_trylock(lock)) #define spin_trylock_bh(lock) \ - __cond_lock(lock, rt_spin_trylock_bh(lock)) + __cond_acquire(lock, rt_spin_trylock_bh(lock)) #define spin_trylock_irq(lock) \ - __cond_lock(lock, rt_spin_trylock(lock)) + __cond_acquire(lock, rt_spin_trylock(lock)) #define spin_trylock_irqsave(lock, flags) \ ({ \ diff --git a/kernel/time/posix-timers.c b/kernel/time/posix-timers.c index 1b675aee99a9..dbada41c10ad 100644 --- a/kernel/time/posix-timers.c +++ b/kernel/time/posix-timers.c @@ -63,7 +63,7 @@ static struct k_itimer *__lock_timer(timer_t timer_id, unsigned long *flags); #define lock_timer(tid, flags) \ ({ struct k_itimer *__timr; \ - __cond_lock(&__timr->it_lock, __timr = __lock_timer(tid, flags)); \ + __cond_acquire(&__timr->it_lock, __timr = __lock_timer(tid, flags)); \ __timr; \ }) diff --git a/tools/include/linux/compiler_types.h b/tools/include/linux/compiler_types.h index d09f9dc172a4..b1db30e510d0 100644 --- a/tools/include/linux/compiler_types.h +++ b/tools/include/linux/compiler_types.h @@ -20,7 +20,7 @@ # define __releases(x) __attribute__((context(x,1,0))) # define __acquire(x) __context__(x,1) # define __release(x) __context__(x,-1) -# define __cond_lock(x,c) ((c) ? ({ __acquire(x); 1; }) : 0) +# define __cond_acquire(x,c) ((c) ? 
({ __acquire(x); 1; }) : 0) #else /* __CHECKER__ */ /* context/locking */ # define __must_hold(x) @@ -28,7 +28,7 @@ # define __releases(x) # define __acquire(x) (void)0 # define __release(x) (void)0 -# define __cond_lock(x,c) (c) +# define __cond_acquire(x,c) (c) #endif /* __CHECKER__ */ /* Compiler specific macros. */ From patchwork Thu Feb 6 18:09:57 2025 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Marco Elver X-Patchwork-Id: 13963482 Received: from mail-ej1-f73.google.com (mail-ej1-f73.google.com [209.85.218.73]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id DB3331AAA0D for ; Thu, 6 Feb 2025 18:17:54 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=209.85.218.73 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1738865878; cv=none; b=pH38mKPQwQJa5tto9XrdmCuEux72XKgU1T/EsH7SRuPNaEky7zbUpaWCpJnRs6W/7GOLaX9rAXLq7a65xnBwgEuPMhEUz4ZlK0sliElePUl7xYbh0Wd5t3WVdObQJP1vc3pMzyocdl9aSqPFYjWaKM0/fr5+yd1s8pBvjGsCc8s= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1738865878; c=relaxed/simple; bh=yRN4Gj4MvxGTov8s/QOCv5uvlVtRE0CbXf/8otlwQng=; h=Date:In-Reply-To:Mime-Version:References:Message-ID:Subject:From: To:Cc:Content-Type; b=i7930bF3iQPISNjOruWcDPLO/pPqoTpnOuBDQdaMnzdHH1dSh8oWeUAtKve2TfjgzgJl6b+tkwsr1gdGtrBX4RsugZ77i8gFEB8Pr0ZvF9V5FLTIGRb3h8M+4Q9Z1uSMq1GpF+Ggd/LKJEiR17gGBVFC5gKDCzQ63xWP3vpbr74= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=reject dis=none) header.from=google.com; spf=pass smtp.mailfrom=flex--elver.bounces.google.com; dkim=pass (2048-bit key) header.d=google.com header.i=@google.com header.b=gx2Fwhyf; arc=none smtp.client-ip=209.85.218.73 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=reject dis=none) header.from=google.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=flex--elver.bounces.google.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=google.com header.i=@google.com header.b="gx2Fwhyf" Received: by mail-ej1-f73.google.com with SMTP id a640c23a62f3a-ab397fff5a3so136798566b.1 for ; Thu, 06 Feb 2025 10:17:54 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20230601; t=1738865873; x=1739470673; darn=vger.kernel.org; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:from:to:cc:subject:date:message-id:reply-to; bh=i/6NHOCJ0j6Hx0AS5oQYzqhaSxqaXPhlyrR7hNMYhdY=; b=gx2FwhyfFt0kG5d9ezrfN0DEXUcQqs1MdEB8/PFrl6xAbFLyczy5ZCDpmNUElSJQU1 DdbwR1InlZlanZrZAWavT9JA/Y/3VHixiVq8BnBHA+FEJPPOdSbDa3cxqUDPOcqflDsz ekd9T3EYwTZepqn20zf39MBaPMCnVAlz2PIV1ysEyVAMjM4NaM0vOwXUbrqlktmEiuaF cNLnobQbDxyRluXqw7pTJsue3nt0Q6BWuJ7ATQ/KEpejMbLBXPvb5Owdo0cfH8kb9VOC 9BkOvsE8go7mknZmtI3tl16vu+CO8Kxu6uXQLKdUZUc57lbsdKRxn/lsrg76vsQ69Zk3 p5Uw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1738865873; x=1739470673; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:x-gm-message-state:from:to:cc:subject:date:message-id:reply-to; bh=i/6NHOCJ0j6Hx0AS5oQYzqhaSxqaXPhlyrR7hNMYhdY=; b=YY+oB8sYdpBcVcajJ4UtyvEEDDuqxj39XZhcAO2nTosPNyZ0SV2FYY66oCru3bxAGQ 117ct6rssj9gy43C36vLukpQzjiZ3vgNcTGbjkVftUpkJkvfg9ERsEqLWb9RsZAaMnP4 kp5mGEuc4pcGknrsbDhPP4olBfpdGetnCoreQxKn5fkt5KO3pTfqT0frV+qj61OZOaoG 
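
[ Illustration (not part of the series): the renamed macro keeps the
  established trylock wrapper pattern working; sketch with a hypothetical
  _my_trylock()/my_unlock() API:

    struct my_mutex;
    bool _my_trylock(struct my_mutex *m);
    void my_unlock(struct my_mutex *m) __releases(m);

    #define my_trylock(m) __cond_acquire(m, _my_trylock(m))

    static void my_work(struct my_mutex *m)
    {
            if (my_trylock(m)) {
                    /* context for 'm' held here; must be released */
                    my_unlock(m);
            }
            /* context for 'm' not held here */
    }
]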
From patchwork Thu Feb 6 18:09:57 2025
X-Patchwork-Submitter: Marco Elver
X-Patchwork-Id: 13963482
Date: Thu, 6 Feb 2025 19:09:57 +0100
In-Reply-To: <20250206181711.1902989-1-elver@google.com>
References: <20250206181711.1902989-1-elver@google.com>
Message-ID: <20250206181711.1902989-4-elver@google.com>
Subject: [PATCH RFC 03/24] compiler-capability-analysis: Add infrastructure
 for Clang's capability analysis
From: Marco Elver
To: elver@google.com
Cc: "Paul E. McKenney", Alexander Potapenko, Bart Van Assche, Bill Wendling,
 Boqun Feng, Dmitry Vyukov, Frederic Weisbecker, Greg Kroah-Hartman,
 Ingo Molnar, Jann Horn, Joel Fernandes, Jonathan Corbet, Josh Triplett,
 Justin Stitt, Kees Cook, Mark Rutland, Mathieu Desnoyers, Miguel Ojeda,
 Nathan Chancellor, Neeraj Upadhyay, Nick Desaulniers, Peter Zijlstra,
 Steven Rostedt, Thomas Gleixner, Uladzislau Rezki, Waiman Long,
 Will Deacon, kasan-dev@googlegroups.com, linux-kernel@vger.kernel.org,
 llvm@lists.linux.dev, rcu@vger.kernel.org, linux-crypto@vger.kernel.org

Capability analysis is a C language extension, which enables statically
checking that user-definable "capabilities" are acquired and released
where required. An obvious application is lock-safety checking for the
kernel's various synchronization primitives (each of which represents a
"capability"), and checking that locking rules are not violated.

Clang originally called the feature "Thread Safety Analysis" [1], with
some terminology still using the thread-safety-analysis-only names. This
was later changed and the feature became more flexible, gaining the
ability to define custom "capabilities". Its foundations can be found in
"capability systems" [2], used to specify the permissibility of
operations to depend on some capability being held (or not held).

[1] https://clang.llvm.org/docs/ThreadSafetyAnalysis.html
[2] https://www.cs.cornell.edu/talc/papers/capabilities.pdf

Because the feature is not just able to express capabilities related to
synchronization primitives, the naming chosen for the kernel departs
from Clang's initial "Thread Safety" nomenclature and refers to the
feature as "Capability Analysis" to avoid confusion. The implementation
still makes references to the older terminology in some places, such as
`-Wthread-safety` being the warning-enabling option, which also still
appears in diagnostic messages.

See more details in the kernel-doc documentation added in this and the
subsequent changes.
[ RFC Note: A Clang version that supports -Wthread-safety-addressof is
  recommended, but not required:
  https://github.com/llvm/llvm-project/pull/123063
  Should this patch series reach non-RFC stage, it is planned to be
  committed to Clang before. ]

Signed-off-by: Marco Elver
---
 Makefile                                     |   1 +
 include/linux/compiler-capability-analysis.h | 385 ++++++++++++++++++-
 lib/Kconfig.debug                            |  29 ++
 scripts/Makefile.capability-analysis         |   5 +
 scripts/Makefile.lib                         |  10 +
 5 files changed, 423 insertions(+), 7 deletions(-)
 create mode 100644 scripts/Makefile.capability-analysis

diff --git a/Makefile b/Makefile
index 9e0d63d9d94b..e89b9f7d4a08 100644
--- a/Makefile
+++ b/Makefile
@@ -1082,6 +1082,7 @@ include-$(CONFIG_KCOV)		+= scripts/Makefile.kcov
 include-$(CONFIG_RANDSTRUCT)	+= scripts/Makefile.randstruct
 include-$(CONFIG_AUTOFDO_CLANG)	+= scripts/Makefile.autofdo
 include-$(CONFIG_PROPELLER_CLANG)	+= scripts/Makefile.propeller
+include-$(CONFIG_WARN_CAPABILITY_ANALYSIS) += scripts/Makefile.capability-analysis
 include-$(CONFIG_GCC_PLUGINS)	+= scripts/Makefile.gcc-plugins

 include $(addprefix $(srctree)/, $(include-y))
diff --git a/include/linux/compiler-capability-analysis.h b/include/linux/compiler-capability-analysis.h
index dfed4e7e6ab8..ca63b6513dc3 100644
--- a/include/linux/compiler-capability-analysis.h
+++ b/include/linux/compiler-capability-analysis.h
@@ -6,26 +6,397 @@
 #ifndef _LINUX_COMPILER_CAPABILITY_ANALYSIS_H
 #define _LINUX_COMPILER_CAPABILITY_ANALYSIS_H

+#if defined(WARN_CAPABILITY_ANALYSIS)
+
+/*
+ * The below attributes are used to define new capability types. Internal only.
+ */
+# define __cap_type(name)			__attribute__((capability(#name)))
+# define __acquires_cap(var)			__attribute__((acquire_capability(var)))
+# define __acquires_shared_cap(var)		__attribute__((acquire_shared_capability(var)))
+# define __try_acquires_cap(ret, var)		__attribute__((try_acquire_capability(ret, var)))
+# define __try_acquires_shared_cap(ret, var)	__attribute__((try_acquire_shared_capability(ret, var)))
+# define __releases_cap(var)			__attribute__((release_capability(var)))
+# define __releases_shared_cap(var)		__attribute__((release_shared_capability(var)))
+# define __asserts_cap(var)			__attribute__((assert_capability(var)))
+# define __asserts_shared_cap(var)		__attribute__((assert_shared_capability(var)))
+# define __returns_cap(var)			__attribute__((lock_returned(var)))
+
+/*
+ * The below are used to annotate code being checked. Internal only.
+ */
+# define __excludes_cap(var)			__attribute__((locks_excluded(var)))
+# define __requires_cap(var)			__attribute__((requires_capability(var)))
+# define __requires_shared_cap(var)		__attribute__((requires_shared_capability(var)))
+
+/**
+ * __var_guarded_by - struct member and globals attribute, declares variable
+ *                    protected by capability
+ * @var: the capability instance that guards the member or global
+ *
+ * Declares that the struct member or global variable must be guarded by the
+ * given capability @var. Read operations on the data require shared access,
+ * while write operations require exclusive access.
+ *
+ * .. code-block:: c
+ *
+ *	struct some_state {
+ *		spinlock_t lock;
+ *		long counter __var_guarded_by(&lock);
+ *	};
+ */
+# define __var_guarded_by(var)	__attribute__((guarded_by(var)))
+
+/**
+ * __ref_guarded_by - struct member and globals attribute, declares pointed-to
+ *                    data is protected by capability
+ * @var: the capability instance that guards the member or global
+ *
+ * Declares that the data pointed to by the struct member pointer or global
+ * pointer must be guarded by the given capability @var. Read operations on
+ * the data require shared access, while write operations require exclusive
+ * access.
+ *
+ * .. code-block:: c
+ *
+ *	struct some_state {
+ *		spinlock_t lock;
+ *		long *counter __ref_guarded_by(&lock);
+ *	};
+ */
+# define __ref_guarded_by(var)	__attribute__((pt_guarded_by(var)))
+
+/**
+ * struct_with_capability() - declare or define a capability struct
+ * @name: struct name
+ *
+ * Helper to declare or define a struct type with capability of the same name.
+ *
+ * .. code-block:: c
+ *
+ *	struct_with_capability(my_handle) {
+ *		int foo;
+ *		long bar;
+ *	};
+ *
+ *	struct some_state {
+ *		...
+ *	};
+ *	// ... declared elsewhere ...
+ *	struct_with_capability(some_state);
+ *
+ * Note: The implementation defines several helper functions that can acquire,
+ * release, and assert the capability.
+ */
+# define struct_with_capability(name)						\
+	struct __cap_type(name) name;						\
+	static __always_inline void __acquire_cap(const struct name *var)	\
+		__attribute__((overloadable)) __no_capability_analysis __acquires_cap(var) { } \
+	static __always_inline void __acquire_shared_cap(const struct name *var) \
+		__attribute__((overloadable)) __no_capability_analysis __acquires_shared_cap(var) { } \
+	static __always_inline bool __try_acquire_cap(const struct name *var, bool ret) \
+		__attribute__((overloadable)) __no_capability_analysis __try_acquires_cap(1, var) \
+	{ return ret; }								\
+	static __always_inline bool __try_acquire_shared_cap(const struct name *var, bool ret) \
+		__attribute__((overloadable)) __no_capability_analysis __try_acquires_shared_cap(1, var) \
+	{ return ret; }								\
+	static __always_inline void __release_cap(const struct name *var)	\
+		__attribute__((overloadable)) __no_capability_analysis __releases_cap(var) { } \
+	static __always_inline void __release_shared_cap(const struct name *var) \
+		__attribute__((overloadable)) __no_capability_analysis __releases_shared_cap(var) { } \
+	static __always_inline void __assert_cap(const struct name *var)	\
+		__attribute__((overloadable)) __asserts_cap(var) { }		\
+	static __always_inline void __assert_shared_cap(const struct name *var) \
+		__attribute__((overloadable)) __asserts_shared_cap(var) { }	\
+	struct name
+
+/**
+ * disable_capability_analysis() - disables capability analysis
+ *
+ * Disables capability analysis. Must be paired with a later
+ * enable_capability_analysis().
+ */
+# define disable_capability_analysis()				\
+	__diag_push();						\
+	__diag_ignore_all("-Wunknown-warning-option", "")	\
+	__diag_ignore_all("-Wthread-safety", "")		\
+	__diag_ignore_all("-Wthread-safety-addressof", "")
+
+/**
+ * enable_capability_analysis() - re-enables capability analysis
+ *
+ * Re-enables capability analysis. Must be paired with a prior
+ * disable_capability_analysis().
+ */
+# define enable_capability_analysis() __diag_pop()
+
+/**
+ * __no_capability_analysis - function attribute, disables capability analysis
+ *
+ * Function attribute denoting that capability analysis is disabled for the
+ * whole function. Prefer use of `capability_unsafe()` where possible.
+ */
+# define __no_capability_analysis __attribute__((no_thread_safety_analysis))
+
+#else /* !WARN_CAPABILITY_ANALYSIS */
+
+# define __cap_type(name)
+# define __acquires_cap(var)
+# define __acquires_shared_cap(var)
+# define __try_acquires_cap(ret, var)
+# define __try_acquires_shared_cap(ret, var)
+# define __releases_cap(var)
+# define __releases_shared_cap(var)
+# define __asserts_cap(var)
+# define __asserts_shared_cap(var)
+# define __returns_cap(var)
+# define __var_guarded_by(var)
+# define __ref_guarded_by(var)
+# define __excludes_cap(var)
+# define __requires_cap(var)
+# define __requires_shared_cap(var)
+# define __acquire_cap(var)			do { } while (0)
+# define __acquire_shared_cap(var)		do { } while (0)
+# define __try_acquire_cap(var, ret)		(ret)
+# define __try_acquire_shared_cap(var, ret)	(ret)
+# define __release_cap(var)			do { } while (0)
+# define __release_shared_cap(var)		do { } while (0)
+# define __assert_cap(var)			do { (void)(var); } while (0)
+# define __assert_shared_cap(var)		do { (void)(var); } while (0)
+# define struct_with_capability(name)		struct name
+# define disable_capability_analysis()
+# define enable_capability_analysis()
+# define __no_capability_analysis
+
+#endif /* WARN_CAPABILITY_ANALYSIS */
+
+/**
+ * capability_unsafe() - disable capability checking for contained code
+ *
+ * Disables capability checking for contained statements or expression.
+ *
+ * .. code-block:: c
+ *
+ *	struct some_data {
+ *		spinlock_t lock;
+ *		int counter __var_guarded_by(&lock);
+ *	};
+ *
+ *	int foo(struct some_data *d)
+ *	{
+ *		// ...
+ *		// other code that is still checked ...
+ *		// ...
+ *		return capability_unsafe(d->counter);
+ *	}
+ */
+#define capability_unsafe(...)			\
+({						\
+	disable_capability_analysis();		\
+	__VA_ARGS__;				\
+	enable_capability_analysis()		\
+})
+
+/**
+ * token_capability() - declare an abstract global capability instance
+ * @name: token capability name
+ *
+ * Helper that declares an abstract global capability instance @name that can
+ * be used as a token capability, but is not backed by a real data structure
+ * (linker error if accidentally referenced). The type name is
+ * `__capability_@name`.
+ */
+#define token_capability(name)					\
+	struct_with_capability(__capability_##name) {};		\
+	extern const struct __capability_##name *name
+
+/**
+ * token_capability_instance() - declare another instance of a global capability
+ * @cap: token capability previously declared with token_capability()
+ * @name: name of additional global capability instance
+ *
+ * Helper that declares an additional instance @name of the same token
+ * capability class @cap. This is helpful where multiple related token
+ * capabilities are declared, as it also allows using the same underlying type
+ * (`__capability_@cap`) as function arguments.
+ */
+#define token_capability_instance(cap, name)			\
+	extern const struct __capability_##cap *name
+
+/*
+ * Common keywords for static capability analysis. Both Clang's capability
+ * analysis and Sparse's context tracking are currently supported.
+ */
 #ifdef __CHECKER__

 /* Sparse context/lock checking support. */
 # define __must_hold(x)		__attribute__((context(x,1,1)))
+# define __must_not_hold(x)
 # define __acquires(x)		__attribute__((context(x,0,1)))
 # define __cond_acquires(x)	__attribute__((context(x,0,-1)))
 # define __releases(x)		__attribute__((context(x,1,0)))
 # define __acquire(x)		__context__(x,1)
 # define __release(x)		__context__(x,-1)
 # define __cond_acquire(x, c)	((c) ? ({ __acquire(x); 1; }) : 0)
+/* For Sparse, there's no distinction between exclusive and shared locks. */
+# define __must_hold_shared	__must_hold
+# define __acquires_shared	__acquires
+# define __cond_acquires_shared	__cond_acquires
+# define __releases_shared	__releases
+# define __acquire_shared	__acquire
+# define __release_shared	__release
+# define __cond_acquire_shared	__cond_acquire

 #else /* !__CHECKER__ */

-# define __must_hold(x)
-# define __acquires(x)
-# define __cond_acquires(x)
-# define __releases(x)
-# define __acquire(x)		(void)0
-# define __release(x)		(void)0
-# define __cond_acquire(x, c)	(c)
+/**
+ * __must_hold() - function attribute, caller must hold exclusive capability
+ * @x: capability instance pointer
+ *
+ * Function attribute declaring that the caller must hold the given capability
+ * instance @x exclusively.
+ */
+# define __must_hold(x)		__requires_cap(x)
+
+/**
+ * __must_not_hold() - function attribute, caller must not hold capability
+ * @x: capability instance pointer
+ *
+ * Function attribute declaring that the caller must not hold the given
+ * capability instance @x.
+ */
+# define __must_not_hold(x)	__excludes_cap(x)
+
+/**
+ * __acquires() - function attribute, function acquires capability exclusively
+ * @x: capability instance pointer
+ *
+ * Function attribute declaring that the function acquires the given
+ * capability instance @x exclusively, but does not release it.
+ */
+# define __acquires(x)		__acquires_cap(x)
+
+/**
+ * __cond_acquires() - function attribute, function conditionally
+ *                     acquires a capability exclusively
+ * @x: capability instance pointer
+ *
+ * Function attribute declaring that the function conditionally acquires the
+ * given capability instance @x exclusively, but does not release it.
+ */
+# define __cond_acquires(x)	__try_acquires_cap(1, x)
+
+/**
+ * __releases() - function attribute, function releases a capability exclusively
+ * @x: capability instance pointer
+ *
+ * Function attribute declaring that the function releases the given capability
+ * instance @x exclusively. The capability must be held on entry.
+ */
+# define __releases(x)		__releases_cap(x)
+
+/**
+ * __acquire() - function to acquire capability exclusively
+ * @x: capability instance pointer
+ *
+ * No-op function that acquires the given capability instance @x exclusively.
+ */
+# define __acquire(x)		__acquire_cap(x)
+
+/**
+ * __release() - function to release capability exclusively
+ * @x: capability instance pointer
+ *
+ * No-op function that releases the given capability instance @x.
+ */
+# define __release(x)		__release_cap(x)
+
+/**
+ * __cond_acquire() - function that conditionally acquires a capability
+ *                    exclusively
+ * @x: capability instance pointer
+ * @c: boolean expression
+ *
+ * Return: result of @c
+ *
+ * No-op function that conditionally acquires capability instance @x
+ * exclusively, if the boolean expression @c is true. The result of @c is the
+ * return value, to be able to create a capability-enabled interface; for
+ * example:
+ *
+ * .. code-block:: c
+ *
+ *	#define spin_trylock(l) __cond_acquire(l, _spin_trylock(l))
+ */
+# define __cond_acquire(x, c)	__try_acquire_cap(x, c)
+
+/**
+ * __must_hold_shared() - function attribute, caller must hold shared capability
+ * @x: capability instance pointer
+ *
+ * Function attribute declaring that the caller must hold the given capability
+ * instance @x with shared access.
+ */
+# define __must_hold_shared(x)	__requires_shared_cap(x)
+
+/**
+ * __acquires_shared() - function attribute, function acquires capability shared
+ * @x: capability instance pointer
+ *
+ * Function attribute declaring that the function acquires the given
+ * capability instance @x with shared access, but does not release it.
+ */
+# define __acquires_shared(x)	__acquires_shared_cap(x)
+
+/**
+ * __cond_acquires_shared() - function attribute, function conditionally
+ *                            acquires a capability shared
+ * @x: capability instance pointer
+ *
+ * Function attribute declaring that the function conditionally acquires the
+ * given capability instance @x with shared access, but does not release it.
+ */
+# define __cond_acquires_shared(x) __try_acquires_shared_cap(1, x)
+
+/**
+ * __releases_shared() - function attribute, function releases a
+ *                       capability shared
+ * @x: capability instance pointer
+ *
+ * Function attribute declaring that the function releases the given capability
+ * instance @x with shared access. The capability must be held on entry.
+ */
+# define __releases_shared(x)	__releases_shared_cap(x)
+
+/**
+ * __acquire_shared() - function to acquire capability shared
+ * @x: capability instance pointer
+ *
+ * No-op function that acquires the given capability instance @x with shared
+ * access.
+ */
+# define __acquire_shared(x)	__acquire_shared_cap(x)
+
+/**
+ * __release_shared() - function to release capability shared
+ * @x: capability instance pointer
+ *
+ * No-op function that releases the given capability instance @x with shared
+ * access.
+ */
+# define __release_shared(x)	__release_shared_cap(x)
+
+/**
+ * __cond_acquire_shared() - function that conditionally acquires a capability
+ *                           shared
+ * @x: capability instance pointer
+ * @c: boolean expression
+ *
+ * Return: result of @c
+ *
+ * No-op function that conditionally acquires capability instance @x with
+ * shared access, if the boolean expression @c is true. The result of @c is
+ * the return value, to be able to create a capability-enabled interface.
+ */
+# define __cond_acquire_shared(x, c)	__try_acquire_shared_cap(x, c)

 #endif /* __CHECKER__ */
diff --git a/lib/Kconfig.debug b/lib/Kconfig.debug
index 1af972a92d06..801ad28fe6d7 100644
--- a/lib/Kconfig.debug
+++ b/lib/Kconfig.debug
@@ -603,6 +603,35 @@ config DEBUG_FORCE_WEAK_PER_CPU
	  To ensure that generic code follows the above rules, this option
	  forces all percpu variables to be defined as weak.

+config WARN_CAPABILITY_ANALYSIS
+	bool "Compiler capability-analysis warnings"
+	depends on CC_IS_CLANG && $(cc-option,-Wthread-safety -fexperimental-late-parse-attributes)
+	# Branch profiling re-defines "if", which messes with the compiler's
+	# ability to analyze __cond_acquire(..), resulting in false positives.
+	depends on !TRACE_BRANCH_PROFILING
+	default y
+	help
+	  Capability analysis is a C language extension, which enables
+	  statically checking that user-definable "capabilities" are acquired
+	  and released where required.
+
+	  Clang originally called the feature "Thread Safety Analysis"; it
+	  was later expanded into a generic "Capability Analysis" framework.
+
+	  Produces warnings by default. Select CONFIG_WERROR if you wish to
+	  turn these warnings into errors.
+
+config WARN_CAPABILITY_ANALYSIS_ALL
+	bool "Enable capability analysis for all source files"
+	depends on WARN_CAPABILITY_ANALYSIS
+	depends on EXPERT && !COMPILE_TEST
+	help
+	  Enable tree-wide capability analysis. This is likely to produce a
+	  large number of false positives - enable at your own risk.
+
+	  If unsure, say N.
+
 endmenu # "Compiler options"

 menu "Generic Kernel Debugging Instruments"
diff --git a/scripts/Makefile.capability-analysis b/scripts/Makefile.capability-analysis
new file mode 100644
index 000000000000..71383812201c
--- /dev/null
+++ b/scripts/Makefile.capability-analysis
@@ -0,0 +1,5 @@
+# SPDX-License-Identifier: GPL-2.0
+
+export CFLAGS_CAPABILITY_ANALYSIS := -DWARN_CAPABILITY_ANALYSIS \
+	-fexperimental-late-parse-attributes -Wthread-safety \
+	$(call cc-option,-Wthread-safety-addressof)
diff --git a/scripts/Makefile.lib b/scripts/Makefile.lib
index ad55ef201aac..5bf37af96cdf 100644
--- a/scripts/Makefile.lib
+++ b/scripts/Makefile.lib
@@ -191,6 +191,16 @@ _c_flags += $(if $(patsubst n%,, \
		 -D__KCSAN_INSTRUMENT_BARRIERS__)
 endif

+#
+# Enable capability analysis flags only where explicitly opted in.
+# (depends on variables CAPABILITY_ANALYSIS_obj.o, CAPABILITY_ANALYSIS)
+#
+ifeq ($(CONFIG_WARN_CAPABILITY_ANALYSIS),y)
+_c_flags += $(if $(patsubst n%,, \
+		$(CAPABILITY_ANALYSIS_$(target-stem).o)$(CAPABILITY_ANALYSIS)$(if $(is-kernel-object),$(CONFIG_WARN_CAPABILITY_ANALYSIS_ALL))), \
+		$(CFLAGS_CAPABILITY_ANALYSIS))
+endif
+
 #
 # Enable AutoFDO build flags except some files or directories we don't want to
 # enable (depends on variables AUTOFDO_PROFILE_obj.o and AUTOFDO_PROFILE).
From patchwork Thu Feb 6 18:09:58 2025
X-Patchwork-Submitter: Marco Elver
X-Patchwork-Id: 13963483
Date: Thu, 6 Feb 2025 19:09:58 +0100
In-Reply-To: <20250206181711.1902989-1-elver@google.com>
References: <20250206181711.1902989-1-elver@google.com>
Message-ID: <20250206181711.1902989-5-elver@google.com>
Subject: [PATCH RFC 04/24] compiler-capability-analysis: Add test stub
From: Marco Elver
To: elver@google.com
Cc: "Paul E. McKenney", Alexander Potapenko, Bart Van Assche, Bill Wendling,
 Boqun Feng, Dmitry Vyukov, Frederic Weisbecker, Greg Kroah-Hartman,
 Ingo Molnar, Jann Horn, Joel Fernandes, Jonathan Corbet, Josh Triplett,
 Justin Stitt, Kees Cook, Mark Rutland, Mathieu Desnoyers, Miguel Ojeda,
 Nathan Chancellor, Neeraj Upadhyay, Nick Desaulniers, Peter Zijlstra,
 Steven Rostedt, Thomas Gleixner, Uladzislau Rezki, Waiman Long,
 Will Deacon, kasan-dev@googlegroups.com, linux-kernel@vger.kernel.org,
 llvm@lists.linux.dev, rcu@vger.kernel.org, linux-crypto@vger.kernel.org

Add a simple test stub where we will add common supported patterns that
should not generate false positive warnings for each newly supported
capability.

Signed-off-by: Marco Elver
---
 lib/Kconfig.debug              | 14 ++++++++++++++
 lib/Makefile                   |  3 +++
 lib/test_capability-analysis.c | 18 ++++++++++++++++++
 3 files changed, 35 insertions(+)
 create mode 100644 lib/test_capability-analysis.c

diff --git a/lib/Kconfig.debug b/lib/Kconfig.debug
index 801ad28fe6d7..b76fa3dc59ec 100644
--- a/lib/Kconfig.debug
+++ b/lib/Kconfig.debug
@@ -2764,6 +2764,20 @@ config LINEAR_RANGES_TEST

	  If unsure, say N.

+config CAPABILITY_ANALYSIS_TEST
+	bool "Compiler capability-analysis warnings test"
+	depends on EXPERT
+	help
+	  This builds the test for compiler-based capability analysis. The
+	  test does not add executable code to the kernel, but is meant to
+	  test that common patterns supported by the analysis do not result
+	  in false positive warnings.
+
+	  When adding support for new capabilities, it is strongly recommended
+	  to add supported patterns to this test.
+
+	  If unsure, say N.
+
 config CMDLINE_KUNIT_TEST
	tristate "KUnit test for cmdline API" if !KUNIT_ALL_TESTS
	depends on KUNIT
diff --git a/lib/Makefile b/lib/Makefile
index d5cfc7afbbb8..1dbb59175eb0 100644
--- a/lib/Makefile
+++ b/lib/Makefile
@@ -394,6 +394,9 @@ obj-$(CONFIG_CRC_KUNIT_TEST) += crc_kunit.o
 obj-$(CONFIG_SIPHASH_KUNIT_TEST) += siphash_kunit.o
 obj-$(CONFIG_USERCOPY_KUNIT_TEST) += usercopy_kunit.o

+CAPABILITY_ANALYSIS_test_capability-analysis.o := y
+obj-$(CONFIG_CAPABILITY_ANALYSIS_TEST) += test_capability-analysis.o
+
 obj-$(CONFIG_GENERIC_LIB_DEVMEM_IS_ALLOWED) += devmem_is_allowed.o

 obj-$(CONFIG_FIRMWARE_TABLE) += fw_table.o
diff --git a/lib/test_capability-analysis.c b/lib/test_capability-analysis.c
new file mode 100644
index 000000000000..a0adacce30ff
--- /dev/null
+++ b/lib/test_capability-analysis.c
@@ -0,0 +1,18 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Compile-only tests for common patterns that should not generate false
+ * positive errors when compiled with Clang's capability analysis.
+ */
+
+#include <linux/compiler-capability-analysis.h>
+
+/*
+ * Test that helper macros work as expected.
+ */
+static void __used test_common_helpers(void)
+{
+	BUILD_BUG_ON(capability_unsafe(3) != 3);		/* plain expression */
+	BUILD_BUG_ON(capability_unsafe((void)2; 3;) != 3);	/* does not swallow semi-colons */
+	BUILD_BUG_ON(capability_unsafe((void)2, 3) != 3);	/* does not swallow commas */
+	capability_unsafe(do { } while (0));			/* works with void statements */
+}
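
[ Illustration (not part of this patch): later patches populate the stub;
  a compile-only pattern of the kind that gets added per capability might
  look roughly like the following sketch, using the struct_with_capability()
  helpers from the previous patch with a hypothetical test_cap type:

    struct_with_capability(test_cap) { };
    static struct test_cap test_obj;

    static void __used test_acquire_release(void)
    {
            __acquire_cap(&test_obj);
            __release_cap(&test_obj);
    }
]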
From patchwork Thu Feb 6 18:09:59 2025
X-Patchwork-Submitter: Marco Elver <elver@google.com>
X-Patchwork-Id: 13963484
Date: Thu, 6 Feb 2025 19:09:59 +0100
In-Reply-To: <20250206181711.1902989-1-elver@google.com>
References: <20250206181711.1902989-1-elver@google.com>
Message-ID: <20250206181711.1902989-6-elver@google.com>
Subject: [PATCH RFC 05/24] Documentation: Add documentation for Compiler-Based Capability Analysis
From: Marco Elver <elver@google.com>
McKenney" , Alexander Potapenko , Bart Van Assche , Bill Wendling , Boqun Feng , Dmitry Vyukov , Frederic Weisbecker , Greg Kroah-Hartman , Ingo Molnar , Jann Horn , Joel Fernandes , Jonathan Corbet , Josh Triplett , Justin Stitt , Kees Cook , Mark Rutland , Mathieu Desnoyers , Miguel Ojeda , Nathan Chancellor , Neeraj Upadhyay , Nick Desaulniers , Peter Zijlstra , Steven Rostedt , Thomas Gleixner , Uladzislau Rezki , Waiman Long , Will Deacon , kasan-dev@googlegroups.com, linux-kernel@vger.kernel.org, llvm@lists.linux.dev, rcu@vger.kernel.org, linux-crypto@vger.kernel.org Adds documentation in Documentation/dev-tools/capability-analysis.rst, and adds it to the index and cross-references from Sparse's document. Signed-off-by: Marco Elver --- .../dev-tools/capability-analysis.rst | 147 ++++++++++++++++++ Documentation/dev-tools/index.rst | 1 + Documentation/dev-tools/sparse.rst | 4 + 3 files changed, 152 insertions(+) create mode 100644 Documentation/dev-tools/capability-analysis.rst diff --git a/Documentation/dev-tools/capability-analysis.rst b/Documentation/dev-tools/capability-analysis.rst new file mode 100644 index 000000000000..2211af90e01b --- /dev/null +++ b/Documentation/dev-tools/capability-analysis.rst @@ -0,0 +1,147 @@ +.. SPDX-License-Identifier: GPL-2.0 +.. Copyright (C) 2025, Google LLC. + +.. _capability-analysis: + +Compiler-Based Capability Analysis +================================== + +Capability analysis is a C language extension, which enables statically +checking that user-definable "capabilities" are acquired and released where +required. An obvious application is lock-safety checking for the kernel's +various synchronization primitives (each of which represents a "capability"), +and checking that locking rules are not violated. + +The Clang compiler currently supports the full set of capability analysis +features. To enable for Clang, configure the kernel with:: + + CONFIG_WARN_CAPABILITY_ANALYSIS=y + +The analysis is *opt-in by default*, and requires declaring which modules and +subsystems should be analyzed in the respective `Makefile`:: + + CAPABILITY_ANALYSIS_mymodule.o := y + +Or for all translation units in the directory:: + + CAPABILITY_ANALYSIS := y + +It is possible to enable the analysis tree-wide, however, which will result in +numerous false positive warnings currently and is *not* generally recommended:: + + CONFIG_WARN_CAPABILITY_ANALYSIS_ALL=y + +Independent of the above Clang support, a subset of the analysis is supported +by :ref:`Sparse `, with weaker guarantees (fewer false positives with +tree-wide analysis, more more false negatives). Compared to Sparse, Clang's +analysis is more complete. + +Programming Model +----------------- + +The below describes the programming model around using capability-enabled +types. + +.. note:: + Enabling capability analysis can be seen as enabling a dialect of Linux C with + a Capability System. Some valid patterns involving complex control-flow are + constrained (such as conditional acquisition and later conditional release + in the same function, or returning pointers to capabilities from functions. + +Capability analysis is a way to specify permissibility of operations to depend +on capabilities being held (or not held). Typically we are interested in +protecting data and code by requiring some capability to be held, for example a +specific lock. The analysis ensures that the caller cannot perform the +operation without holding the appropriate capability. 
+
+Capabilities are associated with named structs, along with functions that
+operate on capability-enabled struct instances to acquire and release the
+associated capability.
+
+Capabilities can be held either exclusively or shared. This mechanism allows
+assigning more precise privileges when holding a capability, typically to
+distinguish whether a thread may only read (shared) or also write (exclusive)
+to guarded data.
+
+The set of capabilities that are actually held by a given thread at a given
+point in program execution is a run-time concept. The static analysis works by
+calculating an approximation of that set, called the capability environment.
+The capability environment is calculated for every program point, and describes
+the set of capabilities that are statically known to be held, or not held, at
+that particular point. This environment is a conservative approximation of the
+full set of capabilities that will actually be held by a thread at run-time.
+
+More details are also documented `here
+<https://clang.llvm.org/docs/ThreadSafetyAnalysis.html>`_.
+
+.. note::
+   Unlike Sparse's context tracking analysis, Clang's analysis explicitly does
+   not infer capabilities acquired or released by inline functions. It
+   requires explicit annotations to (a) assert that it's not a bug if a
+   capability is released or acquired, and (b) retain consistency between
+   inline and non-inline function declarations.
+
+Supported Kernel Primitives
+~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+.. Currently the following synchronization primitives are supported:
+
+For capabilities with an initialization function (e.g., `spin_lock_init()`),
+calling this function on the capability instance before initializing any
+guarded members or globals prevents the compiler from issuing warnings about
+unguarded initialization.
+
+Lockdep assertions, such as `lockdep_assert_held()`, inform the compiler's
+capability analysis that the associated synchronization primitive is held
+after the assertion. This avoids false positives in complex control-flow
+scenarios and encourages the use of Lockdep where static analysis is limited.
+For example, this is useful when a function doesn't *always* require a lock,
+making `__must_hold()` inappropriate.
+
+Keywords
+~~~~~~~~
+
+.. kernel-doc:: include/linux/compiler-capability-analysis.h
+   :identifiers: struct_with_capability
+                 token_capability token_capability_instance
+                 __var_guarded_by __ref_guarded_by
+                 __must_hold
+                 __must_not_hold
+                 __acquires
+                 __cond_acquires
+                 __releases
+                 __must_hold_shared
+                 __acquires_shared
+                 __cond_acquires_shared
+                 __releases_shared
+                 __acquire
+                 __release
+                 __cond_acquire
+                 __acquire_shared
+                 __release_shared
+                 __cond_acquire_shared
+                 capability_unsafe
+                 __no_capability_analysis
+                 disable_capability_analysis enable_capability_analysis
+
+Background
+----------
+
+Clang originally called the feature `Thread Safety Analysis
+<https://clang.llvm.org/docs/ThreadSafetyAnalysis.html>`_, with some
+terminology still using the thread-safety-analysis-only names. This was later
+changed and the feature became more flexible, gaining the ability to define
+custom "capabilities".
+
+Indeed, its foundations can be found in `capability systems
+<https://en.wikipedia.org/wiki/Capability-based_security>`_, used to specify
+the permissibility of operations to depend on some capability being held (or
+not held).
+
+Because the feature is not just able to express capabilities related to
+synchronization primitives, the naming chosen for the kernel departs from
+Clang's initial "Thread Safety" nomenclature and refers to the feature as
+"Capability Analysis" to avoid confusion.
+The implementation still makes references to the older terminology in some
+places, such as `-Wthread-safety` being the option that enables the warnings,
+which also still appears in diagnostic messages.

diff --git a/Documentation/dev-tools/index.rst b/Documentation/dev-tools/index.rst
index 65c54b27a60b..62ac23f797cd 100644
--- a/Documentation/dev-tools/index.rst
+++ b/Documentation/dev-tools/index.rst
@@ -18,6 +18,7 @@ Documentation/process/debugging/index.rst
    :maxdepth: 2
 
    testing-overview
+   capability-analysis
    checkpatch
    clang-format
    coccinelle

diff --git a/Documentation/dev-tools/sparse.rst b/Documentation/dev-tools/sparse.rst
index dc791c8d84d1..8c2077834b6f 100644
--- a/Documentation/dev-tools/sparse.rst
+++ b/Documentation/dev-tools/sparse.rst
@@ -2,6 +2,8 @@
 .. Copyright 2004 Pavel Machek
 .. Copyright 2006 Bob Copeland
 
+.. _sparse:
+
 Sparse
 ======
 
@@ -72,6 +74,8 @@ releasing the lock inside the function in a balanced way, no annotation is
 needed. The three annotations above are for cases where sparse would otherwise
 report a context imbalance.
 
+Also see :ref:`Compiler-Based Capability Analysis <capability-analysis>`.
+
 Getting sparse
 --------------
From patchwork Thu Feb 6 18:10:00 2025
X-Patchwork-Submitter: Marco Elver <elver@google.com>
X-Patchwork-Id: 13963485
Date: Thu, 6 Feb 2025 19:10:00 +0100
In-Reply-To: <20250206181711.1902989-1-elver@google.com>
References: <20250206181711.1902989-1-elver@google.com>
Message-ID: <20250206181711.1902989-7-elver@google.com>
Subject: [PATCH RFC 06/24] checkpatch: Warn about capability_unsafe() without comment
From: Marco Elver <elver@google.com>

Warn about uses of capability_unsafe() that lack an accompanying comment, to
encourage documenting the reasoning behind why the operation was deemed safe.

Signed-off-by: Marco Elver <elver@google.com>
---
 scripts/checkpatch.pl | 8 ++++++++
 1 file changed, 8 insertions(+)

diff --git a/scripts/checkpatch.pl b/scripts/checkpatch.pl
index 7b28ad331742..c28efdb1d404 100755
--- a/scripts/checkpatch.pl
+++ b/scripts/checkpatch.pl
@@ -6693,6 +6693,14 @@ sub process {
 		}
 	}
 
+# check for capability_unsafe without a comment.
+		if ($line =~ /\bcapability_unsafe\b/) {
+			if (!ctx_has_comment($first_line, $linenr)) {
+				WARN("CAPABILITY_UNSAFE",
+				     "capability_unsafe without comment\n" .
+				     $herecurr);
+			}
+		}
+
 # check of hardware specific defines
 		if ($line =~ m@^.\s*\#\s*if.*\b(__i386__|__powerpc64__|__sun__|__s390x__)\b@ && $realfile !~ m@include/asm-@) {
 			CHK("ARCH_DEFINES",
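To illustrate what the new check expects (a hypothetical use, not taken from
the kernel tree), any capability_unsafe() use should carry a justification
comment immediately next to it:

	/* Safe: object is not yet published, no concurrent access possible. */
	capability_unsafe(obj->counter = 0);

An uncommented use of capability_unsafe() now triggers the CAPABILITY_UNSAFE
checkpatch warning.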
From patchwork Thu Feb 6 18:10:01 2025
X-Patchwork-Submitter: Marco Elver <elver@google.com>
X-Patchwork-Id: 13963486
Date: Thu, 6 Feb 2025 19:10:01 +0100
In-Reply-To: <20250206181711.1902989-1-elver@google.com>
References: <20250206181711.1902989-1-elver@google.com>
Message-ID: <20250206181711.1902989-8-elver@google.com>
Subject: [PATCH RFC 07/24] cleanup: Basic compatibility with capability analysis
From: Marco Elver <elver@google.com>

The scoped cleanup helpers used for lock guards wrap acquire/release in their
own constructors/destructors, which store pointers to the passed locks in a
separate struct. As a result, we currently cannot accurately annotate the
*destructors* with which lock was released. While it's possible to annotate
the constructor to say which lock was acquired, that alone would result in
false positives claiming the lock was not released on function return.

Instead, to avoid false positives, we can claim that the constructor "asserts"
that the taken lock is held. This ensures we can still benefit from the
analysis where scoped guards are used to protect access to guarded variables,
while avoiding false positives. The only downside is false negatives where we
might accidentally lock the same lock again:

	raw_spin_lock(&my_lock);
	...
	guard(raw_spinlock)(&my_lock);	// no warning

Arguably, lockdep will immediately catch issues like this.

While Clang's analysis supports scoped guards in C++ [1], there's no way to
apply this to C right now. Better support for Linux's scoped guard design
could be added in future if deemed critical.

[1] https://clang.llvm.org/docs/ThreadSafetyAnalysis.html#scoped-capability

Signed-off-by: Marco Elver <elver@google.com>
---
 include/linux/cleanup.h | 14 ++++++++++----
 1 file changed, 10 insertions(+), 4 deletions(-)

diff --git a/include/linux/cleanup.h b/include/linux/cleanup.h
index ec00e3f7af2b..93a166549add 100644
--- a/include/linux/cleanup.h
+++ b/include/linux/cleanup.h
@@ -223,7 +223,7 @@ const volatile void * __must_check_fn(const volatile void *val)
 * @exit is an expression using '_T' -- similar to FREE above.
 * @init is an expression in @init_args resulting in @type
 *
- * EXTEND_CLASS(name, ext, init, init_args...):
+ * EXTEND_CLASS(name, ext, ctor_attrs, init, init_args...):
 *	extends class @name to @name@ext with the new constructor
 *
 * CLASS(name, var)(args...):
@@ -243,15 +243,18 @@ const volatile void * __must_check_fn(const volatile void *val)
 #define DEFINE_CLASS(_name, _type, _exit, _init, _init_args...)	\
 typedef _type class_##_name##_t;					\
 static inline void class_##_name##_destructor(_type *p)		\
+	__no_capability_analysis					\
 { _type _T = *p; _exit; }						\
 static inline _type class_##_name##_constructor(_init_args)		\
+	__no_capability_analysis					\
 { _type t = _init; return t; }
 
-#define EXTEND_CLASS(_name, ext, _init, _init_args...)			\
+#define EXTEND_CLASS(_name, ext, ctor_attrs, _init, _init_args...)	\
 typedef class_##_name##_t class_##_name##ext##_t;			\
 static inline void class_##_name##ext##_destructor(class_##_name##_t *p)\
 { class_##_name##_destructor(p); }					\
 static inline class_##_name##_t class_##_name##ext##_constructor(_init_args) \
+	__no_capability_analysis ctor_attrs				\
 { class_##_name##_t t = _init; return t; }
 
 #define CLASS(_name, var)						\
@@ -299,7 +302,7 @@ static __maybe_unused const bool class_##_name##_is_conditional = _is_cond
 
 #define DEFINE_GUARD_COND(_name, _ext, _condlock)			\
 	__DEFINE_CLASS_IS_CONDITIONAL(_name##_ext, true);		\
-	EXTEND_CLASS(_name, _ext,					\
+	EXTEND_CLASS(_name, _ext,,					\
 		     ({ void *_t = _T; if (_T && !(_condlock)) _t = NULL; _t; }), \
 		     class_##_name##_t _T)				\
 	static inline void * class_##_name##_ext##_lock_ptr(class_##_name##_t *_T) \
@@ -371,6 +374,7 @@ typedef struct {							\
 } class_##_name##_t;							\
 									\
 static inline void class_##_name##_destructor(class_##_name##_t *_T)	\
+	__no_capability_analysis					\
 {									\
 	if (_T->lock) { _unlock; }					\
 }									\
@@ -383,6 +387,7 @@ static inline void *class_##_name##_lock_ptr(class_##_name##_t *_T)	\
 
 #define __DEFINE_LOCK_GUARD_1(_name, _type, _lock)			\
 static inline class_##_name##_t class_##_name##_constructor(_type *l)	\
+	__no_capability_analysis __asserts_cap(l)			\
 {									\
 	class_##_name##_t _t = { .lock = l }, *_T = &_t;		\
 	_lock;								\
@@ -391,6 +396,7 @@ static inline class_##_name##_t class_##_name##_constructor(_type *l)	\
 
 #define __DEFINE_LOCK_GUARD_0(_name, _lock)				\
 static inline class_##_name##_t class_##_name##_constructor(void)	\
+	__no_capability_analysis					\
 {									\
 	class_##_name##_t _t = { .lock = (void*)1 },			\
 			  *_T __maybe_unused = &_t;			\
@@ -410,7 +416,7 @@ __DEFINE_LOCK_GUARD_0(_name, _lock)
 
 #define DEFINE_LOCK_GUARD_1_COND(_name, _ext, _condlock)		\
 	__DEFINE_CLASS_IS_CONDITIONAL(_name##_ext, true);		\
-	EXTEND_CLASS(_name, _ext,					\
+	EXTEND_CLASS(_name, _ext, __asserts_cap(l),			\
 		     ({ class_##_name##_t _t = { .lock = l }, *_T = &_t;\
			if (_T->lock && !(_condlock)) _T->lock = NULL;	\
			_t; }),						\
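As a sketch of what this buys us (illustrative; `struct foo` and its members
are hypothetical, with annotations as introduced by this series), guard-based
critical sections now satisfy the analysis for guarded members:

	struct foo {
		spinlock_t lock;
		int counter __var_guarded_by(&lock);
	};

	static void foo_inc(struct foo *f)
	{
		guard(spinlock)(&f->lock);	/* constructor asserts f->lock is held */
		f->counter++;			/* no warning: capability in scope */
	}

Accessing f->counter outside a guard or an explicit lock/unlock section would
still warn, which is exactly the benefit described above.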
From patchwork Thu Feb 6 18:10:02 2025
X-Patchwork-Submitter: Marco Elver <elver@google.com>
X-Patchwork-Id: 13963487
Date: Thu, 6 Feb 2025 19:10:02 +0100
In-Reply-To: <20250206181711.1902989-1-elver@google.com>
References: <20250206181711.1902989-1-elver@google.com>
Message-ID: <20250206181711.1902989-9-elver@google.com>
Subject: [PATCH RFC 08/24] lockdep: Annotate lockdep assertions for capability analysis
From: Marco Elver <elver@google.com>
<20250206181711.1902989-1-elver@google.com> X-Mailer: git-send-email 2.48.1.502.g6dc24dfdaf-goog Message-ID: <20250206181711.1902989-9-elver@google.com> Subject: [PATCH RFC 08/24] lockdep: Annotate lockdep assertions for capability analysis From: Marco Elver To: elver@google.com Cc: "Paul E. McKenney" , Alexander Potapenko , Bart Van Assche , Bill Wendling , Boqun Feng , Dmitry Vyukov , Frederic Weisbecker , Greg Kroah-Hartman , Ingo Molnar , Jann Horn , Joel Fernandes , Jonathan Corbet , Josh Triplett , Justin Stitt , Kees Cook , Mark Rutland , Mathieu Desnoyers , Miguel Ojeda , Nathan Chancellor , Neeraj Upadhyay , Nick Desaulniers , Peter Zijlstra , Steven Rostedt , Thomas Gleixner , Uladzislau Rezki , Waiman Long , Will Deacon , kasan-dev@googlegroups.com, linux-kernel@vger.kernel.org, llvm@lists.linux.dev, rcu@vger.kernel.org, linux-crypto@vger.kernel.org Clang's capability analysis can be made aware of functions that assert that capabilities/locks are held. Presence of these annotations causes the analysis to assume the capability is held after calls to the annotated function, and avoid false positives with complex control-flow; for example, where not all control-flow paths in a function require a held lock, and therefore marking the function with __must_hold(..) is inappropriate. Signed-off-by: Marco Elver --- include/linux/lockdep.h | 12 ++++++------ 1 file changed, 6 insertions(+), 6 deletions(-) diff --git a/include/linux/lockdep.h b/include/linux/lockdep.h index 67964dc4db95..5cea929b2219 100644 --- a/include/linux/lockdep.h +++ b/include/linux/lockdep.h @@ -282,16 +282,16 @@ extern void lock_unpin_lock(struct lockdep_map *lock, struct pin_cookie); do { WARN_ON_ONCE(debug_locks && !(cond)); } while (0) #define lockdep_assert_held(l) \ - lockdep_assert(lockdep_is_held(l) != LOCK_STATE_NOT_HELD) + do { lockdep_assert(lockdep_is_held(l) != LOCK_STATE_NOT_HELD); __assert_cap(l); } while (0) #define lockdep_assert_not_held(l) \ lockdep_assert(lockdep_is_held(l) != LOCK_STATE_HELD) #define lockdep_assert_held_write(l) \ - lockdep_assert(lockdep_is_held_type(l, 0)) + do { lockdep_assert(lockdep_is_held_type(l, 0)); __assert_cap(l); } while (0) #define lockdep_assert_held_read(l) \ - lockdep_assert(lockdep_is_held_type(l, 1)) + do { lockdep_assert(lockdep_is_held_type(l, 1)); __assert_shared_cap(l); } while (0) #define lockdep_assert_held_once(l) \ lockdep_assert_once(lockdep_is_held(l) != LOCK_STATE_NOT_HELD) @@ -389,10 +389,10 @@ extern int lockdep_is_held(const void *); #define lockdep_assert(c) do { } while (0) #define lockdep_assert_once(c) do { } while (0) -#define lockdep_assert_held(l) do { (void)(l); } while (0) +#define lockdep_assert_held(l) __assert_cap(l) #define lockdep_assert_not_held(l) do { (void)(l); } while (0) -#define lockdep_assert_held_write(l) do { (void)(l); } while (0) -#define lockdep_assert_held_read(l) do { (void)(l); } while (0) +#define lockdep_assert_held_write(l) __assert_cap(l) +#define lockdep_assert_held_read(l) __assert_shared_cap(l) #define lockdep_assert_held_once(l) do { (void)(l); } while (0) #define lockdep_assert_none_held_once() do { } while (0) From patchwork Thu Feb 6 18:10:03 2025 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Marco Elver X-Patchwork-Id: 13963488 Received: from mail-ed1-f73.google.com (mail-ed1-f73.google.com [209.85.208.73]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by smtp.subspace.kernel.org 
From patchwork Thu Feb 6 18:10:03 2025
X-Patchwork-Submitter: Marco Elver <elver@google.com>
X-Patchwork-Id: 13963488
Date: Thu, 6 Feb 2025 19:10:03 +0100
In-Reply-To: <20250206181711.1902989-1-elver@google.com>
References: <20250206181711.1902989-1-elver@google.com>
Message-ID: <20250206181711.1902989-10-elver@google.com>
Subject: [PATCH RFC 09/24] locking/rwlock, spinlock: Support Clang's capability analysis
From: Marco Elver <elver@google.com>
Add support for Clang's capability analysis for raw_spinlock_t, spinlock_t,
and rwlock_t. This wholesale conversion is required because all three of them
are interdependent.

To avoid warnings in constructors, the initialization functions mark the
capability as acquired, so that a lock can be initialized before the variables
it guards without triggering warnings.

The test verifies that common patterns do not generate false positives.

Signed-off-by: Marco Elver <elver@google.com>
---
 .../dev-tools/capability-analysis.rst |   3 +-
 include/linux/rwlock.h                |  25 ++--
 include/linux/rwlock_api_smp.h        |  29 +++-
 include/linux/rwlock_rt.h             |  35 +++--
 include/linux/rwlock_types.h          |  10 +-
 include/linux/spinlock.h              |  45 +++---
 include/linux/spinlock_api_smp.h      |  14 +-
 include/linux/spinlock_api_up.h       |  71 +++++-----
 include/linux/spinlock_rt.h           |  21 +--
 include/linux/spinlock_types.h        |  10 +-
 include/linux/spinlock_types_raw.h    |   5 +-
 lib/test_capability-analysis.c        | 128 ++++++++++++++++++
 12 files changed, 299 insertions(+), 97 deletions(-)

diff --git a/Documentation/dev-tools/capability-analysis.rst b/Documentation/dev-tools/capability-analysis.rst
index 2211af90e01b..904448605a77 100644
--- a/Documentation/dev-tools/capability-analysis.rst
+++ b/Documentation/dev-tools/capability-analysis.rst
@@ -84,7 +84,8 @@ More details are also documented `here
 Supported Kernel Primitives
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
-.. Currently the following synchronization primitives are supported:
+Currently the following synchronization primitives are supported:
+`raw_spinlock_t`, `spinlock_t`, `rwlock_t`.
For capabilities with an initialization function (e.g., `spin_lock_init()`), calling this function on the capability instance before initializing any diff --git a/include/linux/rwlock.h b/include/linux/rwlock.h index 58c346947aa2..44755fd96c27 100644 --- a/include/linux/rwlock.h +++ b/include/linux/rwlock.h @@ -22,23 +22,24 @@ do { \ static struct lock_class_key __key; \ \ __rwlock_init((lock), #lock, &__key); \ + __assert_cap(lock); \ } while (0) #else # define rwlock_init(lock) \ - do { *(lock) = __RW_LOCK_UNLOCKED(lock); } while (0) + do { *(lock) = __RW_LOCK_UNLOCKED(lock); __assert_cap(lock); } while (0) #endif #ifdef CONFIG_DEBUG_SPINLOCK - extern void do_raw_read_lock(rwlock_t *lock) __acquires(lock); + extern void do_raw_read_lock(rwlock_t *lock) __acquires_shared(lock); extern int do_raw_read_trylock(rwlock_t *lock); - extern void do_raw_read_unlock(rwlock_t *lock) __releases(lock); + extern void do_raw_read_unlock(rwlock_t *lock) __releases_shared(lock); extern void do_raw_write_lock(rwlock_t *lock) __acquires(lock); extern int do_raw_write_trylock(rwlock_t *lock); extern void do_raw_write_unlock(rwlock_t *lock) __releases(lock); #else -# define do_raw_read_lock(rwlock) do {__acquire(lock); arch_read_lock(&(rwlock)->raw_lock); } while (0) +# define do_raw_read_lock(rwlock) do {__acquire_shared(lock); arch_read_lock(&(rwlock)->raw_lock); } while (0) # define do_raw_read_trylock(rwlock) arch_read_trylock(&(rwlock)->raw_lock) -# define do_raw_read_unlock(rwlock) do {arch_read_unlock(&(rwlock)->raw_lock); __release(lock); } while (0) +# define do_raw_read_unlock(rwlock) do {arch_read_unlock(&(rwlock)->raw_lock); __release_shared(lock); } while (0) # define do_raw_write_lock(rwlock) do {__acquire(lock); arch_write_lock(&(rwlock)->raw_lock); } while (0) # define do_raw_write_trylock(rwlock) arch_write_trylock(&(rwlock)->raw_lock) # define do_raw_write_unlock(rwlock) do {arch_write_unlock(&(rwlock)->raw_lock); __release(lock); } while (0) @@ -49,7 +50,7 @@ do { \ * regardless of whether CONFIG_SMP or CONFIG_PREEMPT are set. The various * methods are defined as nops in the case they are not required. */ -#define read_trylock(lock) __cond_acquire(lock, _raw_read_trylock(lock)) +#define read_trylock(lock) __cond_acquire_shared(lock, _raw_read_trylock(lock)) #define write_trylock(lock) __cond_acquire(lock, _raw_write_trylock(lock)) #define write_lock(lock) _raw_write_lock(lock) @@ -112,12 +113,12 @@ do { \ } while (0) #define write_unlock_bh(lock) _raw_write_unlock_bh(lock) -#define write_trylock_irqsave(lock, flags) \ -({ \ - local_irq_save(flags); \ - write_trylock(lock) ? \ - 1 : ({ local_irq_restore(flags); 0; }); \ -}) +#define write_trylock_irqsave(lock, flags) \ + __cond_acquire(lock, ({ \ + local_irq_save(flags); \ + _raw_write_trylock(lock) ? \ + 1 : ({ local_irq_restore(flags); 0; }); \ + })) #ifdef arch_rwlock_is_contended #define rwlock_is_contended(lock) \ diff --git a/include/linux/rwlock_api_smp.h b/include/linux/rwlock_api_smp.h index 31d3d1116323..3e975105a606 100644 --- a/include/linux/rwlock_api_smp.h +++ b/include/linux/rwlock_api_smp.h @@ -15,12 +15,12 @@ * Released under the General Public License (GPL). 
*/ -void __lockfunc _raw_read_lock(rwlock_t *lock) __acquires(lock); +void __lockfunc _raw_read_lock(rwlock_t *lock) __acquires_shared(lock); void __lockfunc _raw_write_lock(rwlock_t *lock) __acquires(lock); void __lockfunc _raw_write_lock_nested(rwlock_t *lock, int subclass) __acquires(lock); -void __lockfunc _raw_read_lock_bh(rwlock_t *lock) __acquires(lock); +void __lockfunc _raw_read_lock_bh(rwlock_t *lock) __acquires_shared(lock); void __lockfunc _raw_write_lock_bh(rwlock_t *lock) __acquires(lock); -void __lockfunc _raw_read_lock_irq(rwlock_t *lock) __acquires(lock); +void __lockfunc _raw_read_lock_irq(rwlock_t *lock) __acquires_shared(lock); void __lockfunc _raw_write_lock_irq(rwlock_t *lock) __acquires(lock); unsigned long __lockfunc _raw_read_lock_irqsave(rwlock_t *lock) __acquires(lock); @@ -28,11 +28,11 @@ unsigned long __lockfunc _raw_write_lock_irqsave(rwlock_t *lock) __acquires(lock); int __lockfunc _raw_read_trylock(rwlock_t *lock); int __lockfunc _raw_write_trylock(rwlock_t *lock); -void __lockfunc _raw_read_unlock(rwlock_t *lock) __releases(lock); +void __lockfunc _raw_read_unlock(rwlock_t *lock) __releases_shared(lock); void __lockfunc _raw_write_unlock(rwlock_t *lock) __releases(lock); -void __lockfunc _raw_read_unlock_bh(rwlock_t *lock) __releases(lock); +void __lockfunc _raw_read_unlock_bh(rwlock_t *lock) __releases_shared(lock); void __lockfunc _raw_write_unlock_bh(rwlock_t *lock) __releases(lock); -void __lockfunc _raw_read_unlock_irq(rwlock_t *lock) __releases(lock); +void __lockfunc _raw_read_unlock_irq(rwlock_t *lock) __releases_shared(lock); void __lockfunc _raw_write_unlock_irq(rwlock_t *lock) __releases(lock); void __lockfunc _raw_read_unlock_irqrestore(rwlock_t *lock, unsigned long flags) @@ -145,6 +145,7 @@ static inline int __raw_write_trylock(rwlock_t *lock) #if !defined(CONFIG_GENERIC_LOCKBREAK) || defined(CONFIG_DEBUG_LOCK_ALLOC) static inline void __raw_read_lock(rwlock_t *lock) + __acquires_shared(lock) __no_capability_analysis { preempt_disable(); rwlock_acquire_read(&lock->dep_map, 0, 0, _RET_IP_); @@ -152,6 +153,7 @@ static inline void __raw_read_lock(rwlock_t *lock) } static inline unsigned long __raw_read_lock_irqsave(rwlock_t *lock) + __acquires_shared(lock) __no_capability_analysis { unsigned long flags; @@ -163,6 +165,7 @@ static inline unsigned long __raw_read_lock_irqsave(rwlock_t *lock) } static inline void __raw_read_lock_irq(rwlock_t *lock) + __acquires_shared(lock) __no_capability_analysis { local_irq_disable(); preempt_disable(); @@ -171,6 +174,7 @@ static inline void __raw_read_lock_irq(rwlock_t *lock) } static inline void __raw_read_lock_bh(rwlock_t *lock) + __acquires_shared(lock) __no_capability_analysis { __local_bh_disable_ip(_RET_IP_, SOFTIRQ_LOCK_OFFSET); rwlock_acquire_read(&lock->dep_map, 0, 0, _RET_IP_); @@ -178,6 +182,7 @@ static inline void __raw_read_lock_bh(rwlock_t *lock) } static inline unsigned long __raw_write_lock_irqsave(rwlock_t *lock) + __acquires(lock) __no_capability_analysis { unsigned long flags; @@ -189,6 +194,7 @@ static inline unsigned long __raw_write_lock_irqsave(rwlock_t *lock) } static inline void __raw_write_lock_irq(rwlock_t *lock) + __acquires(lock) __no_capability_analysis { local_irq_disable(); preempt_disable(); @@ -197,6 +203,7 @@ static inline void __raw_write_lock_irq(rwlock_t *lock) } static inline void __raw_write_lock_bh(rwlock_t *lock) + __acquires(lock) __no_capability_analysis { __local_bh_disable_ip(_RET_IP_, SOFTIRQ_LOCK_OFFSET); rwlock_acquire(&lock->dep_map, 0, 0, _RET_IP_); @@ -204,6 
+211,7 @@ static inline void __raw_write_lock_bh(rwlock_t *lock) } static inline void __raw_write_lock(rwlock_t *lock) + __acquires(lock) __no_capability_analysis { preempt_disable(); rwlock_acquire(&lock->dep_map, 0, 0, _RET_IP_); @@ -211,6 +219,7 @@ static inline void __raw_write_lock(rwlock_t *lock) } static inline void __raw_write_lock_nested(rwlock_t *lock, int subclass) + __acquires(lock) __no_capability_analysis { preempt_disable(); rwlock_acquire(&lock->dep_map, subclass, 0, _RET_IP_); @@ -220,6 +229,7 @@ static inline void __raw_write_lock_nested(rwlock_t *lock, int subclass) #endif /* !CONFIG_GENERIC_LOCKBREAK || CONFIG_DEBUG_LOCK_ALLOC */ static inline void __raw_write_unlock(rwlock_t *lock) + __releases(lock) { rwlock_release(&lock->dep_map, _RET_IP_); do_raw_write_unlock(lock); @@ -227,6 +237,7 @@ static inline void __raw_write_unlock(rwlock_t *lock) } static inline void __raw_read_unlock(rwlock_t *lock) + __releases_shared(lock) { rwlock_release(&lock->dep_map, _RET_IP_); do_raw_read_unlock(lock); @@ -235,6 +246,7 @@ static inline void __raw_read_unlock(rwlock_t *lock) static inline void __raw_read_unlock_irqrestore(rwlock_t *lock, unsigned long flags) + __releases_shared(lock) { rwlock_release(&lock->dep_map, _RET_IP_); do_raw_read_unlock(lock); @@ -243,6 +255,7 @@ __raw_read_unlock_irqrestore(rwlock_t *lock, unsigned long flags) } static inline void __raw_read_unlock_irq(rwlock_t *lock) + __releases_shared(lock) { rwlock_release(&lock->dep_map, _RET_IP_); do_raw_read_unlock(lock); @@ -251,6 +264,7 @@ static inline void __raw_read_unlock_irq(rwlock_t *lock) } static inline void __raw_read_unlock_bh(rwlock_t *lock) + __releases_shared(lock) { rwlock_release(&lock->dep_map, _RET_IP_); do_raw_read_unlock(lock); @@ -259,6 +273,7 @@ static inline void __raw_read_unlock_bh(rwlock_t *lock) static inline void __raw_write_unlock_irqrestore(rwlock_t *lock, unsigned long flags) + __releases(lock) { rwlock_release(&lock->dep_map, _RET_IP_); do_raw_write_unlock(lock); @@ -267,6 +282,7 @@ static inline void __raw_write_unlock_irqrestore(rwlock_t *lock, } static inline void __raw_write_unlock_irq(rwlock_t *lock) + __releases(lock) { rwlock_release(&lock->dep_map, _RET_IP_); do_raw_write_unlock(lock); @@ -275,6 +291,7 @@ static inline void __raw_write_unlock_irq(rwlock_t *lock) } static inline void __raw_write_unlock_bh(rwlock_t *lock) + __releases(lock) { rwlock_release(&lock->dep_map, _RET_IP_); do_raw_write_unlock(lock); diff --git a/include/linux/rwlock_rt.h b/include/linux/rwlock_rt.h index 5320b4b66405..c6280b0e4503 100644 --- a/include/linux/rwlock_rt.h +++ b/include/linux/rwlock_rt.h @@ -22,28 +22,32 @@ do { \ \ init_rwbase_rt(&(rwl)->rwbase); \ __rt_rwlock_init(rwl, #rwl, &__key); \ + __assert_cap(rwl); \ } while (0) -extern void rt_read_lock(rwlock_t *rwlock) __acquires(rwlock); +extern void rt_read_lock(rwlock_t *rwlock) __acquires_shared(rwlock); extern int rt_read_trylock(rwlock_t *rwlock); -extern void rt_read_unlock(rwlock_t *rwlock) __releases(rwlock); +extern void rt_read_unlock(rwlock_t *rwlock) __releases_shared(rwlock); extern void rt_write_lock(rwlock_t *rwlock) __acquires(rwlock); extern void rt_write_lock_nested(rwlock_t *rwlock, int subclass) __acquires(rwlock); extern int rt_write_trylock(rwlock_t *rwlock); extern void rt_write_unlock(rwlock_t *rwlock) __releases(rwlock); static __always_inline void read_lock(rwlock_t *rwlock) + __acquires_shared(rwlock) { rt_read_lock(rwlock); } static __always_inline void read_lock_bh(rwlock_t *rwlock) + __acquires_shared(rwlock) 
{ local_bh_disable(); rt_read_lock(rwlock); } static __always_inline void read_lock_irq(rwlock_t *rwlock) + __acquires_shared(rwlock) { rt_read_lock(rwlock); } @@ -55,37 +59,43 @@ static __always_inline void read_lock_irq(rwlock_t *rwlock) flags = 0; \ } while (0) -#define read_trylock(lock) __cond_acquire(lock, rt_read_trylock(lock)) +#define read_trylock(lock) __cond_acquire_shared(lock, rt_read_trylock(lock)) static __always_inline void read_unlock(rwlock_t *rwlock) + __releases_shared(rwlock) { rt_read_unlock(rwlock); } static __always_inline void read_unlock_bh(rwlock_t *rwlock) + __releases_shared(rwlock) { rt_read_unlock(rwlock); local_bh_enable(); } static __always_inline void read_unlock_irq(rwlock_t *rwlock) + __releases_shared(rwlock) { rt_read_unlock(rwlock); } static __always_inline void read_unlock_irqrestore(rwlock_t *rwlock, unsigned long flags) + __releases_shared(rwlock) { rt_read_unlock(rwlock); } static __always_inline void write_lock(rwlock_t *rwlock) + __acquires(rwlock) { rt_write_lock(rwlock); } #ifdef CONFIG_DEBUG_LOCK_ALLOC static __always_inline void write_lock_nested(rwlock_t *rwlock, int subclass) + __acquires(rwlock) { rt_write_lock_nested(rwlock, subclass); } @@ -94,12 +104,14 @@ static __always_inline void write_lock_nested(rwlock_t *rwlock, int subclass) #endif static __always_inline void write_lock_bh(rwlock_t *rwlock) + __acquires(rwlock) { local_bh_disable(); rt_write_lock(rwlock); } static __always_inline void write_lock_irq(rwlock_t *rwlock) + __acquires(rwlock) { rt_write_lock(rwlock); } @@ -114,33 +126,34 @@ static __always_inline void write_lock_irq(rwlock_t *rwlock) #define write_trylock(lock) __cond_acquire(lock, rt_write_trylock(lock)) #define write_trylock_irqsave(lock, flags) \ -({ \ - int __locked; \ - \ - typecheck(unsigned long, flags); \ - flags = 0; \ - __locked = write_trylock(lock); \ - __locked; \ -}) + __cond_acquire(lock, ({ \ + typecheck(unsigned long, flags); \ + flags = 0; \ + rt_write_trylock(lock); \ + })) static __always_inline void write_unlock(rwlock_t *rwlock) + __releases(rwlock) { rt_write_unlock(rwlock); } static __always_inline void write_unlock_bh(rwlock_t *rwlock) + __releases(rwlock) { rt_write_unlock(rwlock); local_bh_enable(); } static __always_inline void write_unlock_irq(rwlock_t *rwlock) + __releases(rwlock) { rt_write_unlock(rwlock); } static __always_inline void write_unlock_irqrestore(rwlock_t *rwlock, unsigned long flags) + __releases(rwlock) { rt_write_unlock(rwlock); } diff --git a/include/linux/rwlock_types.h b/include/linux/rwlock_types.h index 1948442e7750..231489cc30f2 100644 --- a/include/linux/rwlock_types.h +++ b/include/linux/rwlock_types.h @@ -22,7 +22,7 @@ * portions Copyright 2005, Red Hat, Inc., Ingo Molnar * Released under the General Public License (GPL). 
*/ -typedef struct { +struct_with_capability(rwlock) { arch_rwlock_t raw_lock; #ifdef CONFIG_DEBUG_SPINLOCK unsigned int magic, owner_cpu; @@ -31,7 +31,8 @@ typedef struct { #ifdef CONFIG_DEBUG_LOCK_ALLOC struct lockdep_map dep_map; #endif -} rwlock_t; +}; +typedef struct rwlock rwlock_t; #define RWLOCK_MAGIC 0xdeaf1eed @@ -54,13 +55,14 @@ typedef struct { #include -typedef struct { +struct_with_capability(rwlock) { struct rwbase_rt rwbase; atomic_t readers; #ifdef CONFIG_DEBUG_LOCK_ALLOC struct lockdep_map dep_map; #endif -} rwlock_t; +}; +typedef struct rwlock rwlock_t; #define __RWLOCK_RT_INITIALIZER(name) \ { \ diff --git a/include/linux/spinlock.h b/include/linux/spinlock.h index 678e6f0679a1..1646a9920fd7 100644 --- a/include/linux/spinlock.h +++ b/include/linux/spinlock.h @@ -106,11 +106,12 @@ do { \ static struct lock_class_key __key; \ \ __raw_spin_lock_init((lock), #lock, &__key, LD_WAIT_SPIN); \ + __assert_cap(lock); \ } while (0) #else # define raw_spin_lock_init(lock) \ - do { *(lock) = __RAW_SPIN_LOCK_UNLOCKED(lock); } while (0) + do { *(lock) = __RAW_SPIN_LOCK_UNLOCKED(lock); __assert_cap(lock); } while (0) #endif #define raw_spin_is_locked(lock) arch_spin_is_locked(&(lock)->raw_lock) @@ -286,19 +287,19 @@ static inline void do_raw_spin_unlock(raw_spinlock_t *lock) __releases(lock) #define raw_spin_trylock_bh(lock) \ __cond_acquire(lock, _raw_spin_trylock_bh(lock)) -#define raw_spin_trylock_irq(lock) \ -({ \ - local_irq_disable(); \ - raw_spin_trylock(lock) ? \ - 1 : ({ local_irq_enable(); 0; }); \ -}) +#define raw_spin_trylock_irq(lock) \ + __cond_acquire(lock, ({ \ + local_irq_disable(); \ + _raw_spin_trylock(lock) ? \ + 1 : ({ local_irq_enable(); 0; }); \ + })) -#define raw_spin_trylock_irqsave(lock, flags) \ -({ \ - local_irq_save(flags); \ - raw_spin_trylock(lock) ? \ - 1 : ({ local_irq_restore(flags); 0; }); \ -}) +#define raw_spin_trylock_irqsave(lock, flags) \ + __cond_acquire(lock, ({ \ + local_irq_save(flags); \ + _raw_spin_trylock(lock) ? 
\ + 1 : ({ local_irq_restore(flags); 0; }); \ + })) #ifndef CONFIG_PREEMPT_RT /* Include rwlock functions for !RT */ @@ -334,6 +335,7 @@ do { \ \ __raw_spin_lock_init(spinlock_check(lock), \ #lock, &__key, LD_WAIT_CONFIG); \ + __assert_cap(lock); \ } while (0) #else @@ -342,21 +344,25 @@ do { \ do { \ spinlock_check(_lock); \ *(_lock) = __SPIN_LOCK_UNLOCKED(_lock); \ + __assert_cap(_lock); \ } while (0) #endif static __always_inline void spin_lock(spinlock_t *lock) + __acquires(lock) __no_capability_analysis { raw_spin_lock(&lock->rlock); } static __always_inline void spin_lock_bh(spinlock_t *lock) + __acquires(lock) __no_capability_analysis { raw_spin_lock_bh(&lock->rlock); } static __always_inline int spin_trylock(spinlock_t *lock) + __cond_acquires(lock) __no_capability_analysis { return raw_spin_trylock(&lock->rlock); } @@ -372,6 +378,7 @@ do { \ } while (0) static __always_inline void spin_lock_irq(spinlock_t *lock) + __acquires(lock) __no_capability_analysis { raw_spin_lock_irq(&lock->rlock); } @@ -379,47 +386,53 @@ static __always_inline void spin_lock_irq(spinlock_t *lock) #define spin_lock_irqsave(lock, flags) \ do { \ raw_spin_lock_irqsave(spinlock_check(lock), flags); \ + __release(spinlock_check(lock)); __acquire(lock); \ } while (0) #define spin_lock_irqsave_nested(lock, flags, subclass) \ do { \ raw_spin_lock_irqsave_nested(spinlock_check(lock), flags, subclass); \ + __release(spinlock_check(lock)); __acquire(lock); \ } while (0) static __always_inline void spin_unlock(spinlock_t *lock) + __releases(lock) __no_capability_analysis { raw_spin_unlock(&lock->rlock); } static __always_inline void spin_unlock_bh(spinlock_t *lock) + __releases(lock) __no_capability_analysis { raw_spin_unlock_bh(&lock->rlock); } static __always_inline void spin_unlock_irq(spinlock_t *lock) + __releases(lock) __no_capability_analysis { raw_spin_unlock_irq(&lock->rlock); } static __always_inline void spin_unlock_irqrestore(spinlock_t *lock, unsigned long flags) + __releases(lock) __no_capability_analysis { raw_spin_unlock_irqrestore(&lock->rlock, flags); } static __always_inline int spin_trylock_bh(spinlock_t *lock) + __cond_acquires(lock) __no_capability_analysis { return raw_spin_trylock_bh(&lock->rlock); } static __always_inline int spin_trylock_irq(spinlock_t *lock) + __cond_acquires(lock) __no_capability_analysis { return raw_spin_trylock_irq(&lock->rlock); } #define spin_trylock_irqsave(lock, flags) \ -({ \ - raw_spin_trylock_irqsave(spinlock_check(lock), flags); \ -}) + __cond_acquire(lock, raw_spin_trylock_irqsave(spinlock_check(lock), flags)) /** * spin_is_locked() - Check whether a spinlock is locked. 
diff --git a/include/linux/spinlock_api_smp.h b/include/linux/spinlock_api_smp.h index 9ecb0ab504e3..fab02d8bf0c9 100644 --- a/include/linux/spinlock_api_smp.h +++ b/include/linux/spinlock_api_smp.h @@ -34,8 +34,8 @@ unsigned long __lockfunc _raw_spin_lock_irqsave(raw_spinlock_t *lock) unsigned long __lockfunc _raw_spin_lock_irqsave_nested(raw_spinlock_t *lock, int subclass) __acquires(lock); -int __lockfunc _raw_spin_trylock(raw_spinlock_t *lock); -int __lockfunc _raw_spin_trylock_bh(raw_spinlock_t *lock); +int __lockfunc _raw_spin_trylock(raw_spinlock_t *lock) __cond_acquires(lock); +int __lockfunc _raw_spin_trylock_bh(raw_spinlock_t *lock) __cond_acquires(lock); void __lockfunc _raw_spin_unlock(raw_spinlock_t *lock) __releases(lock); void __lockfunc _raw_spin_unlock_bh(raw_spinlock_t *lock) __releases(lock); void __lockfunc _raw_spin_unlock_irq(raw_spinlock_t *lock) __releases(lock); @@ -84,6 +84,7 @@ _raw_spin_unlock_irqrestore(raw_spinlock_t *lock, unsigned long flags) #endif static inline int __raw_spin_trylock(raw_spinlock_t *lock) + __cond_acquires(lock) { preempt_disable(); if (do_raw_spin_trylock(lock)) { @@ -102,6 +103,7 @@ static inline int __raw_spin_trylock(raw_spinlock_t *lock) #if !defined(CONFIG_GENERIC_LOCKBREAK) || defined(CONFIG_DEBUG_LOCK_ALLOC) static inline unsigned long __raw_spin_lock_irqsave(raw_spinlock_t *lock) + __acquires(lock) __no_capability_analysis { unsigned long flags; @@ -113,6 +115,7 @@ static inline unsigned long __raw_spin_lock_irqsave(raw_spinlock_t *lock) } static inline void __raw_spin_lock_irq(raw_spinlock_t *lock) + __acquires(lock) __no_capability_analysis { local_irq_disable(); preempt_disable(); @@ -121,6 +124,7 @@ static inline void __raw_spin_lock_irq(raw_spinlock_t *lock) } static inline void __raw_spin_lock_bh(raw_spinlock_t *lock) + __acquires(lock) __no_capability_analysis { __local_bh_disable_ip(_RET_IP_, SOFTIRQ_LOCK_OFFSET); spin_acquire(&lock->dep_map, 0, 0, _RET_IP_); @@ -128,6 +132,7 @@ static inline void __raw_spin_lock_bh(raw_spinlock_t *lock) } static inline void __raw_spin_lock(raw_spinlock_t *lock) + __acquires(lock) __no_capability_analysis { preempt_disable(); spin_acquire(&lock->dep_map, 0, 0, _RET_IP_); @@ -137,6 +142,7 @@ static inline void __raw_spin_lock(raw_spinlock_t *lock) #endif /* !CONFIG_GENERIC_LOCKBREAK || CONFIG_DEBUG_LOCK_ALLOC */ static inline void __raw_spin_unlock(raw_spinlock_t *lock) + __releases(lock) { spin_release(&lock->dep_map, _RET_IP_); do_raw_spin_unlock(lock); @@ -145,6 +151,7 @@ static inline void __raw_spin_unlock(raw_spinlock_t *lock) static inline void __raw_spin_unlock_irqrestore(raw_spinlock_t *lock, unsigned long flags) + __releases(lock) { spin_release(&lock->dep_map, _RET_IP_); do_raw_spin_unlock(lock); @@ -153,6 +160,7 @@ static inline void __raw_spin_unlock_irqrestore(raw_spinlock_t *lock, } static inline void __raw_spin_unlock_irq(raw_spinlock_t *lock) + __releases(lock) { spin_release(&lock->dep_map, _RET_IP_); do_raw_spin_unlock(lock); @@ -161,6 +169,7 @@ static inline void __raw_spin_unlock_irq(raw_spinlock_t *lock) } static inline void __raw_spin_unlock_bh(raw_spinlock_t *lock) + __releases(lock) { spin_release(&lock->dep_map, _RET_IP_); do_raw_spin_unlock(lock); @@ -168,6 +177,7 @@ static inline void __raw_spin_unlock_bh(raw_spinlock_t *lock) } static inline int __raw_spin_trylock_bh(raw_spinlock_t *lock) + __cond_acquires(lock) { __local_bh_disable_ip(_RET_IP_, SOFTIRQ_LOCK_OFFSET); if (do_raw_spin_trylock(lock)) { diff --git a/include/linux/spinlock_api_up.h 
b/include/linux/spinlock_api_up.h index 819aeba1c87e..018f5aabc1be 100644 --- a/include/linux/spinlock_api_up.h +++ b/include/linux/spinlock_api_up.h @@ -24,68 +24,77 @@ * flags straight, to suppress compiler warnings of unused lock * variables, and to add the proper checker annotations: */ -#define ___LOCK(lock) \ - do { __acquire(lock); (void)(lock); } while (0) +#define ___LOCK_void(lock) \ + do { (void)(lock); } while (0) -#define __LOCK(lock) \ - do { preempt_disable(); ___LOCK(lock); } while (0) +#define ___LOCK_(lock) \ + do { __acquire(lock); ___LOCK_void(lock); } while (0) -#define __LOCK_BH(lock) \ - do { __local_bh_disable_ip(_THIS_IP_, SOFTIRQ_LOCK_OFFSET); ___LOCK(lock); } while (0) +#define ___LOCK_shared(lock) \ + do { __acquire_shared(lock); ___LOCK_void(lock); } while (0) -#define __LOCK_IRQ(lock) \ - do { local_irq_disable(); __LOCK(lock); } while (0) +#define __LOCK(lock, ...) \ + do { preempt_disable(); ___LOCK_##__VA_ARGS__(lock); } while (0) -#define __LOCK_IRQSAVE(lock, flags) \ - do { local_irq_save(flags); __LOCK(lock); } while (0) +#define __LOCK_BH(lock, ...) \ + do { __local_bh_disable_ip(_THIS_IP_, SOFTIRQ_LOCK_OFFSET); ___LOCK_##__VA_ARGS__(lock); } while (0) -#define ___UNLOCK(lock) \ +#define __LOCK_IRQ(lock, ...) \ + do { local_irq_disable(); __LOCK(lock, ##__VA_ARGS__); } while (0) + +#define __LOCK_IRQSAVE(lock, flags, ...) \ + do { local_irq_save(flags); __LOCK(lock, ##__VA_ARGS__); } while (0) + +#define ___UNLOCK_(lock) \ do { __release(lock); (void)(lock); } while (0) -#define __UNLOCK(lock) \ - do { preempt_enable(); ___UNLOCK(lock); } while (0) +#define ___UNLOCK_shared(lock) \ + do { __release_shared(lock); (void)(lock); } while (0) -#define __UNLOCK_BH(lock) \ +#define __UNLOCK(lock, ...) \ + do { preempt_enable(); ___UNLOCK_##__VA_ARGS__(lock); } while (0) + +#define __UNLOCK_BH(lock, ...) \ do { __local_bh_enable_ip(_THIS_IP_, SOFTIRQ_LOCK_OFFSET); \ - ___UNLOCK(lock); } while (0) + ___UNLOCK_##__VA_ARGS__(lock); } while (0) -#define __UNLOCK_IRQ(lock) \ - do { local_irq_enable(); __UNLOCK(lock); } while (0) +#define __UNLOCK_IRQ(lock, ...) \ + do { local_irq_enable(); __UNLOCK(lock, ##__VA_ARGS__); } while (0) -#define __UNLOCK_IRQRESTORE(lock, flags) \ - do { local_irq_restore(flags); __UNLOCK(lock); } while (0) +#define __UNLOCK_IRQRESTORE(lock, flags, ...) 
\ + do { local_irq_restore(flags); __UNLOCK(lock, ##__VA_ARGS__); } while (0) #define _raw_spin_lock(lock) __LOCK(lock) #define _raw_spin_lock_nested(lock, subclass) __LOCK(lock) -#define _raw_read_lock(lock) __LOCK(lock) +#define _raw_read_lock(lock) __LOCK(lock, shared) #define _raw_write_lock(lock) __LOCK(lock) #define _raw_write_lock_nested(lock, subclass) __LOCK(lock) #define _raw_spin_lock_bh(lock) __LOCK_BH(lock) -#define _raw_read_lock_bh(lock) __LOCK_BH(lock) +#define _raw_read_lock_bh(lock) __LOCK_BH(lock, shared) #define _raw_write_lock_bh(lock) __LOCK_BH(lock) #define _raw_spin_lock_irq(lock) __LOCK_IRQ(lock) -#define _raw_read_lock_irq(lock) __LOCK_IRQ(lock) +#define _raw_read_lock_irq(lock) __LOCK_IRQ(lock, shared) #define _raw_write_lock_irq(lock) __LOCK_IRQ(lock) #define _raw_spin_lock_irqsave(lock, flags) __LOCK_IRQSAVE(lock, flags) -#define _raw_read_lock_irqsave(lock, flags) __LOCK_IRQSAVE(lock, flags) +#define _raw_read_lock_irqsave(lock, flags) __LOCK_IRQSAVE(lock, flags, shared) #define _raw_write_lock_irqsave(lock, flags) __LOCK_IRQSAVE(lock, flags) -#define _raw_spin_trylock(lock) ({ __LOCK(lock); 1; }) -#define _raw_read_trylock(lock) ({ __LOCK(lock); 1; }) -#define _raw_write_trylock(lock) ({ __LOCK(lock); 1; }) -#define _raw_spin_trylock_bh(lock) ({ __LOCK_BH(lock); 1; }) +#define _raw_spin_trylock(lock) ({ __LOCK(lock, void); 1; }) +#define _raw_read_trylock(lock) ({ __LOCK(lock, void); 1; }) +#define _raw_write_trylock(lock) ({ __LOCK(lock, void); 1; }) +#define _raw_spin_trylock_bh(lock) ({ __LOCK_BH(lock, void); 1; }) #define _raw_spin_unlock(lock) __UNLOCK(lock) -#define _raw_read_unlock(lock) __UNLOCK(lock) +#define _raw_read_unlock(lock) __UNLOCK(lock, shared) #define _raw_write_unlock(lock) __UNLOCK(lock) #define _raw_spin_unlock_bh(lock) __UNLOCK_BH(lock) #define _raw_write_unlock_bh(lock) __UNLOCK_BH(lock) -#define _raw_read_unlock_bh(lock) __UNLOCK_BH(lock) +#define _raw_read_unlock_bh(lock) __UNLOCK_BH(lock, shared) #define _raw_spin_unlock_irq(lock) __UNLOCK_IRQ(lock) -#define _raw_read_unlock_irq(lock) __UNLOCK_IRQ(lock) +#define _raw_read_unlock_irq(lock) __UNLOCK_IRQ(lock, shared) #define _raw_write_unlock_irq(lock) __UNLOCK_IRQ(lock) #define _raw_spin_unlock_irqrestore(lock, flags) \ __UNLOCK_IRQRESTORE(lock, flags) #define _raw_read_unlock_irqrestore(lock, flags) \ - __UNLOCK_IRQRESTORE(lock, flags) + __UNLOCK_IRQRESTORE(lock, flags, shared) #define _raw_write_unlock_irqrestore(lock, flags) \ __UNLOCK_IRQRESTORE(lock, flags) diff --git a/include/linux/spinlock_rt.h b/include/linux/spinlock_rt.h index eaad4dd2baac..5d9ebc3ec521 100644 --- a/include/linux/spinlock_rt.h +++ b/include/linux/spinlock_rt.h @@ -20,6 +20,7 @@ static inline void __rt_spin_lock_init(spinlock_t *lock, const char *name, do { \ rt_mutex_base_init(&(slock)->lock); \ __rt_spin_lock_init(slock, name, key, percpu); \ + __assert_cap(slock); \ } while (0) #define _spin_lock_init(slock, percpu) \ @@ -40,6 +41,7 @@ extern int rt_spin_trylock_bh(spinlock_t *lock); extern int rt_spin_trylock(spinlock_t *lock); static __always_inline void spin_lock(spinlock_t *lock) + __acquires(lock) { rt_spin_lock(lock); } @@ -82,6 +84,7 @@ static __always_inline void spin_lock(spinlock_t *lock) __spin_lock_irqsave_nested(lock, flags, subclass) static __always_inline void spin_lock_bh(spinlock_t *lock) + __acquires(lock) { /* Investigate: Drop bh when blocking ? 
*/ local_bh_disable(); @@ -89,6 +92,7 @@ static __always_inline void spin_lock_bh(spinlock_t *lock) } static __always_inline void spin_lock_irq(spinlock_t *lock) + __acquires(lock) { rt_spin_lock(lock); } @@ -101,23 +105,27 @@ static __always_inline void spin_lock_irq(spinlock_t *lock) } while (0) static __always_inline void spin_unlock(spinlock_t *lock) + __releases(lock) { rt_spin_unlock(lock); } static __always_inline void spin_unlock_bh(spinlock_t *lock) + __releases(lock) { rt_spin_unlock(lock); local_bh_enable(); } static __always_inline void spin_unlock_irq(spinlock_t *lock) + __releases(lock) { rt_spin_unlock(lock); } static __always_inline void spin_unlock_irqrestore(spinlock_t *lock, unsigned long flags) + __releases(lock) { rt_spin_unlock(lock); } @@ -132,14 +140,11 @@ static __always_inline void spin_unlock_irqrestore(spinlock_t *lock, __cond_acquire(lock, rt_spin_trylock(lock)) #define spin_trylock_irqsave(lock, flags) \ -({ \ - int __locked; \ - \ - typecheck(unsigned long, flags); \ - flags = 0; \ - __locked = spin_trylock(lock); \ - __locked; \ -}) + __cond_acquire(lock, ({ \ + typecheck(unsigned long, flags); \ + flags = 0; \ + rt_spin_trylock(lock); \ + })) #define spin_is_contended(lock) (((void)(lock), 0)) diff --git a/include/linux/spinlock_types.h b/include/linux/spinlock_types.h index 2dfa35ffec76..2c5db5b5b990 100644 --- a/include/linux/spinlock_types.h +++ b/include/linux/spinlock_types.h @@ -14,7 +14,7 @@ #ifndef CONFIG_PREEMPT_RT /* Non PREEMPT_RT kernels map spinlock to raw_spinlock */ -typedef struct spinlock { +struct_with_capability(spinlock) { union { struct raw_spinlock rlock; @@ -26,7 +26,8 @@ typedef struct spinlock { }; #endif }; -} spinlock_t; +}; +typedef struct spinlock spinlock_t; #define ___SPIN_LOCK_INITIALIZER(lockname) \ { \ @@ -47,12 +48,13 @@ typedef struct spinlock { /* PREEMPT_RT kernels map spinlock to rt_mutex */ #include -typedef struct spinlock { +struct_with_capability(spinlock) { struct rt_mutex_base lock; #ifdef CONFIG_DEBUG_LOCK_ALLOC struct lockdep_map dep_map; #endif -} spinlock_t; +}; +typedef struct spinlock spinlock_t; #define __SPIN_LOCK_UNLOCKED(name) \ { \ diff --git a/include/linux/spinlock_types_raw.h b/include/linux/spinlock_types_raw.h index 91cb36b65a17..07792ff2c2b5 100644 --- a/include/linux/spinlock_types_raw.h +++ b/include/linux/spinlock_types_raw.h @@ -11,7 +11,7 @@ #include -typedef struct raw_spinlock { +struct_with_capability(raw_spinlock) { arch_spinlock_t raw_lock; #ifdef CONFIG_DEBUG_SPINLOCK unsigned int magic, owner_cpu; @@ -20,7 +20,8 @@ typedef struct raw_spinlock { #ifdef CONFIG_DEBUG_LOCK_ALLOC struct lockdep_map dep_map; #endif -} raw_spinlock_t; +}; +typedef struct raw_spinlock raw_spinlock_t; #define SPINLOCK_MAGIC 0xdead4ead diff --git a/lib/test_capability-analysis.c b/lib/test_capability-analysis.c index a0adacce30ff..f63980e134cf 100644 --- a/lib/test_capability-analysis.c +++ b/lib/test_capability-analysis.c @@ -5,6 +5,7 @@ */ #include +#include /* * Test that helper macros work as expected. 
@@ -16,3 +17,130 @@ static void __used test_common_helpers(void) BUILD_BUG_ON(capability_unsafe((void)2, 3) != 3); /* does not swallow commas */ capability_unsafe(do { } while (0)); /* works with void statements */ } + +#define TEST_SPINLOCK_COMMON(class, type, type_init, type_lock, type_unlock, type_trylock, op) \ + struct test_##class##_data { \ + type lock; \ + int counter __var_guarded_by(&lock); \ + int *pointer __ref_guarded_by(&lock); \ + }; \ + static void __used test_##class##_init(struct test_##class##_data *d) \ + { \ + type_init(&d->lock); \ + d->counter = 0; \ + } \ + static void __used test_##class(struct test_##class##_data *d) \ + { \ + unsigned long flags; \ + d->pointer++; \ + type_lock(&d->lock); \ + op(d->counter); \ + op(*d->pointer); \ + type_unlock(&d->lock); \ + type_lock##_irq(&d->lock); \ + op(d->counter); \ + op(*d->pointer); \ + type_unlock##_irq(&d->lock); \ + type_lock##_bh(&d->lock); \ + op(d->counter); \ + op(*d->pointer); \ + type_unlock##_bh(&d->lock); \ + type_lock##_irqsave(&d->lock, flags); \ + op(d->counter); \ + op(*d->pointer); \ + type_unlock##_irqrestore(&d->lock, flags); \ + } \ + static void __used test_##class##_trylock(struct test_##class##_data *d) \ + { \ + if (type_trylock(&d->lock)) { \ + op(d->counter); \ + type_unlock(&d->lock); \ + } \ + } \ + static void __used test_##class##_assert(struct test_##class##_data *d) \ + { \ + lockdep_assert_held(&d->lock); \ + op(d->counter); \ + } \ + static void __used test_##class##_guard(struct test_##class##_data *d) \ + { \ + { guard(class)(&d->lock); op(d->counter); } \ + { guard(class##_irq)(&d->lock); op(d->counter); } \ + { guard(class##_irqsave)(&d->lock); op(d->counter); } \ + } + +#define TEST_OP_RW(x) (x)++ +#define TEST_OP_RO(x) ((void)(x)) + +TEST_SPINLOCK_COMMON(raw_spinlock, + raw_spinlock_t, + raw_spin_lock_init, + raw_spin_lock, + raw_spin_unlock, + raw_spin_trylock, + TEST_OP_RW); +static void __used test_raw_spinlock_trylock_extra(struct test_raw_spinlock_data *d) +{ + unsigned long flags; + + if (raw_spin_trylock_irq(&d->lock)) { + d->counter++; + raw_spin_unlock_irq(&d->lock); + } + if (raw_spin_trylock_irqsave(&d->lock, flags)) { + d->counter++; + raw_spin_unlock_irqrestore(&d->lock, flags); + } + scoped_cond_guard(raw_spinlock_try, return, &d->lock) { + d->counter++; + } +} + +TEST_SPINLOCK_COMMON(spinlock, + spinlock_t, + spin_lock_init, + spin_lock, + spin_unlock, + spin_trylock, + TEST_OP_RW); +static void __used test_spinlock_trylock_extra(struct test_spinlock_data *d) +{ + unsigned long flags; + + if (spin_trylock_irq(&d->lock)) { + d->counter++; + spin_unlock_irq(&d->lock); + } + if (spin_trylock_irqsave(&d->lock, flags)) { + d->counter++; + spin_unlock_irqrestore(&d->lock, flags); + } + scoped_cond_guard(spinlock_try, return, &d->lock) { + d->counter++; + } +} + +TEST_SPINLOCK_COMMON(write_lock, + rwlock_t, + rwlock_init, + write_lock, + write_unlock, + write_trylock, + TEST_OP_RW); +static void __used test_write_trylock_extra(struct test_write_lock_data *d) +{ + unsigned long flags; + + if (write_trylock_irqsave(&d->lock, flags)) { + d->counter++; + write_unlock_irqrestore(&d->lock, flags); + } +} + +TEST_SPINLOCK_COMMON(read_lock, + rwlock_t, + rwlock_init, + read_lock, + read_unlock, + read_trylock, + TEST_OP_RO); From patchwork Thu Feb 6 18:10:04 2025 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Marco Elver X-Patchwork-Id: 13963489 Received: from mail-wr1-f73.google.com (mail-wr1-f73.google.com 
(user=elver job=prod-delivery.src-stubby-dispatcher) by 2002:a5d:64af:0:b0:386:605:77e with SMTP id ffacd0b85a97d-38dc933bd7amr24f8f.49.1738865890985; Thu, 06 Feb 2025 10:18:10 -0800 (PST) Date: Thu, 6 Feb 2025 19:10:04 +0100 In-Reply-To: <20250206181711.1902989-1-elver@google.com> Precedence: bulk X-Mailing-List: rcu@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: Mime-Version: 1.0 References: <20250206181711.1902989-1-elver@google.com> X-Mailer: git-send-email 2.48.1.502.g6dc24dfdaf-goog Message-ID: <20250206181711.1902989-11-elver@google.com> Subject: [PATCH RFC 10/24] compiler-capability-analysis: Change __cond_acquires to take return value From: Marco Elver To: elver@google.com Cc: "Paul E. McKenney" , Alexander Potapenko , Bart Van Assche , Bill Wendling , Boqun Feng , Dmitry Vyukov , Frederic Weisbecker , Greg Kroah-Hartman , Ingo Molnar , Jann Horn , Joel Fernandes , Jonathan Corbet , Josh Triplett , Justin Stitt , Kees Cook , Mark Rutland , Mathieu Desnoyers , Miguel Ojeda , Nathan Chancellor , Neeraj Upadhyay , Nick Desaulniers , Peter Zijlstra , Steven Rostedt , Thomas Gleixner , Uladzislau Rezki , Waiman Long , Will Deacon , kasan-dev@googlegroups.com, linux-kernel@vger.kernel.org, llvm@lists.linux.dev, rcu@vger.kernel.org, linux-crypto@vger.kernel.org While Sparse is oblivious to the return value of conditional acquire functions, Clang's capability analysis needs to know the return value which indicates successful acquisition. Add the additional argument, and convert existing uses. No functional change intended. Signed-off-by: Marco Elver --- fs/dlm/lock.c | 2 +- include/linux/compiler-capability-analysis.h | 14 +++++++++----- include/linux/refcount.h | 6 +++--- include/linux/spinlock.h | 6 +++--- include/linux/spinlock_api_smp.h | 8 ++++---- net/ipv4/tcp_sigpool.c | 2 +- 6 files changed, 21 insertions(+), 17 deletions(-) diff --git a/fs/dlm/lock.c b/fs/dlm/lock.c index c8ff88f1cdcf..e39ca02b793e 100644 --- a/fs/dlm/lock.c +++ b/fs/dlm/lock.c @@ -343,7 +343,7 @@ void dlm_hold_rsb(struct dlm_rsb *r) /* TODO move this to lib/refcount.c */ static __must_check bool dlm_refcount_dec_and_write_lock_bh(refcount_t *r, rwlock_t *lock) -__cond_acquires(lock) + __cond_acquires(1, lock) { if (refcount_dec_not_one(r)) return false; diff --git a/include/linux/compiler-capability-analysis.h b/include/linux/compiler-capability-analysis.h index ca63b6513dc3..10c03133ac4d 100644 --- a/include/linux/compiler-capability-analysis.h +++ b/include/linux/compiler-capability-analysis.h @@ -231,7 +231,7 @@ # define __must_hold(x) __attribute__((context(x,1,1))) # define __must_not_hold(x) # define __acquires(x) __attribute__((context(x,0,1))) -# define __cond_acquires(x) __attribute__((context(x,0,-1))) +# define __cond_acquires(ret, x) __attribute__((context(x,0,-1))) # define __releases(x) __attribute__((context(x,1,0))) # define __acquire(x) __context__(x,1) # define __release(x) __context__(x,-1) @@ -277,12 +277,14 @@ /** * __cond_acquires() - function attribute, function conditionally * acquires a capability exclusively + * @ret: value returned by function if capability acquired * @x: capability instance pointer * * Function attribute declaring that the function conditionally acquires the - * given capability instance @x exclusively, but does not release it. + * given capability instance @x exclusively, but does not release it. The + * function return value @ret denotes when the capability is acquired. 
*/ -# define __cond_acquires(x) __try_acquires_cap(1, x) +# define __cond_acquires(ret, x) __try_acquires_cap(ret, x) /** * __releases() - function attribute, function releases a capability exclusively @@ -349,12 +351,14 @@ /** * __cond_acquires_shared() - function attribute, function conditionally * acquires a capability shared + * @ret: value returned by function if capability acquired * @x: capability instance pointer * * Function attribute declaring that the function conditionally acquires the - * given capability instance @x with shared access, but does not release it. + * given capability instance @x with shared access, but does not release it. The + * function return value @ret denotes when the capability is acquired. */ -# define __cond_acquires_shared(x) __try_acquires_shared_cap(1, x) +# define __cond_acquires_shared(ret, x) __try_acquires_shared_cap(ret, x) /** * __releases_shared() - function attribute, function releases a diff --git a/include/linux/refcount.h b/include/linux/refcount.h index 35f039ecb272..f63ce3fadfa3 100644 --- a/include/linux/refcount.h +++ b/include/linux/refcount.h @@ -353,9 +353,9 @@ static inline void refcount_dec(refcount_t *r) extern __must_check bool refcount_dec_if_one(refcount_t *r); extern __must_check bool refcount_dec_not_one(refcount_t *r); -extern __must_check bool refcount_dec_and_mutex_lock(refcount_t *r, struct mutex *lock) __cond_acquires(lock); -extern __must_check bool refcount_dec_and_lock(refcount_t *r, spinlock_t *lock) __cond_acquires(lock); +extern __must_check bool refcount_dec_and_mutex_lock(refcount_t *r, struct mutex *lock) __cond_acquires(1, lock); +extern __must_check bool refcount_dec_and_lock(refcount_t *r, spinlock_t *lock) __cond_acquires(1, lock); extern __must_check bool refcount_dec_and_lock_irqsave(refcount_t *r, spinlock_t *lock, - unsigned long *flags) __cond_acquires(lock); + unsigned long *flags) __cond_acquires(1, lock); #endif /* _LINUX_REFCOUNT_H */ diff --git a/include/linux/spinlock.h b/include/linux/spinlock.h index 1646a9920fd7..de5118d0e718 100644 --- a/include/linux/spinlock.h +++ b/include/linux/spinlock.h @@ -362,7 +362,7 @@ static __always_inline void spin_lock_bh(spinlock_t *lock) } static __always_inline int spin_trylock(spinlock_t *lock) - __cond_acquires(lock) __no_capability_analysis + __cond_acquires(1, lock) __no_capability_analysis { return raw_spin_trylock(&lock->rlock); } @@ -420,13 +420,13 @@ static __always_inline void spin_unlock_irqrestore(spinlock_t *lock, unsigned lo } static __always_inline int spin_trylock_bh(spinlock_t *lock) - __cond_acquires(lock) __no_capability_analysis + __cond_acquires(1, lock) __no_capability_analysis { return raw_spin_trylock_bh(&lock->rlock); } static __always_inline int spin_trylock_irq(spinlock_t *lock) - __cond_acquires(lock) __no_capability_analysis + __cond_acquires(1, lock) __no_capability_analysis { return raw_spin_trylock_irq(&lock->rlock); } diff --git a/include/linux/spinlock_api_smp.h b/include/linux/spinlock_api_smp.h index fab02d8bf0c9..9b6f7a5a0705 100644 --- a/include/linux/spinlock_api_smp.h +++ b/include/linux/spinlock_api_smp.h @@ -34,8 +34,8 @@ unsigned long __lockfunc _raw_spin_lock_irqsave(raw_spinlock_t *lock) unsigned long __lockfunc _raw_spin_lock_irqsave_nested(raw_spinlock_t *lock, int subclass) __acquires(lock); -int __lockfunc _raw_spin_trylock(raw_spinlock_t *lock) __cond_acquires(lock); -int __lockfunc _raw_spin_trylock_bh(raw_spinlock_t *lock) __cond_acquires(lock); +int __lockfunc _raw_spin_trylock(raw_spinlock_t *lock) 
__cond_acquires(1, lock); +int __lockfunc _raw_spin_trylock_bh(raw_spinlock_t *lock) __cond_acquires(1, lock); void __lockfunc _raw_spin_unlock(raw_spinlock_t *lock) __releases(lock); void __lockfunc _raw_spin_unlock_bh(raw_spinlock_t *lock) __releases(lock); void __lockfunc _raw_spin_unlock_irq(raw_spinlock_t *lock) __releases(lock); @@ -84,7 +84,7 @@ _raw_spin_unlock_irqrestore(raw_spinlock_t *lock, unsigned long flags) #endif static inline int __raw_spin_trylock(raw_spinlock_t *lock) - __cond_acquires(lock) + __cond_acquires(1, lock) { preempt_disable(); if (do_raw_spin_trylock(lock)) { @@ -177,7 +177,7 @@ static inline void __raw_spin_unlock_bh(raw_spinlock_t *lock) } static inline int __raw_spin_trylock_bh(raw_spinlock_t *lock) - __cond_acquires(lock) + __cond_acquires(1, lock) { __local_bh_disable_ip(_RET_IP_, SOFTIRQ_LOCK_OFFSET); if (do_raw_spin_trylock(lock)) { diff --git a/net/ipv4/tcp_sigpool.c b/net/ipv4/tcp_sigpool.c index d8a4f192873a..10b2e5970c40 100644 --- a/net/ipv4/tcp_sigpool.c +++ b/net/ipv4/tcp_sigpool.c @@ -257,7 +257,7 @@ void tcp_sigpool_get(unsigned int id) } EXPORT_SYMBOL_GPL(tcp_sigpool_get); -int tcp_sigpool_start(unsigned int id, struct tcp_sigpool *c) __cond_acquires(RCU_BH) +int tcp_sigpool_start(unsigned int id, struct tcp_sigpool *c) __cond_acquires(0, RCU_BH) { struct crypto_ahash *hash;
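To illustrate what the new argument expresses: with __cond_acquires(1, lock), Clang treats the lock as held only on paths where the annotated function returned 1, whereas __cond_acquires(0, ...) (as for tcp_sigpool_start() above) inverts that. A minimal caller-side sketch, assuming this series' annotations; struct try_data and try_inc() are hypothetical:

	struct try_data {
		spinlock_t lock;
		int counter __var_guarded_by(&lock);
	};

	static bool try_inc(struct try_data *d)
	{
		if (!spin_trylock(&d->lock))	/* __cond_acquires(1, ...) */
			return false;		/* lock known not held here */
		d->counter++;			/* OK: lock held on this path */
		spin_unlock(&d->lock);
		return true;
	}

Touching d->counter on the failure path would now be diagnosed, a distinction Sparse's context() tracking is oblivious to.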
From patchwork Thu Feb 6 18:10:05 2025
Date: Thu, 6 Feb 2025 19:10:05 +0100
In-Reply-To: <20250206181711.1902989-1-elver@google.com>
Message-ID: <20250206181711.1902989-12-elver@google.com>
Subject: [PATCH RFC 11/24] locking/mutex: Support Clang's capability analysis
From: Marco Elver To: elver@google.com

Add support for Clang's capability analysis for mutex.
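Concretely, once mutex_lock()/mutex_unlock() carry __acquires()/__releases() and mutex_trylock() carries __cond_acquires(1, ...), accesses to members marked __var_guarded_by() are checked against the mutex at compile time. A brief sketch assuming this series; struct m_data and add_total() are hypothetical:

	struct m_data {
		struct mutex lock;
		long total __var_guarded_by(&lock);
	};

	static void add_total(struct m_data *d, long x)
	{
		mutex_lock(&d->lock);
		d->total += x;		/* OK: mutex held */
		mutex_unlock(&d->lock);
		/* accessing d->total here would be diagnosed */
	}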
Signed-off-by: Marco Elver --- .../dev-tools/capability-analysis.rst | 2 +- include/linux/mutex.h | 29 +++++---- include/linux/mutex_types.h | 4 +- lib/test_capability-analysis.c | 64 +++++++++++++++++++ 4 files changed, 82 insertions(+), 17 deletions(-) diff --git a/Documentation/dev-tools/capability-analysis.rst b/Documentation/dev-tools/capability-analysis.rst index 904448605a77..31f76e877be5 100644 --- a/Documentation/dev-tools/capability-analysis.rst +++ b/Documentation/dev-tools/capability-analysis.rst @@ -85,7 +85,7 @@ Supported Kernel Primitives ~~~~~~~~~~~~~~~~~~~~~~~~~~~ Currently the following synchronization primitives are supported: -`raw_spinlock_t`, `spinlock_t`, `rwlock_t`. +`raw_spinlock_t`, `spinlock_t`, `rwlock_t`, `mutex`. For capabilities with an initialization function (e.g., `spin_lock_init()`), calling this function on the capability instance before initializing any diff --git a/include/linux/mutex.h b/include/linux/mutex.h index 2bf91b57591b..09ee3b89d342 100644 --- a/include/linux/mutex.h +++ b/include/linux/mutex.h @@ -62,6 +62,7 @@ do { \ static struct lock_class_key __key; \ \ __mutex_init((mutex), #mutex, &__key); \ + __assert_cap(mutex); \ } while (0) /** @@ -154,14 +155,14 @@ static inline int __devm_mutex_init(struct device *dev, struct mutex *lock) * Also see Documentation/locking/mutex-design.rst. */ #ifdef CONFIG_DEBUG_LOCK_ALLOC -extern void mutex_lock_nested(struct mutex *lock, unsigned int subclass); +extern void mutex_lock_nested(struct mutex *lock, unsigned int subclass) __acquires(lock); extern void _mutex_lock_nest_lock(struct mutex *lock, struct lockdep_map *nest_lock); extern int __must_check mutex_lock_interruptible_nested(struct mutex *lock, - unsigned int subclass); + unsigned int subclass) __cond_acquires(0, lock); extern int __must_check mutex_lock_killable_nested(struct mutex *lock, - unsigned int subclass); -extern void mutex_lock_io_nested(struct mutex *lock, unsigned int subclass); + unsigned int subclass) __cond_acquires(0, lock); +extern void mutex_lock_io_nested(struct mutex *lock, unsigned int subclass) __acquires(lock); #define mutex_lock(lock) mutex_lock_nested(lock, 0) #define mutex_lock_interruptible(lock) mutex_lock_interruptible_nested(lock, 0) @@ -175,10 +176,10 @@ do { \ } while (0) #else -extern void mutex_lock(struct mutex *lock); -extern int __must_check mutex_lock_interruptible(struct mutex *lock); -extern int __must_check mutex_lock_killable(struct mutex *lock); -extern void mutex_lock_io(struct mutex *lock); +extern void mutex_lock(struct mutex *lock) __acquires(lock); +extern int __must_check mutex_lock_interruptible(struct mutex *lock) __cond_acquires(0, lock); +extern int __must_check mutex_lock_killable(struct mutex *lock) __cond_acquires(0, lock); +extern void mutex_lock_io(struct mutex *lock) __acquires(lock); # define mutex_lock_nested(lock, subclass) mutex_lock(lock) # define mutex_lock_interruptible_nested(lock, subclass) mutex_lock_interruptible(lock) @@ -193,13 +194,13 @@ extern void mutex_lock_io(struct mutex *lock); * * Returns 1 if the mutex has been acquired successfully, and 0 on contention. 
*/ -extern int mutex_trylock(struct mutex *lock); -extern void mutex_unlock(struct mutex *lock); +extern int mutex_trylock(struct mutex *lock) __cond_acquires(1, lock); +extern void mutex_unlock(struct mutex *lock) __releases(lock); -extern int atomic_dec_and_mutex_lock(atomic_t *cnt, struct mutex *lock); +extern int atomic_dec_and_mutex_lock(atomic_t *cnt, struct mutex *lock) __cond_acquires(1, lock); -DEFINE_GUARD(mutex, struct mutex *, mutex_lock(_T), mutex_unlock(_T)) -DEFINE_GUARD_COND(mutex, _try, mutex_trylock(_T)) -DEFINE_GUARD_COND(mutex, _intr, mutex_lock_interruptible(_T) == 0) +DEFINE_LOCK_GUARD_1(mutex, struct mutex, mutex_lock(_T->lock), mutex_unlock(_T->lock)) +DEFINE_LOCK_GUARD_1_COND(mutex, _try, mutex_trylock(_T->lock)) +DEFINE_LOCK_GUARD_1_COND(mutex, _intr, mutex_lock_interruptible(_T->lock) == 0) #endif /* __LINUX_MUTEX_H */ diff --git a/include/linux/mutex_types.h b/include/linux/mutex_types.h index fdf7f515fde8..e1a5ea12d53c 100644 --- a/include/linux/mutex_types.h +++ b/include/linux/mutex_types.h @@ -38,7 +38,7 @@ * - detects multi-task circular deadlocks and prints out all affected * locks and tasks (and only those tasks) */ -struct mutex { +struct_with_capability(mutex) { atomic_long_t owner; raw_spinlock_t wait_lock; #ifdef CONFIG_MUTEX_SPIN_ON_OWNER @@ -59,7 +59,7 @@ struct mutex { */ #include -struct mutex { +struct_with_capability(mutex) { struct rt_mutex_base rtmutex; #ifdef CONFIG_DEBUG_LOCK_ALLOC struct lockdep_map dep_map; diff --git a/lib/test_capability-analysis.c b/lib/test_capability-analysis.c index f63980e134cf..3410c04c2b76 100644 --- a/lib/test_capability-analysis.c +++ b/lib/test_capability-analysis.c @@ -5,6 +5,7 @@ */ #include +#include #include /* @@ -144,3 +145,66 @@ TEST_SPINLOCK_COMMON(read_lock, read_unlock, read_trylock, TEST_OP_RO); + +struct test_mutex_data { + struct mutex mtx; + int counter __var_guarded_by(&mtx); +}; + +static void __used test_mutex_init(struct test_mutex_data *d) +{ + mutex_init(&d->mtx); + d->counter = 0; +} + +static void __used test_mutex_lock(struct test_mutex_data *d) +{ + mutex_lock(&d->mtx); + d->counter++; + mutex_unlock(&d->mtx); + mutex_lock_io(&d->mtx); + d->counter++; + mutex_unlock(&d->mtx); +} + +static void __used test_mutex_trylock(struct test_mutex_data *d, atomic_t *a) +{ + if (!mutex_lock_interruptible(&d->mtx)) { + d->counter++; + mutex_unlock(&d->mtx); + } + if (!mutex_lock_killable(&d->mtx)) { + d->counter++; + mutex_unlock(&d->mtx); + } + if (mutex_trylock(&d->mtx)) { + d->counter++; + mutex_unlock(&d->mtx); + } + if (atomic_dec_and_mutex_lock(a, &d->mtx)) { + d->counter++; + mutex_unlock(&d->mtx); + } +} + +static void __used test_mutex_assert(struct test_mutex_data *d) +{ + lockdep_assert_held(&d->mtx); + d->counter++; +} + +static void __used test_mutex_guard(struct test_mutex_data *d) +{ + guard(mutex)(&d->mtx); + d->counter++; +} + +static void __used test_mutex_cond_guard(struct test_mutex_data *d) +{ + scoped_cond_guard(mutex_try, return, &d->mtx) { + d->counter++; + } + scoped_cond_guard(mutex_intr, return, &d->mtx) { + d->counter++; + } +} From patchwork Thu Feb 6 18:10:06 2025 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Marco Elver X-Patchwork-Id: 13963491 Received: from mail-ed1-f74.google.com (mail-ed1-f74.google.com [209.85.208.74]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id AB3B21EEA2C for ; Thu, 
Date:
Thu, 6 Feb 2025 19:10:06 +0100 In-Reply-To: <20250206181711.1902989-1-elver@google.com> Precedence: bulk X-Mailing-List: rcu@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: Mime-Version: 1.0 References: <20250206181711.1902989-1-elver@google.com> X-Mailer: git-send-email 2.48.1.502.g6dc24dfdaf-goog Message-ID: <20250206181711.1902989-13-elver@google.com> Subject: [PATCH RFC 12/24] locking/seqlock: Support Clang's capability analysis From: Marco Elver To: elver@google.com Cc: "Paul E. McKenney" , Alexander Potapenko , Bart Van Assche , Bill Wendling , Boqun Feng , Dmitry Vyukov , Frederic Weisbecker , Greg Kroah-Hartman , Ingo Molnar , Jann Horn , Joel Fernandes , Jonathan Corbet , Josh Triplett , Justin Stitt , Kees Cook , Mark Rutland , Mathieu Desnoyers , Miguel Ojeda , Nathan Chancellor , Neeraj Upadhyay , Nick Desaulniers , Peter Zijlstra , Steven Rostedt , Thomas Gleixner , Uladzislau Rezki , Waiman Long , Will Deacon , kasan-dev@googlegroups.com, linux-kernel@vger.kernel.org, llvm@lists.linux.dev, rcu@vger.kernel.org, linux-crypto@vger.kernel.org Add support for Clang's capability analysis for seqlock_t. Signed-off-by: Marco Elver --- .../dev-tools/capability-analysis.rst | 2 +- include/linux/seqlock.h | 24 +++++++++++ include/linux/seqlock_types.h | 5 ++- lib/test_capability-analysis.c | 43 +++++++++++++++++++ 4 files changed, 71 insertions(+), 3 deletions(-) diff --git a/Documentation/dev-tools/capability-analysis.rst b/Documentation/dev-tools/capability-analysis.rst index 31f76e877be5..8d9336e91ce2 100644 --- a/Documentation/dev-tools/capability-analysis.rst +++ b/Documentation/dev-tools/capability-analysis.rst @@ -85,7 +85,7 @@ Supported Kernel Primitives ~~~~~~~~~~~~~~~~~~~~~~~~~~~ Currently the following synchronization primitives are supported: -`raw_spinlock_t`, `spinlock_t`, `rwlock_t`, `mutex`. +`raw_spinlock_t`, `spinlock_t`, `rwlock_t`, `mutex`, `seqlock_t`. For capabilities with an initialization function (e.g., `spin_lock_init()`), calling this function on the capability instance before initializing any diff --git a/include/linux/seqlock.h b/include/linux/seqlock.h index 5ce48eab7a2a..c914eb9714e9 100644 --- a/include/linux/seqlock.h +++ b/include/linux/seqlock.h @@ -816,6 +816,7 @@ static __always_inline void write_seqcount_latch_end(seqcount_latch_t *s) do { \ spin_lock_init(&(sl)->lock); \ seqcount_spinlock_init(&(sl)->seqcount, &(sl)->lock); \ + __assert_cap(sl); \ } while (0) /** @@ -832,6 +833,7 @@ static __always_inline void write_seqcount_latch_end(seqcount_latch_t *s) * Return: count, to be passed to read_seqretry() */ static inline unsigned read_seqbegin(const seqlock_t *sl) + __acquires_shared(sl) __no_capability_analysis { return read_seqcount_begin(&sl->seqcount); } @@ -848,6 +850,7 @@ static inline unsigned read_seqbegin(const seqlock_t *sl) * Return: true if a read section retry is required, else false */ static inline unsigned read_seqretry(const seqlock_t *sl, unsigned start) + __releases_shared(sl) __no_capability_analysis { return read_seqcount_retry(&sl->seqcount, start); } @@ -872,6 +875,7 @@ static inline unsigned read_seqretry(const seqlock_t *sl, unsigned start) * _irqsave or _bh variants of this function instead. */ static inline void write_seqlock(seqlock_t *sl) + __acquires(sl) __no_capability_analysis { spin_lock(&sl->lock); do_write_seqcount_begin(&sl->seqcount.seqcount); @@ -885,6 +889,7 @@ static inline void write_seqlock(seqlock_t *sl) * critical section of given seqlock_t. 
*/ static inline void write_sequnlock(seqlock_t *sl) + __releases(sl) __no_capability_analysis { do_write_seqcount_end(&sl->seqcount.seqcount); spin_unlock(&sl->lock); @@ -898,6 +903,7 @@ static inline void write_sequnlock(seqlock_t *sl) * other write side sections, can be invoked from softirq contexts. */ static inline void write_seqlock_bh(seqlock_t *sl) + __acquires(sl) __no_capability_analysis { spin_lock_bh(&sl->lock); do_write_seqcount_begin(&sl->seqcount.seqcount); @@ -912,6 +918,7 @@ static inline void write_seqlock_bh(seqlock_t *sl) * write_seqlock_bh(). */ static inline void write_sequnlock_bh(seqlock_t *sl) + __releases(sl) __no_capability_analysis { do_write_seqcount_end(&sl->seqcount.seqcount); spin_unlock_bh(&sl->lock); @@ -925,6 +932,7 @@ static inline void write_sequnlock_bh(seqlock_t *sl) * other write sections, can be invoked from hardirq contexts. */ static inline void write_seqlock_irq(seqlock_t *sl) + __acquires(sl) __no_capability_analysis { spin_lock_irq(&sl->lock); do_write_seqcount_begin(&sl->seqcount.seqcount); @@ -938,12 +946,14 @@ static inline void write_seqlock_irq(seqlock_t *sl) * seqlock_t write side section opened with write_seqlock_irq(). */ static inline void write_sequnlock_irq(seqlock_t *sl) + __releases(sl) __no_capability_analysis { do_write_seqcount_end(&sl->seqcount.seqcount); spin_unlock_irq(&sl->lock); } static inline unsigned long __write_seqlock_irqsave(seqlock_t *sl) + __acquires(sl) __no_capability_analysis { unsigned long flags; @@ -976,6 +986,7 @@ static inline unsigned long __write_seqlock_irqsave(seqlock_t *sl) */ static inline void write_sequnlock_irqrestore(seqlock_t *sl, unsigned long flags) + __releases(sl) __no_capability_analysis { do_write_seqcount_end(&sl->seqcount.seqcount); spin_unlock_irqrestore(&sl->lock, flags); @@ -998,6 +1009,7 @@ write_sequnlock_irqrestore(seqlock_t *sl, unsigned long flags) * The opened read section must be closed with read_sequnlock_excl(). */ static inline void read_seqlock_excl(seqlock_t *sl) + __acquires_shared(sl) __no_capability_analysis { spin_lock(&sl->lock); } @@ -1007,6 +1019,7 @@ static inline void read_seqlock_excl(seqlock_t *sl) * @sl: Pointer to seqlock_t */ static inline void read_sequnlock_excl(seqlock_t *sl) + __releases_shared(sl) __no_capability_analysis { spin_unlock(&sl->lock); } @@ -1021,6 +1034,7 @@ static inline void read_sequnlock_excl(seqlock_t *sl) * from softirq contexts. */ static inline void read_seqlock_excl_bh(seqlock_t *sl) + __acquires_shared(sl) __no_capability_analysis { spin_lock_bh(&sl->lock); } @@ -1031,6 +1045,7 @@ static inline void read_seqlock_excl_bh(seqlock_t *sl) * @sl: Pointer to seqlock_t */ static inline void read_sequnlock_excl_bh(seqlock_t *sl) + __releases_shared(sl) __no_capability_analysis { spin_unlock_bh(&sl->lock); } @@ -1045,6 +1060,7 @@ static inline void read_sequnlock_excl_bh(seqlock_t *sl) * hardirq context. 
*/ static inline void read_seqlock_excl_irq(seqlock_t *sl) + __acquires_shared(sl) __no_capability_analysis { spin_lock_irq(&sl->lock); } @@ -1055,11 +1071,13 @@ static inline void read_seqlock_excl_irq(seqlock_t *sl) * @sl: Pointer to seqlock_t */ static inline void read_sequnlock_excl_irq(seqlock_t *sl) + __releases_shared(sl) __no_capability_analysis { spin_unlock_irq(&sl->lock); } static inline unsigned long __read_seqlock_excl_irqsave(seqlock_t *sl) + __acquires_shared(sl) __no_capability_analysis { unsigned long flags; @@ -1089,6 +1107,7 @@ static inline unsigned long __read_seqlock_excl_irqsave(seqlock_t *sl) */ static inline void read_sequnlock_excl_irqrestore(seqlock_t *sl, unsigned long flags) + __releases_shared(sl) __no_capability_analysis { spin_unlock_irqrestore(&sl->lock, flags); } @@ -1125,6 +1144,7 @@ read_sequnlock_excl_irqrestore(seqlock_t *sl, unsigned long flags) * parameter of the next read_seqbegin_or_lock() iteration. */ static inline void read_seqbegin_or_lock(seqlock_t *lock, int *seq) + __acquires_shared(lock) __no_capability_analysis { if (!(*seq & 1)) /* Even */ *seq = read_seqbegin(lock); @@ -1140,6 +1160,7 @@ static inline void read_seqbegin_or_lock(seqlock_t *lock, int *seq) * Return: true if a read section retry is required, false otherwise */ static inline int need_seqretry(seqlock_t *lock, int seq) + __releases_shared(lock) __no_capability_analysis { return !(seq & 1) && read_seqretry(lock, seq); } @@ -1153,6 +1174,7 @@ static inline int need_seqretry(seqlock_t *lock, int seq) * with read_seqbegin_or_lock() and validated by need_seqretry(). */ static inline void done_seqretry(seqlock_t *lock, int seq) + __no_capability_analysis { if (seq & 1) read_sequnlock_excl(lock); @@ -1180,6 +1202,7 @@ static inline void done_seqretry(seqlock_t *lock, int seq) */ static inline unsigned long read_seqbegin_or_lock_irqsave(seqlock_t *lock, int *seq) + __acquires_shared(lock) __no_capability_analysis { unsigned long flags = 0; @@ -1205,6 +1228,7 @@ read_seqbegin_or_lock_irqsave(seqlock_t *lock, int *seq) */ static inline void done_seqretry_irqrestore(seqlock_t *lock, int seq, unsigned long flags) + __no_capability_analysis { if (seq & 1) read_sequnlock_excl_irqrestore(lock, flags); diff --git a/include/linux/seqlock_types.h b/include/linux/seqlock_types.h index dfdf43e3fa3d..9775d6f1a234 100644 --- a/include/linux/seqlock_types.h +++ b/include/linux/seqlock_types.h @@ -81,13 +81,14 @@ SEQCOUNT_LOCKNAME(mutex, struct mutex, true, mutex) * - Comments on top of seqcount_t * - Documentation/locking/seqlock.rst */ -typedef struct { +struct_with_capability(seqlock) { /* * Make sure that readers don't starve writers on PREEMPT_RT: use * seqcount_spinlock_t instead of seqcount_t. Check __SEQ_LOCK(). 
*/ seqcount_spinlock_t seqcount; spinlock_t lock; -} seqlock_t; +}; +typedef struct seqlock seqlock_t; #endif /* __LINUX_SEQLOCK_TYPES_H */ diff --git a/lib/test_capability-analysis.c b/lib/test_capability-analysis.c index 3410c04c2b76..1e4b90f76420 100644 --- a/lib/test_capability-analysis.c +++ b/lib/test_capability-analysis.c @@ -6,6 +6,7 @@ #include <linux/compiler-capability-analysis.h> #include <linux/mutex.h> +#include <linux/seqlock.h> #include <linux/spinlock.h> /* @@ -208,3 +209,45 @@ static void __used test_mutex_cond_guard(struct test_mutex_data *d) d->counter++; } } + +struct test_seqlock_data { + seqlock_t sl; + int counter __var_guarded_by(&sl); +}; + +static void __used test_seqlock_init(struct test_seqlock_data *d) +{ + seqlock_init(&d->sl); + d->counter = 0; +} + +static void __used test_seqlock_reader(struct test_seqlock_data *d) +{ + unsigned int seq; + + do { + seq = read_seqbegin(&d->sl); + (void)d->counter; + } while (read_seqretry(&d->sl, seq)); +} + +static void __used test_seqlock_writer(struct test_seqlock_data *d) +{ + unsigned long flags; + + write_seqlock(&d->sl); + d->counter++; + write_sequnlock(&d->sl); + + write_seqlock_irq(&d->sl); + d->counter++; + write_sequnlock_irq(&d->sl); + + write_seqlock_bh(&d->sl); + d->counter++; + write_sequnlock_bh(&d->sl); + + write_seqlock_irqsave(&d->sl, flags); + d->counter++; + write_sequnlock_irqrestore(&d->sl, flags); +}
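The shared-capability annotations also model the read_seqbegin_or_lock() pattern: each loop iteration acquires the seqlock as shared and need_seqretry() releases it again, so reads of guarded members are permitted inside the loop. A sketch assuming this series; read_counter() is hypothetical, struct test_seqlock_data is the test type above:

	static int read_counter(struct test_seqlock_data *d)
	{
		int seq = 0, val;

		do {
			read_seqbegin_or_lock(&d->sl, &seq);	/* acquires shared */
			val = d->counter;			/* read permitted */
		} while (need_seqretry(&d->sl, seq));		/* releases shared */
		done_seqretry(&d->sl, seq);
		return val;
	}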
From patchwork Thu Feb 6 18:10:07 2025
Date: Thu, 6 Feb 2025 19:10:07 +0100
In-Reply-To: <20250206181711.1902989-1-elver@google.com>
Message-ID: <20250206181711.1902989-14-elver@google.com>
Subject: [PATCH RFC 13/24] bit_spinlock: Include missing <asm/processor.h>
From: Marco Elver To: elver@google.com

Including <linux/bit_spinlock.h> into an empty TU will result in the compiler complaining: ./include/linux/bit_spinlock.h:34:4: error: call to undeclared function 'cpu_relax'; <...> 34 | cpu_relax(); | ^ 1 error generated. Include <asm/processor.h> to allow including bit_spinlock.h where <asm/processor.h> is not otherwise included.
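The failure reproduces with a translation unit that contains nothing but this include; a sketch of such a TU, before the fix:

	/* empty-tu.c: fails to build before this patch, because nothing
	 * has declared cpu_relax() (normally provided by <asm/processor.h>)
	 * unless another header happened to pull it in first. */
	#include <linux/bit_spinlock.h>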
Signed-off-by: Marco Elver <elver@google.com>
---
 include/linux/bit_spinlock.h | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/include/linux/bit_spinlock.h b/include/linux/bit_spinlock.h
index bbc4730a6505..f1174a2fcc4d 100644
--- a/include/linux/bit_spinlock.h
+++ b/include/linux/bit_spinlock.h
@@ -7,6 +7,8 @@
 #include <linux/atomic.h>
 #include <linux/bug.h>
 
+#include <asm/processor.h> /* for cpu_relax() */
+
 /*
  * bit-based spin_lock()
  *
From patchwork Thu Feb 6 18:10:08 2025
X-Patchwork-Submitter: Marco Elver <elver@google.com>
X-Patchwork-Id: 13963493
Date: Thu, 6 Feb 2025 19:10:08 +0100
In-Reply-To: <20250206181711.1902989-1-elver@google.com>
References: <20250206181711.1902989-1-elver@google.com>
Message-ID: <20250206181711.1902989-15-elver@google.com>
Subject: [PATCH RFC 14/24] bit_spinlock: Support Clang's capability analysis
From: Marco Elver <elver@google.com>
To: elver@google.com

The annotations for bit_spinlock.h have simply been using "bitlock" as
the token. For Sparse, that was likely sufficient in most cases. But
Clang's capability analysis is more precise, and we need to ensure we
can distinguish different bitlocks.

To do so, add a token capability, and a macro __bitlock(bitnum, addr)
that is used to construct unique per-bitlock tokens.

Add the appropriate test.

<linux/list_bl.h> is implicitly included through other includes, and
requires two annotations to indicate that acquisition (without release)
and release (without prior acquisition) of its bitlock is intended.
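For illustration, a hedged sketch of what per-(bit, addr) tokens buy us;
the struct and function below are hypothetical, while __bitlock() and
__var_guarded_by() come from this series:

  struct two_bitlocks {
          unsigned long flags;
          int a __var_guarded_by(__bitlock(0, &flags));
          int b __var_guarded_by(__bitlock(1, &flags));
  };

  static void update_a(struct two_bitlocks *t)
  {
          bit_spin_lock(0, &t->flags);
          t->a++;         /* ok: __bitlock(0, &flags) held */
          /*
           * t->b++ here should be flagged, since only bit 0 is held --
           * subject to the false negatives noted in the test below, where
           * the analysis cannot always distinguish the bit itself.
           */
          bit_spin_unlock(0, &t->flags);
  }

With the old single "bitlock" token, these two locks were
indistinguishable.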
Signed-off-by: Marco Elver <elver@google.com>
---
 .../dev-tools/capability-analysis.rst |  3 ++-
 include/linux/bit_spinlock.h          | 22 +++++++++++++---
 include/linux/list_bl.h               |  2 ++
 lib/test_capability-analysis.c        | 26 +++++++++++++++++++
 4 files changed, 48 insertions(+), 5 deletions(-)

diff --git a/Documentation/dev-tools/capability-analysis.rst b/Documentation/dev-tools/capability-analysis.rst
index 8d9336e91ce2..a34dfe7b0b09 100644
--- a/Documentation/dev-tools/capability-analysis.rst
+++ b/Documentation/dev-tools/capability-analysis.rst
@@ -85,7 +85,8 @@ Supported Kernel Primitives
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
 Currently the following synchronization primitives are supported:
-`raw_spinlock_t`, `spinlock_t`, `rwlock_t`, `mutex`, `seqlock_t`.
+`raw_spinlock_t`, `spinlock_t`, `rwlock_t`, `mutex`, `seqlock_t`,
+`bit_spinlock`.
 
 For capabilities with an initialization function (e.g., `spin_lock_init()`),
 calling this function on the capability instance before initializing any
diff --git a/include/linux/bit_spinlock.h b/include/linux/bit_spinlock.h
index f1174a2fcc4d..57114b44ce5d 100644
--- a/include/linux/bit_spinlock.h
+++ b/include/linux/bit_spinlock.h
@@ -9,6 +9,16 @@
 #include <asm/processor.h> /* for cpu_relax() */
 
+/*
+ * For static capability analysis, we need a unique token for each possible bit
+ * that can be used as a bit_spinlock. The easiest way to do that is to create a
+ * fake capability that we can cast to with the __bitlock(bitnum, addr) macro
+ * below, which will give us unique instances for each (bit, addr) pair that the
+ * static analysis can use.
+ */
+struct_with_capability(__capability_bitlock) { };
+#define __bitlock(bitnum, addr) (struct __capability_bitlock *)(bitnum + (addr))
+
 /*
  * bit-based spin_lock()
  *
@@ -16,6 +26,7 @@
  * are significantly faster.
  */
 static inline void bit_spin_lock(int bitnum, unsigned long *addr)
+	__acquires(__bitlock(bitnum, addr))
 {
 	/*
 	 * Assuming the lock is uncontended, this never enters
@@ -34,13 +45,14 @@ static inline void bit_spin_lock(int bitnum, unsigned long *addr)
 		preempt_disable();
 	}
 #endif
-	__acquire(bitlock);
+	__acquire(__bitlock(bitnum, addr));
 }
 
 /*
  * Return true if it was acquired
  */
 static inline int bit_spin_trylock(int bitnum, unsigned long *addr)
+	__cond_acquires(1, __bitlock(bitnum, addr))
 {
 	preempt_disable();
 #if defined(CONFIG_SMP) || defined(CONFIG_DEBUG_SPINLOCK)
@@ -49,7 +61,7 @@ static inline int bit_spin_trylock(int bitnum, unsigned long *addr)
 		return 0;
 	}
 #endif
-	__acquire(bitlock);
+	__acquire(__bitlock(bitnum, addr));
 	return 1;
 }
 
@@ -57,6 +69,7 @@ static inline int bit_spin_trylock(int bitnum, unsigned long *addr)
  * bit-based spin_unlock()
  */
 static inline void bit_spin_unlock(int bitnum, unsigned long *addr)
+	__releases(__bitlock(bitnum, addr))
 {
 #ifdef CONFIG_DEBUG_SPINLOCK
 	BUG_ON(!test_bit(bitnum, addr));
@@ -65,7 +78,7 @@ static inline void bit_spin_unlock(int bitnum, unsigned long *addr)
 	clear_bit_unlock(bitnum, addr);
 #endif
 	preempt_enable();
-	__release(bitlock);
+	__release(__bitlock(bitnum, addr));
 }
 
 /*
@@ -74,6 +87,7 @@ static void bit_spin_unlock(int bitnum, unsigned long *addr)
  * protecting the rest of the flags in the word.
  */
 static inline void __bit_spin_unlock(int bitnum, unsigned long *addr)
+	__releases(__bitlock(bitnum, addr))
 {
 #ifdef CONFIG_DEBUG_SPINLOCK
 	BUG_ON(!test_bit(bitnum, addr));
@@ -82,7 +96,7 @@ static inline void __bit_spin_unlock(int bitnum, unsigned long *addr)
 	__clear_bit_unlock(bitnum, addr);
 #endif
 	preempt_enable();
-	__release(bitlock);
+	__release(__bitlock(bitnum, addr));
 }
 
 /*
diff --git a/include/linux/list_bl.h b/include/linux/list_bl.h
index ae1b541446c9..df9eebe6afca 100644
--- a/include/linux/list_bl.h
+++ b/include/linux/list_bl.h
@@ -144,11 +144,13 @@ static inline void hlist_bl_del_init(struct hlist_bl_node *n)
 }
 
 static inline void hlist_bl_lock(struct hlist_bl_head *b)
+	__acquires(__bitlock(0, b))
 {
 	bit_spin_lock(0, (unsigned long *)b);
 }
 
 static inline void hlist_bl_unlock(struct hlist_bl_head *b)
+	__releases(__bitlock(0, b))
 {
 	__bit_spin_unlock(0, (unsigned long *)b);
 }
 
diff --git a/lib/test_capability-analysis.c b/lib/test_capability-analysis.c
index 1e4b90f76420..fc8dcad2a994 100644
--- a/lib/test_capability-analysis.c
+++ b/lib/test_capability-analysis.c
@@ -4,6 +4,7 @@
  * positive errors when compiled with Clang's capability analysis.
  */
 
+#include <linux/bit_spinlock.h>
 #include <linux/build_bug.h>
 #include <linux/mutex.h>
 #include <linux/seqlock.h>
@@ -251,3 +252,28 @@ static void __used test_seqlock_writer(struct test_seqlock_data *d)
 	d->counter++;
 	write_sequnlock_irqrestore(&d->sl, flags);
 }
+
+struct test_bit_spinlock_data {
+	unsigned long bits;
+	int counter __var_guarded_by(__bitlock(3, &bits));
+};
+
+static void __used test_bit_spin_lock(struct test_bit_spinlock_data *d)
+{
+	/*
+	 * Note, the analysis seems to have false negatives, because it won't
+	 * precisely recognize the bit of the fake __bitlock() token.
+	 */
+	bit_spin_lock(3, &d->bits);
+	d->counter++;
+	bit_spin_unlock(3, &d->bits);
+
+	bit_spin_lock(3, &d->bits);
+	d->counter++;
+	__bit_spin_unlock(3, &d->bits);
+
+	if (bit_spin_trylock(3, &d->bits)) {
+		d->counter++;
+		bit_spin_unlock(3, &d->bits);
+	}
+}
From patchwork Thu Feb 6 18:10:09 2025
X-Patchwork-Submitter: Marco Elver <elver@google.com>
X-Patchwork-Id: 13963494
Date: Thu, 6 Feb 2025 19:10:09 +0100
In-Reply-To: <20250206181711.1902989-1-elver@google.com>
References: <20250206181711.1902989-1-elver@google.com>
Message-ID: <20250206181711.1902989-16-elver@google.com>
Subject: [PATCH RFC 15/24] rcu: Support Clang's capability analysis
From: Marco Elver <elver@google.com>
To: elver@google.com

Improve the existing annotations to properly support Clang's capability
analysis.

The old annotations distinguished between RCU, RCU_BH, and RCU_SCHED.
However, it does not make sense to acquire rcu_read_lock_bh() after
rcu_read_lock(); annotate the _bh() and _sched() variants to also
acquire 'RCU', so that Clang (and also Sparse) can warn about it.

The above change also simplifies introducing annotations where it does
not matter which of RCU, RCU_BH, or RCU_SCHED is acquired: through the
introduction of __rcu_guarded, we can use Clang's capability analysis
to warn if a pointer is dereferenced without any of the RCU locks held,
or updated without the appropriate helpers.

The primitives rcu_assign_pointer() and friends are wrapped with
capability_unsafe(), which enforces using them to update RCU-protected
pointers marked with __rcu_guarded.
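For illustration, a hedged sketch of the reader side with __rcu_guarded
(struct and field names hypothetical):

  struct conf {
          long __rcu_guarded *setting;  /* __rcu + __var_guarded_by(RCU) */
  };

  static long get_setting(struct conf *c)
  {
          long v;

          rcu_read_lock();                  /* acquires shared 'RCU' */
          v = *rcu_dereference(c->setting);
          rcu_read_unlock();

          /*
           * Dereferencing c->setting without one of the RCU locks held, or
           * updating it without rcu_assign_pointer()/RCU_INIT_POINTER(),
           * should now be flagged by the analysis.
           */
          return v;
  }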
Signed-off-by: Marco Elver <elver@google.com>
---
 .../dev-tools/capability-analysis.rst |  2 +-
 include/linux/cleanup.h               |  4 +
 include/linux/rcupdate.h              | 73 +++++++++++++------
 lib/test_capability-analysis.c        | 68 +++++++++++++++++
 4 files changed, 123 insertions(+), 24 deletions(-)

diff --git a/Documentation/dev-tools/capability-analysis.rst b/Documentation/dev-tools/capability-analysis.rst
index a34dfe7b0b09..73dd28a23b11 100644
--- a/Documentation/dev-tools/capability-analysis.rst
+++ b/Documentation/dev-tools/capability-analysis.rst
@@ -86,7 +86,7 @@ Supported Kernel Primitives
 
 Currently the following synchronization primitives are supported:
 `raw_spinlock_t`, `spinlock_t`, `rwlock_t`, `mutex`, `seqlock_t`,
-`bit_spinlock`.
+`bit_spinlock`, RCU.
 
 For capabilities with an initialization function (e.g., `spin_lock_init()`),
 calling this function on the capability instance before initializing any
diff --git a/include/linux/cleanup.h b/include/linux/cleanup.h
index 93a166549add..7d70d308357a 100644
--- a/include/linux/cleanup.h
+++ b/include/linux/cleanup.h
@@ -404,6 +404,10 @@ static inline class_##_name##_t class_##_name##_constructor(void) \
 	return _t; \
 }
 
+#define DECLARE_LOCK_GUARD_0_ATTRS(_name, _lock, _unlock) \
+static inline class_##_name##_t class_##_name##_constructor(void) _lock;\
+static inline void class_##_name##_destructor(class_##_name##_t *_T) _unlock
+
 #define DEFINE_LOCK_GUARD_1(_name, _type, _lock, _unlock, ...) \
 __DEFINE_CLASS_IS_CONDITIONAL(_name, false); \
 __DEFINE_UNLOCK_GUARD(_name, _type, _unlock, __VA_ARGS__) \
diff --git a/include/linux/rcupdate.h b/include/linux/rcupdate.h
index 48e5c03df1dd..ee68095ba9f0 100644
--- a/include/linux/rcupdate.h
+++ b/include/linux/rcupdate.h
@@ -31,6 +31,16 @@
 #include <asm/processor.h>
 #include <linux/context_tracking_irq.h>
 
+token_capability(RCU);
+token_capability_instance(RCU, RCU_SCHED);
+token_capability_instance(RCU, RCU_BH);
+
+/*
+ * A convenience macro that can be used for RCU-protected globals or struct
+ * members; adds type qualifier __rcu, and also enforces __var_guarded_by(RCU).
+ */
+#define __rcu_guarded __rcu __var_guarded_by(RCU)
+
 #define ULONG_CMP_GE(a, b) (ULONG_MAX / 2 >= (a) - (b))
 #define ULONG_CMP_LT(a, b) (ULONG_MAX / 2 < (a) - (b))
 
@@ -431,7 +441,8 @@ static inline void rcu_preempt_sleep_check(void) { }
 
 // See RCU_LOCKDEP_WARN() for an explanation of the double call to
 // debug_lockdep_rcu_enabled().
-static inline bool lockdep_assert_rcu_helper(bool c)
+static inline bool lockdep_assert_rcu_helper(bool c, const struct __capability_RCU *cap)
+	__asserts_shared_cap(RCU) __asserts_shared_cap(cap)
 {
 	return debug_lockdep_rcu_enabled() &&
 	       (c || !rcu_is_watching() || !rcu_lockdep_current_cpu_online()) &&
@@ -444,7 +455,7 @@ static inline bool lockdep_assert_rcu_helper(bool c)
  * Splats if lockdep is enabled and there is no rcu_read_lock() in effect.
  */
 #define lockdep_assert_in_rcu_read_lock() \
-	WARN_ON_ONCE(lockdep_assert_rcu_helper(!lock_is_held(&rcu_lock_map)))
+	WARN_ON_ONCE(lockdep_assert_rcu_helper(!lock_is_held(&rcu_lock_map), RCU))
 
 /**
  * lockdep_assert_in_rcu_read_lock_bh - WARN if not protected by rcu_read_lock_bh()
@@ -454,7 +465,7 @@ static inline bool lockdep_assert_rcu_helper(bool c)
  * actual rcu_read_lock_bh() is required.
  */
 #define lockdep_assert_in_rcu_read_lock_bh() \
-	WARN_ON_ONCE(lockdep_assert_rcu_helper(!lock_is_held(&rcu_bh_lock_map)))
+	WARN_ON_ONCE(lockdep_assert_rcu_helper(!lock_is_held(&rcu_bh_lock_map), RCU_BH))
 
 /**
  * lockdep_assert_in_rcu_read_lock_sched - WARN if not protected by rcu_read_lock_sched()
@@ -464,7 +475,7 @@ static inline bool lockdep_assert_rcu_helper(bool c)
  * instead an actual rcu_read_lock_sched() is required.
  */
 #define lockdep_assert_in_rcu_read_lock_sched() \
-	WARN_ON_ONCE(lockdep_assert_rcu_helper(!lock_is_held(&rcu_sched_lock_map)))
+	WARN_ON_ONCE(lockdep_assert_rcu_helper(!lock_is_held(&rcu_sched_lock_map), RCU_SCHED))
 
 /**
  * lockdep_assert_in_rcu_reader - WARN if not within some type of RCU reader
@@ -482,17 +493,17 @@ static inline bool lockdep_assert_rcu_helper(bool c)
 	WARN_ON_ONCE(lockdep_assert_rcu_helper(!lock_is_held(&rcu_lock_map) && \
 					       !lock_is_held(&rcu_bh_lock_map) && \
 					       !lock_is_held(&rcu_sched_lock_map) && \
-					       preemptible()))
+					       preemptible(), RCU))
 
 #else /* #ifdef CONFIG_PROVE_RCU */
 
 #define RCU_LOCKDEP_WARN(c, s) do { } while (0 && (c))
 #define rcu_sleep_check() do { } while (0)
 
-#define lockdep_assert_in_rcu_read_lock() do { } while (0)
-#define lockdep_assert_in_rcu_read_lock_bh() do { } while (0)
-#define lockdep_assert_in_rcu_read_lock_sched() do { } while (0)
-#define lockdep_assert_in_rcu_reader() do { } while (0)
+#define lockdep_assert_in_rcu_read_lock() __assert_shared_cap(RCU)
+#define lockdep_assert_in_rcu_read_lock_bh() __assert_shared_cap(RCU_BH)
+#define lockdep_assert_in_rcu_read_lock_sched() __assert_shared_cap(RCU_SCHED)
+#define lockdep_assert_in_rcu_reader() __assert_shared_cap(RCU)
 
 #endif /* #else #ifdef CONFIG_PROVE_RCU */
 
@@ -512,11 +523,11 @@ static inline bool lockdep_assert_rcu_helper(bool c)
 #endif /* #else #ifdef __CHECKER__ */
 
 #define __unrcu_pointer(p, local) \
-({ \
+capability_unsafe( \
 	typeof(*p) *local = (typeof(*p) *__force)(p); \
 	rcu_check_sparse(p, __rcu); \
 	((typeof(*p) __force __kernel *)(local)); \
-})
+)
 /**
  * unrcu_pointer - mark a pointer as not being RCU protected
  * @p: pointer needing to lose its __rcu property
@@ -592,7 +603,7 @@
  * other macros that it invokes.
  */
 #define rcu_assign_pointer(p, v) \
-do { \
+capability_unsafe( \
 	uintptr_t _r_a_p__v = (uintptr_t)(v); \
 	rcu_check_sparse(p, __rcu); \
 	\
 	if (__builtin_constant_p(v) && (_r_a_p__v) == (uintptr_t)NULL) \
 		WRITE_ONCE((p), (typeof(p))(_r_a_p__v)); \
 	else \
 		smp_store_release(&p, RCU_INITIALIZER((typeof(p))_r_a_p__v)); \
-} while (0)
+)
 
 /**
  * rcu_replace_pointer() - replace an RCU pointer, returning its old value
@@ -843,9 +854,10 @@
  * only when acquiring spinlocks that are subject to priority inheritance.
  */
 static __always_inline void rcu_read_lock(void)
+	__acquires_shared(RCU)
 {
 	__rcu_read_lock();
-	__acquire(RCU);
+	__acquire_shared(RCU);
 	rcu_lock_acquire(&rcu_lock_map);
 	RCU_LOCKDEP_WARN(!rcu_is_watching(),
 			 "rcu_read_lock() used illegally while idle");
@@ -874,11 +886,12 @@ static __always_inline void rcu_read_lock(void)
  * See rcu_read_lock() for more information.
  */
 static inline void rcu_read_unlock(void)
+	__releases_shared(RCU)
 {
 	RCU_LOCKDEP_WARN(!rcu_is_watching(),
 			 "rcu_read_unlock() used illegally while idle");
 	rcu_lock_release(&rcu_lock_map); /* Keep acq info for rls diags. */
-	__release(RCU);
+	__release_shared(RCU);
 	__rcu_read_unlock();
 }
 
@@ -897,9 +910,11 @@ static inline void rcu_read_unlock(void)
  * was invoked from some other task.
  */
 static inline void rcu_read_lock_bh(void)
+	__acquires_shared(RCU) __acquires_shared(RCU_BH)
 {
 	local_bh_disable();
-	__acquire(RCU_BH);
+	__acquire_shared(RCU);
+	__acquire_shared(RCU_BH);
 	rcu_lock_acquire(&rcu_bh_lock_map);
 	RCU_LOCKDEP_WARN(!rcu_is_watching(),
 			 "rcu_read_lock_bh() used illegally while idle");
@@ -911,11 +926,13 @@ static inline void rcu_read_lock_bh(void)
  * See rcu_read_lock_bh() for more information.
  */
 static inline void rcu_read_unlock_bh(void)
+	__releases_shared(RCU) __releases_shared(RCU_BH)
 {
 	RCU_LOCKDEP_WARN(!rcu_is_watching(),
 			 "rcu_read_unlock_bh() used illegally while idle");
 	rcu_lock_release(&rcu_bh_lock_map);
-	__release(RCU_BH);
+	__release_shared(RCU_BH);
+	__release_shared(RCU);
 	local_bh_enable();
 }
 
@@ -935,9 +952,11 @@ static inline void rcu_read_unlock_bh(void)
  * rcu_read_lock_sched() was invoked from an NMI handler.
  */
 static inline void rcu_read_lock_sched(void)
+	__acquires_shared(RCU) __acquires_shared(RCU_SCHED)
 {
 	preempt_disable();
-	__acquire(RCU_SCHED);
+	__acquire_shared(RCU);
+	__acquire_shared(RCU_SCHED);
 	rcu_lock_acquire(&rcu_sched_lock_map);
 	RCU_LOCKDEP_WARN(!rcu_is_watching(),
 			 "rcu_read_lock_sched() used illegally while idle");
@@ -945,9 +964,11 @@ static inline void rcu_read_lock_sched(void)
 
 /* Used by lockdep and tracing: cannot be traced, cannot call lockdep. */
 static inline notrace void rcu_read_lock_sched_notrace(void)
+	__acquires_shared(RCU) __acquires_shared(RCU_SCHED)
 {
 	preempt_disable_notrace();
-	__acquire(RCU_SCHED);
+	__acquire_shared(RCU);
+	__acquire_shared(RCU_SCHED);
 }
 
 /**
@@ -956,18 +977,22 @@ static inline notrace void rcu_read_lock_sched_notrace(void)
  * See rcu_read_lock_sched() for more information.
  */
 static inline void rcu_read_unlock_sched(void)
+	__releases_shared(RCU) __releases_shared(RCU_SCHED)
 {
 	RCU_LOCKDEP_WARN(!rcu_is_watching(),
 			 "rcu_read_unlock_sched() used illegally while idle");
 	rcu_lock_release(&rcu_sched_lock_map);
-	__release(RCU_SCHED);
+	__release_shared(RCU_SCHED);
+	__release_shared(RCU);
 	preempt_enable();
 }
 
 /* Used by lockdep and tracing: cannot be traced, cannot call lockdep. */
 static inline notrace void rcu_read_unlock_sched_notrace(void)
+	__releases_shared(RCU) __releases_shared(RCU_SCHED)
 {
-	__release(RCU_SCHED);
+	__release_shared(RCU_SCHED);
+	__release_shared(RCU);
 	preempt_enable_notrace();
 }
 
@@ -1010,10 +1035,10 @@ static inline notrace void rcu_read_unlock_sched_notrace(void)
  * ordering guarantees for either the CPU or the compiler.
  */
 #define RCU_INIT_POINTER(p, v) \
-	do { \
+	capability_unsafe( \
 		rcu_check_sparse(p, __rcu); \
 		WRITE_ONCE(p, RCU_INITIALIZER(v)); \
-	} while (0)
+	)
 
 /**
  * RCU_POINTER_INITIALIZER() - statically initialize an RCU protected pointer
@@ -1172,4 +1197,6 @@ DEFINE_LOCK_GUARD_0(rcu,
 	} while (0),
 	rcu_read_unlock())
 
+DECLARE_LOCK_GUARD_0_ATTRS(rcu, __acquires_shared(RCU), __releases_shared(RCU));
+
 #endif /* __LINUX_RCUPDATE_H */
diff --git a/lib/test_capability-analysis.c b/lib/test_capability-analysis.c
index fc8dcad2a994..f5a1dda6ca38 100644
--- a/lib/test_capability-analysis.c
+++ b/lib/test_capability-analysis.c
@@ -7,6 +7,7 @@
 #include <linux/bit_spinlock.h>
 #include <linux/build_bug.h>
 #include <linux/mutex.h>
+#include <linux/rcupdate.h>
 #include <linux/seqlock.h>
 #include <linux/spinlock.h>
 
@@ -277,3 +278,70 @@ static void __used test_bit_spin_lock(struct test_bit_spinlock_data *d)
 		bit_spin_unlock(3, &d->bits);
 	}
 }
+
+/*
+ * Test that we can mark a variable guarded by RCU, and we can dereference and
+ * write to the pointer with RCU's primitives.
+ */
+struct test_rcu_data {
+	long __rcu_guarded *data;
+};
+
+static void __used test_rcu_guarded_reader(struct test_rcu_data *d)
+{
+	rcu_read_lock();
+	(void)rcu_dereference(d->data);
+	rcu_read_unlock();
+
+	rcu_read_lock_bh();
+	(void)rcu_dereference(d->data);
+	rcu_read_unlock_bh();
+
+	rcu_read_lock_sched();
+	(void)rcu_dereference(d->data);
+	rcu_read_unlock_sched();
+}
+
+static void __used test_rcu_guard(struct test_rcu_data *d)
+{
+	guard(rcu)();
+	(void)rcu_dereference(d->data);
+}
+
+static void __used test_rcu_guarded_updater(struct test_rcu_data *d)
+{
+	rcu_assign_pointer(d->data, NULL);
+	RCU_INIT_POINTER(d->data, NULL);
+	(void)unrcu_pointer(d->data);
+}
+
+static void wants_rcu_held(void) __must_hold_shared(RCU) { }
+static void wants_rcu_held_bh(void) __must_hold_shared(RCU_BH) { }
+static void wants_rcu_held_sched(void) __must_hold_shared(RCU_SCHED) { }
+
+static void __used test_rcu_lock_variants(void)
+{
+	rcu_read_lock();
+	wants_rcu_held();
+	rcu_read_unlock();
+
+	rcu_read_lock_bh();
+	wants_rcu_held_bh();
+	rcu_read_unlock_bh();
+
+	rcu_read_lock_sched();
+	wants_rcu_held_sched();
+	rcu_read_unlock_sched();
+}
+
+static void __used test_rcu_assert_variants(void)
+{
+	lockdep_assert_in_rcu_read_lock();
+	wants_rcu_held();
+
+	lockdep_assert_in_rcu_read_lock_bh();
+	wants_rcu_held_bh();
+
+	lockdep_assert_in_rcu_read_lock_sched();
+	wants_rcu_held_sched();
+}
From patchwork Thu Feb 6 18:10:10 2025
X-Patchwork-Submitter: Marco Elver <elver@google.com>
X-Patchwork-Id: 13963495
Date: Thu, 6 Feb 2025 19:10:10 +0100
In-Reply-To: <20250206181711.1902989-1-elver@google.com>
References: <20250206181711.1902989-1-elver@google.com>
Message-ID: <20250206181711.1902989-17-elver@google.com>
Subject: [PATCH RFC 16/24] srcu: Support Clang's capability analysis
From: Marco Elver <elver@google.com>
To: elver@google.com

Add support for Clang's capability analysis for SRCU.
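For illustration, a hedged sketch of the reader-side contract after this
patch (names hypothetical): srcu_read_lock() acquires the shared
capability of the srcu_struct itself, and srcu_dereference() requires it:

  struct cfg {
          struct srcu_struct srcu;
          long __rcu_guarded *val;
  };

  static long get_val(struct cfg *c)
  {
          long v;
          int idx;

          idx = srcu_read_lock(&c->srcu);    /* __acquires_shared(&c->srcu) */
          v = *srcu_dereference(c->val, &c->srcu);
          srcu_read_unlock(&c->srcu, idx);   /* __releases_shared(&c->srcu) */
          return v;
  }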
Signed-off-by: Marco Elver <elver@google.com>
---
 .../dev-tools/capability-analysis.rst |  2 +-
 include/linux/srcu.h                  | 61 +++++++++++++------
 lib/test_capability-analysis.c        | 24 ++++++++
 3 files changed, 66 insertions(+), 21 deletions(-)

diff --git a/Documentation/dev-tools/capability-analysis.rst b/Documentation/dev-tools/capability-analysis.rst
index 73dd28a23b11..3766ac466470 100644
--- a/Documentation/dev-tools/capability-analysis.rst
+++ b/Documentation/dev-tools/capability-analysis.rst
@@ -86,7 +86,7 @@ Supported Kernel Primitives
 
 Currently the following synchronization primitives are supported:
 `raw_spinlock_t`, `spinlock_t`, `rwlock_t`, `mutex`, `seqlock_t`,
-`bit_spinlock`, RCU.
+`bit_spinlock`, RCU, SRCU (`srcu_struct`).
 
 For capabilities with an initialization function (e.g., `spin_lock_init()`),
 calling this function on the capability instance before initializing any
diff --git a/include/linux/srcu.h b/include/linux/srcu.h
index d7ba46e74f58..560310643c54 100644
--- a/include/linux/srcu.h
+++ b/include/linux/srcu.h
@@ -21,7 +21,7 @@
 #include <linux/workqueue.h>
 #include <linux/rcu_segcblist.h>
 
-struct srcu_struct;
+struct_with_capability(srcu_struct);
 
 #ifdef CONFIG_DEBUG_LOCK_ALLOC
@@ -60,14 +60,14 @@ int init_srcu_struct(struct srcu_struct *ssp);
 void call_srcu(struct srcu_struct *ssp, struct rcu_head *head,
 		void (*func)(struct rcu_head *head));
 void cleanup_srcu_struct(struct srcu_struct *ssp);
-int __srcu_read_lock(struct srcu_struct *ssp) __acquires(ssp);
-void __srcu_read_unlock(struct srcu_struct *ssp, int idx) __releases(ssp);
+int __srcu_read_lock(struct srcu_struct *ssp) __acquires_shared(ssp);
+void __srcu_read_unlock(struct srcu_struct *ssp, int idx) __releases_shared(ssp);
 #ifdef CONFIG_TINY_SRCU
 #define __srcu_read_lock_lite __srcu_read_lock
 #define __srcu_read_unlock_lite __srcu_read_unlock
 #else // #ifdef CONFIG_TINY_SRCU
-int __srcu_read_lock_lite(struct srcu_struct *ssp) __acquires(ssp);
-void __srcu_read_unlock_lite(struct srcu_struct *ssp, int idx) __releases(ssp);
+int __srcu_read_lock_lite(struct srcu_struct *ssp) __acquires_shared(ssp);
+void __srcu_read_unlock_lite(struct srcu_struct *ssp, int idx) __releases_shared(ssp);
 #endif // #else // #ifdef CONFIG_TINY_SRCU
 void synchronize_srcu(struct srcu_struct *ssp);
@@ -110,14 +110,16 @@ static inline bool same_state_synchronize_srcu(unsigned long oldstate1, unsigned
 }
 
 #ifdef CONFIG_NEED_SRCU_NMI_SAFE
-int __srcu_read_lock_nmisafe(struct srcu_struct *ssp) __acquires(ssp);
-void __srcu_read_unlock_nmisafe(struct srcu_struct *ssp, int idx) __releases(ssp);
+int __srcu_read_lock_nmisafe(struct srcu_struct *ssp) __acquires_shared(ssp);
+void __srcu_read_unlock_nmisafe(struct srcu_struct *ssp, int idx) __releases_shared(ssp);
 #else
 static inline int __srcu_read_lock_nmisafe(struct srcu_struct *ssp)
+	__acquires_shared(ssp)
 {
 	return __srcu_read_lock(ssp);
 }
 static inline void __srcu_read_unlock_nmisafe(struct srcu_struct *ssp, int idx)
+	__releases_shared(ssp)
 {
 	__srcu_read_unlock(ssp, idx);
 }
@@ -189,6 +191,14 @@ static inline int srcu_read_lock_held(const struct srcu_struct *ssp)
 
 #endif /* #else #ifdef CONFIG_DEBUG_LOCK_ALLOC */
 
+/*
+ * No-op helper to denote that ssp must be held. Because SRCU-protected pointers
+ * should still be marked with __rcu_guarded, and we do not want to mark them
+ * with __var_guarded_by(ssp) as it would complicate annotations for writers, we
+ * choose the following strategy: srcu_dereference_check() calls this helper
+ * that checks that the passed ssp is held, and then fake-acquires 'RCU'.
+ */
+static inline void __srcu_read_lock_must_hold(const struct srcu_struct *ssp) __must_hold_shared(ssp) { }
 
 /**
  * srcu_dereference_check - fetch SRCU-protected pointer for later dereferencing
@@ -202,9 +212,15 @@ static inline int srcu_read_lock_held(const struct srcu_struct *ssp)
  * to 1.  The @c argument will normally be a logical expression containing
  * lockdep_is_held() calls.
  */
-#define srcu_dereference_check(p, ssp, c) \
-	__rcu_dereference_check((p), __UNIQUE_ID(rcu), \
-				(c) || srcu_read_lock_held(ssp), __rcu)
+#define srcu_dereference_check(p, ssp, c) \
+({ \
+	__srcu_read_lock_must_hold(ssp); \
+	__acquire_shared_cap(RCU); \
+	__auto_type __v = __rcu_dereference_check((p), __UNIQUE_ID(rcu), \
+						  (c) || srcu_read_lock_held(ssp), __rcu); \
+	__release_shared_cap(RCU); \
+	__v; \
+})
 
 /**
  * srcu_dereference - fetch SRCU-protected pointer for later dereferencing
@@ -247,7 +263,8 @@ static inline int srcu_read_lock_held(const struct srcu_struct *ssp)
  * invoke srcu_read_unlock() from one task and the matching srcu_read_lock()
  * from another.
  */
-static inline int srcu_read_lock(struct srcu_struct *ssp) __acquires(ssp)
+static inline int srcu_read_lock(struct srcu_struct *ssp)
+	__acquires_shared(ssp)
 {
 	int retval;
 
@@ -274,7 +291,8 @@ static inline int srcu_read_lock(struct srcu_struct *ssp) __acquires(ssp)
  * where RCU is watching, that is, from contexts where it would be legal
  * to invoke rcu_read_lock().  Otherwise, lockdep will complain.
  */
-static inline int srcu_read_lock_lite(struct srcu_struct *ssp) __acquires(ssp)
+static inline int srcu_read_lock_lite(struct srcu_struct *ssp)
+	__acquires_shared(ssp)
 {
 	int retval;
 
@@ -295,7 +313,8 @@ static inline int srcu_read_lock_lite(struct srcu_struct *ssp) __acquires(ssp)
  * then none of the other flavors may be used, whether before, during,
  * or after.
  */
-static inline int srcu_read_lock_nmisafe(struct srcu_struct *ssp) __acquires(ssp)
+static inline int srcu_read_lock_nmisafe(struct srcu_struct *ssp)
+	__acquires_shared(ssp)
 {
 	int retval;
 
@@ -307,7 +326,8 @@ static inline int srcu_read_lock_nmisafe(struct srcu_struct *ssp) __acquires(ssp
 
 /* Used by tracing, cannot be traced and cannot invoke lockdep. */
 static inline notrace int
-srcu_read_lock_notrace(struct srcu_struct *ssp) __acquires(ssp)
+srcu_read_lock_notrace(struct srcu_struct *ssp)
+	__acquires_shared(ssp)
 {
 	int retval;
 
@@ -337,7 +357,8 @@ srcu_read_lock_notrace(struct srcu_struct *ssp) __acquires(ssp)
  * Calls to srcu_down_read() may be nested, similar to the manner in
 * which calls to down_read() may be nested.
  */
-static inline int srcu_down_read(struct srcu_struct *ssp) __acquires(ssp)
+static inline int srcu_down_read(struct srcu_struct *ssp)
+	__acquires_shared(ssp)
 {
 	WARN_ON_ONCE(in_nmi());
 	srcu_check_read_flavor(ssp, SRCU_READ_FLAVOR_NORMAL);
@@ -352,7 +373,7 @@ static inline int srcu_down_read(struct srcu_struct *ssp) __acquires(ssp)
  * Exit an SRCU read-side critical section.
  */
 static inline void srcu_read_unlock(struct srcu_struct *ssp, int idx)
-	__releases(ssp)
+	__releases_shared(ssp)
 {
 	WARN_ON_ONCE(idx & ~0x1);
 	srcu_check_read_flavor(ssp, SRCU_READ_FLAVOR_NORMAL);
@@ -368,7 +389,7 @@ static inline void srcu_read_unlock(struct srcu_struct *ssp, int idx)
  * Exit a light-weight SRCU read-side critical section.
  */
 static inline void srcu_read_unlock_lite(struct srcu_struct *ssp, int idx)
-	__releases(ssp)
+	__releases_shared(ssp)
 {
 	WARN_ON_ONCE(idx & ~0x1);
 	srcu_check_read_flavor(ssp, SRCU_READ_FLAVOR_LITE);
@@ -384,7 +405,7 @@ static inline void srcu_read_unlock_lite(struct srcu_struct *ssp, int idx)
  * Exit an SRCU read-side critical section, but in an NMI-safe manner.
  */
 static inline void srcu_read_unlock_nmisafe(struct srcu_struct *ssp, int idx)
-	__releases(ssp)
+	__releases_shared(ssp)
 {
 	WARN_ON_ONCE(idx & ~0x1);
 	srcu_check_read_flavor(ssp, SRCU_READ_FLAVOR_NMI);
@@ -394,7 +415,7 @@ static inline void srcu_read_unlock_nmisafe(struct srcu_struct *ssp, int idx)
 
 /* Used by tracing, cannot be traced and cannot call lockdep. */
 static inline notrace void
-srcu_read_unlock_notrace(struct srcu_struct *ssp, int idx) __releases(ssp)
+srcu_read_unlock_notrace(struct srcu_struct *ssp, int idx) __releases_shared(ssp)
 {
 	srcu_check_read_flavor(ssp, SRCU_READ_FLAVOR_NORMAL);
 	__srcu_read_unlock(ssp, idx);
@@ -409,7 +430,7 @@ srcu_read_unlock_notrace(struct srcu_struct *ssp, int idx) __releases(ssp)
 * the same context as the maching srcu_down_read().
  */
 static inline void srcu_up_read(struct srcu_struct *ssp, int idx)
-	__releases(ssp)
+	__releases_shared(ssp)
 {
 	WARN_ON_ONCE(idx & ~0x1);
 	WARN_ON_ONCE(in_nmi());
diff --git a/lib/test_capability-analysis.c b/lib/test_capability-analysis.c
index f5a1dda6ca38..8bc8c3e6cb5c 100644
--- a/lib/test_capability-analysis.c
+++ b/lib/test_capability-analysis.c
@@ -10,6 +10,7 @@
 #include <linux/rcupdate.h>
 #include <linux/seqlock.h>
 #include <linux/spinlock.h>
+#include <linux/srcu.h>
 
 /*
  * Test that helper macros work as expected.
@@ -345,3 +346,26 @@ static void __used test_rcu_assert_variants(void)
 	lockdep_assert_in_rcu_read_lock_sched();
 	wants_rcu_held_sched();
 }
+
+struct test_srcu_data {
+	struct srcu_struct srcu;
+	long __rcu_guarded *data;
+};
+
+static void __used test_srcu(struct test_srcu_data *d)
+{
+	init_srcu_struct(&d->srcu);
+
+	int idx = srcu_read_lock(&d->srcu);
+	long *data = srcu_dereference(d->data, &d->srcu);
+	(void)data;
+	srcu_read_unlock(&d->srcu, idx);
+
+	rcu_assign_pointer(d->data, NULL);
+}
+
+static void __used test_srcu_guard(struct test_srcu_data *d)
+{
+	guard(srcu)(&d->srcu);
+	(void)srcu_dereference(d->data, &d->srcu);
+}
From patchwork Thu Feb 6 18:10:11 2025
X-Patchwork-Submitter: Marco Elver <elver@google.com>
X-Patchwork-Id: 13963496
Date: Thu, 6 Feb 2025 19:10:11 +0100
In-Reply-To: <20250206181711.1902989-1-elver@google.com>
References: <20250206181711.1902989-1-elver@google.com>
Message-ID: <20250206181711.1902989-18-elver@google.com>
Subject: [PATCH RFC 17/24] kref: Add capability-analysis annotations
From: Marco Elver <elver@google.com>
To: elver@google.com

Mark functions that conditionally acquire the passed lock.
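For illustration, a hedged sketch of a caller under __cond_acquires(1,
mutex) (object and list names hypothetical); the annotation encodes that
the mutex is held exactly when kref_put_mutex() returns 1, so only that
path unlocks. Note the mutex must not live in the object being released:

  static DEFINE_MUTEX(obj_list_lock);    /* protects the lookup list */

  struct obj {
          struct kref ref;
          struct list_head node;
  };

  static void obj_release(struct kref *ref)
  {
          struct obj *o = container_of(ref, struct obj, ref);

          list_del(&o->node);     /* obj_list_lock held here */
          kfree(o);
  }

  static void obj_put(struct obj *o)
  {
          if (kref_put_mutex(&o->ref, obj_release, &obj_list_lock))
                  mutex_unlock(&obj_list_lock);
          /* On return value 0, the lock was never taken; no unlock. */
  }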
Signed-off-by: Marco Elver <elver@google.com>
---
 include/linux/kref.h | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/include/linux/kref.h b/include/linux/kref.h
index 88e82ab1367c..c1bd26936f41 100644
--- a/include/linux/kref.h
+++ b/include/linux/kref.h
@@ -81,6 +81,7 @@ static inline int kref_put(struct kref *kref, void (*release)(struct kref *kref)
 static inline int kref_put_mutex(struct kref *kref,
 				 void (*release)(struct kref *kref),
 				 struct mutex *mutex)
+	__cond_acquires(1, mutex)
 {
 	if (refcount_dec_and_mutex_lock(&kref->refcount, mutex)) {
 		release(kref);
@@ -102,6 +103,7 @@ static inline int kref_put_mutex(struct kref *kref,
 static inline int kref_put_lock(struct kref *kref,
 				void (*release)(struct kref *kref),
 				spinlock_t *lock)
+	__cond_acquires(1, lock)
 {
 	if (refcount_dec_and_lock(&kref->refcount, lock)) {
 		release(kref);
From patchwork Thu Feb 6 18:10:12 2025
X-Patchwork-Submitter: Marco Elver <elver@google.com>
X-Patchwork-Id: 13963497
Date: Thu, 6 Feb 2025 19:10:12 +0100
In-Reply-To: <20250206181711.1902989-1-elver@google.com>
References: <20250206181711.1902989-1-elver@google.com>
Message-ID: <20250206181711.1902989-19-elver@google.com>
Subject: [PATCH RFC 18/24] locking/rwsem: Support Clang's capability analysis
From: Marco Elver <elver@google.com>
To: elver@google.com

Add support for Clang's capability analysis for rw_semaphore.
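For illustration, a hedged sketch of the return-value convention used by
the annotations below (data structure hypothetical): __cond_acquires(ret,
sem) and __cond_acquires_shared(ret, sem) name the return value that
indicates a successful acquisition -- 1 for the trylock variants, 0 for
the killable/interruptible ones:

  struct stats {
          struct rw_semaphore sem;
          int counter __var_guarded_by(&sem);
  };

  static int read_counter(struct stats *s)
  {
          int v = -1;

          /* down_read_trylock(): __cond_acquires_shared(1, sem) */
          if (down_read_trylock(&s->sem)) {
                  v = s->counter;
                  up_read(&s->sem);
                  return v;
          }

          /* down_read_killable(): __cond_acquires_shared(0, sem) */
          if (down_read_killable(&s->sem) == 0) {
                  v = s->counter;
                  up_read(&s->sem);
          }
          return v;
  }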
Signed-off-by: Marco Elver <elver@google.com>
---
 .../dev-tools/capability-analysis.rst |  2 +-
 include/linux/rwsem.h                 | 56 +++++++++-------
 lib/test_capability-analysis.c        | 64 +++++++++++++++++++
 3 files changed, 97 insertions(+), 25 deletions(-)

diff --git a/Documentation/dev-tools/capability-analysis.rst b/Documentation/dev-tools/capability-analysis.rst
index 3766ac466470..719986739b0e 100644
--- a/Documentation/dev-tools/capability-analysis.rst
+++ b/Documentation/dev-tools/capability-analysis.rst
@@ -86,7 +86,7 @@ Supported Kernel Primitives
 
 Currently the following synchronization primitives are supported:
 `raw_spinlock_t`, `spinlock_t`, `rwlock_t`, `mutex`, `seqlock_t`,
-`bit_spinlock`, RCU, SRCU (`srcu_struct`).
+`bit_spinlock`, RCU, SRCU (`srcu_struct`), `rw_semaphore`.
 
 For capabilities with an initialization function (e.g., `spin_lock_init()`),
 calling this function on the capability instance before initializing any
diff --git a/include/linux/rwsem.h b/include/linux/rwsem.h
index c8b543d428b0..0c84e3072370 100644
--- a/include/linux/rwsem.h
+++ b/include/linux/rwsem.h
@@ -45,7 +45,7 @@
  * reduce the chance that they will share the same cacheline causing
  * cacheline bouncing problem.
  */
-struct rw_semaphore {
+struct_with_capability(rw_semaphore) {
 	atomic_long_t count;
 	/*
 	 * Write owner or one of the read owners as well flags regarding
@@ -76,11 +76,13 @@ static inline int rwsem_is_locked(struct rw_semaphore *sem)
 }
 
 static inline void rwsem_assert_held_nolockdep(const struct rw_semaphore *sem)
+	__asserts_cap(sem)
 {
 	WARN_ON(atomic_long_read(&sem->count) == RWSEM_UNLOCKED_VALUE);
 }
 
 static inline void rwsem_assert_held_write_nolockdep(const struct rw_semaphore *sem)
+	__asserts_cap(sem)
 {
 	WARN_ON(!(atomic_long_read(&sem->count) & RWSEM_WRITER_LOCKED));
 }
@@ -119,6 +121,7 @@ do { \
 	static struct lock_class_key __key; \
 	\
 	__init_rwsem((sem), #sem, &__key); \
+	__assert_cap(sem); \
 } while (0)
 
 /*
@@ -136,7 +139,7 @@ static inline int rwsem_is_contended(struct rw_semaphore *sem)
 
 #include <linux/rwbase_rt.h>
 
-struct rw_semaphore {
+struct_with_capability(rw_semaphore) {
 	struct rwbase_rt rwbase;
 #ifdef CONFIG_DEBUG_LOCK_ALLOC
 	struct lockdep_map dep_map;
@@ -160,6 +163,7 @@ do { \
 	static struct lock_class_key __key; \
 	\
 	__init_rwsem((sem), #sem, &__key); \
+	__assert_cap(sem); \
 } while (0)
 
 static __always_inline int rwsem_is_locked(const struct rw_semaphore *sem)
@@ -168,11 +172,13 @@
 }
 
 static __always_inline void rwsem_assert_held_nolockdep(const struct rw_semaphore *sem)
+	__asserts_cap(sem)
 {
 	WARN_ON(!rwsem_is_locked(sem));
 }
 
 static __always_inline void rwsem_assert_held_write_nolockdep(const struct rw_semaphore *sem)
+	__asserts_cap(sem)
 {
 	WARN_ON(!rw_base_is_write_locked(&sem->rwbase));
 }
@@ -190,6 +196,7 @@ static __always_inline int rwsem_is_contended(struct rw_semaphore *sem)
  */
 
 static inline void rwsem_assert_held(const struct rw_semaphore *sem)
+	__asserts_cap(sem)
 {
 	if (IS_ENABLED(CONFIG_LOCKDEP))
 		lockdep_assert_held(sem);
@@ -198,6 +205,7 @@
 }
 
 static inline void rwsem_assert_held_write(const struct rw_semaphore *sem)
+	__asserts_cap(sem)
 {
 	if (IS_ENABLED(CONFIG_LOCKDEP))
 		lockdep_assert_held_write(sem);
@@ -208,47 +216,47 @@ static inline void rwsem_assert_held_write(const struct rw_semaphore *sem)
 /*
  * lock for reading
  */
-extern void down_read(struct rw_semaphore *sem);
-extern int __must_check down_read_interruptible(struct rw_semaphore *sem);
down_read_killable(struct rw_semaphore *sem); +extern void down_read(struct rw_semaphore *sem) __acquires_shared(sem); +extern int __must_check down_read_interruptible(struct rw_semaphore *sem) __cond_acquires_shared(0, sem); +extern int __must_check down_read_killable(struct rw_semaphore *sem) __cond_acquires_shared(0, sem); /* * trylock for reading -- returns 1 if successful, 0 if contention */ -extern int down_read_trylock(struct rw_semaphore *sem); +extern int down_read_trylock(struct rw_semaphore *sem) __cond_acquires_shared(1, sem); /* * lock for writing */ -extern void down_write(struct rw_semaphore *sem); -extern int __must_check down_write_killable(struct rw_semaphore *sem); +extern void down_write(struct rw_semaphore *sem) __acquires(sem); +extern int __must_check down_write_killable(struct rw_semaphore *sem) __cond_acquires(0, sem); /* * trylock for writing -- returns 1 if successful, 0 if contention */ -extern int down_write_trylock(struct rw_semaphore *sem); +extern int down_write_trylock(struct rw_semaphore *sem) __cond_acquires(1, sem); /* * release a read lock */ -extern void up_read(struct rw_semaphore *sem); +extern void up_read(struct rw_semaphore *sem) __releases_shared(sem); /* * release a write lock */ -extern void up_write(struct rw_semaphore *sem); +extern void up_write(struct rw_semaphore *sem) __releases(sem); -DEFINE_GUARD(rwsem_read, struct rw_semaphore *, down_read(_T), up_read(_T)) -DEFINE_GUARD_COND(rwsem_read, _try, down_read_trylock(_T)) -DEFINE_GUARD_COND(rwsem_read, _intr, down_read_interruptible(_T) == 0) +DEFINE_LOCK_GUARD_1(rwsem_read, struct rw_semaphore, down_read(_T->lock), up_read(_T->lock)) +DEFINE_LOCK_GUARD_1_COND(rwsem_read, _try, down_read_trylock(_T->lock)) +DEFINE_LOCK_GUARD_1_COND(rwsem_read, _intr, down_read_interruptible(_T->lock) == 0) -DEFINE_GUARD(rwsem_write, struct rw_semaphore *, down_write(_T), up_write(_T)) -DEFINE_GUARD_COND(rwsem_write, _try, down_write_trylock(_T)) +DEFINE_LOCK_GUARD_1(rwsem_write, struct rw_semaphore, down_write(_T->lock), up_write(_T->lock)) +DEFINE_LOCK_GUARD_1_COND(rwsem_write, _try, down_write_trylock(_T->lock)) /* * downgrade write lock to read lock */ -extern void downgrade_write(struct rw_semaphore *sem); +extern void downgrade_write(struct rw_semaphore *sem) __releases(sem) __acquires_shared(sem); #ifdef CONFIG_DEBUG_LOCK_ALLOC /* @@ -264,11 +272,11 @@ extern void downgrade_write(struct rw_semaphore *sem); * lockdep_set_class() at lock initialization time. * See Documentation/locking/lockdep-design.rst for more details.) 
*/ -extern void down_read_nested(struct rw_semaphore *sem, int subclass); -extern int __must_check down_read_killable_nested(struct rw_semaphore *sem, int subclass); -extern void down_write_nested(struct rw_semaphore *sem, int subclass); -extern int down_write_killable_nested(struct rw_semaphore *sem, int subclass); -extern void _down_write_nest_lock(struct rw_semaphore *sem, struct lockdep_map *nest_lock); +extern void down_read_nested(struct rw_semaphore *sem, int subclass) __acquires_shared(sem); +extern int __must_check down_read_killable_nested(struct rw_semaphore *sem, int subclass) __cond_acquires_shared(0, sem); +extern void down_write_nested(struct rw_semaphore *sem, int subclass) __acquires(sem); +extern int down_write_killable_nested(struct rw_semaphore *sem, int subclass) __cond_acquires(0, sem); +extern void _down_write_nest_lock(struct rw_semaphore *sem, struct lockdep_map *nest_lock) __acquires(sem); # define down_write_nest_lock(sem, nest_lock) \ do { \ @@ -282,8 +290,8 @@ do { \ * [ This API should be avoided as much as possible - the * proper abstraction for this case is completions. ] */ -extern void down_read_non_owner(struct rw_semaphore *sem); -extern void up_read_non_owner(struct rw_semaphore *sem); +extern void down_read_non_owner(struct rw_semaphore *sem) __acquires_shared(sem); +extern void up_read_non_owner(struct rw_semaphore *sem) __releases_shared(sem); #else # define down_read_nested(sem, subclass) down_read(sem) # define down_read_killable_nested(sem, subclass) down_read_killable(sem) diff --git a/lib/test_capability-analysis.c b/lib/test_capability-analysis.c index 8bc8c3e6cb5c..4638d220f474 100644 --- a/lib/test_capability-analysis.c +++ b/lib/test_capability-analysis.c @@ -8,6 +8,7 @@ #include #include #include +#include #include #include #include @@ -255,6 +256,69 @@ static void __used test_seqlock_writer(struct test_seqlock_data *d) write_sequnlock_irqrestore(&d->sl, flags); } +struct test_rwsem_data { + struct rw_semaphore sem; + int counter __var_guarded_by(&sem); +}; + +static void __used test_rwsem_init(struct test_rwsem_data *d) +{ + init_rwsem(&d->sem); + d->counter = 0; +} + +static void __used test_rwsem_reader(struct test_rwsem_data *d) +{ + down_read(&d->sem); + (void)d->counter; + up_read(&d->sem); + + if (down_read_trylock(&d->sem)) { + (void)d->counter; + up_read(&d->sem); + } +} + +static void __used test_rwsem_writer(struct test_rwsem_data *d) +{ + down_write(&d->sem); + d->counter++; + up_write(&d->sem); + + down_write(&d->sem); + d->counter++; + downgrade_write(&d->sem); + (void)d->counter; + up_read(&d->sem); + + if (down_write_trylock(&d->sem)) { + d->counter++; + up_write(&d->sem); + } +} + +static void __used test_rwsem_assert(struct test_rwsem_data *d) +{ + rwsem_assert_held_nolockdep(&d->sem); + d->counter++; +} + +static void __used test_rwsem_guard(struct test_rwsem_data *d) +{ + { guard(rwsem_read)(&d->sem); (void)d->counter; } + { guard(rwsem_write)(&d->sem); d->counter++; } +} + +static void __used test_rwsem_cond_guard(struct test_rwsem_data *d) +{ + scoped_cond_guard(rwsem_read_try, return, &d->sem) { + (void)d->counter; + } + scoped_cond_guard(rwsem_write_try, return, &d->sem) { + d->counter++; + } +} + struct test_bit_spinlock_data { unsigned long bits; int counter __var_guarded_by(__bitlock(3, &bits)); From patchwork Thu Feb 6 18:10:13 2025 X-Patchwork-Submitter: Marco Elver X-Patchwork-Id: 13963498
Date: Thu, 6 Feb 2025 19:10:13 +0100 In-Reply-To: <20250206181711.1902989-1-elver@google.com> Message-ID: <20250206181711.1902989-20-elver@google.com> Subject: [PATCH RFC 19/24] locking/local_lock: Support Clang's capability analysis From: Marco Elver

Add support for Clang's capability analysis for local_lock_t.
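As a sketch of the intended usage (mirroring the test added by this patch; the names here are illustrative): per-CPU data is tied to its local_lock_t, and accesses outside a lock/unlock pair are then flagged by the analysis.

struct foo_stats {
	local_lock_t lock;
	unsigned long packets __var_guarded_by(&lock);
};

static DEFINE_PER_CPU(struct foo_stats, foo_stats) = {
	.lock = INIT_LOCAL_LOCK(lock),
};

static void foo_count_packet(void)
{
	local_lock(&foo_stats.lock);		/* acquires the per-CPU capability */
	this_cpu_add(foo_stats.packets, 1);	/* guarded access: OK */
	local_unlock(&foo_stats.lock);
	/* this_cpu_add(foo_stats.packets, 1); here would be flagged: lock not held */
}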
Signed-off-by: Marco Elver --- .../dev-tools/capability-analysis.rst | 2 +- include/linux/local_lock.h | 18 ++++---- include/linux/local_lock_internal.h | 41 ++++++++++++++--- lib/test_capability-analysis.c | 46 +++++++++++++++++++ 4 files changed, 90 insertions(+), 17 deletions(-) diff --git a/Documentation/dev-tools/capability-analysis.rst b/Documentation/dev-tools/capability-analysis.rst index 719986739b0e..1e9ce018e30e 100644 --- a/Documentation/dev-tools/capability-analysis.rst +++ b/Documentation/dev-tools/capability-analysis.rst @@ -86,7 +86,7 @@ Supported Kernel Primitives Currently the following synchronization primitives are supported: `raw_spinlock_t`, `spinlock_t`, `rwlock_t`, `mutex`, `seqlock_t`, -`bit_spinlock`, RCU, SRCU (`srcu_struct`), `rw_semaphore`. +`bit_spinlock`, RCU, SRCU (`srcu_struct`), `rw_semaphore`, `local_lock_t`. For capabilities with an initialization function (e.g., `spin_lock_init()`), calling this function on the capability instance before initializing any diff --git a/include/linux/local_lock.h b/include/linux/local_lock.h index 091dc0b6bdfb..63fadcf66216 100644 --- a/include/linux/local_lock.h +++ b/include/linux/local_lock.h @@ -51,12 +51,12 @@ #define local_unlock_irqrestore(lock, flags) \ __local_unlock_irqrestore(lock, flags) -DEFINE_GUARD(local_lock, local_lock_t __percpu*, - local_lock(_T), - local_unlock(_T)) -DEFINE_GUARD(local_lock_irq, local_lock_t __percpu*, - local_lock_irq(_T), - local_unlock_irq(_T)) +DEFINE_LOCK_GUARD_1(local_lock, local_lock_t __percpu, + local_lock(_T->lock), + local_unlock(_T->lock)) +DEFINE_LOCK_GUARD_1(local_lock_irq, local_lock_t __percpu, + local_lock_irq(_T->lock), + local_unlock_irq(_T->lock)) DEFINE_LOCK_GUARD_1(local_lock_irqsave, local_lock_t __percpu, local_lock_irqsave(_T->lock, _T->flags), local_unlock_irqrestore(_T->lock, _T->flags), @@ -68,8 +68,8 @@ DEFINE_LOCK_GUARD_1(local_lock_irqsave, local_lock_t __percpu, #define local_unlock_nested_bh(_lock) \ __local_unlock_nested_bh(_lock) -DEFINE_GUARD(local_lock_nested_bh, local_lock_t __percpu*, - local_lock_nested_bh(_T), - local_unlock_nested_bh(_T)) +DEFINE_LOCK_GUARD_1(local_lock_nested_bh, local_lock_t __percpu, + local_lock_nested_bh(_T->lock), + local_unlock_nested_bh(_T->lock)) #endif diff --git a/include/linux/local_lock_internal.h b/include/linux/local_lock_internal.h index 8dd71fbbb6d2..031de28d8ffb 100644 --- a/include/linux/local_lock_internal.h +++ b/include/linux/local_lock_internal.h @@ -8,12 +8,13 @@ #ifndef CONFIG_PREEMPT_RT -typedef struct { +struct_with_capability(local_lock) { #ifdef CONFIG_DEBUG_LOCK_ALLOC struct lockdep_map dep_map; struct task_struct *owner; #endif -} local_lock_t; +}; +typedef struct local_lock local_lock_t; #ifdef CONFIG_DEBUG_LOCK_ALLOC # define LOCAL_LOCK_DEBUG_INIT(lockname) \ @@ -60,6 +61,7 @@ do { \ 0, LD_WAIT_CONFIG, LD_WAIT_INV, \ LD_LOCK_PERCPU); \ local_lock_debug_init(lock); \ + __assert_cap(lock); \ } while (0) #define __spinlock_nested_bh_init(lock) \ @@ -71,40 +73,47 @@ do { \ 0, LD_WAIT_CONFIG, LD_WAIT_INV, \ LD_LOCK_NORMAL); \ local_lock_debug_init(lock); \ + __assert_cap(lock); \ } while (0) #define __local_lock(lock) \ do { \ preempt_disable(); \ local_lock_acquire(this_cpu_ptr(lock)); \ + __acquire(lock); \ } while (0) #define __local_lock_irq(lock) \ do { \ local_irq_disable(); \ local_lock_acquire(this_cpu_ptr(lock)); \ + __acquire(lock); \ } while (0) #define __local_lock_irqsave(lock, flags) \ do { \ local_irq_save(flags); \ local_lock_acquire(this_cpu_ptr(lock)); \ + __acquire(lock); \ } while (0) #define __local_unlock(lock) \ do { \ + __release(lock); \ local_lock_release(this_cpu_ptr(lock)); \ preempt_enable(); \ } while (0) #define __local_unlock_irq(lock) \ do { \ + __release(lock); \ local_lock_release(this_cpu_ptr(lock)); \ local_irq_enable(); \ } while (0) #define __local_unlock_irqrestore(lock, flags) \ do { \ + __release(lock); \ local_lock_release(this_cpu_ptr(lock)); \ local_irq_restore(flags); \ } while (0) @@ -113,19 +122,37 @@ do { \ do { \ lockdep_assert_in_softirq(); \ local_lock_acquire(this_cpu_ptr(lock)); \ + __acquire(lock); \ } while (0) #define __local_unlock_nested_bh(lock) \ - local_lock_release(this_cpu_ptr(lock)) + do { \ + __release(lock); \ + local_lock_release(this_cpu_ptr(lock)); \ + } while (0) #else /* !CONFIG_PREEMPT_RT */ +#include + /* * On PREEMPT_RT local_lock maps to a per CPU spinlock, which
protects the * critical section while staying preemptible. */ typedef spinlock_t local_lock_t; +/* + * Because the compiler only knows about the base per-CPU variable, use this + * helper function to make the compiler think we lock/unlock the @base variable, + * and hide the fact we actually pass the per-CPU instance @pcpu to lock/unlock + * functions. + */ +static inline local_lock_t *__local_lock_alias(local_lock_t __percpu *base, local_lock_t *pcpu) + __returns_cap(base) +{ + return pcpu; +} + #define INIT_LOCAL_LOCK(lockname) __LOCAL_SPIN_LOCK_UNLOCKED((lockname)) #define __local_lock_init(l) \ @@ -136,7 +163,7 @@ typedef spinlock_t local_lock_t; #define __local_lock(__lock) \ do { \ migrate_disable(); \ - spin_lock(this_cpu_ptr((__lock))); \ + spin_lock(__local_lock_alias(__lock, this_cpu_ptr((__lock)))); \ } while (0) #define __local_lock_irq(lock) __local_lock(lock) @@ -150,7 +177,7 @@ typedef spinlock_t local_lock_t; #define __local_unlock(__lock) \ do { \ - spin_unlock(this_cpu_ptr((__lock))); \ + spin_unlock(__local_lock_alias(__lock, this_cpu_ptr((__lock)))); \ migrate_enable(); \ } while (0) @@ -161,12 +188,12 @@ typedef spinlock_t local_lock_t; #define __local_lock_nested_bh(lock) \ do { \ lockdep_assert_in_softirq_func(); \ - spin_lock(this_cpu_ptr(lock)); \ + spin_lock(__local_lock_alias(lock, this_cpu_ptr(lock))); \ } while (0) #define __local_unlock_nested_bh(lock) \ do { \ - spin_unlock(this_cpu_ptr((lock))); \ + spin_unlock(__local_lock_alias(lock, this_cpu_ptr((lock)))); \ } while (0) #endif /* CONFIG_PREEMPT_RT */ diff --git a/lib/test_capability-analysis.c b/lib/test_capability-analysis.c index 4638d220f474..dd3fccff2352 100644 --- a/lib/test_capability-analysis.c +++ b/lib/test_capability-analysis.c @@ -6,7 +6,9 @@ #include #include +#include #include +#include #include #include #include @@ -433,3 +435,47 @@ static void __used test_srcu_guard(struct test_srcu_data *d) guard(srcu)(&d->srcu); (void)srcu_dereference(d->data, &d->srcu); } + +struct test_local_lock_data { + local_lock_t lock; + int counter __var_guarded_by(&lock); +}; + +static DEFINE_PER_CPU(struct test_local_lock_data, test_local_lock_data) = { + .lock = INIT_LOCAL_LOCK(lock), +}; + +static void __used test_local_lock_init(struct test_local_lock_data *d) +{ + local_lock_init(&d->lock); + d->counter = 0; +} + +static void __used test_local_lock(void) +{ + unsigned long flags; + + local_lock(&test_local_lock_data.lock); + this_cpu_add(test_local_lock_data.counter, 1); + local_unlock(&test_local_lock_data.lock); + + local_lock_irq(&test_local_lock_data.lock); + this_cpu_add(test_local_lock_data.counter, 1); + local_unlock_irq(&test_local_lock_data.lock); + + local_lock_irqsave(&test_local_lock_data.lock, flags); + this_cpu_add(test_local_lock_data.counter, 1); + local_unlock_irqrestore(&test_local_lock_data.lock, flags); + + local_lock_nested_bh(&test_local_lock_data.lock); + this_cpu_add(test_local_lock_data.counter, 1); + local_unlock_nested_bh(&test_local_lock_data.lock); +} + +static void __used test_local_lock_guard(void) +{ + { guard(local_lock)(&test_local_lock_data.lock); this_cpu_add(test_local_lock_data.counter, 1); } + { guard(local_lock_irq)(&test_local_lock_data.lock); this_cpu_add(test_local_lock_data.counter, 1); } + { guard(local_lock_irqsave)(&test_local_lock_data.lock); this_cpu_add(test_local_lock_data.counter, 1); } + { guard(local_lock_nested_bh)(&test_local_lock_data.lock); this_cpu_add(test_local_lock_data.counter, 1); } +} From patchwork Thu Feb 6 18:10:14 2025 Content-Type: 
text/plain; charset="utf-8" X-Patchwork-Submitter: Marco Elver X-Patchwork-Id: 13963499
Date: Thu, 6 Feb 2025 19:10:14 +0100 In-Reply-To: <20250206181711.1902989-1-elver@google.com> Message-ID: <20250206181711.1902989-21-elver@google.com> Subject: [PATCH RFC 20/24] debugfs: Make debugfs_cancellation a capability struct From: Marco Elver

When compiling include/linux/debugfs.h with CAPABILITY_ANALYSIS enabled, we see this error:

./include/linux/debugfs.h:239:17: error: use of undeclared identifier 'cancellation' 239 | void __acquires(cancellation)

Move the __acquires(..) attribute after the declaration, so that the compiler can see the cancellation function argument, and make struct debugfs_cancellation a real capability so that it benefits from Clang's capability analysis.
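Restating the hunk below as a standalone sketch, since the rule is easy to trip over: an attribute written before the declarator cannot name a function parameter, because the parameter is not yet in scope when the attribute is parsed; placed after the full declarator, it can.

/* Broken: 'cancellation' is not yet declared when the attribute is parsed. */
void __acquires(cancellation)
debugfs_enter_cancellation(struct file *file,
			   struct debugfs_cancellation *cancellation);

/* Works: the attribute follows the declarator, so the parameter is in scope. */
void debugfs_enter_cancellation(struct file *file,
				struct debugfs_cancellation *cancellation)
	__acquires(cancellation);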
Signed-off-by: Marco Elver --- include/linux/debugfs.h | 12 +++++------- 1 file changed, 5 insertions(+), 7 deletions(-) diff --git a/include/linux/debugfs.h b/include/linux/debugfs.h index fa2568b4380d..c6a429381887 100644 --- a/include/linux/debugfs.h +++ b/include/linux/debugfs.h @@ -240,18 +240,16 @@ ssize_t debugfs_read_file_str(struct file *file, char __user *user_buf, * @cancel: callback to call * @cancel_data: extra data for the callback to call */ -struct debugfs_cancellation { +struct_with_capability(debugfs_cancellation) { struct list_head list; void (*cancel)(struct dentry *, void *); void *cancel_data; }; -void __acquires(cancellation) -debugfs_enter_cancellation(struct file *file, - struct debugfs_cancellation *cancellation); -void __releases(cancellation) -debugfs_leave_cancellation(struct file *file, - struct debugfs_cancellation *cancellation); +void debugfs_enter_cancellation(struct file *file, + struct debugfs_cancellation *cancellation) __acquires(cancellation); +void debugfs_leave_cancellation(struct file *file, + struct debugfs_cancellation *cancellation) __releases(cancellation); #else From patchwork Thu Feb 6 18:10:15 2025 X-Patchwork-Submitter: Marco Elver X-Patchwork-Id: 13963500
Date: Thu, 6 Feb 2025 19:10:15 +0100 In-Reply-To: <20250206181711.1902989-1-elver@google.com> Message-ID: <20250206181711.1902989-22-elver@google.com> Subject: [PATCH RFC 21/24] kfence: Enable capability analysis From: Marco Elver

Enable capability analysis for the KFENCE subsystem. Notably, kfence_handle_page_fault() required a minor restructure, which also fixed a subtle race; arguably that function is more readable now.
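The conversion uses the idiom visible in the diff below; schematically (an abridged sketch of mm/kfence/core.c, not the literal file): headers that are not yet analysis-clean are included with the analysis suppressed, then analysis is re-enabled for the file's own code, whose globals carry their guard annotations.

disable_capability_analysis();	/* don't analyze not-yet-clean headers */

#include <linux/list.h>
#include <linux/spinlock.h>

enable_capability_analysis();	/* analyze everything from here on */

DEFINE_RAW_SPINLOCK(kfence_freelist_lock);	/* the capability... */
static struct list_head kfence_freelist __var_guarded_by(&kfence_freelist_lock) =
	LIST_HEAD_INIT(kfence_freelist);	/* ...and the data it guards */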
Signed-off-by: Marco Elver --- mm/kfence/Makefile | 2 ++ mm/kfence/core.c | 24 +++++++++++++++++------- mm/kfence/kfence.h | 18 ++++++++++++------ mm/kfence/kfence_test.c | 4 ++++ mm/kfence/report.c | 8 ++++++-- 5 files changed, 41 insertions(+), 15 deletions(-) diff --git a/mm/kfence/Makefile b/mm/kfence/Makefile index 2de2a58d11a1..b3640bdc3c69 100644 --- a/mm/kfence/Makefile +++ b/mm/kfence/Makefile @@ -1,5 +1,7 @@ # SPDX-License-Identifier: GPL-2.0 +CAPABILITY_ANALYSIS := y + obj-y := core.o report.o CFLAGS_kfence_test.o := -fno-omit-frame-pointer -fno-optimize-sibling-calls diff --git a/mm/kfence/core.c b/mm/kfence/core.c index 102048821c22..c2d1ffd20a1f 100644 --- a/mm/kfence/core.c +++ b/mm/kfence/core.c @@ -7,6 +7,8 @@ #define pr_fmt(fmt) "kfence: " fmt +disable_capability_analysis(); + #include #include #include @@ -34,6 +36,8 @@ #include +enable_capability_analysis(); + #include "kfence.h" /* Disables KFENCE on the first warning assuming an irrecoverable error. */ @@ -132,8 +136,8 @@ struct kfence_metadata *kfence_metadata __read_mostly; static struct kfence_metadata *kfence_metadata_init __read_mostly; /* Freelist with available objects. */ -static struct list_head kfence_freelist = LIST_HEAD_INIT(kfence_freelist); -static DEFINE_RAW_SPINLOCK(kfence_freelist_lock); /* Lock protecting freelist. */ +DEFINE_RAW_SPINLOCK(kfence_freelist_lock); /* Lock protecting freelist. */ +static struct list_head kfence_freelist __var_guarded_by(&kfence_freelist_lock) = LIST_HEAD_INIT(kfence_freelist); /* * The static key to set up a KFENCE allocation; or if static keys are not used @@ -253,6 +257,7 @@ static bool kfence_unprotect(unsigned long addr) } static inline unsigned long metadata_to_pageaddr(const struct kfence_metadata *meta) + __must_hold(&meta->lock) { unsigned long offset = (meta - kfence_metadata + 1) * PAGE_SIZE * 2; unsigned long pageaddr = (unsigned long)&__kfence_pool[offset]; @@ -288,6 +293,7 @@ static inline bool kfence_obj_allocated(const struct kfence_metadata *meta) static noinline void metadata_update_state(struct kfence_metadata *meta, enum kfence_object_state next, unsigned long *stack_entries, size_t num_stack_entries) + __must_hold(&meta->lock) { struct kfence_track *track = next == KFENCE_OBJECT_ALLOCATED ? &meta->alloc_track : &meta->free_track; @@ -485,7 +491,7 @@ static void *kfence_guarded_alloc(struct kmem_cache *cache, size_t size, gfp_t g alloc_covered_add(alloc_stack_hash, 1); /* Set required slab fields. */ - slab = virt_to_slab((void *)meta->addr); + slab = virt_to_slab(addr); slab->slab_cache = cache; slab->objects = 1; @@ -514,6 +520,7 @@ static void *kfence_guarded_alloc(struct kmem_cache *cache, size_t size, gfp_t g static void kfence_guarded_free(void *addr, struct kfence_metadata *meta, bool zombie) { struct kcsan_scoped_access assert_page_exclusive; + u32 alloc_stack_hash; unsigned long flags; bool init; @@ -546,9 +553,10 @@ static void kfence_guarded_free(void *addr, struct kfence_metadata *meta, bool z /* Mark the object as freed. */ metadata_update_state(meta, KFENCE_OBJECT_FREED, NULL, 0); init = slab_want_init_on_free(meta->cache); + alloc_stack_hash = meta->alloc_stack_hash; raw_spin_unlock_irqrestore(&meta->lock, flags); - alloc_covered_add(meta->alloc_stack_hash, -1); + alloc_covered_add(alloc_stack_hash, -1); /* Check canary bytes for memory corruption. */ check_canary(meta); @@ -593,6 +601,7 @@ static void rcu_guarded_free(struct rcu_head *h) * which partial initialization succeeded. 
*/ static unsigned long kfence_init_pool(void) + __no_capability_analysis { unsigned long addr; struct page *pages; @@ -1192,6 +1201,7 @@ bool kfence_handle_page_fault(unsigned long addr, bool is_write, struct pt_regs { const int page_index = (addr - (unsigned long)__kfence_pool) / PAGE_SIZE; struct kfence_metadata *to_report = NULL; + unsigned long unprotected_page = 0; enum kfence_error_type error_type; unsigned long flags; @@ -1225,9 +1235,8 @@ bool kfence_handle_page_fault(unsigned long addr, bool is_write, struct pt_regs if (!to_report) goto out; - raw_spin_lock_irqsave(&to_report->lock, flags); - to_report->unprotected_page = addr; error_type = KFENCE_ERROR_OOB; + unprotected_page = addr; /* * If the object was freed before we took the look we can still @@ -1239,7 +1248,6 @@ bool kfence_handle_page_fault(unsigned long addr, bool is_write, struct pt_regs if (!to_report) goto out; - raw_spin_lock_irqsave(&to_report->lock, flags); error_type = KFENCE_ERROR_UAF; /* * We may race with __kfence_alloc(), and it is possible that a @@ -1251,6 +1259,8 @@ bool kfence_handle_page_fault(unsigned long addr, bool is_write, struct pt_regs out: if (to_report) { + raw_spin_lock_irqsave(&to_report->lock, flags); + to_report->unprotected_page = unprotected_page; kfence_report_error(addr, is_write, regs, to_report, error_type); raw_spin_unlock_irqrestore(&to_report->lock, flags); } else { diff --git a/mm/kfence/kfence.h b/mm/kfence/kfence.h index dfba5ea06b01..27829d70baf6 100644 --- a/mm/kfence/kfence.h +++ b/mm/kfence/kfence.h @@ -9,6 +9,8 @@ #ifndef MM_KFENCE_KFENCE_H #define MM_KFENCE_KFENCE_H +disable_capability_analysis(); + #include #include #include @@ -16,6 +18,8 @@ #include "../slab.h" /* for struct kmem_cache */ +enable_capability_analysis(); + /* * Get the canary byte pattern for @addr. Use a pattern that varies based on the * lower 3 bits of the address, to detect memory corruptions with higher @@ -34,6 +38,8 @@ /* Maximum stack depth for reports. */ #define KFENCE_STACK_DEPTH 64 +extern raw_spinlock_t kfence_freelist_lock; + /* KFENCE object states. */ enum kfence_object_state { KFENCE_OBJECT_UNUSED, /* Object is unused. */ @@ -53,7 +59,7 @@ struct kfence_track { /* KFENCE metadata per guarded allocation. */ struct kfence_metadata { - struct list_head list; /* Freelist node; access under kfence_freelist_lock. */ + struct list_head list __var_guarded_by(&kfence_freelist_lock); /* Freelist node. */ struct rcu_head rcu_head; /* For delayed freeing. */ /* @@ -91,13 +97,13 @@ struct kfence_metadata { * In case of an invalid access, the page that was unprotected; we * optimistically only store one address. */ - unsigned long unprotected_page; + unsigned long unprotected_page __var_guarded_by(&lock); /* Allocation and free stack information. */ - struct kfence_track alloc_track; - struct kfence_track free_track; + struct kfence_track alloc_track __var_guarded_by(&lock); + struct kfence_track free_track __var_guarded_by(&lock); /* For updating alloc_covered on frees. 
*/ - u32 alloc_stack_hash; + u32 alloc_stack_hash __var_guarded_by(&lock); #ifdef CONFIG_MEMCG struct slabobj_ext obj_exts; #endif @@ -141,6 +147,6 @@ enum kfence_error_type { void kfence_report_error(unsigned long address, bool is_write, struct pt_regs *regs, const struct kfence_metadata *meta, enum kfence_error_type type); -void kfence_print_object(struct seq_file *seq, const struct kfence_metadata *meta); +void kfence_print_object(struct seq_file *seq, const struct kfence_metadata *meta) __must_hold(&meta->lock); #endif /* MM_KFENCE_KFENCE_H */ diff --git a/mm/kfence/kfence_test.c b/mm/kfence/kfence_test.c index 00034e37bc9f..67eca6e9a8de 100644 --- a/mm/kfence/kfence_test.c +++ b/mm/kfence/kfence_test.c @@ -11,6 +11,8 @@ * Marco Elver */ +disable_capability_analysis(); + #include #include #include @@ -26,6 +28,8 @@ #include +enable_capability_analysis(); + #include "kfence.h" /* May be overridden by . */ diff --git a/mm/kfence/report.c b/mm/kfence/report.c index 10e6802a2edf..bbee90d0034d 100644 --- a/mm/kfence/report.c +++ b/mm/kfence/report.c @@ -5,6 +5,8 @@ * Copyright (C) 2020, Google LLC. */ +disable_capability_analysis(); + #include #include @@ -22,6 +24,8 @@ #include +enable_capability_analysis(); + #include "kfence.h" /* May be overridden by . */ @@ -106,6 +110,7 @@ static int get_stack_skipnr(const unsigned long stack_entries[], int num_entries static void kfence_print_stack(struct seq_file *seq, const struct kfence_metadata *meta, bool show_alloc) + __must_hold(&meta->lock) { const struct kfence_track *track = show_alloc ? &meta->alloc_track : &meta->free_track; u64 ts_sec = track->ts_nsec; @@ -207,8 +212,6 @@ void kfence_report_error(unsigned long address, bool is_write, struct pt_regs *r if (WARN_ON(type != KFENCE_ERROR_INVALID && !meta)) return; - if (meta) - lockdep_assert_held(&meta->lock); /* * Because we may generate reports in printk-unfriendly parts of the * kernel, such as scheduler code, the use of printk() could deadlock. 
@@ -263,6 +266,7 @@ void kfence_report_error(unsigned long address, bool is_write, struct pt_regs *r stack_trace_print(stack_entries + skipnr, num_stack_entries - skipnr, 0); if (meta) { + lockdep_assert_held(&meta->lock); pr_err("\n"); kfence_print_object(NULL, meta); } From patchwork Thu Feb 6 18:10:16 2025 X-Patchwork-Submitter: Marco Elver X-Patchwork-Id: 13963501 Date: Thu, 6 Feb 2025 19:10:16 +0100 In-Reply-To: <20250206181711.1902989-1-elver@google.com> Message-ID: <20250206181711.1902989-23-elver@google.com> Subject: [PATCH RFC 22/24] kcov: Enable capability analysis From: Marco Elver

Enable capability analysis for the KCOV subsystem.
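Two patterns from this conversion, condensed into a sketch (struct foo_dev is a stand-in, not the real struct kcov): internal helpers state their locking contract with __must_hold(), and the one deliberate unannotated access, on a path where no other reference can exist, is wrapped in capability_unsafe() with a justifying comment.

struct foo_dev {
	spinlock_t lock;
	void *area __var_guarded_by(&lock);
};

static void foo_reset(struct foo_dev *d)
	__must_hold(&d->lock)		/* callers must hold d->lock */
{
	d->area = NULL;			/* allowed: capability held by contract */
}

static void foo_destroy(struct foo_dev *d)
{
	/* Capability-safety: last reference is gone, no concurrency possible. */
	capability_unsafe(vfree(d->area));
	kfree(d);
}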
Signed-off-by: Marco Elver --- kernel/Makefile | 2 ++ kernel/kcov.c | 40 +++++++++++++++++++++++++++++----------- 2 files changed, 31 insertions(+), 11 deletions(-) diff --git a/kernel/Makefile b/kernel/Makefile index 87866b037fbe..7e399998532d 100644 --- a/kernel/Makefile +++ b/kernel/Makefile @@ -39,6 +39,8 @@ KASAN_SANITIZE_kcov.o := n KCSAN_SANITIZE_kcov.o := n UBSAN_SANITIZE_kcov.o := n KMSAN_SANITIZE_kcov.o := n + +CAPABILITY_ANALYSIS_kcov.o := y CFLAGS_kcov.o := $(call cc-option, -fno-conserve-stack) -fno-stack-protector obj-y += sched/ diff --git a/kernel/kcov.c b/kernel/kcov.c index 187ba1b80bda..d89c933fe682 100644 --- a/kernel/kcov.c +++ b/kernel/kcov.c @@ -1,6 +1,8 @@ // SPDX-License-Identifier: GPL-2.0 #define pr_fmt(fmt) "kcov: " fmt +disable_capability_analysis(); + #define DISABLE_BRANCH_PROFILING #include #include @@ -27,6 +29,8 @@ #include #include +enable_capability_analysis(); + #define kcov_debug(fmt, ...) pr_debug("%s: " fmt, __func__, ##__VA_ARGS__) /* Number of 64-bit words written per one comparison: */ @@ -55,13 +59,13 @@ struct kcov { refcount_t refcount; /* The lock protects mode, size, area and t. */ spinlock_t lock; - enum kcov_mode mode; + enum kcov_mode mode __var_guarded_by(&lock); /* Size of arena (in long's). */ - unsigned int size; + unsigned int size __var_guarded_by(&lock); /* Coverage buffer shared with user space. */ - void *area; + void *area __var_guarded_by(&lock); /* Task for which we collect coverage, or NULL. */ - struct task_struct *t; + struct task_struct *t __var_guarded_by(&lock); /* Collecting coverage from remote (background) threads. */ bool remote; /* Size of remote area (in long's.
*/ @@ -391,6 +395,7 @@ void kcov_task_init(struct task_struct *t) } static void kcov_reset(struct kcov *kcov) + __must_hold(&kcov->lock) { kcov->t = NULL; kcov->mode = KCOV_MODE_INIT; @@ -400,6 +405,7 @@ static void kcov_reset(struct kcov *kcov) } static void kcov_remote_reset(struct kcov *kcov) + __must_hold(&kcov->lock) { int bkt; struct kcov_remote *remote; @@ -419,6 +425,7 @@ static void kcov_remote_reset(struct kcov *kcov) } static void kcov_disable(struct task_struct *t, struct kcov *kcov) + __must_hold(&kcov->lock) { kcov_task_reset(t); if (kcov->remote) @@ -435,8 +442,11 @@ static void kcov_get(struct kcov *kcov) static void kcov_put(struct kcov *kcov) { if (refcount_dec_and_test(&kcov->refcount)) { - kcov_remote_reset(kcov); - vfree(kcov->area); + /* Capability-safety: no references left, object being destroyed. */ + capability_unsafe( + kcov_remote_reset(kcov); + vfree(kcov->area); + ); kfree(kcov); } } @@ -491,6 +501,7 @@ static int kcov_mmap(struct file *filep, struct vm_area_struct *vma) unsigned long size, off; struct page *page; unsigned long flags; + unsigned long *area; spin_lock_irqsave(&kcov->lock, flags); size = kcov->size * sizeof(unsigned long); @@ -499,10 +510,11 @@ static int kcov_mmap(struct file *filep, struct vm_area_struct *vma) res = -EINVAL; goto exit; } + area = kcov->area; spin_unlock_irqrestore(&kcov->lock, flags); vm_flags_set(vma, VM_DONTEXPAND); for (off = 0; off < size; off += PAGE_SIZE) { - page = vmalloc_to_page(kcov->area + off); + page = vmalloc_to_page(area + off); res = vm_insert_page(vma, vma->vm_start + off, page); if (res) { pr_warn_once("kcov: vm_insert_page() failed\n"); @@ -522,10 +534,10 @@ static int kcov_open(struct inode *inode, struct file *filep) kcov = kzalloc(sizeof(*kcov), GFP_KERNEL); if (!kcov) return -ENOMEM; + spin_lock_init(&kcov->lock); kcov->mode = KCOV_MODE_DISABLED; kcov->sequence = 1; refcount_set(&kcov->refcount, 1); - spin_lock_init(&kcov->lock); filep->private_data = kcov; return nonseekable_open(inode, filep); } @@ -556,6 +568,7 @@ static int kcov_get_mode(unsigned long arg) * vmalloc fault handling path is instrumented. */ static void kcov_fault_in_area(struct kcov *kcov) + __must_hold(&kcov->lock) { unsigned long stride = PAGE_SIZE / sizeof(unsigned long); unsigned long *area = kcov->area; @@ -584,6 +597,7 @@ static inline bool kcov_check_handle(u64 handle, bool common_valid, static int kcov_ioctl_locked(struct kcov *kcov, unsigned int cmd, unsigned long arg) + __must_hold(&kcov->lock) { struct task_struct *t; unsigned long flags, unused; @@ -814,6 +828,7 @@ static inline bool kcov_mode_enabled(unsigned int mode) } static void kcov_remote_softirq_start(struct task_struct *t) + __must_hold(&kcov_percpu_data.lock) { struct kcov_percpu_data *data = this_cpu_ptr(&kcov_percpu_data); unsigned int mode; @@ -831,6 +846,7 @@ static void kcov_remote_softirq_start(struct task_struct *t) } static void kcov_remote_softirq_stop(struct task_struct *t) + __must_hold(&kcov_percpu_data.lock) { struct kcov_percpu_data *data = this_cpu_ptr(&kcov_percpu_data); @@ -896,10 +912,12 @@ void kcov_remote_start(u64 handle) /* Put in kcov_remote_stop(). */ kcov_get(kcov); /* - * Read kcov fields before unlock to prevent races with - * KCOV_DISABLE / kcov_remote_reset(). + * Read kcov fields before unlocking kcov_remote_lock to prevent races + * with KCOV_DISABLE and kcov_remote_reset(); cannot acquire kcov->lock + * here, because it might lead to deadlock given kcov_remote_lock is + * acquired _after_ kcov->lock elsewhere. 
*/ - mode = kcov->mode; + mode = capability_unsafe(kcov->mode); sequence = kcov->sequence; if (in_task()) { size = kcov->remote_size; From patchwork Thu Feb 6 18:10:17 2025 X-Patchwork-Submitter: Marco Elver X-Patchwork-Id: 13963502 Date: Thu, 6 Feb 2025 19:10:17 +0100 In-Reply-To: <20250206181711.1902989-1-elver@google.com> Message-ID: <20250206181711.1902989-24-elver@google.com> Subject: [PATCH RFC 23/24] stackdepot: Enable capability analysis From: Marco Elver

Enable capability analysis for stackdepot.
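This conversion also exercises __must_not_hold(), the inverse contract; as a sketch with illustrative names, mirroring the __var_guarded_by() placement used for the global freelist in the diff below: the function acquires the lock itself, so entering it with the lock already held would deadlock.

static DEFINE_RAW_SPINLOCK(foo_pool_lock);
static __var_guarded_by(&foo_pool_lock) LIST_HEAD(foo_free_list);

static void foo_free_record(struct list_head *entry)
	__must_not_hold(&foo_pool_lock)	/* takes the lock itself */
{
	unsigned long flags;

	raw_spin_lock_irqsave(&foo_pool_lock, flags);
	list_add(entry, &foo_free_list);
	raw_spin_unlock_irqrestore(&foo_pool_lock, flags);
}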
Signed-off-by: Marco Elver --- lib/Makefile | 1 + lib/stackdepot.c | 24 ++++++++++++++++++------ 2 files changed, 19 insertions(+), 6 deletions(-) diff --git a/lib/Makefile b/lib/Makefile index 1dbb59175eb0..f40ba93c9a94 100644 --- a/lib/Makefile +++ b/lib/Makefile @@ -270,6 +270,7 @@ obj-$(CONFIG_POLYNOMIAL) += polynomial.o # Prevent the compiler from calling builtins like memcmp() or bcmp() from this # file. CFLAGS_stackdepot.o += -fno-builtin +CAPABILITY_ANALYSIS_stackdepot.o := y obj-$(CONFIG_STACKDEPOT) += stackdepot.o KASAN_SANITIZE_stackdepot.o := n # In particular, instrumenting stackdepot.c with KMSAN will result in infinite diff --git a/lib/stackdepot.c b/lib/stackdepot.c index 245d5b416699..6664146d1f31 100644 --- a/lib/stackdepot.c +++ b/lib/stackdepot.c @@ -14,6 +14,8 @@ #define pr_fmt(fmt) "stackdepot: " fmt +disable_capability_analysis(); + #include #include #include @@ -36,6 +38,8 @@ #include #include +enable_capability_analysis(); + #define DEPOT_POOLS_CAP 8192 /* The pool_index is offset by 1 so the first record does not have a 0 handle. */ #define DEPOT_MAX_POOLS \ @@ -61,18 +65,18 @@ static unsigned int stack_bucket_number_order; /* Hash mask for indexing the table. */ static unsigned int stack_hash_mask; +/* The lock must be held when performing pool or freelist modifications. */ +static DEFINE_RAW_SPINLOCK(pool_lock); /* Array of memory regions that store stack records. */ -static void *stack_pools[DEPOT_MAX_POOLS]; +static void *stack_pools[DEPOT_MAX_POOLS] __var_guarded_by(&pool_lock); /* Newly allocated pool that is not yet added to stack_pools. */ static void *new_pool; /* Number of pools in stack_pools. */ static int pools_num; /* Offset to the unused space in the currently used pool.
-static size_t pool_offset = DEPOT_POOL_SIZE;
+static size_t pool_offset __var_guarded_by(&pool_lock) = DEPOT_POOL_SIZE;
 /* Freelist of stack records within stack_pools. */
-static LIST_HEAD(free_stacks);
-/* The lock must be held when performing pool or freelist modifications. */
-static DEFINE_RAW_SPINLOCK(pool_lock);
+static __var_guarded_by(&pool_lock) LIST_HEAD(free_stacks);
 
 /* Statistics counters for debugfs. */
 enum depot_counter_id {
@@ -242,6 +246,7 @@ EXPORT_SYMBOL_GPL(stack_depot_init);
  * Initializes new stack pool, and updates the list of pools.
  */
 static bool depot_init_pool(void **prealloc)
+	__must_hold(&pool_lock)
 {
 	lockdep_assert_held(&pool_lock);
 
@@ -289,6 +294,7 @@ static bool depot_init_pool(void **prealloc)
 
 /* Keeps the preallocated memory to be used for a new stack depot pool. */
 static void depot_keep_new_pool(void **prealloc)
+	__must_hold(&pool_lock)
 {
 	lockdep_assert_held(&pool_lock);
 
@@ -308,6 +314,7 @@ static void depot_keep_new_pool(void **prealloc)
  * the current pre-allocation.
  */
 static struct stack_record *depot_pop_free_pool(void **prealloc, size_t size)
+	__must_hold(&pool_lock)
 {
 	struct stack_record *stack;
 	void *current_pool;
@@ -342,6 +349,7 @@ static struct stack_record *depot_pop_free_pool(void **prealloc, size_t size)
 
 /* Try to find next free usable entry from the freelist. */
 static struct stack_record *depot_pop_free(void)
+	__must_hold(&pool_lock)
 {
 	struct stack_record *stack;
 
@@ -379,6 +387,7 @@ static inline size_t depot_stack_record_size(struct stack_record *s, unsigned int nr_entries)
 
 /* Allocates a new stack in a stack depot pool. */
 static struct stack_record *
 depot_alloc_stack(unsigned long *entries, unsigned int nr_entries, u32 hash, depot_flags_t flags, void **prealloc)
+	__must_hold(&pool_lock)
 {
 	struct stack_record *stack = NULL;
 	size_t record_size;
@@ -437,6 +446,7 @@ depot_alloc_stack(unsigned long *entries, unsigned int nr_entries, u32 hash, depot_flags_t flags, void **prealloc)
 }
 
 static struct stack_record *depot_fetch_stack(depot_stack_handle_t handle)
+	__must_not_hold(&pool_lock)
 {
 	const int pools_num_cached = READ_ONCE(pools_num);
 	union handle_parts parts = { .handle = handle };
@@ -453,7 +463,8 @@ static struct stack_record *depot_fetch_stack(depot_stack_handle_t handle)
 		return NULL;
 	}
 
-	pool = stack_pools[pool_index];
+	/* @pool_index either valid, or user passed in corrupted value. */
+	pool = capability_unsafe(stack_pools[pool_index]);
 	if (WARN_ON(!pool))
 		return NULL;
 
@@ -466,6 +477,7 @@
 
 /* Links stack into the freelist. */
 static void depot_free_stack(struct stack_record *stack)
+	__must_not_hold(&pool_lock)
 {
 	unsigned long flags;
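Taken together, the stackdepot changes show the core annotation pattern: data is tied to its lock with __var_guarded_by(), and internal helpers declare their locking contract with __must_hold() / __must_not_hold(). A self-contained hypothetical sketch of the same pattern (the example_* names are invented for illustration; assumes the annotations behave as in this series):

	static DEFINE_RAW_SPINLOCK(example_lock);
	static int example_data __var_guarded_by(&example_lock);

	/* Callers must already hold example_lock; the analysis checks this. */
	static void example_update(int v)
		__must_hold(&example_lock)
	{
		example_data = v;	/* OK: lock held by contract. */
	}

	/* Callers must not hold example_lock; acquired and released here. */
	static void example_set(int v)
		__must_not_hold(&example_lock)
	{
		unsigned long flags;

		raw_spin_lock_irqsave(&example_lock, flags);
		example_update(v);
		raw_spin_unlock_irqrestore(&example_lock, flags);
	}

Writing example_data without the lock held, or calling example_update() from a context that cannot prove it holds example_lock, would then be flagged at compile time.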
From patchwork Thu Feb 6 18:10:18 2025
X-Patchwork-Submitter: Marco Elver
X-Patchwork-Id: 13963503
Date: Thu, 6 Feb 2025 19:10:18 +0100
In-Reply-To: <20250206181711.1902989-1-elver@google.com>
References: <20250206181711.1902989-1-elver@google.com>
Message-ID: <20250206181711.1902989-25-elver@google.com>
Subject: [PATCH RFC 24/24] rhashtable: Enable capability analysis
From: Marco Elver

Enable capability analysis for rhashtable, which served as an initial test case because it combines RCU, mutex, and bit_spinlock usage. Users of rhashtable also benefit from the annotations on its API, which now warn if the RCU read lock is not held where required.
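As a hypothetical illustration of what these annotations catch (the foo_* names are invented for this sketch, assuming rcu_read_lock()/rcu_read_unlock() acquire and release the shared "RCU" capability as elsewhere in this series):

	/* A lookup that must run inside an RCU read-side critical section. */
	static struct foo *foo_find(struct rhashtable *ht, const u32 *key,
				    const struct rhashtable_params params)
		__must_hold_shared(RCU)
	{
		return rhashtable_lookup(ht, key, params); /* OK: RCU held by contract. */
	}

	static void foo_user(struct rhashtable *ht, const u32 *key,
			     const struct rhashtable_params params)
	{
		rcu_read_lock();		/* acquires shared capability RCU */
		(void)foo_find(ht, key, params);
		rcu_read_unlock();		/* releases it */

		/* Calling foo_find() here, outside the read-side critical
		 * section, would now produce a compile-time warning. */
	}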
Signed-off-by: Marco Elver
---
 include/linux/rhashtable.h | 14 +++++++++++---
 lib/Makefile               |  2 ++
 lib/rhashtable.c           | 12 +++++++++---
 3 files changed, 22 insertions(+), 6 deletions(-)

diff --git a/include/linux/rhashtable.h b/include/linux/rhashtable.h
index 8463a128e2f4..c6374691ccc7 100644
--- a/include/linux/rhashtable.h
+++ b/include/linux/rhashtable.h
@@ -245,16 +245,17 @@ void *rhashtable_insert_slow(struct rhashtable *ht, const void *key,
 
 void rhashtable_walk_enter(struct rhashtable *ht, struct rhashtable_iter *iter);
 void rhashtable_walk_exit(struct rhashtable_iter *iter);
-int rhashtable_walk_start_check(struct rhashtable_iter *iter) __acquires(RCU);
+int rhashtable_walk_start_check(struct rhashtable_iter *iter) __acquires_shared(RCU);
 
 static inline void rhashtable_walk_start(struct rhashtable_iter *iter)
+	__acquires_shared(RCU)
 {
 	(void)rhashtable_walk_start_check(iter);
 }
 
 void *rhashtable_walk_next(struct rhashtable_iter *iter);
 void *rhashtable_walk_peek(struct rhashtable_iter *iter);
-void rhashtable_walk_stop(struct rhashtable_iter *iter) __releases(RCU);
+void rhashtable_walk_stop(struct rhashtable_iter *iter) __releases_shared(RCU);
 
 void rhashtable_free_and_destroy(struct rhashtable *ht,
 				 void (*free_fn)(void *ptr, void *arg),
@@ -325,6 +326,7 @@ static inline struct rhash_lock_head __rcu **rht_bucket_insert(
 
 static inline unsigned long rht_lock(struct bucket_table *tbl,
 				     struct rhash_lock_head __rcu **bkt)
+	__acquires(__bitlock(0, bkt))
 {
 	unsigned long flags;
 
@@ -337,6 +339,7 @@ static inline unsigned long rht_lock(struct bucket_table *tbl,
 static inline unsigned long rht_lock_nested(struct bucket_table *tbl,
 					    struct rhash_lock_head __rcu **bucket,
 					    unsigned int subclass)
+	__acquires(__bitlock(0, bucket))
 {
 	unsigned long flags;
 
@@ -349,6 +352,7 @@ static inline unsigned long rht_lock_nested(struct bucket_table *tbl,
 static inline void rht_unlock(struct bucket_table *tbl,
 			      struct rhash_lock_head __rcu **bkt,
 			      unsigned long flags)
+	__releases(__bitlock(0, bkt))
 {
 	lock_map_release(&tbl->dep_map);
 	bit_spin_unlock(0, (unsigned long *)bkt);
@@ -402,13 +406,14 @@ static inline void rht_assign_unlock(struct bucket_table *tbl,
 				     struct rhash_lock_head __rcu **bkt,
 				     struct rhash_head *obj,
 				     unsigned long flags)
+	__releases(__bitlock(0, bkt))
 {
 	if (rht_is_a_nulls(obj))
 		obj = NULL;
 	lock_map_release(&tbl->dep_map);
 	rcu_assign_pointer(*bkt, (void *)obj);
 	preempt_enable();
-	__release(bitlock);
+	__release(__bitlock(0, bkt));
 	local_irq_restore(flags);
 }
 
@@ -589,6 +594,7 @@ static inline int rhashtable_compare(struct rhashtable_compare_arg *arg,
 
 static inline struct rhash_head *__rhashtable_lookup(
 	struct rhashtable *ht, const void *key,
 	const struct rhashtable_params params)
+	__must_hold_shared(RCU)
 {
 	struct rhashtable_compare_arg arg = {
 		.ht = ht,
@@ -642,6 +648,7 @@ static inline struct rhash_head *__rhashtable_lookup(
 
 static inline void *rhashtable_lookup(
 	struct rhashtable *ht, const void *key,
 	const struct rhashtable_params params)
+	__must_hold_shared(RCU)
 {
 	struct rhash_head *he = __rhashtable_lookup(ht, key, params);
 
@@ -692,6 +699,7 @@ static inline void *rhashtable_lookup_fast(
 
 static inline struct rhlist_head *rhltable_lookup(
 	struct rhltable *hlt, const void *key,
 	const struct rhashtable_params params)
+	__must_hold_shared(RCU)
 {
 	struct rhash_head *he = __rhashtable_lookup(&hlt->ht, key, params);
 
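The __bitlock(bitnum, addr) token used above lets the analysis treat each bucket pointer as a distinct lock instance, even though the "lock" is only a bit inside the pointer word itself. A hypothetical sketch of the wrapper pattern (example_* names are invented; assumes __bitlock() names a capability as in this series):

	/* Acquire/release bit 0 of *bkt as a per-bucket capability. */
	static inline void example_bucket_lock(unsigned long *bkt)
		__acquires(__bitlock(0, bkt))
	{
		bit_spin_lock(0, bkt);
	}

	static inline void example_bucket_unlock(unsigned long *bkt)
		__releases(__bitlock(0, bkt))
	{
		bit_spin_unlock(0, bkt);
	}

Because the capability is parameterized by the address, locking one bucket does not satisfy the analysis for accesses to a different bucket.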
diff --git a/lib/Makefile b/lib/Makefile
index f40ba93c9a94..c7004270ad5f 100644
--- a/lib/Makefile
+++ b/lib/Makefile
@@ -45,6 +45,8 @@ lib-$(CONFIG_MIN_HEAP) += min_heap.o
 lib-y += kobject.o klist.o
 obj-y += lockref.o
 
+CAPABILITY_ANALYSIS_rhashtable.o := y
+
 obj-y += bcd.o sort.o parser.o debug_locks.o random32.o \
 	 bust_spinlocks.o kasprintf.o bitmap.o scatterlist.o \
 	 list_sort.o uuid.o iov_iter.o clz_ctz.o \

diff --git a/lib/rhashtable.c b/lib/rhashtable.c
index 3e555d012ed6..47a61e214621 100644
--- a/lib/rhashtable.c
+++ b/lib/rhashtable.c
@@ -11,6 +11,10 @@
  * pointer as suggested by Josh Triplett
  */
 
+#include
+
+disable_capability_analysis();
+
 #include
 #include
 #include
@@ -22,10 +26,11 @@
 #include
 #include
 #include
-#include
 #include
 #include
 
+enable_capability_analysis();
+
 #define HASH_DEFAULT_SIZE	64UL
 #define HASH_MIN_SIZE		4U
 
@@ -358,6 +363,7 @@ static int rhashtable_rehash_table(struct rhashtable *ht)
 
 static int rhashtable_rehash_alloc(struct rhashtable *ht,
 				   struct bucket_table *old_tbl,
 				   unsigned int size)
+	__must_hold(&ht->mutex)
 {
 	struct bucket_table *new_tbl;
 	int err;
 
@@ -392,6 +398,7 @@ static int rhashtable_rehash_alloc(struct rhashtable *ht,
  * bucket locks or concurrent RCU protected lookups and traversals.
  */
 static int rhashtable_shrink(struct rhashtable *ht)
+	__must_hold(&ht->mutex)
 {
 	struct bucket_table *old_tbl = rht_dereference(ht->tbl, ht);
 	unsigned int nelems = atomic_read(&ht->nelems);
 
@@ -724,7 +731,7 @@ EXPORT_SYMBOL_GPL(rhashtable_walk_exit);
  * resize events and always continue.
  */
 int rhashtable_walk_start_check(struct rhashtable_iter *iter)
-	__acquires(RCU)
+	__acquires_shared(RCU)
 {
 	struct rhashtable *ht = iter->ht;
 	bool rhlist = ht->rhlist;
 
@@ -940,7 +947,6 @@ EXPORT_SYMBOL_GPL(rhashtable_walk_peek);
  * hash table.
  */
 void rhashtable_walk_stop(struct rhashtable_iter *iter)
-	__releases(RCU)
 {
 	struct rhashtable *ht;
 	struct bucket_table *tbl = iter->walker.tbl;