From patchwork Sat Oct 7 20:35:39 2023
X-Patchwork-Submitter: "Matthew Wilcox (Oracle)"
X-Patchwork-Id: 13412483
From: "Matthew Wilcox (Oracle)" <willy@infradead.org>
To: Peter Zijlstra, Ingo Molnar, Will Deacon, Waiman Long
Cc: "Matthew Wilcox (Oracle)", linux-kernel@vger.kernel.org, linux-mm@kvack.org,
    Chandan Babu R, "Darrick J. Wong", linux-xfs@vger.kernel.org, Mateusz Guzik
Subject: [PATCH v2 1/5] locking: Add rwsem_assert_held() and rwsem_assert_held_write()
Date: Sat, 7 Oct 2023 21:35:39 +0100
Message-Id: <20231007203543.1377452-2-willy@infradead.org>
In-Reply-To: <20231007203543.1377452-1-willy@infradead.org>
References: <20231007203543.1377452-1-willy@infradead.org>

These are modelled after lockdep_assert_held() and
lockdep_assert_held_write(), but they are always active, even when
lockdep is disabled.  Of course, they do not check that _this_ thread
is the owner, but that is sufficient to catch many bugs without
incurring lockdep's performance penalty.
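As a usage sketch (illustration only, not part of this patch;
example_update_size() is a made-up caller), a function that must run
with an rwsem held for write can now document and enforce that on all
kernels, not only those built with lockdep:

	static void example_update_size(struct inode *inode, loff_t size)
	{
		/* Cheap check everywhere; lockdep additionally verifies the owner */
		rwsem_assert_held_write(&inode->i_rwsem);
		i_size_write(inode, size);
	}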
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 include/linux/rwbase_rt.h |  9 +++++++--
 include/linux/rwsem.h     | 42 ++++++++++++++++++++++++++++++++++-----
 2 files changed, 44 insertions(+), 7 deletions(-)

diff --git a/include/linux/rwbase_rt.h b/include/linux/rwbase_rt.h
index 1d264dd08625..a04acd85705b 100644
--- a/include/linux/rwbase_rt.h
+++ b/include/linux/rwbase_rt.h
@@ -26,12 +26,17 @@ struct rwbase_rt {
 	} while (0)
 
-static __always_inline bool rw_base_is_locked(struct rwbase_rt *rwb)
+static __always_inline bool rw_base_is_locked(const struct rwbase_rt *rwb)
 {
 	return atomic_read(&rwb->readers) != READER_BIAS;
 }
 
-static __always_inline bool rw_base_is_contended(struct rwbase_rt *rwb)
+static inline void rw_base_assert_held_write(const struct rwbase_rt *rwb)
+{
+	BUG_ON(atomic_read(&rwb->readers) != WRITER_BIAS);
+}
+
+static __always_inline bool rw_base_is_contended(const struct rwbase_rt *rwb)
 {
 	return atomic_read(&rwb->readers) > 0;
 }
 
diff --git a/include/linux/rwsem.h b/include/linux/rwsem.h
index 1dd530ce8b45..048149f781b3 100644
--- a/include/linux/rwsem.h
+++ b/include/linux/rwsem.h
@@ -66,14 +66,24 @@ struct rw_semaphore {
 #endif
 };
 
-/* In all implementations count != 0 means locked */
+#define RWSEM_UNLOCKED_VALUE		0UL
+#define RWSEM_WRITER_LOCKED		(1UL << 0)
+#define __RWSEM_COUNT_INIT(name)	.count = ATOMIC_LONG_INIT(RWSEM_UNLOCKED_VALUE)
+
 static inline int rwsem_is_locked(struct rw_semaphore *sem)
 {
-	return atomic_long_read(&sem->count) != 0;
+	return atomic_long_read(&sem->count) != RWSEM_UNLOCKED_VALUE;
 }
 
-#define RWSEM_UNLOCKED_VALUE		0L
-#define __RWSEM_COUNT_INIT(name)	.count = ATOMIC_LONG_INIT(RWSEM_UNLOCKED_VALUE)
+static inline void __rwsem_assert_held(const struct rw_semaphore *sem)
+{
+	WARN_ON(atomic_long_read(&sem->count) == RWSEM_UNLOCKED_VALUE);
+}
+
+static inline void __rwsem_assert_held_write(const struct rw_semaphore *sem)
+{
+	WARN_ON(!(atomic_long_read(&sem->count) & RWSEM_WRITER_LOCKED));
+}
 
 /* Common initializer macros and functions */
 
@@ -152,11 +162,21 @@ do {							\
 	__init_rwsem((sem), #sem, &__key);		\
 } while (0)
 
-static __always_inline int rwsem_is_locked(struct rw_semaphore *sem)
+static __always_inline int rwsem_is_locked(const struct rw_semaphore *sem)
 {
 	return rw_base_is_locked(&sem->rwbase);
 }
 
+static inline void __rwsem_assert_held(const struct rw_semaphore *sem)
+{
+	BUG_ON(!rwsem_is_locked(sem));
+}
+
+static inline void __rwsem_assert_held_write(const struct rw_semaphore *sem)
+{
+	rw_base_assert_held_write(&sem->rwbase);
+}
+
 static __always_inline int rwsem_is_contended(struct rw_semaphore *sem)
 {
 	return rw_base_is_contended(&sem->rwbase);
@@ -169,6 +189,18 @@ static __always_inline int rwsem_is_contended(struct rw_semaphore *sem)
  * the RT specific variant.
  */
 
+static inline void rwsem_assert_held(const struct rw_semaphore *sem)
+{
+	lockdep_assert_held(sem);
+	__rwsem_assert_held(sem);
+}
+
+static inline void rwsem_assert_held_write(const struct rw_semaphore *sem)
+{
+	lockdep_assert_held_write(sem);
+	__rwsem_assert_held_write(sem);
+}
+
 /*
 * lock for reading
 */
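A semantics sketch for reviewers (again illustration only, not part of
the patch; example_sem and example_semantics() are made-up names): on
the non-RT implementation above, a read-held rwsem satisfies
rwsem_assert_held() because the count is non-zero, but not
rwsem_assert_held_write(), which requires the RWSEM_WRITER_LOCKED bit:

	static DECLARE_RWSEM(example_sem);

	static void example_semantics(void)
	{
		down_read(&example_sem);
		rwsem_assert_held(&example_sem);	/* passes: count != RWSEM_UNLOCKED_VALUE */
		/* rwsem_assert_held_write() here would WARN: writer bit is clear */
		up_read(&example_sem);

		down_write(&example_sem);
		rwsem_assert_held_write(&example_sem);	/* passes: RWSEM_WRITER_LOCKED is set */
		up_write(&example_sem);
	}

The unconditional check is a single atomic_long_read(), so it is cheap
enough to leave enabled in production builds; when lockdep is enabled,
the lockdep_assert_held*() calls still perform the stronger
owner check on top of it.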