From patchwork Thu Jun 30 13:59:33 2022
From: Alexander Aring
To: will@kernel.org
Cc: peterz@infradead.org, boqun.feng@gmail.com, mark.rutland@arm.com,
    thunder.leizhen@huawei.com, jacob.e.keller@intel.com,
    akpm@linux-foundation.org, linux-sparse@vger.kernel.org,
    cluster-devel@redhat.com, luc.vanoostenryck@gmail.com,
    torvalds@linux-foundation.org, linux-kernel@vger.kernel.org,
    aahringo@redhat.com
Subject: [RFC 1/2] refcount: add __cond_lock() for conditional lock refcount API
Date: Thu, 30 Jun 2022 09:59:33 -0400
Message-Id: <20220630135934.1799248-2-aahringo@redhat.com>
In-Reply-To: <20220630135934.1799248-1-aahringo@redhat.com>
References: <20220630135934.1799248-1-aahringo@redhat.com>
X-Mailing-List: linux-sparse@vger.kernel.org

This patch adds the __cond_lock() macro to the refcount conditional
lock API. Currently sparse cannot detect the conditional lock handling
of the refcount_dec_and_lock() functions and prints a context imbalance
warning like:

warning: context imbalance in 'put_rsb' - unexpected unlock

With this patch applied, and with the refcount_dec_and_lock() call
placed inside the if condition that decides whether to unlock, the
warning disappears.

The patch follows a naming scheme similar to raw_spin_trylock(): each
function gets a "raw_" prefix, and a macro with the old name wraps it in
__cond_lock() to signal that an acquire depends on the return value of
the passed function. A cast to bool is necessary because __cond_lock()
returns a non-boolean scalar type.
The __must_check annotation was tested and still works with this patch
applied.

Signed-off-by: Alexander Aring
---
 include/linux/refcount.h | 21 ++++++++++++++++-----
 lib/refcount.c           | 23 ++++++++++++-----------
 2 files changed, 28 insertions(+), 16 deletions(-)

diff --git a/include/linux/refcount.h b/include/linux/refcount.h
index b8a6e387f8f9..be7b970ce475 100644
--- a/include/linux/refcount.h
+++ b/include/linux/refcount.h
@@ -361,9 +361,20 @@ static inline void refcount_dec(refcount_t *r)
 extern __must_check bool refcount_dec_if_one(refcount_t *r);
 extern __must_check bool refcount_dec_not_one(refcount_t *r);
-extern __must_check bool refcount_dec_and_mutex_lock(refcount_t *r, struct mutex *lock);
-extern __must_check bool refcount_dec_and_lock(refcount_t *r, spinlock_t *lock);
-extern __must_check bool refcount_dec_and_lock_irqsave(refcount_t *r,
-						       spinlock_t *lock,
-						       unsigned long *flags);
+extern __must_check bool raw_refcount_dec_and_mutex_lock(refcount_t *r,
+							 struct mutex *lock);
+#define refcount_dec_and_mutex_lock(r, lock) \
+	((bool)(__cond_lock(lock, raw_refcount_dec_and_mutex_lock(r, lock))))
+
+extern __must_check bool raw_refcount_dec_and_lock(refcount_t *r,
+						   spinlock_t *lock);
+#define refcount_dec_and_lock(r, lock) \
+	((bool)(__cond_lock(lock, raw_refcount_dec_and_lock(r, lock))))
+
+extern __must_check bool raw_refcount_dec_and_lock_irqsave(refcount_t *r,
+							   spinlock_t *lock,
+							   unsigned long *flags);
+#define refcount_dec_and_lock_irqsave(r, lock, flags) \
+	((bool)(__cond_lock(lock, raw_refcount_dec_and_lock_irqsave(r, lock, flags))))
+
 #endif /* _LINUX_REFCOUNT_H */
diff --git a/lib/refcount.c b/lib/refcount.c
index a207a8f22b3c..1a8c7b9aba23 100644
--- a/lib/refcount.c
+++ b/lib/refcount.c
@@ -110,7 +110,7 @@ EXPORT_SYMBOL(refcount_dec_not_one);
  * Return: true and hold mutex if able to decrement refcount to 0, false
  * otherwise
  */
-bool refcount_dec_and_mutex_lock(refcount_t *r, struct mutex *lock)
+bool raw_refcount_dec_and_mutex_lock(refcount_t *r, struct mutex *lock)
 {
 	if (refcount_dec_not_one(r))
 		return false;
@@ -123,11 +123,11 @@ bool refcount_dec_and_mutex_lock(refcount_t *r, struct mutex *lock)
 	return true;
 }
-EXPORT_SYMBOL(refcount_dec_and_mutex_lock);
+EXPORT_SYMBOL(raw_refcount_dec_and_mutex_lock);
 
 /**
- * refcount_dec_and_lock - return holding spinlock if able to decrement
- *			   refcount to 0
+ * raw_refcount_dec_and_lock - return holding spinlock if able to decrement
+ *			       refcount to 0
  * @r: the refcount
  * @lock: the spinlock to be locked
  *
@@ -141,7 +141,7 @@ EXPORT_SYMBOL(refcount_dec_and_mutex_lock);
  * Return: true and hold spinlock if able to decrement refcount to 0, false
  * otherwise
  */
-bool refcount_dec_and_lock(refcount_t *r, spinlock_t *lock)
+bool raw_refcount_dec_and_lock(refcount_t *r, spinlock_t *lock)
 {
 	if (refcount_dec_not_one(r))
 		return false;
@@ -154,11 +154,12 @@ bool refcount_dec_and_lock(refcount_t *r, spinlock_t *lock)
 	return true;
 }
-EXPORT_SYMBOL(refcount_dec_and_lock);
+EXPORT_SYMBOL(raw_refcount_dec_and_lock);
 
 /**
- * refcount_dec_and_lock_irqsave - return holding spinlock with disabled
- *				   interrupts if able to decrement refcount to 0
+ * raw_refcount_dec_and_lock_irqsave - return holding spinlock with disabled
+ *				       interrupts if able to decrement
+ *				       refcount to 0
  * @r: the refcount
  * @lock: the spinlock to be locked
  * @flags: saved IRQ-flags if the is acquired
@@ -169,8 +170,8 @@ EXPORT_SYMBOL(refcount_dec_and_lock);
  * Return: true and hold spinlock if able to decrement refcount to 0, false
  * otherwise
  */
-bool refcount_dec_and_lock_irqsave(refcount_t *r, spinlock_t *lock,
-				   unsigned long *flags)
+bool raw_refcount_dec_and_lock_irqsave(refcount_t *r, spinlock_t *lock,
+				       unsigned long *flags)
 {
 	if (refcount_dec_not_one(r))
 		return false;
@@ -183,4 +184,4 @@ bool refcount_dec_and_lock_irqsave(refcount_t *r, spinlock_t *lock,
 	return true;
 }
-EXPORT_SYMBOL(refcount_dec_and_lock_irqsave);
+EXPORT_SYMBOL(raw_refcount_dec_and_lock_irqsave);

From patchwork Thu Jun 30 13:59:34 2022
From: Alexander Aring
To: will@kernel.org
Cc: peterz@infradead.org, boqun.feng@gmail.com, mark.rutland@arm.com,
    thunder.leizhen@huawei.com, jacob.e.keller@intel.com,
    akpm@linux-foundation.org, linux-sparse@vger.kernel.org,
    cluster-devel@redhat.com, luc.vanoostenryck@gmail.com,
    torvalds@linux-foundation.org, linux-kernel@vger.kernel.org,
    aahringo@redhat.com
Subject: [RFC 2/2] kref: move kref_put_lock() callback to caller
Date: Thu, 30 Jun 2022 09:59:34 -0400
Message-Id: <20220630135934.1799248-3-aahringo@redhat.com>
In-Reply-To: <20220630135934.1799248-1-aahringo@redhat.com>
References: <20220630135934.1799248-1-aahringo@redhat.com>
X-Mailing-List: linux-sparse@vger.kernel.org

This patch moves the release callback call to the caller of the
kref_put_lock() functions. Since refcount_dec_and_lock() now uses
__cond_lock(), we get the following warning for e.g.
net/sunrpc/svcauth.c:

warning: context imbalance in 'auth_domain_put' - wrong count at exit

The warning occurs because before the __cond_lock() change sparse was
able to detect the correct locking behaviour; now it thinks there is an
additional lock acquire. However, the __cond_lock() instrumentation in
refcount_dec_and_lock() makes it possible to avoid sparse warnings by
evaluating the return value and unlocking the lock only when necessary.
This patch solves the problem by invoking the passed release callback
based on the return value of kref_put_lock() at the call site, instead
of inside kref_put_lock(); the return value of refcount_dec_and_lock()
is then evaluated in a form that sparse can recognize. It seems only
one of the two approaches can work at a time. This patch changes
kref_put_lock() so that it works the same way as
refcount_dec_and_lock() does with __cond_lock().

Signed-off-by: Alexander Aring
---
 include/linux/kref.h | 24 ++++++++----------------
 1 file changed, 8 insertions(+), 16 deletions(-)

diff --git a/include/linux/kref.h b/include/linux/kref.h
index d32e21a2538c..a70d45940d55 100644
--- a/include/linux/kref.h
+++ b/include/linux/kref.h
@@ -68,27 +68,19 @@ static inline int kref_put(struct kref *kref, void (*release)(struct kref *kref)
 	return 0;
 }
 
-static inline int kref_put_mutex(struct kref *kref,
-				 void (*release)(struct kref *kref),
-				 struct mutex *lock)
+static inline bool raw_kref_put_mutex(struct kref *kref, struct mutex *lock)
 {
-	if (refcount_dec_and_mutex_lock(&kref->refcount, lock)) {
-		release(kref);
-		return 1;
-	}
-	return 0;
+	return refcount_dec_and_mutex_lock(&kref->refcount, lock);
 }
+#define kref_put_mutex(kref, release, lock) \
+	((raw_kref_put_mutex(kref, lock)) ? ({ release(kref); 1; }) : 0)
 
-static inline int kref_put_lock(struct kref *kref,
-				void (*release)(struct kref *kref),
-				spinlock_t *lock)
+static inline bool raw_kref_put_lock(struct kref *kref, spinlock_t *lock)
 {
-	if (refcount_dec_and_lock(&kref->refcount, lock)) {
-		release(kref);
-		return 1;
-	}
-	return 0;
+	return refcount_dec_and_lock(&kref->refcount, lock);
 }
+#define kref_put_lock(kref, release, lock) \
+	((raw_kref_put_lock(kref, lock)) ? ({ release(kref); 1; }) : 0)
 
 /**
  * kref_get_unless_zero - Increment refcount for object unless it is zero.