From patchwork Sun Aug 27 17:54:36 2023
From: Dmitry Osipenko <dmitry.osipenko@collabora.com>
To: David Airlie, Gerd Hoffmann, Gurchetan Singh, Chia-I Wu,
    Daniel Vetter, Maarten Lankhorst, Maxime Ripard, Thomas Zimmermann,
    Christian König, Qiang Yu, Steven Price, Boris Brezillon,
    Emma Anholt, Melissa Wen, Will Deacon, Peter Zijlstra, Boqun Feng,
    Mark Rutland
Cc: intel-gfx@lists.freedesktop.org, kernel@collabora.com,
    linux-kernel@vger.kernel.org, dri-devel@lists.freedesktop.org,
    virtualization@lists.linux-foundation.org
Date: Sun, 27 Aug 2023 20:54:36 +0300
Message-ID: <20230827175449.1766701-11-dmitry.osipenko@collabora.com>
In-Reply-To: <20230827175449.1766701-1-dmitry.osipenko@collabora.com>
References: <20230827175449.1766701-1-dmitry.osipenko@collabora.com>
Subject: [Intel-gfx] [PATCH v15 10/23] locking/refcount, kref: Add kref_put_ww_mutex()

Introduce a kref_put_ww_mutex() helper that handles wait-wound mutex
auto-locking on kref_put(). The helper is wanted by DRM drivers that make
extensive use of dma-reservation locking, which in turn uses a ww-mutex.
Signed-off-by: Dmitry Osipenko <dmitry.osipenko@collabora.com>
---
 include/linux/kref.h     | 12 ++++++++++++
 include/linux/refcount.h |  5 +++++
 lib/refcount.c           | 34 ++++++++++++++++++++++++++++++++++
 3 files changed, 51 insertions(+)

diff --git a/include/linux/kref.h b/include/linux/kref.h
index d32e21a2538c..b2d8dc6e9ae0 100644
--- a/include/linux/kref.h
+++ b/include/linux/kref.h
@@ -90,6 +90,18 @@ static inline int kref_put_lock(struct kref *kref,
 	return 0;
 }
 
+static inline int kref_put_ww_mutex(struct kref *kref,
+				    void (*release)(struct kref *kref),
+				    struct ww_mutex *lock,
+				    struct ww_acquire_ctx *ctx)
+{
+	if (refcount_dec_and_ww_mutex_lock(&kref->refcount, lock, ctx)) {
+		release(kref);
+		return 1;
+	}
+	return 0;
+}
+
 /**
  * kref_get_unless_zero - Increment refcount for object unless it is zero.
  * @kref: object.
diff --git a/include/linux/refcount.h b/include/linux/refcount.h
index a62fcca97486..be9ad272bc77 100644
--- a/include/linux/refcount.h
+++ b/include/linux/refcount.h
@@ -99,6 +99,8 @@
 #include <linux/spinlock_types.h>
 
 struct mutex;
+struct ww_mutex;
+struct ww_acquire_ctx;
 
 /**
  * typedef refcount_t - variant of atomic_t specialized for reference counts
@@ -366,4 +368,7 @@ extern __must_check bool refcount_dec_and_lock(refcount_t *r, spinlock_t *lock)
 extern __must_check bool refcount_dec_and_lock_irqsave(refcount_t *r,
 						       spinlock_t *lock,
 						       unsigned long *flags) __cond_acquires(lock);
+extern __must_check bool refcount_dec_and_ww_mutex_lock(refcount_t *r,
+							struct ww_mutex *lock,
+							struct ww_acquire_ctx *ctx) __cond_acquires(&lock->base);
 #endif /* _LINUX_REFCOUNT_H */
diff --git a/lib/refcount.c b/lib/refcount.c
index a207a8f22b3c..3f6fd0ceed02 100644
--- a/lib/refcount.c
+++ b/lib/refcount.c
@@ -6,6 +6,7 @@
 #include <linux/mutex.h>
 #include <linux/refcount.h>
 #include <linux/spinlock.h>
+#include <linux/ww_mutex.h>
 #include <linux/bug.h>
 
 #define REFCOUNT_WARN(str)	WARN_ONCE(1, "refcount_t: " str ".\n")
@@ -184,3 +185,36 @@ bool refcount_dec_and_lock_irqsave(refcount_t *r, spinlock_t *lock,
 	return true;
 }
 EXPORT_SYMBOL(refcount_dec_and_lock_irqsave);
+
+/**
+ * refcount_dec_and_ww_mutex_lock - return holding ww-mutex if able to
+ *				    decrement refcount to 0
+ * @r: the refcount
+ * @lock: the ww-mutex to be locked
+ * @ctx: wait-wound context
+ *
+ * Similar to atomic_dec_and_lock(), it will WARN on underflow and fail to
+ * decrement when saturated at REFCOUNT_SATURATED.
+ *
+ * Provides release memory ordering, such that prior loads and stores are done
+ * before, and provides a control dependency such that free() must come after.
+ * See the comment on top.
+ *
+ * Return: true and hold ww-mutex lock if able to decrement refcount to 0,
+ *	   false otherwise
+ */
+bool refcount_dec_and_ww_mutex_lock(refcount_t *r, struct ww_mutex *lock,
+				    struct ww_acquire_ctx *ctx)
+{
+	if (refcount_dec_not_one(r))
+		return false;
+
+	ww_mutex_lock(lock, ctx);
+	if (!refcount_dec_and_test(r)) {
+		ww_mutex_unlock(lock);
+		return false;
+	}
+
+	return true;
+}
+EXPORT_SYMBOL(refcount_dec_and_ww_mutex_lock);
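
For context, a minimal, hypothetical sketch of how a driver might use the new
helper follows; the my_obj, my_obj_release and my_obj_put names are
illustrative only and are not part of this patch. Because
refcount_dec_and_ww_mutex_lock() returns with the ww-mutex held, the release
callback runs under the lock and must drop it before freeing the object that
embeds it.

/*
 * Hypothetical usage sketch (not part of this patch): an object whose final
 * teardown must run under a ww-mutex, e.g. a dma-resv style lock.
 */
#include <linux/container_of.h>
#include <linux/kref.h>
#include <linux/slab.h>
#include <linux/ww_mutex.h>

struct my_obj {
	struct kref refcount;
	struct ww_mutex lock;	/* stands in for the dma-resv lock */
};

/* Called by kref_put_ww_mutex() with obj->lock held and the refcount at 0. */
static void my_obj_release(struct kref *kref)
{
	struct my_obj *obj = container_of(kref, struct my_obj, refcount);

	/*
	 * Tear down state protected by the lock here, then unlock before
	 * freeing the object that embeds the ww-mutex.
	 */
	ww_mutex_unlock(&obj->lock);
	ww_mutex_destroy(&obj->lock);
	kfree(obj);
}

/*
 * Drop a reference; the ww-mutex is taken only on the final put. @ctx may be
 * NULL when the caller is not inside a wait-wound acquire sequence.
 */
static void my_obj_put(struct my_obj *obj, struct ww_acquire_ctx *ctx)
{
	kref_put_ww_mutex(&obj->refcount, my_obj_release, &obj->lock, ctx);
}

Compared with kref_put_mutex(), the ww variant threads an acquire context
through to the final lock acquisition, so the teardown path can participate in
the same wait-wound deadlock-avoidance scheme as the other dma-resv locks the
caller may already hold.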