From patchwork Sun Aug 18 06:35:35 2024
X-Patchwork-Submitter: Alexandre Ghiti
X-Patchwork-Id: 13767348
From: Alexandre Ghiti
To: Jonathan Corbet, Paul Walmsley, Palmer Dabbelt, Albert Ou,
	Conor Dooley, Rob Herring, Krzysztof Kozlowski, Andrea Parri,
	Nathan Chancellor, Peter Zijlstra, Ingo Molnar, Will Deacon,
	Waiman Long, Boqun Feng, Arnd Bergmann, Leonardo Bras, Guo Ren,
	linux-doc@vger.kernel.org, devicetree@vger.kernel.org,
	linux-kernel@vger.kernel.org, linux-riscv@lists.infradead.org,
	linux-arch@vger.kernel.org
Cc: Guo Ren
Subject: [PATCH v5 10/13] asm-generic: ticket-lock: Add separate ticket-lock.h
Date: Sun, 18 Aug 2024 08:35:35 +0200
Message-Id: <20240818063538.6651-11-alexghiti@rivosinc.com>
X-Mailer: git-send-email 2.39.2
In-Reply-To: <20240818063538.6651-1-alexghiti@rivosinc.com>
References: <20240818063538.6651-1-alexghiti@rivosinc.com>

From: Guo Ren

Add a separate ticket_spinlock.h so that an architecture can include
multiple spinlock versions and select one at compile time or runtime.

Reviewed-by: Leonardo Bras
Suggested-by: Arnd Bergmann
Link: https://lore.kernel.org/linux-riscv/CAK8P3a2rnz9mQqhN6-e0CGUUv9rntRELFdxt_weiD7FxH7fkfQ@mail.gmail.com/
Signed-off-by: Guo Ren
Signed-off-by: Guo Ren
Acked-by: Peter Zijlstra (Intel)
Reviewed-by: Andrew Jones
---
 include/asm-generic/spinlock.h        |  87 +---------------------
 include/asm-generic/ticket_spinlock.h | 103 ++++++++++++++++++++++++++
 2 files changed, 104 insertions(+), 86 deletions(-)
 create mode 100644 include/asm-generic/ticket_spinlock.h

diff --git a/include/asm-generic/spinlock.h b/include/asm-generic/spinlock.h
index 4773334ee638..970590baf61b 100644
--- a/include/asm-generic/spinlock.h
+++ b/include/asm-generic/spinlock.h
@@ -1,94 +1,9 @@
 /* SPDX-License-Identifier: GPL-2.0 */
 
-/*
- * 'Generic' ticket-lock implementation.
- *
- * It relies on atomic_fetch_add() having well defined forward progress
- * guarantees under contention. If your architecture cannot provide this, stick
- * to a test-and-set lock.
- *
- * It also relies on atomic_fetch_add() being safe vs smp_store_release() on a
- * sub-word of the value. This is generally true for anything LL/SC although
- * you'd be hard pressed to find anything useful in architecture specifications
- * about this. If your architecture cannot do this you might be better off with
- * a test-and-set.
- *
- * It further assumes atomic_*_release() + atomic_*_acquire() is RCpc and hence
- * uses atomic_fetch_add() which is RCsc to create an RCsc hot path, along with
- * a full fence after the spin to upgrade the otherwise-RCpc
- * atomic_cond_read_acquire().
- *
- * The implementation uses smp_cond_load_acquire() to spin, so if the
- * architecture has WFE like instructions to sleep instead of poll for word
- * modifications be sure to implement that (see ARM64 for example).
- *
- */
-
 #ifndef __ASM_GENERIC_SPINLOCK_H
 #define __ASM_GENERIC_SPINLOCK_H
 
-#include <linux/atomic.h>
-#include <asm-generic/spinlock_types.h>
-
-static __always_inline void arch_spin_lock(arch_spinlock_t *lock)
-{
-	u32 val = atomic_fetch_add(1<<16, &lock->val);
-	u16 ticket = val >> 16;
-
-	if (ticket == (u16)val)
-		return;
-
-	/*
-	 * atomic_cond_read_acquire() is RCpc, but rather than defining a
-	 * custom cond_read_rcsc() here we just emit a full fence. We only
-	 * need the prior reads before subsequent writes ordering from
-	 * smp_mb(), but as atomic_cond_read_acquire() just emits reads and we
-	 * have no outstanding writes due to the atomic_fetch_add() the extra
-	 * orderings are free.
-	 */
-	atomic_cond_read_acquire(&lock->val, ticket == (u16)VAL);
-	smp_mb();
-}
-
-static __always_inline bool arch_spin_trylock(arch_spinlock_t *lock)
-{
-	u32 old = atomic_read(&lock->val);
-
-	if ((old >> 16) != (old & 0xffff))
-		return false;
-
-	return atomic_try_cmpxchg(&lock->val, &old, old + (1<<16)); /* SC, for RCsc */
-}
-
-static __always_inline void arch_spin_unlock(arch_spinlock_t *lock)
-{
-	u16 *ptr = (u16 *)lock + IS_ENABLED(CONFIG_CPU_BIG_ENDIAN);
-	u32 val = atomic_read(&lock->val);
-
-	smp_store_release(ptr, (u16)val + 1);
-}
-
-static __always_inline int arch_spin_value_unlocked(arch_spinlock_t lock)
-{
-	u32 val = lock.val.counter;
-
-	return ((val >> 16) == (val & 0xffff));
-}
-
-static __always_inline int arch_spin_is_locked(arch_spinlock_t *lock)
-{
-	arch_spinlock_t val = READ_ONCE(*lock);
-
-	return !arch_spin_value_unlocked(val);
-}
-
-static __always_inline int arch_spin_is_contended(arch_spinlock_t *lock)
-{
-	u32 val = atomic_read(&lock->val);
-
-	return (s16)((val >> 16) - (val & 0xffff)) > 1;
-}
-
+#include <asm-generic/ticket_spinlock.h>
 #include <asm/qrwlock.h>
 
 #endif /* __ASM_GENERIC_SPINLOCK_H */
diff --git a/include/asm-generic/ticket_spinlock.h b/include/asm-generic/ticket_spinlock.h
new file mode 100644
index 000000000000..cfcff22b37b3
--- /dev/null
+++ b/include/asm-generic/ticket_spinlock.h
@@ -0,0 +1,103 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+
+/*
+ * 'Generic' ticket-lock implementation.
+ *
+ * It relies on atomic_fetch_add() having well defined forward progress
+ * guarantees under contention. If your architecture cannot provide this, stick
+ * to a test-and-set lock.
+ *
+ * It also relies on atomic_fetch_add() being safe vs smp_store_release() on a
+ * sub-word of the value. This is generally true for anything LL/SC although
+ * you'd be hard pressed to find anything useful in architecture specifications
+ * about this. If your architecture cannot do this you might be better off with
+ * a test-and-set.
+ *
+ * It further assumes atomic_*_release() + atomic_*_acquire() is RCpc and hence
+ * uses atomic_fetch_add() which is RCsc to create an RCsc hot path, along with
+ * a full fence after the spin to upgrade the otherwise-RCpc
+ * atomic_cond_read_acquire().
+ *
+ * The implementation uses smp_cond_load_acquire() to spin, so if the
+ * architecture has WFE like instructions to sleep instead of poll for word
+ * modifications be sure to implement that (see ARM64 for example).
+ *
+ */
+
+#ifndef __ASM_GENERIC_TICKET_SPINLOCK_H
+#define __ASM_GENERIC_TICKET_SPINLOCK_H
+
+#include <linux/atomic.h>
+#include <asm-generic/spinlock_types.h>
+
+static __always_inline void ticket_spin_lock(arch_spinlock_t *lock)
+{
+	u32 val = atomic_fetch_add(1<<16, &lock->val);
+	u16 ticket = val >> 16;
+
+	if (ticket == (u16)val)
+		return;
+
+	/*
+	 * atomic_cond_read_acquire() is RCpc, but rather than defining a
+	 * custom cond_read_rcsc() here we just emit a full fence. We only
+	 * need the prior reads before subsequent writes ordering from
+	 * smp_mb(), but as atomic_cond_read_acquire() just emits reads and we
+	 * have no outstanding writes due to the atomic_fetch_add() the extra
+	 * orderings are free.
+	 */
+	atomic_cond_read_acquire(&lock->val, ticket == (u16)VAL);
+	smp_mb();
+}
+
+static __always_inline bool ticket_spin_trylock(arch_spinlock_t *lock)
+{
+	u32 old = atomic_read(&lock->val);
+
+	if ((old >> 16) != (old & 0xffff))
+		return false;
+
+	return atomic_try_cmpxchg(&lock->val, &old, old + (1<<16)); /* SC, for RCsc */
+}
+
+static __always_inline void ticket_spin_unlock(arch_spinlock_t *lock)
+{
+	u16 *ptr = (u16 *)lock + IS_ENABLED(CONFIG_CPU_BIG_ENDIAN);
+	u32 val = atomic_read(&lock->val);
+
+	smp_store_release(ptr, (u16)val + 1);
+}
+
+static __always_inline int ticket_spin_value_unlocked(arch_spinlock_t lock)
+{
+	u32 val = lock.val.counter;
+
+	return ((val >> 16) == (val & 0xffff));
+}
+
+static __always_inline int ticket_spin_is_locked(arch_spinlock_t *lock)
+{
+	arch_spinlock_t val = READ_ONCE(*lock);
+
+	return !ticket_spin_value_unlocked(val);
+}
+
+static __always_inline int ticket_spin_is_contended(arch_spinlock_t *lock)
+{
+	u32 val = atomic_read(&lock->val);
+
+	return (s16)((val >> 16) - (val & 0xffff)) > 1;
+}
+
+/*
+ * Remapping spinlock architecture specific functions to the corresponding
+ * ticket spinlock functions.
+ */
+#define arch_spin_is_locked(l)		ticket_spin_is_locked(l)
+#define arch_spin_is_contended(l)	ticket_spin_is_contended(l)
+#define arch_spin_value_unlocked(l)	ticket_spin_value_unlocked(l)
+#define arch_spin_lock(l)		ticket_spin_lock(l)
+#define arch_spin_trylock(l)		ticket_spin_trylock(l)
+#define arch_spin_unlock(l)		ticket_spin_unlock(l)
+
+#endif /* __ASM_GENERIC_TICKET_SPINLOCK_H */
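
To make the helper arithmetic above concrete (example values are mine, not
from the patch): the lock word packs the next free ticket in the high 16
bits and the ticket currently being served (the owner) in the low 16 bits.
With val = 0x00050002, next = 5 and owner = 2, so the lock is held
(5 != 2) and ticket_spin_is_contended() is true (5 - 2 = 3 > 1: two CPUs
are still queued behind the holder). With val = 0x00040003 the lock is
held by ticket 3 with nobody waiting (4 - 3 = 1), so it is locked but not
contended, and val = 0x00030003 is unlocked. The (s16) cast keeps the
contention test correct across 16-bit wraparound: next = 0x0001 with
owner = 0xfffe still yields (s16)0x0003 = 3.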
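
For readers who want to experiment with the algorithm outside the kernel,
below is a minimal user-space sketch of the same ticket lock built on C11
atomics. The names (ticket_lock_t, TKT_OWNER, ...) are invented for the
sketch, the memory orders are simplified (the kernel additionally upgrades
the acquire-only spin to RCsc with a full fence), and the sub-word unlock
leans on the same "safe vs a full-word fetch_add" assumption the header
comment describes, which strict C11 does not guarantee:

#include <stdatomic.h>
#include <stdint.h>

typedef union {
	_Atomic uint32_t val;		/* next:16 (high) | owner:16 (low) */
	_Atomic uint16_t half[2];
} ticket_lock_t;

/* Index of the owner (low) half; relies on the GCC/Clang endian macros,
 * mirroring the IS_ENABLED(CONFIG_CPU_BIG_ENDIAN) adjustment above. */
#define TKT_OWNER (__BYTE_ORDER__ == __ORDER_BIG_ENDIAN__ ? 1 : 0)

static void ticket_lock(ticket_lock_t *lock)
{
	/* Atomically take the next free ticket from the high half. */
	uint32_t val = atomic_fetch_add_explicit(&lock->val, 1 << 16,
						 memory_order_acquire);
	uint16_t ticket = val >> 16;

	/* Uncontended fast path: our ticket is already being served. */
	if (ticket == (uint16_t)val)
		return;

	/* Spin until the owner half reaches our ticket: strict FIFO. */
	while (atomic_load_explicit(&lock->half[TKT_OWNER],
				    memory_order_acquire) != ticket)
		;
}

static void ticket_unlock(ticket_lock_t *lock)
{
	/* Only the holder writes the owner half, so a relaxed load plus a
	 * release store of owner + 1 hands the lock to the next ticket. */
	uint16_t owner = atomic_load_explicit(&lock->half[TKT_OWNER],
					      memory_order_relaxed);
	atomic_store_explicit(&lock->half[TKT_OWNER], (uint16_t)(owner + 1),
			      memory_order_release);
}

Usage is simply ticket_lock_t l = { .val = 0 }; then ticket_lock(&l) and
ticket_unlock(&l) around the critical section. Tickets are granted in
strict arrival order, which is the fairness property a plain test-and-set
fallback cannot provide.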