| Message ID | 20241103145153.105097-14-alexghiti@rivosinc.com (mailing list archive) |
| --- | --- |
| State | Accepted |
| Commit | ab83647fadae2f1f723119dc066b39a461d6d288 |
| Series | Zacas/Zabha support and qspinlocks |
Hi Alexandre,

kernel test robot noticed the following build warnings:

[auto build test WARNING on arnd-asm-generic/master]
[also build test WARNING on robh/for-next tip/locking/core linus/master v6.12-rc6]
[cannot apply to next-20241101]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting patch, we suggest to use '--base' as documented in
https://git-scm.com/docs/git-format-patch#_base_tree_information]

url:    https://github.com/intel-lab-lkp/linux/commits/Alexandre-Ghiti/riscv-Move-cpufeature-h-macros-into-their-own-header/20241103-230614
base:   https://git.kernel.org/pub/scm/linux/kernel/git/arnd/asm-generic.git master
patch link:    https://lore.kernel.org/r/20241103145153.105097-14-alexghiti%40rivosinc.com
patch subject: [PATCH v6 13/13] riscv: Add qspinlock support
compiler: clang version 19.1.3 (https://github.com/llvm/llvm-project ab51eccf88f5321e7c60591c5546b254b6afab99)

If you fix the issue in a separate patch/commit (i.e. not just a new version of
the same patch/commit), kindly add following tags
| Reported-by: kernel test robot <lkp@intel.com>
| Closes: https://lore.kernel.org/oe-kbuild-all/202411041609.gxjI2dsw-lkp@intel.com/

includecheck warnings: (new ones prefixed by >>)
>> arch/riscv/include/asm/spinlock.h: asm/ticket_spinlock.h is included more than once.
>> arch/riscv/include/asm/spinlock.h: asm/qspinlock.h is included more than once.

vim +10 arch/riscv/include/asm/spinlock.h

     8
     9	#define __no_arch_spinlock_redefine
  > 10	#include <asm/ticket_spinlock.h>
    11	#include <asm/qspinlock.h>
    12	#include <asm/jump_label.h>
    13
    14	/*
    15	 * TODO: Use an alternative instead of a static key when we are able to parse
    16	 * the extensions string earlier in the boot process.
    17	 */
    18	DECLARE_STATIC_KEY_TRUE(qspinlock_key);
    19
    20	#define SPINLOCK_BASE_DECLARE(op, type, type_lock)		\
    21	static __always_inline type arch_spin_##op(type_lock lock)	\
    22	{								\
    23		if (static_branch_unlikely(&qspinlock_key))		\
    24			return queued_spin_##op(lock);			\
    25		return ticket_spin_##op(lock);				\
    26	}
    27
    28	SPINLOCK_BASE_DECLARE(lock, void, arch_spinlock_t *)
    29	SPINLOCK_BASE_DECLARE(unlock, void, arch_spinlock_t *)
    30	SPINLOCK_BASE_DECLARE(is_locked, int, arch_spinlock_t *)
    31	SPINLOCK_BASE_DECLARE(is_contended, int, arch_spinlock_t *)
    32	SPINLOCK_BASE_DECLARE(trylock, bool, arch_spinlock_t *)
    33	SPINLOCK_BASE_DECLARE(value_unlocked, int, arch_spinlock_t)
    34
    35	#elif defined(CONFIG_RISCV_QUEUED_SPINLOCKS)
    36
    37	#include <asm/qspinlock.h>
    38
    39	#else
    40
  > 41	#include <asm/ticket_spinlock.h>
    42
On Mon, Nov 4, 2024 at 10:05 AM kernel test robot <lkp@intel.com> wrote:
>
> Hi Alexandre,
>
> kernel test robot noticed the following build warnings:
>
> [auto build test WARNING on arnd-asm-generic/master]
> [also build test WARNING on robh/for-next tip/locking/core linus/master v6.12-rc6]
> [cannot apply to next-20241101]
> [If your patch is applied to the wrong git tree, kindly drop us a note.
> And when submitting patch, we suggest to use '--base' as documented in
> https://git-scm.com/docs/git-format-patch#_base_tree_information]
>
> url:    https://github.com/intel-lab-lkp/linux/commits/Alexandre-Ghiti/riscv-Move-cpufeature-h-macros-into-their-own-header/20241103-230614
> base:   https://git.kernel.org/pub/scm/linux/kernel/git/arnd/asm-generic.git master
> patch link:    https://lore.kernel.org/r/20241103145153.105097-14-alexghiti%40rivosinc.com
> patch subject: [PATCH v6 13/13] riscv: Add qspinlock support
> compiler: clang version 19.1.3 (https://github.com/llvm/llvm-project ab51eccf88f5321e7c60591c5546b254b6afab99)
>
> If you fix the issue in a separate patch/commit (i.e. not just a new version of
> the same patch/commit), kindly add following tags
> | Reported-by: kernel test robot <lkp@intel.com>
> | Closes: https://lore.kernel.org/oe-kbuild-all/202411041609.gxjI2dsw-lkp@intel.com/
>
> includecheck warnings: (new ones prefixed by >>)
> >> arch/riscv/include/asm/spinlock.h: asm/ticket_spinlock.h is included more than once.
> >> arch/riscv/include/asm/spinlock.h: asm/qspinlock.h is included more than once.

Yes, but that's in an #ifdef/#elif/#else clause, so there's nothing to do here!

>
> vim +10 arch/riscv/include/asm/spinlock.h
>
>      8
>      9	#define __no_arch_spinlock_redefine
>   > 10	#include <asm/ticket_spinlock.h>
>     11	#include <asm/qspinlock.h>
>     12	#include <asm/jump_label.h>
>     13
>     14	/*
>     15	 * TODO: Use an alternative instead of a static key when we are able to parse
>     16	 * the extensions string earlier in the boot process.
>     17	 */
>     18	DECLARE_STATIC_KEY_TRUE(qspinlock_key);
>     19
>     20	#define SPINLOCK_BASE_DECLARE(op, type, type_lock)		\
>     21	static __always_inline type arch_spin_##op(type_lock lock)	\
>     22	{								\
>     23		if (static_branch_unlikely(&qspinlock_key))		\
>     24			return queued_spin_##op(lock);			\
>     25		return ticket_spin_##op(lock);				\
>     26	}
>     27
>     28	SPINLOCK_BASE_DECLARE(lock, void, arch_spinlock_t *)
>     29	SPINLOCK_BASE_DECLARE(unlock, void, arch_spinlock_t *)
>     30	SPINLOCK_BASE_DECLARE(is_locked, int, arch_spinlock_t *)
>     31	SPINLOCK_BASE_DECLARE(is_contended, int, arch_spinlock_t *)
>     32	SPINLOCK_BASE_DECLARE(trylock, bool, arch_spinlock_t *)
>     33	SPINLOCK_BASE_DECLARE(value_unlocked, int, arch_spinlock_t)
>     34
>     35	#elif defined(CONFIG_RISCV_QUEUED_SPINLOCKS)
>     36
>     37	#include <asm/qspinlock.h>
>     38
>     39	#else
>     40
>   > 41	#include <asm/ticket_spinlock.h>
>     42
>
> --
> 0-DAY CI Kernel Test Service
> https://github.com/intel/lkp-tests/wiki
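The point being made here is that the two includes flagged by includecheck sit in mutually exclusive preprocessor branches, so only one of them is ever seen by the compiler in any given configuration. Condensed from the header shown in the report (abridged here purely for illustration), the structure is:

#ifdef CONFIG_RISCV_COMBO_SPINLOCKS
/* Combo case: pull in both implementations and pick one at runtime. */
#define __no_arch_spinlock_redefine
#include <asm/ticket_spinlock.h>
#include <asm/qspinlock.h>
#elif defined(CONFIG_RISCV_QUEUED_SPINLOCKS)
#include <asm/qspinlock.h>		/* queued spinlocks only */
#else
#include <asm/ticket_spinlock.h>	/* ticket spinlocks only */
#endif

The warning presumably appears because includecheck counts #include lines textually without evaluating the surrounding conditionals, which the bot maintainers acknowledge below as a false positive.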
On Mon, Nov 04, 2024 at 10:09:07AM +0100, Alexandre Ghiti wrote:
> On Mon, Nov 4, 2024 at 10:05 AM kernel test robot <lkp@intel.com> wrote:
> >
> > Hi Alexandre,
> >
> > kernel test robot noticed the following build warnings:
> >
> > [auto build test WARNING on arnd-asm-generic/master]
> > [also build test WARNING on robh/for-next tip/locking/core linus/master v6.12-rc6]
> > [cannot apply to next-20241101]
> > [If your patch is applied to the wrong git tree, kindly drop us a note.
> > And when submitting patch, we suggest to use '--base' as documented in
> > https://git-scm.com/docs/git-format-patch#_base_tree_information]
> >
> > url:    https://github.com/intel-lab-lkp/linux/commits/Alexandre-Ghiti/riscv-Move-cpufeature-h-macros-into-their-own-header/20241103-230614
> > base:   https://git.kernel.org/pub/scm/linux/kernel/git/arnd/asm-generic.git master
> > patch link:    https://lore.kernel.org/r/20241103145153.105097-14-alexghiti%40rivosinc.com
> > patch subject: [PATCH v6 13/13] riscv: Add qspinlock support
> > compiler: clang version 19.1.3 (https://github.com/llvm/llvm-project ab51eccf88f5321e7c60591c5546b254b6afab99)
> >
> > If you fix the issue in a separate patch/commit (i.e. not just a new version of
> > the same patch/commit), kindly add following tags
> > | Reported-by: kernel test robot <lkp@intel.com>
> > | Closes: https://lore.kernel.org/oe-kbuild-all/202411041609.gxjI2dsw-lkp@intel.com/
> >
> > includecheck warnings: (new ones prefixed by >>)
> > >> arch/riscv/include/asm/spinlock.h: asm/ticket_spinlock.h is included more than once.
> > >> arch/riscv/include/asm/spinlock.h: asm/qspinlock.h is included more than once.
>
> Yes, but that's in an #ifdef/#elif/#else clause, so there's nothing to do here!

Thanks for the info, we will fix the bot. Sorry for the false-positive report.

> >
> > vim +10 arch/riscv/include/asm/spinlock.h
> >
> >      8
> >      9	#define __no_arch_spinlock_redefine
> >   > 10	#include <asm/ticket_spinlock.h>
> >     11	#include <asm/qspinlock.h>
> >     12	#include <asm/jump_label.h>
> >     13
> >     14	/*
> >     15	 * TODO: Use an alternative instead of a static key when we are able to parse
> >     16	 * the extensions string earlier in the boot process.
> >     17	 */
> >     18	DECLARE_STATIC_KEY_TRUE(qspinlock_key);
> >     19
> >     20	#define SPINLOCK_BASE_DECLARE(op, type, type_lock)		\
> >     21	static __always_inline type arch_spin_##op(type_lock lock)	\
> >     22	{								\
> >     23		if (static_branch_unlikely(&qspinlock_key))		\
> >     24			return queued_spin_##op(lock);			\
> >     25		return ticket_spin_##op(lock);				\
> >     26	}
> >     27
> >     28	SPINLOCK_BASE_DECLARE(lock, void, arch_spinlock_t *)
> >     29	SPINLOCK_BASE_DECLARE(unlock, void, arch_spinlock_t *)
> >     30	SPINLOCK_BASE_DECLARE(is_locked, int, arch_spinlock_t *)
> >     31	SPINLOCK_BASE_DECLARE(is_contended, int, arch_spinlock_t *)
> >     32	SPINLOCK_BASE_DECLARE(trylock, bool, arch_spinlock_t *)
> >     33	SPINLOCK_BASE_DECLARE(value_unlocked, int, arch_spinlock_t)
> >     34
> >     35	#elif defined(CONFIG_RISCV_QUEUED_SPINLOCKS)
> >     36
> >     37	#include <asm/qspinlock.h>
> >     38
> >     39	#else
> >     40
> >   > 41	#include <asm/ticket_spinlock.h>
> >     42
> >
> > --
> > 0-DAY CI Kernel Test Service
> > https://github.com/intel/lkp-tests/wiki
>
On Sun, Nov 03, 2024 at 03:51:53PM +0100, Alexandre Ghiti wrote:
> In order to produce a generic kernel, a user can select
> CONFIG_COMBO_SPINLOCKS which will fallback at runtime to the ticket
> spinlock implementation if Zabha or Ziccrse are not present.
>
> Note that we can't use alternatives here because the discovery of
> extensions is done too late and we need to start with the qspinlock
> implementation because the ticket spinlock implementation would pollute
> the spinlock value, so let's use static keys.

I think the static key toggling takes a mutex (jump_label_lock()) which
can take a spinlock (lock->wait_lock) internally, so I don't grok how
this works:

> +static void __init riscv_spinlock_init(void)
> +{
> +	char *using_ext = NULL;
> +
> +	if (IS_ENABLED(CONFIG_RISCV_TICKET_SPINLOCKS)) {
> +		pr_info("Ticket spinlock: enabled\n");
> +		return;
> +	}
> +
> +	if (IS_ENABLED(CONFIG_RISCV_ISA_ZABHA) &&
> +	    IS_ENABLED(CONFIG_RISCV_ISA_ZACAS) &&
> +	    riscv_isa_extension_available(NULL, ZABHA) &&
> +	    riscv_isa_extension_available(NULL, ZACAS)) {
> +		using_ext = "using Zabha";
> +	} else if (riscv_isa_extension_available(NULL, ZICCRSE)) {
> +		using_ext = "using Ziccrse";
> +	}
> +#if defined(CONFIG_RISCV_COMBO_SPINLOCKS)
> +	else {
> +		static_branch_disable(&qspinlock_key);
> +		pr_info("Ticket spinlock: enabled\n");
> +		return;
> +	}
> +#endif

i.e. we've potentially already used the qspinlock at this point.

Will
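For readers unfamiliar with the jump-label internals, the dependency being described is roughly the following call chain (paraphrased as an illustration; the exact path is an assumption, not something quoted from this thread or checked against a particular kernel version):

/*
 *   static_branch_disable(&qspinlock_key)
 *     -> static_key_disable()
 *          -> jump_label_lock()                  // mutex_lock(&jump_label_mutex)
 *               -> __mutex_lock_slowpath()       // only if the mutex is contended
 *                    -> raw_spin_lock(&lock->wait_lock)
 *                         -> arch_spin_lock()    // the very primitive being reconfigured
 */

In other words, disabling the static key can itself end up inside arch_spin_lock(), which is why the ordering question below matters.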
On Tue, Nov 12, 2024 at 12:43 AM Will Deacon <will@kernel.org> wrote:
>
> On Sun, Nov 03, 2024 at 03:51:53PM +0100, Alexandre Ghiti wrote:
> > In order to produce a generic kernel, a user can select
> > CONFIG_COMBO_SPINLOCKS which will fallback at runtime to the ticket
> > spinlock implementation if Zabha or Ziccrse are not present.
> >
> > Note that we can't use alternatives here because the discovery of
> > extensions is done too late and we need to start with the qspinlock
> > implementation because the ticket spinlock implementation would pollute
> > the spinlock value, so let's use static keys.
>
> I think the static key toggling takes a mutex (jump_label_lock()) which
> can take a spinlock (lock->wait_lock) internally, so I don't grok how
> this works:
>
> > +static void __init riscv_spinlock_init(void)
> > +{
> > +	char *using_ext = NULL;
> > +
> > +	if (IS_ENABLED(CONFIG_RISCV_TICKET_SPINLOCKS)) {
> > +		pr_info("Ticket spinlock: enabled\n");
> > +		return;
> > +	}
> > +
> > +	if (IS_ENABLED(CONFIG_RISCV_ISA_ZABHA) &&
> > +	    IS_ENABLED(CONFIG_RISCV_ISA_ZACAS) &&
> > +	    riscv_isa_extension_available(NULL, ZABHA) &&
> > +	    riscv_isa_extension_available(NULL, ZACAS)) {
> > +		using_ext = "using Zabha";
> > +	} else if (riscv_isa_extension_available(NULL, ZICCRSE)) {
> > +		using_ext = "using Ziccrse";
> > +	}
> > +#if defined(CONFIG_RISCV_COMBO_SPINLOCKS)
> > +	else {
> > +		static_branch_disable(&qspinlock_key);
> > +		pr_info("Ticket spinlock: enabled\n");
> > +		return;
> > +	}
> > +#endif
>
> i.e. we've potentially already used the qspinlock at this point.

Yes, the qspinlock may already have been used at this point. But
riscv_spinlock_init() is called with IRQs disabled and SMP still off, so any
qspinlock taken here only goes through the test-and-set fast path.

The qspinlock is also a "clean" implementation: after queued_spin_unlock()
the lock value is back to zero, whereas the ticket lock leaves a dirty
(non-zero) value behind. So we start with the qspinlock and switch to the
ticket lock before IRQs and SMP are brought up.

>
> Will
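To make the "clean" versus "dirty" lock-value argument concrete, here is a condensed sketch of the two generic fast paths (paraphrased from the asm-generic qspinlock.h/ticket_spinlock.h headers, with slow paths and barrier commentary elided; treat the exact details as assumptions rather than verbatim upstream code):

/* qspinlock: an uncontended lock is a 0 -> _Q_LOCKED_VAL cmpxchg and unlock
 * stores 0 again, so the lock word ends up back at zero. */
static __always_inline void queued_spin_lock(struct qspinlock *lock)
{
	int val = 0;

	if (likely(atomic_try_cmpxchg_acquire(&lock->val, &val, _Q_LOCKED_VAL)))
		return;
	queued_spin_lock_slowpath(lock, val);
}

static __always_inline void queued_spin_unlock(struct qspinlock *lock)
{
	smp_store_release(&lock->locked, 0);	/* word is zero again when idle */
}

/* ticket lock: lock bumps the "next" half and unlock bumps the "owner" half,
 * so after one lock/unlock cycle the word is non-zero, e.g. 0x00010001. */
static __always_inline void ticket_spin_lock(arch_spinlock_t *lock)
{
	u32 val = atomic_fetch_add(1 << 16, lock);	/* next++ */
	u16 ticket = val >> 16;

	if (ticket == (u16)val)
		return;
	/* ... wait until the owner half reaches our ticket ... */
}

static __always_inline void ticket_spin_unlock(arch_spinlock_t *lock)
{
	u16 *ptr = (u16 *)lock + IS_ENABLED(CONFIG_CPU_BIG_ENDIAN);
	u32 val = atomic_read(lock);

	smp_store_release(ptr, (u16)val + 1);		/* owner++ */
}

This is why the combo configuration has to boot on the qspinlock and can only be downgraded to the ticket lock while every lock word is still guaranteed to be zero; switching the other way would hand the qspinlock words already "polluted" by ticket counters.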
On Tue, Nov 12, 2024 at 09:49:15AM +0800, Guo Ren wrote:
> On Tue, Nov 12, 2024 at 12:43 AM Will Deacon <will@kernel.org> wrote:
> >
> > On Sun, Nov 03, 2024 at 03:51:53PM +0100, Alexandre Ghiti wrote:
> > > In order to produce a generic kernel, a user can select
> > > CONFIG_COMBO_SPINLOCKS which will fallback at runtime to the ticket
> > > spinlock implementation if Zabha or Ziccrse are not present.
> > >
> > > Note that we can't use alternatives here because the discovery of
> > > extensions is done too late and we need to start with the qspinlock
> > > implementation because the ticket spinlock implementation would pollute
> > > the spinlock value, so let's use static keys.
> >
> > I think the static key toggling takes a mutex (jump_label_lock()) which
> > can take a spinlock (lock->wait_lock) internally, so I don't grok how
> > this works:
> >
> > > +static void __init riscv_spinlock_init(void)
> > > +{
> > > +	char *using_ext = NULL;
> > > +
> > > +	if (IS_ENABLED(CONFIG_RISCV_TICKET_SPINLOCKS)) {
> > > +		pr_info("Ticket spinlock: enabled\n");
> > > +		return;
> > > +	}
> > > +
> > > +	if (IS_ENABLED(CONFIG_RISCV_ISA_ZABHA) &&
> > > +	    IS_ENABLED(CONFIG_RISCV_ISA_ZACAS) &&
> > > +	    riscv_isa_extension_available(NULL, ZABHA) &&
> > > +	    riscv_isa_extension_available(NULL, ZACAS)) {
> > > +		using_ext = "using Zabha";
> > > +	} else if (riscv_isa_extension_available(NULL, ZICCRSE)) {
> > > +		using_ext = "using Ziccrse";
> > > +	}
> > > +#if defined(CONFIG_RISCV_COMBO_SPINLOCKS)
> > > +	else {
> > > +		static_branch_disable(&qspinlock_key);
> > > +		pr_info("Ticket spinlock: enabled\n");
> > > +		return;
> > > +	}
> > > +#endif
> >
> > i.e. we've potentially already used the qspinlock at this point.
> Yes, the qspinlock may already have been used at this point. But
> riscv_spinlock_init() is called with IRQs disabled and SMP still off, so any
> qspinlock taken here only goes through the test-and-set fast path.

That's... horrendous.

Will
diff --git a/Documentation/features/locking/queued-spinlocks/arch-support.txt b/Documentation/features/locking/queued-spinlocks/arch-support.txt index 22f2990392ff..cf26042480e2 100644 --- a/Documentation/features/locking/queued-spinlocks/arch-support.txt +++ b/Documentation/features/locking/queued-spinlocks/arch-support.txt @@ -20,7 +20,7 @@ | openrisc: | ok | | parisc: | TODO | | powerpc: | ok | - | riscv: | TODO | + | riscv: | ok | | s390: | TODO | | sh: | TODO | | sparc: | ok | diff --git a/arch/riscv/Kconfig b/arch/riscv/Kconfig index 093ee6537331..f5698ecc5ccc 100644 --- a/arch/riscv/Kconfig +++ b/arch/riscv/Kconfig @@ -82,6 +82,7 @@ config RISCV select ARCH_WANT_OPTIMIZE_HUGETLB_VMEMMAP select ARCH_WANTS_NO_INSTR select ARCH_WANTS_THP_SWAP if HAVE_ARCH_TRANSPARENT_HUGEPAGE + select ARCH_WEAK_RELEASE_ACQUIRE if ARCH_USE_QUEUED_SPINLOCKS select BINFMT_FLAT_NO_DATA_START_OFFSET if !MMU select BUILDTIME_TABLE_SORT if MMU select CLINT_TIMER if RISCV_M_MODE @@ -507,6 +508,39 @@ config NODES_SHIFT Specify the maximum number of NUMA Nodes available on the target system. Increases memory reserved to accommodate various tables. +choice + prompt "RISC-V spinlock type" + default RISCV_COMBO_SPINLOCKS + +config RISCV_TICKET_SPINLOCKS + bool "Using ticket spinlock" + +config RISCV_QUEUED_SPINLOCKS + bool "Using queued spinlock" + depends on SMP && MMU && NONPORTABLE + select ARCH_USE_QUEUED_SPINLOCKS + help + The queued spinlock implementation requires the forward progress + guarantee of cmpxchg()/xchg() atomic operations: CAS with Zabha or + LR/SC with Ziccrse provide such guarantee. + + Select this if and only if Zabha or Ziccrse is available on your + platform, RISCV_QUEUED_SPINLOCKS must not be selected for platforms + without one of those extensions. + + If unsure, select RISCV_COMBO_SPINLOCKS, which will use qspinlocks + when supported and otherwise ticket spinlocks. + +config RISCV_COMBO_SPINLOCKS + bool "Using combo spinlock" + depends on SMP && MMU + select ARCH_USE_QUEUED_SPINLOCKS + help + Embed both queued spinlock and ticket lock so that the spinlock + implementation can be chosen at runtime. + +endchoice + config RISCV_ALTERNATIVE bool depends on !XIP_KERNEL diff --git a/arch/riscv/include/asm/Kbuild b/arch/riscv/include/asm/Kbuild index 1461af12da6e..de13d5a234f8 100644 --- a/arch/riscv/include/asm/Kbuild +++ b/arch/riscv/include/asm/Kbuild @@ -6,10 +6,12 @@ generic-y += early_ioremap.h generic-y += flat.h generic-y += kvm_para.h generic-y += mmzone.h +generic-y += mcs_spinlock.h generic-y += parport.h -generic-y += spinlock.h generic-y += spinlock_types.h +generic-y += ticket_spinlock.h generic-y += qrwlock.h generic-y += qrwlock_types.h +generic-y += qspinlock.h generic-y += user.h generic-y += vmlinux.lds.h diff --git a/arch/riscv/include/asm/spinlock.h b/arch/riscv/include/asm/spinlock.h new file mode 100644 index 000000000000..e5121b89acea --- /dev/null +++ b/arch/riscv/include/asm/spinlock.h @@ -0,0 +1,47 @@ +/* SPDX-License-Identifier: GPL-2.0 */ + +#ifndef __ASM_RISCV_SPINLOCK_H +#define __ASM_RISCV_SPINLOCK_H + +#ifdef CONFIG_RISCV_COMBO_SPINLOCKS +#define _Q_PENDING_LOOPS (1 << 9) + +#define __no_arch_spinlock_redefine +#include <asm/ticket_spinlock.h> +#include <asm/qspinlock.h> +#include <asm/jump_label.h> + +/* + * TODO: Use an alternative instead of a static key when we are able to parse + * the extensions string earlier in the boot process. 
+ */ +DECLARE_STATIC_KEY_TRUE(qspinlock_key); + +#define SPINLOCK_BASE_DECLARE(op, type, type_lock) \ +static __always_inline type arch_spin_##op(type_lock lock) \ +{ \ + if (static_branch_unlikely(&qspinlock_key)) \ + return queued_spin_##op(lock); \ + return ticket_spin_##op(lock); \ +} + +SPINLOCK_BASE_DECLARE(lock, void, arch_spinlock_t *) +SPINLOCK_BASE_DECLARE(unlock, void, arch_spinlock_t *) +SPINLOCK_BASE_DECLARE(is_locked, int, arch_spinlock_t *) +SPINLOCK_BASE_DECLARE(is_contended, int, arch_spinlock_t *) +SPINLOCK_BASE_DECLARE(trylock, bool, arch_spinlock_t *) +SPINLOCK_BASE_DECLARE(value_unlocked, int, arch_spinlock_t) + +#elif defined(CONFIG_RISCV_QUEUED_SPINLOCKS) + +#include <asm/qspinlock.h> + +#else + +#include <asm/ticket_spinlock.h> + +#endif + +#include <asm/qrwlock.h> + +#endif /* __ASM_RISCV_SPINLOCK_H */ diff --git a/arch/riscv/kernel/setup.c b/arch/riscv/kernel/setup.c index a2cde65b69e9..438e4f6ad2ad 100644 --- a/arch/riscv/kernel/setup.c +++ b/arch/riscv/kernel/setup.c @@ -244,6 +244,42 @@ static void __init parse_dtb(void) #endif } +#if defined(CONFIG_RISCV_COMBO_SPINLOCKS) +DEFINE_STATIC_KEY_TRUE(qspinlock_key); +EXPORT_SYMBOL(qspinlock_key); +#endif + +static void __init riscv_spinlock_init(void) +{ + char *using_ext = NULL; + + if (IS_ENABLED(CONFIG_RISCV_TICKET_SPINLOCKS)) { + pr_info("Ticket spinlock: enabled\n"); + return; + } + + if (IS_ENABLED(CONFIG_RISCV_ISA_ZABHA) && + IS_ENABLED(CONFIG_RISCV_ISA_ZACAS) && + riscv_isa_extension_available(NULL, ZABHA) && + riscv_isa_extension_available(NULL, ZACAS)) { + using_ext = "using Zabha"; + } else if (riscv_isa_extension_available(NULL, ZICCRSE)) { + using_ext = "using Ziccrse"; + } +#if defined(CONFIG_RISCV_COMBO_SPINLOCKS) + else { + static_branch_disable(&qspinlock_key); + pr_info("Ticket spinlock: enabled\n"); + return; + } +#endif + + if (!using_ext) + pr_err("Queued spinlock without Zabha or Ziccrse"); + else + pr_info("Queued spinlock %s: enabled\n", using_ext); +} + extern void __init init_rt_signal_env(void); void __init setup_arch(char **cmdline_p) @@ -297,6 +333,7 @@ void __init setup_arch(char **cmdline_p) riscv_set_dma_cache_alignment(); riscv_user_isa_enable(); + riscv_spinlock_init(); } bool arch_cpu_is_hotpluggable(int cpu) diff --git a/include/asm-generic/qspinlock.h b/include/asm-generic/qspinlock.h index 0655aa5b57b2..bf47cca2c375 100644 --- a/include/asm-generic/qspinlock.h +++ b/include/asm-generic/qspinlock.h @@ -136,6 +136,7 @@ static __always_inline bool virt_spin_lock(struct qspinlock *lock) } #endif +#ifndef __no_arch_spinlock_redefine /* * Remapping spinlock architecture specific functions to the corresponding * queued spinlock functions. @@ -146,5 +147,6 @@ static __always_inline bool virt_spin_lock(struct qspinlock *lock) #define arch_spin_lock(l) queued_spin_lock(l) #define arch_spin_trylock(l) queued_spin_trylock(l) #define arch_spin_unlock(l) queued_spin_unlock(l) +#endif #endif /* __ASM_GENERIC_QSPINLOCK_H */ diff --git a/include/asm-generic/ticket_spinlock.h b/include/asm-generic/ticket_spinlock.h index cfcff22b37b3..325779970d8a 100644 --- a/include/asm-generic/ticket_spinlock.h +++ b/include/asm-generic/ticket_spinlock.h @@ -89,6 +89,7 @@ static __always_inline int ticket_spin_is_contended(arch_spinlock_t *lock) return (s16)((val >> 16) - (val & 0xffff)) > 1; } +#ifndef __no_arch_spinlock_redefine /* * Remapping spinlock architecture specific functions to the corresponding * ticket spinlock functions. 
@@ -99,5 +100,6 @@ static __always_inline int ticket_spin_is_contended(arch_spinlock_t *lock) #define arch_spin_lock(l) ticket_spin_lock(l) #define arch_spin_trylock(l) ticket_spin_trylock(l) #define arch_spin_unlock(l) ticket_spin_unlock(l) +#endif #endif /* __ASM_GENERIC_TICKET_SPINLOCK_H */
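For reference, the SPINLOCK_BASE_DECLARE() macro added by asm/spinlock.h above expands, for SPINLOCK_BASE_DECLARE(lock, void, arch_spinlock_t *), to roughly the following (hand-expanded here for illustration):

static __always_inline void arch_spin_lock(arch_spinlock_t *lock)
{
	if (static_branch_unlikely(&qspinlock_key))
		return queued_spin_lock(lock);	/* Zabha+Zacas or Ziccrse detected */
	return ticket_spin_lock(lock);		/* fallback once the key is disabled */
}

Which branch is taken is governed solely by qspinlock_key, the static key that riscv_spinlock_init() either leaves enabled or disables once during early boot.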