From patchwork Fri Dec 27 01:10:09 2024
X-Patchwork-Submitter: Guo Ren
X-Patchwork-Id: 13921493
From: guoren@kernel.org
To: paul.walmsley@sifive.com, palmer@dabbelt.com, guoren@kernel.org,
	bjorn@rivosinc.com, conor@kernel.org, leobras@redhat.com,
	peterz@infradead.org, parri.andrea@gmail.com, will@kernel.org,
	longman@redhat.com, boqun.feng@gmail.com, arnd@arndb.de,
	alexghiti@rivosinc.com, ajones@ventanamicro.com,
	rkrcmar@ventanamicro.com, atishp@rivosinc.com
Cc: linux-riscv@lists.infradead.org, linux-kernel@vger.kernel.org,
	Guo Ren
Subject: [RFC PATCH V2 2/4] RISC-V: paravirt: Add pvqspinlock frontend
Date: Thu, 26 Dec 2024 20:10:09 -0500
Message-Id: <20241227011011.2331381-3-guoren@kernel.org>
X-Mailer: git-send-email 2.40.1
In-Reply-To: <20241227011011.2331381-1-guoren@kernel.org>
References: <20241227011011.2331381-1-guoren@kernel.org>

From: Guo Ren

Add a virtualization-friendly unfair qspinlock frontend that halts the
virtual CPU rather than spinning. Use static_call to switch between:

  native_queued_spin_lock_slowpath()
  __pv_queued_spin_lock_slowpath()
  native_queued_spin_unlock()
  __pv_queued_spin_unlock()

Add the pv_wait & pv_kick implementations.

Reviewed-by: Leonardo Bras
Signed-off-by: Guo Ren
Signed-off-by: Guo Ren
---
 arch/riscv/Kconfig                          | 12 ++++
 arch/riscv/include/asm/Kbuild               |  1 -
 arch/riscv/include/asm/qspinlock.h          | 35 +++++++++++
 arch/riscv/include/asm/qspinlock_paravirt.h | 28 +++++++++
 arch/riscv/kernel/Makefile                  |  1 +
 arch/riscv/kernel/qspinlock_paravirt.c      | 67 +++++++++++++++++++++
 arch/riscv/kernel/setup.c                   |  4 ++
 7 files changed, 147 insertions(+), 1 deletion(-)
 create mode 100644 arch/riscv/include/asm/qspinlock.h
 create mode 100644 arch/riscv/include/asm/qspinlock_paravirt.h
 create mode 100644 arch/riscv/kernel/qspinlock_paravirt.c

diff --git a/arch/riscv/Kconfig b/arch/riscv/Kconfig
index d4a7ca0388c0..e241ac39ecd6 100644
--- a/arch/riscv/Kconfig
+++ b/arch/riscv/Kconfig
@@ -1071,6 +1071,18 @@ config PARAVIRT_TIME_ACCOUNTING
 
 	  If in doubt, say N here.
 
+config PARAVIRT_SPINLOCKS
+	bool "Paravirtualization layer for spinlocks"
+	depends on QUEUED_SPINLOCKS
+	default y
+	help
+	  Paravirtualized spinlocks allow an unfair qspinlock to replace the
+	  test-set kvm-guest virt spinlock implementation with something
+	  virtualization-friendly, for example, halting the virtual CPU
+	  rather than spinning.
+
+	  If you are unsure how to answer this question, answer Y.
+
 config RELOCATABLE
 	bool "Build a relocatable kernel"
 	depends on MMU && 64BIT && !XIP_KERNEL
diff --git a/arch/riscv/include/asm/Kbuild b/arch/riscv/include/asm/Kbuild
index de13d5a234f8..c726330d2b9f 100644
--- a/arch/riscv/include/asm/Kbuild
+++ b/arch/riscv/include/asm/Kbuild
@@ -12,6 +12,5 @@ generic-y += spinlock_types.h
 generic-y += ticket_spinlock.h
 generic-y += qrwlock.h
 generic-y += qrwlock_types.h
-generic-y += qspinlock.h
 generic-y += user.h
 generic-y += vmlinux.lds.h
diff --git a/arch/riscv/include/asm/qspinlock.h b/arch/riscv/include/asm/qspinlock.h
new file mode 100644
index 000000000000..1d9f32334ff1
--- /dev/null
+++ b/arch/riscv/include/asm/qspinlock.h
@@ -0,0 +1,35 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (c), 2024 Alibaba
+ * Authors:
+ *	Guo Ren
+ */
+
+#ifndef _ASM_RISCV_QSPINLOCK_H
+#define _ASM_RISCV_QSPINLOCK_H
+
+#ifdef CONFIG_PARAVIRT_SPINLOCKS
+#include <asm/qspinlock_paravirt.h>
+
+/* How long a lock should spin before we consider blocking */
+#define SPIN_THRESHOLD		(1 << 15)
+
+void native_queued_spin_lock_slowpath(struct qspinlock *lock, u32 val);
+void __pv_init_lock_hash(void);
+void __pv_queued_spin_lock_slowpath(struct qspinlock *lock, u32 val);
+
+static inline void queued_spin_lock_slowpath(struct qspinlock *lock, u32 val)
+{
+	static_call(pv_queued_spin_lock_slowpath)(lock, val);
+}
+
+#define queued_spin_unlock	queued_spin_unlock
+static inline void queued_spin_unlock(struct qspinlock *lock)
+{
+	static_call(pv_queued_spin_unlock)(lock);
+}
+#endif /* CONFIG_PARAVIRT_SPINLOCKS */
+
+#include <asm-generic/qspinlock.h>
+
+#endif /* _ASM_RISCV_QSPINLOCK_H */
diff --git a/arch/riscv/include/asm/qspinlock_paravirt.h b/arch/riscv/include/asm/qspinlock_paravirt.h
new file mode 100644
index 000000000000..a365203dd782
--- /dev/null
+++ b/arch/riscv/include/asm/qspinlock_paravirt.h
@@ -0,0 +1,28 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (c), 2024 Alibaba Cloud
+ * Authors:
+ *	Guo Ren
+ */
+
+#ifndef _ASM_RISCV_QSPINLOCK_PARAVIRT_H
+#define _ASM_RISCV_QSPINLOCK_PARAVIRT_H
+
+void pv_wait(u8 *ptr, u8 val);
+void pv_kick(int cpu);
+
+void dummy_queued_spin_lock_slowpath(struct qspinlock *lock, u32 val);
+void dummy_queued_spin_unlock(struct qspinlock *lock);
+
+DECLARE_STATIC_CALL(pv_queued_spin_lock_slowpath, dummy_queued_spin_lock_slowpath);
+DECLARE_STATIC_CALL(pv_queued_spin_unlock, dummy_queued_spin_unlock);
+
+void __init pv_qspinlock_init(void);
+
+void __pv_queued_spin_unlock_slowpath(struct qspinlock *lock, u8 locked);
+
+bool pv_is_native_spin_unlock(void);
+
+void __pv_queued_spin_unlock(struct qspinlock *lock);
+
+#endif /* _ASM_RISCV_QSPINLOCK_PARAVIRT_H */
diff --git a/arch/riscv/kernel/Makefile b/arch/riscv/kernel/Makefile
index 063d1faf5a53..79f823e0e57d 100644
--- a/arch/riscv/kernel/Makefile
+++ b/arch/riscv/kernel/Makefile
@@ -123,3 +123,4 @@ obj-$(CONFIG_COMPAT)	+= compat_vdso/
 obj-$(CONFIG_64BIT)	+= pi/
 obj-$(CONFIG_ACPI)	+= acpi.o
 obj-$(CONFIG_ACPI_NUMA)	+= acpi_numa.o
+obj-$(CONFIG_PARAVIRT_SPINLOCKS) += qspinlock_paravirt.o
diff --git a/arch/riscv/kernel/qspinlock_paravirt.c b/arch/riscv/kernel/qspinlock_paravirt.c
new file mode 100644
index 000000000000..4ec4765f57f3
--- /dev/null
+++ b/arch/riscv/kernel/qspinlock_paravirt.c
@@ -0,0 +1,67 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright (c), 2024 Alibaba Cloud
+ * Authors:
+ *	Guo Ren
+ */
+
+#include <linux/static_call.h>
+#include <asm/qspinlock_paravirt.h>
+#include <asm/sbi.h>
+
+void pv_kick(int cpu)
+{
+	sbi_ecall(SBI_EXT_PVLOCK, SBI_EXT_PVLOCK_KICK_CPU,
+		  cpuid_to_hartid_map(cpu), 0, 0, 0, 0, 0);
+	return;
+}
+
+void pv_wait(u8 *ptr, u8 val)
+{
+	unsigned long flags;
+
+	if (in_nmi())
+		return;
+
+	local_irq_save(flags);
+	if (READ_ONCE(*ptr) != val)
+		goto out;
+
+	wait_for_interrupt();
+out:
+	local_irq_restore(flags);
+}
+
+static void native_queued_spin_unlock(struct qspinlock *lock)
+{
+	smp_store_release(&lock->locked, 0);
+}
+
+DEFINE_STATIC_CALL(pv_queued_spin_lock_slowpath, native_queued_spin_lock_slowpath);
+EXPORT_STATIC_CALL(pv_queued_spin_lock_slowpath);
+
+DEFINE_STATIC_CALL(pv_queued_spin_unlock, native_queued_spin_unlock);
+EXPORT_STATIC_CALL(pv_queued_spin_unlock);
+
+void __init pv_qspinlock_init(void)
+{
+	if (num_possible_cpus() == 1)
+		return;
+
+	if (!sbi_probe_extension(SBI_EXT_PVLOCK))
+		return;
+
+	pr_info("PV qspinlocks enabled\n");
+	__pv_init_lock_hash();
+
+	static_call_update(pv_queued_spin_lock_slowpath, __pv_queued_spin_lock_slowpath);
+	static_call_update(pv_queued_spin_unlock, __pv_queued_spin_unlock);
+}
+
+bool pv_is_native_spin_unlock(void)
+{
+	if (static_call_query(pv_queued_spin_unlock) == native_queued_spin_unlock)
+		return true;
+	else
+		return false;
+}
diff --git a/arch/riscv/kernel/setup.c b/arch/riscv/kernel/setup.c
index 45010e71df86..8b51ff5c7300 100644
--- a/arch/riscv/kernel/setup.c
+++ b/arch/riscv/kernel/setup.c
@@ -278,6 +278,10 @@ static void __init riscv_spinlock_init(void)
 		pr_err("Queued spinlock without Zabha or Ziccrse");
 	else
 		pr_info("Queued spinlock %s: enabled\n", using_ext);
+
+#ifdef CONFIG_PARAVIRT_SPINLOCKS
+	pv_qspinlock_init();
+#endif
 }
 
 extern void __init init_rt_signal_env(void);
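
The subtle part of pv_wait() above is lost-wakeup avoidance: it re-checks
the watched byte with READ_ONCE(*ptr) != val after disabling interrupts,
so a pv_kick() that already changed the byte makes the waiter return
instead of halting. Below is a minimal stand-alone model of that
check-then-halt protocol; it is hypothetical demo code, not part of this
patch, and a pthread mutex/condvar merely stands in for the hypervisor
blocking and unblocking a vCPU:

/*
 * Hypothetical user-space model of the pv_wait()/pv_kick() protocol.
 * Build with: gcc -pthread model.c
 */
#include <pthread.h>
#include <stdint.h>
#include <stdio.h>

static pthread_mutex_t halt_lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t halt_cond = PTHREAD_COND_INITIALIZER;
static uint8_t lock_state;	/* the byte a waiter watches */

static void model_pv_wait(uint8_t *ptr, uint8_t val)
{
	pthread_mutex_lock(&halt_lock);	/* models local_irq_save() */
	/*
	 * Models READ_ONCE(*ptr) != val: if a kick already changed the
	 * byte, skip the halt so the wakeup is never lost.
	 */
	if (*ptr == val)
		pthread_cond_wait(&halt_cond, &halt_lock); /* models WFI */
	pthread_mutex_unlock(&halt_lock);
}

static void model_pv_kick(void)
{
	pthread_mutex_lock(&halt_lock);
	lock_state = 1;			 /* publish the state change */
	pthread_cond_signal(&halt_cond); /* models SBI_EXT_PVLOCK_KICK_CPU */
	pthread_mutex_unlock(&halt_lock);
}

static void *waiter(void *arg)
{
	(void)arg;
	model_pv_wait(&lock_state, 0);
	printf("waiter resumed, lock_state=%u\n", lock_state);
	return NULL;
}

int main(void)
{
	pthread_t t;

	pthread_create(&t, NULL, waiter, NULL);
	model_pv_kick();	/* safe whether or not the waiter halted yet */
	pthread_join(t, NULL);
	return 0;
}

Whichever interleaving occurs, the waiter either halts and is signalled,
or observes lock_state != 0 and returns immediately, mirroring why
pv_wait() tolerates a kick arriving before the halt.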