From patchwork Sat Dec 23 04:29:05 2023
From: Andy Chiu <andy.chiu@sifive.com>
To: linux-riscv@lists.infradead.org, palmer@dabbelt.com
Cc: paul.walmsley@sifive.com, greentime.hu@sifive.com, guoren@linux.alibaba.com,
    bjorn@kernel.org, charlie@rivosinc.com, ardb@kernel.org, arnd@arndb.de,
    peterz@infradead.org, tglx@linutronix.de, ebiggers@kernel.org,
    Vincent Chen, Albert Ou, Heiko Stuebner, Baoquan He, Clément Léger,
    Guo Ren, Xiao Wang, Björn Töpel, Conor Dooley, Alexandre Ghiti,
    Sami Tolvanen, Sia Jee Heng, Evan Green, Jisheng Zhang
Subject: [v8, 01/10] riscv: Add support for kernel mode vector
Date: Sat, 23 Dec 2023 04:29:05 +0000
Message-Id: <20231223042914.18599-2-andy.chiu@sifive.com>
In-Reply-To: <20231223042914.18599-1-andy.chiu@sifive.com>
References: <20231223042914.18599-1-andy.chiu@sifive.com>

From: Greentime Hu <greentime.hu@sifive.com>

Add kernel_vector_begin() and kernel_vector_end() function declarations
and corresponding definitions in kernel_mode_vector.c. These are needed
to wrap uses of Vector in kernel mode.

Co-developed-by: Vincent Chen
Signed-off-by: Vincent Chen
Signed-off-by: Greentime Hu <greentime.hu@sifive.com>
Signed-off-by: Andy Chiu <andy.chiu@sifive.com>
---
Changelog v8:
 - Refactor unnecessary whitespace change (Eric)
Changelog v7:
 - Fix build failure for allmodconfig
Changelog v6:
 - Use 8 bits to track non-preemptible vector context to provide better
   WARN coverage.
Changelog v4:
 - Use kernel_v_flags and helpers to track vector context.
Changelog v3:
 - Reorder patch 1 to patch 3 to make use of {get,put}_cpu_vector_context
   later.
 - Export {get,put}_cpu_vector_context.
 - Save V context after disabling preemption. (Guo)
 - Fix a build failure. (Conor)
 - Remove irqs_disabled() check as it is not needed, fix styling. (Björn)
Changelog v2:
 - 's/kernel_rvv/kernel_vector' and return void in kernel_vector_begin
   (Conor)
 - Export may_use_simd to include/asm/simd.h
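For orientation, a minimal sketch of how a kernel-mode caller is expected
to use this API; do_vector_op() and do_scalar_op() are hypothetical
stand-ins, not part of this series:

#include <asm/simd.h>
#include <asm/vector.h>

/* Hypothetical caller: run a vector-accelerated routine in kernel mode. */
static void fast_op(void *dst, const void *src, size_t n)
{
	if (!may_use_simd()) {
		do_scalar_op(dst, src, n);	/* hypothetical scalar fallback */
		return;
	}

	kernel_vector_begin();		/* claim V; saves the task's V state */
	do_vector_op(dst, src, n);	/* hypothetical vector body */
	kernel_vector_end();		/* give V back; restores task state */
}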
---
 arch/riscv/include/asm/processor.h     | 17 ++++-
 arch/riscv/include/asm/simd.h          | 44 ++++++++++++
 arch/riscv/include/asm/vector.h        | 21 ++++++
 arch/riscv/kernel/Makefile             |  1 +
 arch/riscv/kernel/kernel_mode_vector.c | 95 ++++++++++++++++++++++++++
 arch/riscv/kernel/process.c            |  1 +
 6 files changed, 178 insertions(+), 1 deletion(-)
 create mode 100644 arch/riscv/include/asm/simd.h
 create mode 100644 arch/riscv/kernel/kernel_mode_vector.c

diff --git a/arch/riscv/include/asm/processor.h b/arch/riscv/include/asm/processor.h
index f19f861cda54..15781e2232e0 100644
--- a/arch/riscv/include/asm/processor.h
+++ b/arch/riscv/include/asm/processor.h
@@ -73,6 +73,20 @@ struct task_struct;
 struct pt_regs;
 
+/*
+ * We use a flag to track in-kernel Vector context. Currently the flag has
+ * the following meaning:
+ *
+ * - bits 0-7 indicate whether the in-kernel Vector context is active. The
+ *   activation of this state disables preemption. On a non-RT kernel, it
+ *   also disables bh. Currently only 0 and 1 are valid values for this
+ *   field. Other values are reserved for future uses.
+ */
+
+#define RISCV_KERNEL_MODE_V_MASK	0xff
+
+#define RISCV_KERNEL_MODE_V	0x1
+
 /* CPU-specific state of a task */
 struct thread_struct {
 	/* Callee-saved registers */
@@ -81,7 +95,8 @@ struct thread_struct {
 	unsigned long s[12];	/* s[0]: frame pointer */
 	struct __riscv_d_ext_state fstate;
 	unsigned long bad_cause;
-	unsigned long vstate_ctrl;
+	u32 riscv_v_flags;
+	u32 vstate_ctrl;
 	struct __riscv_v_ext_state vstate;
 	unsigned long align_ctl;
 };
diff --git a/arch/riscv/include/asm/simd.h b/arch/riscv/include/asm/simd.h
new file mode 100644
index 000000000000..3b603e47c5d8
--- /dev/null
+++ b/arch/riscv/include/asm/simd.h
@@ -0,0 +1,44 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ * Copyright (C) 2017 Linaro Ltd.
+ * Copyright (C) 2023 SiFive
+ */
+
+#ifndef __ASM_SIMD_H
+#define __ASM_SIMD_H
+
+#include <linux/compiler.h>
+#include <linux/irqflags.h>
+#include <linux/percpu.h>
+#include <linux/preempt.h>
+#include <linux/types.h>
+
+#include <asm/vector.h>
+
+#ifdef CONFIG_RISCV_ISA_V
+/*
+ * may_use_simd - whether it is allowable at this time to issue vector
+ *                instructions or access the vector register file
+ *
+ * Callers must not assume that the result remains true beyond the next
+ * preempt_enable() or return from softirq context.
+ */
+static __must_check inline bool may_use_simd(void)
+{
+	/*
+	 * RISCV_KERNEL_MODE_V is only set while preemption is disabled,
+	 * and is clear whenever preemption is enabled.
+	 */
+	return !in_hardirq() && !in_nmi() && !(riscv_v_ctx_cnt() & RISCV_KERNEL_MODE_V_MASK);
+}
+
+#else /* ! CONFIG_RISCV_ISA_V */
+
+static __must_check inline bool may_use_simd(void)
+{
+	return false;
+}
+
+#endif /* ! CONFIG_RISCV_ISA_V */
+
+#endif
diff --git a/arch/riscv/include/asm/vector.h b/arch/riscv/include/asm/vector.h
index 87aaef656257..6254830c0668 100644
--- a/arch/riscv/include/asm/vector.h
+++ b/arch/riscv/include/asm/vector.h
@@ -22,6 +22,27 @@ extern unsigned long riscv_v_vsize;
 int riscv_v_setup_vsize(void);
 bool riscv_v_first_use_handler(struct pt_regs *regs);
 
+void kernel_vector_begin(void);
+void kernel_vector_end(void);
+void get_cpu_vector_context(void);
+void put_cpu_vector_context(void);
+
+static inline void riscv_v_ctx_cnt_add(u32 offset)
+{
+	current->thread.riscv_v_flags += offset;
+	barrier();
+}
+
+static inline void riscv_v_ctx_cnt_sub(u32 offset)
+{
+	barrier();
+	current->thread.riscv_v_flags -= offset;
+}
+
+static inline u32 riscv_v_ctx_cnt(void)
+{
+	return READ_ONCE(current->thread.riscv_v_flags);
+}
+
 static __always_inline bool has_vector(void)
 {
diff --git a/arch/riscv/kernel/Makefile b/arch/riscv/kernel/Makefile
index fee22a3d1b53..8c58595696b3 100644
--- a/arch/riscv/kernel/Makefile
+++ b/arch/riscv/kernel/Makefile
@@ -63,6 +63,7 @@ obj-$(CONFIG_MMU) += vdso.o vdso/
 obj-$(CONFIG_RISCV_MISALIGNED)	+= traps_misaligned.o
 obj-$(CONFIG_FPU)		+= fpu.o
 obj-$(CONFIG_RISCV_ISA_V)	+= vector.o
+obj-$(CONFIG_RISCV_ISA_V)	+= kernel_mode_vector.o
 obj-$(CONFIG_SMP)		+= smpboot.o
 obj-$(CONFIG_SMP)		+= smp.o
 obj-$(CONFIG_SMP)		+= cpu_ops.o
diff --git a/arch/riscv/kernel/kernel_mode_vector.c b/arch/riscv/kernel/kernel_mode_vector.c
new file mode 100644
index 000000000000..105147c7d2da
--- /dev/null
+++ b/arch/riscv/kernel/kernel_mode_vector.c
@@ -0,0 +1,95 @@
+// SPDX-License-Identifier: GPL-2.0-or-later
+/*
+ * Copyright (C) 2012 ARM Ltd.
+ * Author: Catalin Marinas
+ * Copyright (C) 2017 Linaro Ltd.
+ * Copyright (C) 2021 SiFive
+ */
+#include <linux/compiler.h>
+#include <linux/irqflags.h>
+#include <linux/percpu.h>
+#include <linux/preempt.h>
+#include <linux/types.h>
+
+#include <asm/vector.h>
+#include <asm/switch_to.h>
+#include <asm/simd.h>
+
+/*
+ * Claim ownership of the CPU vector context for use by the calling context.
+ *
+ * The caller may freely manipulate the vector context metadata until
+ * put_cpu_vector_context() is called.
+ */
+void get_cpu_vector_context(void)
+{
+	preempt_disable();
+
+	WARN_ON((riscv_v_ctx_cnt() & RISCV_KERNEL_MODE_V_MASK) != 0);
+	riscv_v_ctx_cnt_add(RISCV_KERNEL_MODE_V);
+}
+
+/*
+ * Release the CPU vector context.
+ *
+ * Must be called from a context in which get_cpu_vector_context() was
+ * previously called, with no call to put_cpu_vector_context() in the
+ * meantime.
+ */
+void put_cpu_vector_context(void)
+{
+	WARN_ON((riscv_v_ctx_cnt() & RISCV_KERNEL_MODE_V_MASK) != RISCV_KERNEL_MODE_V);
+	riscv_v_ctx_cnt_sub(RISCV_KERNEL_MODE_V);
+
+	preempt_enable();
+}
+
+/*
+ * kernel_vector_begin(): obtain the CPU vector registers for use by the
+ * calling context
+ *
+ * Must not be called unless may_use_simd() returns true.
+ * Task context in the vector registers is saved back to memory as necessary.
+ *
+ * A matching call to kernel_vector_end() must be made before returning from
+ * the calling context.
+ *
+ * The caller may freely use the vector registers until kernel_vector_end() is
+ * called.
+ */
+void kernel_vector_begin(void)
+{
+	if (WARN_ON(!has_vector()))
+		return;
+
+	BUG_ON(!may_use_simd());
+
+	get_cpu_vector_context();
+
+	riscv_v_vstate_save(current, task_pt_regs(current));
+
+	riscv_v_enable();
+}
+EXPORT_SYMBOL_GPL(kernel_vector_begin);
+
+/*
+ * kernel_vector_end(): give the CPU vector registers back to the current task
+ *
+ * Must be called from a context in which kernel_vector_begin() was previously
+ * called, with no call to kernel_vector_end() in the meantime.
+ *
+ * The caller must not use the vector registers after this function is called,
+ * unless kernel_vector_begin() is called again in the meantime.
+ */
+void kernel_vector_end(void)
+{
+	if (WARN_ON(!has_vector()))
+		return;
+
+	riscv_v_vstate_restore(current, task_pt_regs(current));
+
+	riscv_v_disable();
+
+	put_cpu_vector_context();
+}
+EXPORT_SYMBOL_GPL(kernel_vector_end);
diff --git a/arch/riscv/kernel/process.c b/arch/riscv/kernel/process.c
index 4f21d970a129..4a1275db1146 100644
--- a/arch/riscv/kernel/process.c
+++ b/arch/riscv/kernel/process.c
@@ -221,6 +221,7 @@ int copy_thread(struct task_struct *p, const struct kernel_clone_args *args)
 		childregs->a0 = 0; /* Return value of fork() */
 		p->thread.s[0] = 0;
 	}
+	p->thread.riscv_v_flags = 0;
 	p->thread.ra = (unsigned long)ret_from_fork;
 	p->thread.sp = (unsigned long)childregs; /* kernel sp */
 	return 0;

From patchwork Sat Dec 23 04:29:06 2023
From: Andy Chiu <andy.chiu@sifive.com>
To: linux-riscv@lists.infradead.org, palmer@dabbelt.com
Cc: paul.walmsley@sifive.com, greentime.hu@sifive.com, guoren@linux.alibaba.com,
    bjorn@kernel.org, charlie@rivosinc.com, ardb@kernel.org, arnd@arndb.de,
    peterz@infradead.org, tglx@linutronix.de, ebiggers@kernel.org,
    Albert Ou, Vincent Chen, Conor Dooley
Subject: [v8, 02/10] riscv: vector: make Vector always available for softirq context
Date: Sat, 23 Dec 2023 04:29:06 +0000
Message-Id: <20231223042914.18599-3-andy.chiu@sifive.com>
In-Reply-To: <20231223042914.18599-1-andy.chiu@sifive.com>
References: <20231223042914.18599-1-andy.chiu@sifive.com>

The goal of this patch is to provide full support of Vector in kernel
softirq context, so that some of the crypto algorithms won't need
scalar fallbacks.

By disabling bottom halves in active kernel-mode Vector, softirq will
not be able to nest on top of any kernel-mode Vector. So, softirq
context is able to use Vector whenever it runs.

After this patch, Vector context cannot start with irqs disabled.
Otherwise local_bh_enable() may run in a wrong context.
Disabling bh is not enough for an RT kernel to prevent preemption, so
we must disable preemption, which also implies disabling bh on RT.

Related-to: commit 696207d4258b ("arm64/sve: Make kernel FPU protection RT friendly")
Related-to: commit 66c3ec5a7120 ("arm64: neon: Forbid when irqs are disabled")
Signed-off-by: Andy Chiu <andy.chiu@sifive.com>
---
Changelog v8:
 - refine comments, fix typos (Eric)
Changelog v4:
 - new patch since v4
---
 arch/riscv/include/asm/simd.h          |  6 +++++-
 arch/riscv/kernel/kernel_mode_vector.c | 14 ++++++++++++--
 2 files changed, 17 insertions(+), 3 deletions(-)
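To illustrate the guarantee this patch provides, here is a hedged sketch
of a softirq-context user (the handler and cipher steps are hypothetical):
since get_cpu_vector_context() now disables bh on non-RT kernels, a softirq
can never run on top of a half-claimed V context, so checking
may_use_simd() here is race-free on the local CPU.

/* Hypothetical softirq-context user, e.g. a crypto tasklet. */
static void cipher_tasklet_fn(struct tasklet_struct *t)
{
	if (may_use_simd()) {
		kernel_vector_begin();
		vector_cipher_round();	/* hypothetical vector path */
		kernel_vector_end();
	} else {
		scalar_cipher_round();	/* hypothetical scalar fallback */
	}
}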
diff --git a/arch/riscv/include/asm/simd.h b/arch/riscv/include/asm/simd.h
index 3b603e47c5d8..2f1e95ccb03c 100644
--- a/arch/riscv/include/asm/simd.h
+++ b/arch/riscv/include/asm/simd.h
@@ -28,8 +28,12 @@ static __must_check inline bool may_use_simd(void)
 	/*
 	 * RISCV_KERNEL_MODE_V is only set while preemption is disabled,
 	 * and is clear whenever preemption is enabled.
+	 *
+	 * Kernel-mode Vector temporarily disables bh. So we must not return
+	 * true on irqs_disabled(). Otherwise we would fail the lockdep check
+	 * calling local_bh_enable().
 	 */
-	return !in_hardirq() && !in_nmi() && !(riscv_v_ctx_cnt() & RISCV_KERNEL_MODE_V_MASK);
+	return !in_hardirq() && !in_nmi() && !irqs_disabled() && !(riscv_v_ctx_cnt() & RISCV_KERNEL_MODE_V_MASK);
 }
 
 #else /* ! CONFIG_RISCV_ISA_V */
diff --git a/arch/riscv/kernel/kernel_mode_vector.c b/arch/riscv/kernel/kernel_mode_vector.c
index 105147c7d2da..385d9b4d8cc6 100644
--- a/arch/riscv/kernel/kernel_mode_vector.c
+++ b/arch/riscv/kernel/kernel_mode_vector.c
@@ -23,7 +23,14 @@
  */
 void get_cpu_vector_context(void)
 {
-	preempt_disable();
+	/*
+	 * Disable softirqs so it is impossible for softirqs to nest
+	 * get_cpu_vector_context() when the kernel is actively using Vector.
+	 */
+	if (!IS_ENABLED(CONFIG_PREEMPT_RT))
+		local_bh_disable();
+	else
+		preempt_disable();
 
 	WARN_ON((riscv_v_ctx_cnt() & RISCV_KERNEL_MODE_V_MASK) != 0);
 	riscv_v_ctx_cnt_add(RISCV_KERNEL_MODE_V);
@@ -41,7 +48,10 @@ void put_cpu_vector_context(void)
 	WARN_ON((riscv_v_ctx_cnt() & RISCV_KERNEL_MODE_V_MASK) != RISCV_KERNEL_MODE_V);
 	riscv_v_ctx_cnt_sub(RISCV_KERNEL_MODE_V);
 
-	preempt_enable();
+	if (!IS_ENABLED(CONFIG_PREEMPT_RT))
+		local_bh_enable();
+	else
+		preempt_enable();
 }
 
 /*

From patchwork Sat Dec 23 04:29:07 2023
From: Andy Chiu <andy.chiu@sifive.com>
To: linux-riscv@lists.infradead.org, palmer@dabbelt.com
Cc: paul.walmsley@sifive.com, greentime.hu@sifive.com, guoren@linux.alibaba.com,
    bjorn@kernel.org, charlie@rivosinc.com, ardb@kernel.org, arnd@arndb.de,
    peterz@infradead.org, tglx@linutronix.de, ebiggers@kernel.org,
    Han-Kuan Chen, Albert Ou, Guo Ren, Sami Tolvanen, Deepak Gupta,
    Andrew Jones, Conor Dooley, Heiko Stuebner
Subject: [v8, 03/10] riscv: Add vector extension XOR implementation
Date: Sat, 23 Dec 2023 04:29:07 +0000
Message-Id: <20231223042914.18599-4-andy.chiu@sifive.com>
In-Reply-To: <20231223042914.18599-1-andy.chiu@sifive.com>
References: <20231223042914.18599-1-andy.chiu@sifive.com>

From: Greentime Hu <greentime.hu@sifive.com>

This patch adds support for vector-optimized XOR; it has been tested in
qemu.

Co-developed-by: Han-Kuan Chen
Signed-off-by: Han-Kuan Chen
Signed-off-by: Greentime Hu <greentime.hu@sifive.com>
Signed-off-by: Andy Chiu <andy.chiu@sifive.com>
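As a quick illustration, a hedged sketch of driving the new helpers
directly; the buffer size is made up, and real users normally go through
the xor_block_rvv template added below:

/* XOR b into a (4096 bytes) under kernel-mode Vector; illustrative only. */
static void xor_example(unsigned long *a, const unsigned long *b)
{
	if (!may_use_simd())
		return;	/* a real caller would take a scalar fallback here */

	kernel_vector_begin();
	xor_regs_2_(4096, a, b);	/* a[i] ^= b[i] over the whole buffer */
	kernel_vector_end();
}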
---
Changelog v8:
 - wrap xor function prototypes with CONFIG_RISCV_ISA_V
Changelog v7:
 - fix build warning message and use proper entry/exit macro for
   assembly. Drop Conor's A-b
Changelog v2:
 - 's/rvv/vector/' (Conor)
---
 arch/riscv/include/asm/asm-prototypes.h | 18 ++++
 arch/riscv/include/asm/xor.h            | 68 +++++++++++++++++++++
 arch/riscv/lib/Makefile                 |  1 +
 arch/riscv/lib/xor.S                    | 81 +++++++++++++++++++++++++
 4 files changed, 168 insertions(+)
 create mode 100644 arch/riscv/include/asm/xor.h
 create mode 100644 arch/riscv/lib/xor.S

diff --git a/arch/riscv/include/asm/asm-prototypes.h b/arch/riscv/include/asm/asm-prototypes.h
index 36b955c762ba..6db1a9bbff4c 100644
--- a/arch/riscv/include/asm/asm-prototypes.h
+++ b/arch/riscv/include/asm/asm-prototypes.h
@@ -9,6 +9,24 @@ long long __lshrti3(long long a, int b);
 long long __ashrti3(long long a, int b);
 long long __ashlti3(long long a, int b);
 
+#ifdef CONFIG_RISCV_ISA_V
+
+void xor_regs_2_(unsigned long bytes, unsigned long *__restrict p1,
+		 const unsigned long *__restrict p2);
+void xor_regs_3_(unsigned long bytes, unsigned long *__restrict p1,
+		 const unsigned long *__restrict p2,
+		 const unsigned long *__restrict p3);
+void xor_regs_4_(unsigned long bytes, unsigned long *__restrict p1,
+		 const unsigned long *__restrict p2,
+		 const unsigned long *__restrict p3,
+		 const unsigned long *__restrict p4);
+void xor_regs_5_(unsigned long bytes, unsigned long *__restrict p1,
+		 const unsigned long *__restrict p2,
+		 const unsigned long *__restrict p3,
+		 const unsigned long *__restrict p4,
+		 const unsigned long *__restrict p5);
+
+#endif /* CONFIG_RISCV_ISA_V */
 
 #define DECLARE_DO_ERROR_INFO(name)	asmlinkage void name(struct pt_regs *regs)
diff --git a/arch/riscv/include/asm/xor.h b/arch/riscv/include/asm/xor.h
new file mode 100644
index 000000000000..96011861e46b
--- /dev/null
+++ b/arch/riscv/include/asm/xor.h
@@ -0,0 +1,68 @@
+/* SPDX-License-Identifier: GPL-2.0-or-later */
+/*
+ * Copyright (C) 2021 SiFive
+ */
+
+#include <linux/hardirq.h>
+#include <asm-generic/xor.h>
+#ifdef CONFIG_RISCV_ISA_V
+#include <asm/vector.h>
+#include <asm/switch_to.h>
+#include <asm/asm-prototypes.h>
+
+static void xor_vector_2(unsigned long bytes, unsigned long *__restrict p1,
+			 const unsigned long *__restrict p2)
+{
+	kernel_vector_begin();
+	xor_regs_2_(bytes, p1, p2);
+	kernel_vector_end();
+}
+
+static void xor_vector_3(unsigned long bytes, unsigned long *__restrict p1,
+			 const unsigned long *__restrict p2,
+			 const unsigned long *__restrict p3)
+{
+	kernel_vector_begin();
+	xor_regs_3_(bytes, p1, p2, p3);
+	kernel_vector_end();
+}
+
+static void xor_vector_4(unsigned long bytes, unsigned long *__restrict p1,
+			 const unsigned long *__restrict p2,
+			 const unsigned long *__restrict p3,
+			 const unsigned long *__restrict p4)
+{
+	kernel_vector_begin();
+	xor_regs_4_(bytes, p1, p2, p3, p4);
+	kernel_vector_end();
+}
+
+static void xor_vector_5(unsigned long bytes, unsigned long *__restrict p1,
+			 const unsigned long *__restrict p2,
+			 const unsigned long *__restrict p3,
+			 const unsigned long *__restrict p4,
+			 const unsigned long *__restrict p5)
+{
+	kernel_vector_begin();
+	xor_regs_5_(bytes, p1, p2, p3, p4, p5);
+	kernel_vector_end();
+}
+
+static struct xor_block_template xor_block_rvv = {
+	.name = "rvv",
+	.do_2 = xor_vector_2,
+	.do_3 = xor_vector_3,
+	.do_4 = xor_vector_4,
+	.do_5 = xor_vector_5
+};
+
+#undef XOR_TRY_TEMPLATES
+#define XOR_TRY_TEMPLATES			\
+	do {					\
+		xor_speed(&xor_block_8regs);	\
+		xor_speed(&xor_block_32regs);	\
+		if (has_vector()) {		\
+			xor_speed(&xor_block_rvv);\
+		}				\
+	} while (0)
+#endif
diff --git a/arch/riscv/lib/Makefile b/arch/riscv/lib/Makefile
index 26cb2502ecf8..494f9cd1a00c 100644
--- a/arch/riscv/lib/Makefile
+++ b/arch/riscv/lib/Makefile
@@ -11,3 +11,4 @@ lib-$(CONFIG_MMU) += uaccess.o
 lib-$(CONFIG_64BIT)	+= tishift.o
 lib-$(CONFIG_RISCV_ISA_ZICBOZ)	+= clear_page.o
 obj-$(CONFIG_FUNCTION_ERROR_INJECTION) += error-inject.o
+lib-$(CONFIG_RISCV_ISA_V)	+= xor.o
diff --git a/arch/riscv/lib/xor.S b/arch/riscv/lib/xor.S
new file mode 100644
index 000000000000..b28f2430e52f
--- /dev/null
+++ b/arch/riscv/lib/xor.S
@@ -0,0 +1,81 @@
+/* SPDX-License-Identifier: GPL-2.0-or-later */
+/*
+ * Copyright (C) 2021 SiFive
+ */
+#include <linux/export.h>
+#include <linux/linkage.h>
+#include <asm/asm.h>
+
+SYM_FUNC_START(xor_regs_2_)
+	vsetvli a3, a0, e8, m8, ta, ma
+	vle8.v v0, (a1)
+	vle8.v v8, (a2)
+	sub a0, a0, a3
+	vxor.vv v16, v0, v8
+	add a2, a2, a3
+	vse8.v v16, (a1)
+	add a1, a1, a3
+	bnez a0, xor_regs_2_
+	ret
+SYM_FUNC_END(xor_regs_2_)
+EXPORT_SYMBOL(xor_regs_2_)
+
+SYM_FUNC_START(xor_regs_3_)
+	vsetvli a4, a0, e8, m8, ta, ma
+	vle8.v v0, (a1)
+	vle8.v v8, (a2)
+	sub a0, a0, a4
+	vxor.vv v0, v0, v8
+	vle8.v v16, (a3)
+	add a2, a2, a4
+	vxor.vv v16, v0, v16
+	add a3, a3, a4
+	vse8.v v16, (a1)
+	add a1, a1, a4
+	bnez a0, xor_regs_3_
+	ret
+SYM_FUNC_END(xor_regs_3_)
+EXPORT_SYMBOL(xor_regs_3_)
+
+SYM_FUNC_START(xor_regs_4_)
+	vsetvli a5, a0, e8, m8, ta, ma
+	vle8.v v0, (a1)
+	vle8.v v8, (a2)
+	sub a0, a0, a5
+	vxor.vv v0, v0, v8
+	vle8.v v16, (a3)
+	add a2, a2, a5
+	vxor.vv v0, v0, v16
+	vle8.v v24, (a4)
+	add a3, a3, a5
+	vxor.vv v16, v0, v24
+	add a4, a4, a5
+	vse8.v v16, (a1)
+	add a1, a1, a5
+	bnez a0, xor_regs_4_
+	ret
+SYM_FUNC_END(xor_regs_4_)
+EXPORT_SYMBOL(xor_regs_4_)
+
+SYM_FUNC_START(xor_regs_5_)
+	vsetvli a6, a0, e8, m8, ta, ma
+	vle8.v v0, (a1)
+	vle8.v v8, (a2)
+	sub a0, a0, a6
+	vxor.vv v0, v0, v8
+	vle8.v v16, (a3)
+	add a2, a2, a6
+	vxor.vv v0, v0, v16
+	vle8.v v24, (a4)
+	add a3, a3, a6
+	vxor.vv v0, v0, v24
+	vle8.v v8, (a5)
+	add a4, a4, a6
+	vxor.vv v16, v0, v8
+	add a5, a5, a6
+	vse8.v v16, (a1)
+	add a1, a1, a6
+	bnez a0, xor_regs_5_
+	ret
+SYM_FUNC_END(xor_regs_5_)
+EXPORT_SYMBOL(xor_regs_5_)
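The routines above strip-mine the buffer: each vsetvli grants a vector
length of up to VLEN bytes per pass (SEW=8, LMUL=8), and the loop advances
every pointer by the granted vl until the byte count reaches zero. A rough
C equivalent of the xor_regs_2_ loop, for readers less used to RVV
assembly; granted_vl() is a hypothetical stand-in for what vsetvli returns:

/* Illustrative C equivalent of the xor_regs_2_ strip-mine loop. */
static void xor_2_equiv(unsigned long bytes, unsigned char *p1,
			const unsigned char *p2)
{
	while (bytes) {
		unsigned long vl = granted_vl(bytes);	/* <= bytes, hardware-dependent */
		unsigned long i;

		for (i = 0; i < vl; i++)	/* one vle8.v/vxor.vv/vse8.v pass */
			p1[i] ^= p2[i];
		bytes -= vl;
		p1 += vl;
		p2 += vl;
	}
}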
From patchwork Sat Dec 23 04:29:08 2023
From: Andy Chiu <andy.chiu@sifive.com>
To: linux-riscv@lists.infradead.org, palmer@dabbelt.com
Cc: paul.walmsley@sifive.com, greentime.hu@sifive.com, guoren@linux.alibaba.com,
    bjorn@kernel.org, charlie@rivosinc.com, ardb@kernel.org, arnd@arndb.de,
    peterz@infradead.org, tglx@linutronix.de, ebiggers@kernel.org,
    Albert Ou, Oleg Nesterov, Björn Töpel, Conor Dooley, Guo Ren,
    Clément Léger, Jisheng Zhang, Sami Tolvanen, Deepak Gupta,
    Vincent Chen, Heiko Stuebner, Xiao Wang, Haorong Lu, Mathis Salmen,
    Joel Granados
Subject: [v8, 04/10] riscv: sched: defer restoring Vector context for user
Date: Sat, 23 Dec 2023 04:29:08 +0000
Message-Id: <20231223042914.18599-5-andy.chiu@sifive.com>
In-Reply-To: <20231223042914.18599-1-andy.chiu@sifive.com>
References: <20231223042914.18599-1-andy.chiu@sifive.com>

A user task touches its Vector registers only after the kernel really
returns to userspace, so we can delay restoring Vector registers as
long as we are still running in kernel mode. So, add a thread flag to
indicate the need to restore Vector, and do the restore at the last
arch-specific exit-to-user hook.

This saves the context-restore cost when we switch over multiple
processes that run V in kernel mode. For example, if the kernel
performs a context switch from A->B->C and returns to C's userspace,
then there is no need to restore B's V-registers. Besides, this also
prevents us from repeatedly restoring V context when executing
kernel-mode Vector multiple times.

The cost of this is that we must disable preemption and mark Vector as
busy during vstate_{save,restore}, so that the V context will not get
restored back immediately when a trap-causing context switch happens in
the middle of vstate_{save,restore}.

Signed-off-by: Andy Chiu <andy.chiu@sifive.com>
Acked-by: Conor Dooley
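A worked trace of the scheduling example above, assuming only these tasks
use V:

/*
 * switch A -> B:      save A's V state; set TIF_RISCV_V_DEFER_RESTORE on B
 * switch B -> C:      save B's V state; set TIF_RISCV_V_DEFER_RESTORE on C
 * C returns to user:  flag seen in arch_exit_to_user_mode_prepare();
 *                     C's V state is restored exactly once.
 *
 * B's V registers are never reloaded, because B never reached userspace;
 * with eager restore, that reload would have been wasted work.
 */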
---
Changelog v4:
 - fix typos and re-add Conor's A-b.
Changelog v3:
 - Guard {get,put}_cpu_vector_context between vstate_* operation and
   explain it in the commit msg.
 - Drop R-b from Björn and A-b from Conor.
Changelog v2:
 - rename and add comment for the new thread flag (Conor)
---
 arch/riscv/include/asm/entry-common.h  | 17 +++++++++++++++++
 arch/riscv/include/asm/thread_info.h   |  2 ++
 arch/riscv/include/asm/vector.h        | 11 ++++++++++-
 arch/riscv/kernel/kernel_mode_vector.c |  2 +-
 arch/riscv/kernel/process.c            |  2 ++
 arch/riscv/kernel/ptrace.c             |  5 ++++-
 arch/riscv/kernel/signal.c             |  5 ++++-
 arch/riscv/kernel/vector.c             |  2 +-
 8 files changed, 41 insertions(+), 5 deletions(-)

diff --git a/arch/riscv/include/asm/entry-common.h b/arch/riscv/include/asm/entry-common.h
index 7ab5e34318c8..6361a8488642 100644
--- a/arch/riscv/include/asm/entry-common.h
+++ b/arch/riscv/include/asm/entry-common.h
@@ -4,6 +4,23 @@
 #define _ASM_RISCV_ENTRY_COMMON_H
 
 #include <asm/stacktrace.h>
+#include <asm/thread_info.h>
+#include <asm/vector.h>
+
+static inline void arch_exit_to_user_mode_prepare(struct pt_regs *regs,
+						  unsigned long ti_work)
+{
+	if (ti_work & _TIF_RISCV_V_DEFER_RESTORE) {
+		clear_thread_flag(TIF_RISCV_V_DEFER_RESTORE);
+		/*
+		 * We are already called with irqs disabled, so go without
+		 * keeping track of vector_context_busy.
+		 */
+		riscv_v_vstate_restore(current, regs);
+	}
+}
+
+#define arch_exit_to_user_mode_prepare arch_exit_to_user_mode_prepare
 
 void handle_page_fault(struct pt_regs *regs);
 void handle_break(struct pt_regs *regs);
diff --git a/arch/riscv/include/asm/thread_info.h b/arch/riscv/include/asm/thread_info.h
index 574779900bfb..1047a97ddbc8 100644
--- a/arch/riscv/include/asm/thread_info.h
+++ b/arch/riscv/include/asm/thread_info.h
@@ -103,12 +103,14 @@ int arch_dup_task_struct(struct task_struct *dst, struct task_struct *src);
 #define TIF_NOTIFY_SIGNAL	9	/* signal notifications exist */
 #define TIF_UPROBE		10	/* uprobe breakpoint or singlestep */
 #define TIF_32BIT		11	/* compat-mode 32bit process */
+#define TIF_RISCV_V_DEFER_RESTORE	12 /* restore Vector before returning to user */
 
 #define _TIF_NOTIFY_RESUME	(1 << TIF_NOTIFY_RESUME)
 #define _TIF_SIGPENDING		(1 << TIF_SIGPENDING)
 #define _TIF_NEED_RESCHED	(1 << TIF_NEED_RESCHED)
 #define _TIF_NOTIFY_SIGNAL	(1 << TIF_NOTIFY_SIGNAL)
 #define _TIF_UPROBE		(1 << TIF_UPROBE)
+#define _TIF_RISCV_V_DEFER_RESTORE	(1 << TIF_RISCV_V_DEFER_RESTORE)
 
 #define _TIF_WORK_MASK \
 	(_TIF_NOTIFY_RESUME | _TIF_SIGPENDING | _TIF_NEED_RESCHED | \
diff --git a/arch/riscv/include/asm/vector.h b/arch/riscv/include/asm/vector.h
index 6254830c0668..e706613aae2c 100644
--- a/arch/riscv/include/asm/vector.h
+++ b/arch/riscv/include/asm/vector.h
@@ -205,6 +205,15 @@ static inline void riscv_v_vstate_restore(struct task_struct *task,
 	}
 }
 
+static inline void riscv_v_vstate_set_restore(struct task_struct *task,
+					      struct pt_regs *regs)
+{
+	if ((regs->status & SR_VS) != SR_VS_OFF) {
+		set_tsk_thread_flag(task, TIF_RISCV_V_DEFER_RESTORE);
+		riscv_v_vstate_on(regs);
+	}
+}
+
 static inline void __switch_to_vector(struct task_struct *prev,
 				      struct task_struct *next)
 {
@@ -212,7 +221,7 @@ static inline void __switch_to_vector(struct task_struct *prev,
 	regs = task_pt_regs(prev);
 	riscv_v_vstate_save(prev, regs);
-	riscv_v_vstate_restore(next, task_pt_regs(next));
+	riscv_v_vstate_set_restore(next, task_pt_regs(next));
 }
 
 void riscv_v_vstate_ctrl_init(struct task_struct *tsk);
diff --git a/arch/riscv/kernel/kernel_mode_vector.c b/arch/riscv/kernel/kernel_mode_vector.c
index 385d9b4d8cc6..63814e780c28 100644
--- a/arch/riscv/kernel/kernel_mode_vector.c
+++ b/arch/riscv/kernel/kernel_mode_vector.c
@@ -96,7 +96,7 @@ void kernel_vector_end(void)
 	if (WARN_ON(!has_vector()))
 		return;
 
-	riscv_v_vstate_restore(current, task_pt_regs(current));
+	riscv_v_vstate_set_restore(current, task_pt_regs(current));
 
 	riscv_v_disable();
 
diff --git a/arch/riscv/kernel/process.c b/arch/riscv/kernel/process.c
index 4a1275db1146..36993f408de4 100644
--- a/arch/riscv/kernel/process.c
+++ b/arch/riscv/kernel/process.c
@@ -171,6 +171,7 @@ void flush_thread(void)
 	riscv_v_vstate_off(task_pt_regs(current));
 	kfree(current->thread.vstate.datap);
 	memset(&current->thread.vstate, 0, sizeof(struct __riscv_v_ext_state));
+	clear_tsk_thread_flag(current, TIF_RISCV_V_DEFER_RESTORE);
 #endif
 }
 
@@ -187,6 +188,7 @@ int arch_dup_task_struct(struct task_struct *dst, struct task_struct *src)
 	*dst = *src;
 	/* clear entire V context, including datap for a new task */
 	memset(&dst->thread.vstate, 0, sizeof(struct __riscv_v_ext_state));
+	clear_tsk_thread_flag(dst, TIF_RISCV_V_DEFER_RESTORE);
 
 	return 0;
 }
diff --git a/arch/riscv/kernel/ptrace.c b/arch/riscv/kernel/ptrace.c
index 2afe460de16a..7b93bcbdf9fa 100644
--- a/arch/riscv/kernel/ptrace.c
+++ b/arch/riscv/kernel/ptrace.c
@@ -99,8 +99,11 @@ static int riscv_vr_get(struct task_struct *target,
 	 * Ensure the vector registers have been saved to the memory before
	 * copying them to membuf.
 	 */
-	if (target == current)
+	if (target == current) {
+		get_cpu_vector_context();
 		riscv_v_vstate_save(current, task_pt_regs(current));
+		put_cpu_vector_context();
+	}
 
 	ptrace_vstate.vstart = vstate->vstart;
 	ptrace_vstate.vl = vstate->vl;
diff --git a/arch/riscv/kernel/signal.c b/arch/riscv/kernel/signal.c
index 88b6220b2608..aca4a12c8416 100644
--- a/arch/riscv/kernel/signal.c
+++ b/arch/riscv/kernel/signal.c
@@ -86,7 +86,10 @@ static long save_v_state(struct pt_regs *regs, void __user **sc_vec)
 	/* datap is designed to be 16 byte aligned for better performance */
 	WARN_ON(unlikely(!IS_ALIGNED((unsigned long)datap, 16)));
 
+	get_cpu_vector_context();
 	riscv_v_vstate_save(current, regs);
+	put_cpu_vector_context();
+
 	/* Copy everything of vstate but datap. */
 	err = __copy_to_user(&state->v_state, &current->thread.vstate,
 			     offsetof(struct __riscv_v_ext_state, datap));
@@ -134,7 +137,7 @@ static long __restore_v_state(struct pt_regs *regs, void __user *sc_vec)
 	if (unlikely(err))
 		return err;
 
-	riscv_v_vstate_restore(current, regs);
+	riscv_v_vstate_set_restore(current, regs);
 
 	return err;
 }
diff --git a/arch/riscv/kernel/vector.c b/arch/riscv/kernel/vector.c
index 578b6292487e..66e8c6ab09d2 100644
--- a/arch/riscv/kernel/vector.c
+++ b/arch/riscv/kernel/vector.c
@@ -167,7 +167,7 @@ bool riscv_v_first_use_handler(struct pt_regs *regs)
 		return true;
 	}
 	riscv_v_vstate_on(regs);
-	riscv_v_vstate_restore(current, regs);
+	riscv_v_vstate_set_restore(current, regs);
 	return true;
 }

From patchwork Sat Dec 23 04:29:09 2023
From: Andy Chiu <andy.chiu@sifive.com>
To: linux-riscv@lists.infradead.org, palmer@dabbelt.com
Cc: paul.walmsley@sifive.com, greentime.hu@sifive.com, guoren@linux.alibaba.com,
    bjorn@kernel.org, charlie@rivosinc.com, ardb@kernel.org, arnd@arndb.de,
    peterz@infradead.org, tglx@linutronix.de, ebiggers@kernel.org,
    Albert Ou, Guo Ren, Sami Tolvanen, Han-Kuan Chen, Deepak Gupta,
    Andrew Jones, Conor Dooley, Heiko Stuebner, Aurelien Jarno, Bo YU,
    Alexandre Ghiti, Clément Léger
Subject: [v8, 05/10] riscv: lib: vectorize copy_to_user/copy_from_user
Date: Sat, 23 Dec 2023 04:29:09 +0000
Message-Id: <20231223042914.18599-6-andy.chiu@sifive.com>
In-Reply-To: <20231223042914.18599-1-andy.chiu@sifive.com>
References: <20231223042914.18599-1-andy.chiu@sifive.com>

This patch utilizes Vector to perform copy_to_user/copy_from_user. If
Vector is available and the size of the copy is large enough for Vector
to perform better than scalar, then direct the kernel to do Vector
copies for userspace. Though the best programming practice for users is
to reduce the copy, this provides a faster variant when copies are
inevitable.

The optimal size for using Vector, riscv_v_usercopy_threshold, is only a
heuristic for now. We can add DT parsing if people feel the need to
customize it.

The exception fixup code of __asm_vector_usercopy must fall back to the
scalar one, because accessing user pages might fault and must be
sleepable. Current kernel-mode Vector does not allow tasks to be
preemptible, so we must deactivate Vector and perform a scalar fallback
in such a case.

The original implementation of Vector operations comes from
https://github.com/sifive/sifive-libc, which we agree to contribute to
the Linux kernel.

Signed-off-by: Andy Chiu <andy.chiu@sifive.com>
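Condensed, the dispatch added to uaccess.S behaves like this C sketch
(illustrative only; the real vector/no-vector branch is the patched-in
ALTERNATIVE(), represented here by a hypothetical riscv_has_v flag):

/* Illustrative rendering of the __asm_copy_to_user entry logic. */
static int copy_user_dispatch(void *dst, void *src, size_t n)
{
	if (!riscv_has_v || n < riscv_v_usercopy_threshold)
		return fallback_scalar_usercopy(dst, src, n);

	return enter_vector_usercopy(dst, src, n);	/* may still fall back */
}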
---
Changelog v8:
 - fix no-mmu build
Changelog v6:
 - Add a kconfig entry to configure threshold values (Charlie)
 - Refine assembly code (Charlie)
Changelog v4:
 - new patch since v4
---
 arch/riscv/Kconfig                      |  8 ++++
 arch/riscv/include/asm/asm-prototypes.h |  4 ++
 arch/riscv/lib/Makefile                 |  6 ++-
 arch/riscv/lib/riscv_v_helpers.c        | 44 ++++++++++++++++++++++
 arch/riscv/lib/uaccess.S                | 10 +++++
 arch/riscv/lib/uaccess_vector.S         | 50 +++++++++++++++++++++++++
 6 files changed, 121 insertions(+), 1 deletion(-)
 create mode 100644 arch/riscv/lib/riscv_v_helpers.c
 create mode 100644 arch/riscv/lib/uaccess_vector.S

diff --git a/arch/riscv/Kconfig b/arch/riscv/Kconfig
index 95a2a06acc6a..3c5ba05e8a2d 100644
--- a/arch/riscv/Kconfig
+++ b/arch/riscv/Kconfig
@@ -525,6 +525,14 @@ config RISCV_ISA_V_DEFAULT_ENABLE
 
 	  If you don't know what to do here, say Y.
 
+config RISCV_ISA_V_UCOPY_THRESHOLD
+	int "Threshold size for vectorized user copies"
+	depends on RISCV_ISA_V
+	default 768
+	help
+	  Prefer using vectorized copy_to_user()/copy_from_user() when the
+	  workload size exceeds this value.
+
 config TOOLCHAIN_HAS_ZBB
 	bool
 	default y
diff --git a/arch/riscv/include/asm/asm-prototypes.h b/arch/riscv/include/asm/asm-prototypes.h
index 6db1a9bbff4c..be438932f321 100644
--- a/arch/riscv/include/asm/asm-prototypes.h
+++ b/arch/riscv/include/asm/asm-prototypes.h
@@ -11,6 +11,10 @@ long long __ashlti3(long long a, int b);
 
 #ifdef CONFIG_RISCV_ISA_V
 
+#ifdef CONFIG_MMU
+asmlinkage int enter_vector_usercopy(void *dst, void *src, size_t n);
+#endif /* CONFIG_MMU */
+
 void xor_regs_2_(unsigned long bytes, unsigned long *__restrict p1,
 		 const unsigned long *__restrict p2);
diff --git a/arch/riscv/lib/Makefile b/arch/riscv/lib/Makefile
index 494f9cd1a00c..c8a6787d5827 100644
--- a/arch/riscv/lib/Makefile
+++ b/arch/riscv/lib/Makefile
@@ -6,9 +6,13 @@ lib-y			+= memmove.o
 lib-y			+= strcmp.o
 lib-y			+= strlen.o
 lib-y			+= strncmp.o
-lib-$(CONFIG_MMU)	+= uaccess.o
+ifeq ($(CONFIG_MMU), y)
+lib-y				+= uaccess.o
+lib-$(CONFIG_RISCV_ISA_V)	+= uaccess_vector.o
+endif
 lib-$(CONFIG_64BIT)	+= tishift.o
 lib-$(CONFIG_RISCV_ISA_ZICBOZ)	+= clear_page.o
 
 obj-$(CONFIG_FUNCTION_ERROR_INJECTION) += error-inject.o
 lib-$(CONFIG_RISCV_ISA_V)	+= xor.o
+lib-$(CONFIG_RISCV_ISA_V)	+= riscv_v_helpers.o
diff --git a/arch/riscv/lib/riscv_v_helpers.c b/arch/riscv/lib/riscv_v_helpers.c
new file mode 100644
index 000000000000..6cac8f4e69e9
--- /dev/null
+++ b/arch/riscv/lib/riscv_v_helpers.c
@@ -0,0 +1,44 @@
+// SPDX-License-Identifier: GPL-2.0-or-later
+/*
+ * Copyright (C) 2023 SiFive
+ * Author: Andy Chiu <andy.chiu@sifive.com>
+ */
+#include <linux/linkage.h>
+#include <asm/asm.h>
+
+#include <asm/vector.h>
+#include <asm/simd.h>
+
+#ifdef CONFIG_MMU
+#include <asm/asm-prototypes.h>
+#endif
+
+#ifdef CONFIG_MMU
+size_t riscv_v_usercopy_threshold = CONFIG_RISCV_ISA_V_UCOPY_THRESHOLD;
+int __asm_vector_usercopy(void *dst, void *src, size_t n);
+int fallback_scalar_usercopy(void *dst, void *src, size_t n);
+asmlinkage int enter_vector_usercopy(void *dst, void *src, size_t n)
+{
+	size_t remain, copied;
+
+	/* skip has_vector() check because it has been done by the asm */
+	if (!may_use_simd())
+		goto fallback;
+
+	kernel_vector_begin();
+	remain = __asm_vector_usercopy(dst, src, n);
+	kernel_vector_end();
+
+	if (remain) {
+		copied = n - remain;
+		dst += copied;
+		src += copied;
+		goto fallback;
+	}
+
+	return remain;
+
+fallback:
+	return fallback_scalar_usercopy(dst, src, n);
+}
+#endif
diff --git a/arch/riscv/lib/uaccess.S b/arch/riscv/lib/uaccess.S
index 3ab438f30d13..a1e4a3c42925 100644
--- a/arch/riscv/lib/uaccess.S
+++ b/arch/riscv/lib/uaccess.S
@@ -3,6 +3,8 @@
 #include <asm/asm.h>
 #include <asm/asm-extable.h>
 #include <asm/csr.h>
+#include <asm/hwcap.h>
+#include <asm/alternative-macros.h>
 
 .macro fixup op reg addr lbl
 100:
@@ -11,6 +13,13 @@
 .endm
 
 SYM_FUNC_START(__asm_copy_to_user)
+#ifdef CONFIG_RISCV_ISA_V
+	ALTERNATIVE("j fallback_scalar_usercopy", "nop", 0, RISCV_ISA_EXT_v, CONFIG_RISCV_ISA_V)
+	REG_L	t0, riscv_v_usercopy_threshold
+	bltu	a2, t0, fallback_scalar_usercopy
+	tail	enter_vector_usercopy
+#endif
+SYM_FUNC_START(fallback_scalar_usercopy)
 
 	/* Enable access to user memory */
 	li	t6, SR_SUM
@@ -181,6 +190,7 @@ SYM_FUNC_START(__asm_copy_to_user)
 	sub	a0, t5, a0
 	ret
 SYM_FUNC_END(__asm_copy_to_user)
+SYM_FUNC_END(fallback_scalar_usercopy)
 EXPORT_SYMBOL(__asm_copy_to_user)
 SYM_FUNC_ALIAS(__asm_copy_from_user, __asm_copy_to_user)
 EXPORT_SYMBOL(__asm_copy_from_user)
diff --git a/arch/riscv/lib/uaccess_vector.S b/arch/riscv/lib/uaccess_vector.S
new file mode 100644
index 000000000000..7bd96cee39e4
--- /dev/null
+++ b/arch/riscv/lib/uaccess_vector.S
@@ -0,0 +1,50 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+
+#include <linux/linkage.h>
+#include <asm-generic/export.h>
+#include <asm/asm.h>
+#include <asm/asm-extable.h>
+#include <asm/csr.h>
+
+#define pDst a0
+#define pSrc a1
+#define iNum a2
+
+#define iVL a3
+
+#define ELEM_LMUL_SETTING m8
+#define vData v0
+
+	.macro fixup op reg addr lbl
+100:
+	\op \reg, \addr
+	_asm_extable	100b, \lbl
+	.endm
+
+SYM_FUNC_START(__asm_vector_usercopy)
+	/* Enable access to user memory */
+	li	t6, SR_SUM
+	csrs	CSR_STATUS, t6
+
+loop:
+	vsetvli iVL, iNum, e8, ELEM_LMUL_SETTING, ta, ma
+	fixup vle8.v vData, (pSrc), 10f
+	fixup vse8.v vData, (pDst), 10f
+	sub iNum, iNum, iVL
+	add pSrc, pSrc, iVL
+	add pDst, pDst, iVL
+	bnez iNum, loop
+
+.Lout_copy_user:
+	/* Disable access to user memory */
+	csrc	CSR_STATUS, t6
+	li	a0, 0
+	ret
+
+	/* Exception fixup code */
+10:
+	/* Disable access to user memory */
+	csrc	CSR_STATUS, t6
+	mv	a0, iNum
+	ret
+SYM_FUNC_END(__asm_vector_usercopy)

From patchwork Sat Dec 23 04:29:10 2023
From patchwork Sat Dec 23 04:29:10 2023
From: Andy Chiu
To: linux-riscv@lists.infradead.org, palmer@dabbelt.com
Subject: [v8, 06/10] riscv: lib: add vectorized mem* routines
Date: Sat, 23 Dec 2023 04:29:10 +0000
Message-Id: <20231223042914.18599-7-andy.chiu@sifive.com>
In-Reply-To: <20231223042914.18599-1-andy.chiu@sifive.com>

Provide vectorized memcpy/memset/memmove to accelerate common memory operations. Also, group them under the V_OPT_TEMPLATE3 macro because their setup/tear-down and fallback logic is the same. The threshold at which the kernel prefers Vector over scalar, riscv_v_mem*_threshold, is only a heuristic for now. We can add DT parsing if people feel the need to customize it. The original implementation of the Vector operations comes from https://github.com/sifive/sifive-libc, which we have agreed to contribute to the Linux kernel. Signed-off-by: Andy Chiu Reviewed-by: Charlie Jenkins --- Changelog v7: - add __NO_FORTIFY to prevent a conflicting function declaration with the macro for mem* functions. Changelog v6: - provide kconfig to set threshold for vectorized functions (Charlie) - rename *thres to *threshold (Charlie) Changelog v4: - new patch since v4 --- arch/riscv/Kconfig | 24 ++++++++++++++++ arch/riscv/lib/Makefile | 3 ++ arch/riscv/lib/memcpy_vector.S | 29 +++++++++++++++++++ arch/riscv/lib/memmove_vector.S | 49 ++++++++++++++++++++++++++++++++ arch/riscv/lib/memset_vector.S | 33 +++++++++++++++++++++ arch/riscv/lib/riscv_v_helpers.c | 26 +++++++++++++++++ 6 files changed, 164 insertions(+) create mode 100644 arch/riscv/lib/memcpy_vector.S create mode 100644 arch/riscv/lib/memmove_vector.S create mode 100644 arch/riscv/lib/memset_vector.S diff --git a/arch/riscv/Kconfig b/arch/riscv/Kconfig index 3c5ba05e8a2d..cba53dcc2ae0 100644 --- a/arch/riscv/Kconfig +++ b/arch/riscv/Kconfig @@ -533,6 +533,30 @@ config RISCV_ISA_V_UCOPY_THRESHOLD Prefer using vectorized copy_to_user()/copy_from_user() when the workload size exceeds this value. +config RISCV_ISA_V_MEMSET_THRESHOLD + int "Threshold size for vectorized memset()" + depends on RISCV_ISA_V + default 1280 + help + Prefer using vectorized memset() when the workload size exceeds this + value. + +config RISCV_ISA_V_MEMCPY_THRESHOLD + int "Threshold size for vectorized memcpy()" + depends on RISCV_ISA_V + default 768 + help + Prefer using vectorized memcpy() when the workload size exceeds this + value.
+ +config RISCV_ISA_V_MEMMOVE_THRESHOLD + int "Threshold size for vectorized memmove()" + depends on RISCV_ISA_V + default 512 + help + Prefer using vectorized memmove() when the workload size exceeds this + value. + config TOOLCHAIN_HAS_ZBB bool default y diff --git a/arch/riscv/lib/Makefile b/arch/riscv/lib/Makefile index c8a6787d5827..d389dbf285fe 100644 --- a/arch/riscv/lib/Makefile +++ b/arch/riscv/lib/Makefile @@ -16,3 +16,6 @@ lib-$(CONFIG_RISCV_ISA_ZICBOZ) += clear_page.o obj-$(CONFIG_FUNCTION_ERROR_INJECTION) += error-inject.o lib-$(CONFIG_RISCV_ISA_V) += xor.o lib-$(CONFIG_RISCV_ISA_V) += riscv_v_helpers.o +lib-$(CONFIG_RISCV_ISA_V) += memset_vector.o +lib-$(CONFIG_RISCV_ISA_V) += memcpy_vector.o +lib-$(CONFIG_RISCV_ISA_V) += memmove_vector.o diff --git a/arch/riscv/lib/memcpy_vector.S b/arch/riscv/lib/memcpy_vector.S new file mode 100644 index 000000000000..4176b6e0a53c --- /dev/null +++ b/arch/riscv/lib/memcpy_vector.S @@ -0,0 +1,29 @@ +/* SPDX-License-Identifier: GPL-2.0-only */ + +#include +#include + +#define pDst a0 +#define pSrc a1 +#define iNum a2 + +#define iVL a3 +#define pDstPtr a4 + +#define ELEM_LMUL_SETTING m8 +#define vData v0 + + +/* void *memcpy(void *, const void *, size_t) */ +SYM_FUNC_START(__asm_memcpy_vector) + mv pDstPtr, pDst +loop: + vsetvli iVL, iNum, e8, ELEM_LMUL_SETTING, ta, ma + vle8.v vData, (pSrc) + sub iNum, iNum, iVL + add pSrc, pSrc, iVL + vse8.v vData, (pDstPtr) + add pDstPtr, pDstPtr, iVL + bnez iNum, loop + ret +SYM_FUNC_END(__asm_memcpy_vector) diff --git a/arch/riscv/lib/memmove_vector.S b/arch/riscv/lib/memmove_vector.S new file mode 100644 index 000000000000..4cea9d244dc9 --- /dev/null +++ b/arch/riscv/lib/memmove_vector.S @@ -0,0 +1,49 @@ +/* SPDX-License-Identifier: GPL-2.0-only */ +#include +#include + +#define pDst a0 +#define pSrc a1 +#define iNum a2 + +#define iVL a3 +#define pDstPtr a4 +#define pSrcBackwardPtr a5 +#define pDstBackwardPtr a6 + +#define ELEM_LMUL_SETTING m8 +#define vData v0 + +SYM_FUNC_START(__asm_memmove_vector) + + mv pDstPtr, pDst + + bgeu pSrc, pDst, forward_copy_loop + add pSrcBackwardPtr, pSrc, iNum + add pDstBackwardPtr, pDst, iNum + bltu pDst, pSrcBackwardPtr, backward_copy_loop + +forward_copy_loop: + vsetvli iVL, iNum, e8, ELEM_LMUL_SETTING, ta, ma + + vle8.v vData, (pSrc) + sub iNum, iNum, iVL + add pSrc, pSrc, iVL + vse8.v vData, (pDstPtr) + add pDstPtr, pDstPtr, iVL + + bnez iNum, forward_copy_loop + ret + +backward_copy_loop: + vsetvli iVL, iNum, e8, ELEM_LMUL_SETTING, ta, ma + + sub pSrcBackwardPtr, pSrcBackwardPtr, iVL + vle8.v vData, (pSrcBackwardPtr) + sub iNum, iNum, iVL + sub pDstBackwardPtr, pDstBackwardPtr, iVL + vse8.v vData, (pDstBackwardPtr) + bnez iNum, backward_copy_loop + ret + +SYM_FUNC_END(__asm_memmove_vector) diff --git a/arch/riscv/lib/memset_vector.S b/arch/riscv/lib/memset_vector.S new file mode 100644 index 000000000000..4611feed72ac --- /dev/null +++ b/arch/riscv/lib/memset_vector.S @@ -0,0 +1,33 @@ +/* SPDX-License-Identifier: GPL-2.0-only */ +#include +#include + +#define pDst a0 +#define iValue a1 +#define iNum a2 + +#define iVL a3 +#define iTemp a4 +#define pDstPtr a5 + +#define ELEM_LMUL_SETTING m8 +#define vData v0 + +/* void *memset(void *, int, size_t) */ +SYM_FUNC_START(__asm_memset_vector) + + mv pDstPtr, pDst + + vsetvli iVL, iNum, e8, ELEM_LMUL_SETTING, ta, ma + vmv.v.x vData, iValue + +loop: + vse8.v vData, (pDstPtr) + sub iNum, iNum, iVL + add pDstPtr, pDstPtr, iVL + vsetvli iVL, iNum, e8, ELEM_LMUL_SETTING, ta, ma + bnez iNum, loop + + ret + 
+SYM_FUNC_END(__asm_memset_vector) diff --git a/arch/riscv/lib/riscv_v_helpers.c b/arch/riscv/lib/riscv_v_helpers.c index 6cac8f4e69e9..c62f333ba557 100644 --- a/arch/riscv/lib/riscv_v_helpers.c +++ b/arch/riscv/lib/riscv_v_helpers.c @@ -3,9 +3,13 @@ * Copyright (C) 2023 SiFive * Author: Andy Chiu */ +#ifndef __NO_FORTIFY +# define __NO_FORTIFY +#endif #include #include +#include #include #include @@ -42,3 +46,25 @@ asmlinkage int enter_vector_usercopy(void *dst, void *src, size_t n) return fallback_scalar_usercopy(dst, src, n); } #endif + +#define V_OPT_TEMPLATE3(prefix, type_r, type_0, type_1) \ +extern type_r __asm_##prefix##_vector(type_0, type_1, size_t n); \ +type_r prefix(type_0 a0, type_1 a1, size_t n) \ +{ \ + type_r ret; \ + if (has_vector() && may_use_simd() && \ + n > riscv_v_##prefix##_threshold) { \ + kernel_vector_begin(); \ + ret = __asm_##prefix##_vector(a0, a1, n); \ + kernel_vector_end(); \ + return ret; \ + } \ + return __##prefix(a0, a1, n); \ +} + +static size_t riscv_v_memset_threshold = CONFIG_RISCV_ISA_V_MEMSET_THRESHOLD; +V_OPT_TEMPLATE3(memset, void *, void*, int) +static size_t riscv_v_memcpy_threshold = CONFIG_RISCV_ISA_V_MEMCPY_THRESHOLD; +V_OPT_TEMPLATE3(memcpy, void *, void*, const void *) +static size_t riscv_v_memmove_threshold = CONFIG_RISCV_ISA_V_MEMMOVE_THRESHOLD; +V_OPT_TEMPLATE3(memmove, void *, void*, const void *)
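For readers tracing the macro above: expanded by hand, V_OPT_TEMPLATE3(memcpy, void *, void*, const void *) generates roughly the function below. This expansion is shown only as an illustration of what the preprocessor emits; the patch itself carries the macro form.

	void *memcpy(void *a0, const void *a1, size_t n)
	{
		void *ret;

		if (has_vector() && may_use_simd() &&
		    n > riscv_v_memcpy_threshold) {
			kernel_vector_begin();
			ret = __asm_memcpy_vector(a0, a1, n);
			kernel_vector_end();
			return ret;
		}
		return __memcpy(a0, a1, n);	/* scalar fallback */
	}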
From patchwork Sat Dec 23 04:29:11 2023
From: Andy Chiu
To: linux-riscv@lists.infradead.org, palmer@dabbelt.com
Subject: [v8, 07/10] riscv: vector: do not pass task_struct into riscv_v_vstate_{save,restore}()
Date: Sat, 23 Dec 2023 04:29:11 +0000
Message-Id: <20231223042914.18599-8-andy.chiu@sifive.com>
In-Reply-To: <20231223042914.18599-1-andy.chiu@sifive.com>

riscv_v_vstate_{save,restore}() can operate only on the knowledge of struct __riscv_v_ext_state and struct pt_regs. Let the caller decide which vstate should be passed into these functions. Meanwhile, kernel-mode Vector is going to introduce another vstate, so this change also makes the functions reusable.
Signed-off-by: Andy Chiu Acked-by: Conor Dooley --- Changelog v6: - re-added for v6 Changelog v3: - save V context after get_cpu_vector_context Changelog v2: - fix build fail that got caught on this patch (Conor) --- arch/riscv/include/asm/entry-common.h | 2 +- arch/riscv/include/asm/vector.h | 14 +++++--------- arch/riscv/kernel/kernel_mode_vector.c | 2 +- arch/riscv/kernel/ptrace.c | 2 +- arch/riscv/kernel/signal.c | 2 +- 5 files changed, 9 insertions(+), 13 deletions(-) diff --git a/arch/riscv/include/asm/entry-common.h b/arch/riscv/include/asm/entry-common.h index 6361a8488642..08fe8cdbf33e 100644 --- a/arch/riscv/include/asm/entry-common.h +++ b/arch/riscv/include/asm/entry-common.h @@ -16,7 +16,7 @@ static inline void arch_exit_to_user_mode_prepare(struct pt_regs *regs, * We are already called with irq disabled, so go without * keeping track of vector_context_busy. */ - riscv_v_vstate_restore(current, regs); + riscv_v_vstate_restore(&current->thread.vstate, regs); } } diff --git a/arch/riscv/include/asm/vector.h b/arch/riscv/include/asm/vector.h index e706613aae2c..c5a83c277583 100644 --- a/arch/riscv/include/asm/vector.h +++ b/arch/riscv/include/asm/vector.h @@ -183,23 +183,19 @@ static inline void riscv_v_vstate_discard(struct pt_regs *regs) __riscv_v_vstate_dirty(regs); } -static inline void riscv_v_vstate_save(struct task_struct *task, +static inline void riscv_v_vstate_save(struct __riscv_v_ext_state *vstate, struct pt_regs *regs) { if ((regs->status & SR_VS) == SR_VS_DIRTY) { - struct __riscv_v_ext_state *vstate = &task->thread.vstate; - __riscv_v_vstate_save(vstate, vstate->datap); __riscv_v_vstate_clean(regs); } } -static inline void riscv_v_vstate_restore(struct task_struct *task, +static inline void riscv_v_vstate_restore(struct __riscv_v_ext_state *vstate, struct pt_regs *regs) { if ((regs->status & SR_VS) != SR_VS_OFF) { - struct __riscv_v_ext_state *vstate = &task->thread.vstate; - __riscv_v_vstate_restore(vstate, vstate->datap); __riscv_v_vstate_clean(regs); } @@ -220,7 +216,7 @@ static inline void __switch_to_vector(struct task_struct *prev, struct pt_regs *regs; regs = task_pt_regs(prev); - riscv_v_vstate_save(prev, regs); + riscv_v_vstate_save(&prev->thread.vstate, regs); riscv_v_vstate_set_restore(next, task_pt_regs(next)); } @@ -238,8 +234,8 @@ static inline bool riscv_v_vstate_query(struct pt_regs *regs) { return false; } static inline bool riscv_v_vstate_ctrl_user_allowed(void) { return false; } #define riscv_v_vsize (0) #define riscv_v_vstate_discard(regs) do {} while (0) -#define riscv_v_vstate_save(task, regs) do {} while (0) -#define riscv_v_vstate_restore(task, regs) do {} while (0) +#define riscv_v_vstate_save(vstate, regs) do {} while (0) +#define riscv_v_vstate_restore(vstate, regs) do {} while (0) #define __switch_to_vector(__prev, __next) do {} while (0) #define riscv_v_vstate_off(regs) do {} while (0) #define riscv_v_vstate_on(regs) do {} while (0) diff --git a/arch/riscv/kernel/kernel_mode_vector.c b/arch/riscv/kernel/kernel_mode_vector.c index 63814e780c28..7350e975e094 100644 --- a/arch/riscv/kernel/kernel_mode_vector.c +++ b/arch/riscv/kernel/kernel_mode_vector.c @@ -76,7 +76,7 @@ void kernel_vector_begin(void) get_cpu_vector_context(); - riscv_v_vstate_save(current, task_pt_regs(current)); + riscv_v_vstate_save(&current->thread.vstate, task_pt_regs(current)); riscv_v_enable(); } diff --git a/arch/riscv/kernel/ptrace.c b/arch/riscv/kernel/ptrace.c index 7b93bcbdf9fa..e8515aa9d80b 100644 --- a/arch/riscv/kernel/ptrace.c +++ b/arch/riscv/kernel/ptrace.c @@
-101,7 +101,7 @@ static int riscv_vr_get(struct task_struct *target, */ if (target == current) { get_cpu_vector_context(); - riscv_v_vstate_save(current, task_pt_regs(current)); + riscv_v_vstate_save(&current->thread.vstate, task_pt_regs(current)); put_cpu_vector_context(); } diff --git a/arch/riscv/kernel/signal.c b/arch/riscv/kernel/signal.c index aca4a12c8416..5d69f4db9e8f 100644 --- a/arch/riscv/kernel/signal.c +++ b/arch/riscv/kernel/signal.c @@ -87,7 +87,7 @@ static long save_v_state(struct pt_regs *regs, void __user **sc_vec) WARN_ON(unlikely(!IS_ALIGNED((unsigned long)datap, 16))); get_cpu_vector_context(); - riscv_v_vstate_save(current, regs); + riscv_v_vstate_save(&current->thread.vstate, regs); put_cpu_vector_context(); /* Copy everything of vstate but datap. */
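The practical effect of the new signature, illustrated below, is that the same helper can now save any __riscv_v_ext_state, not only the user state embedded in task_struct. Note that kernel_vstate does not exist yet at this point in the series (it is added by patch 10), so the last call is a forward-looking sketch only.

	/* before: hard-wired to the user V state of @task */
	riscv_v_vstate_save(current, regs);

	/* after: the caller picks the context explicitly */
	riscv_v_vstate_save(&current->thread.vstate, regs);
	riscv_v_vstate_save(&current->thread.kernel_vstate, regs);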
From patchwork Sat Dec 23 04:29:12 2023
From: Andy Chiu
To: linux-riscv@lists.infradead.org, palmer@dabbelt.com
Subject: [v8, 08/10] riscv: vector: use a mask to write vstate_ctrl
Date: Sat, 23 Dec 2023 04:29:12 +0000
Message-Id: <20231223042914.18599-9-andy.chiu@sifive.com>
In-Reply-To: <20231223042914.18599-1-andy.chiu@sifive.com>

riscv_v_ctrl_set() should only touch bits within PR_RISCV_V_VSTATE_CTRL_MASK. So, apply the mask when we actually set a task's vstate_ctrl.
Signed-off-by: Andy Chiu --- Changelog v6: - split out from v3 --- arch/riscv/kernel/vector.c | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/arch/riscv/kernel/vector.c b/arch/riscv/kernel/vector.c index 66e8c6ab09d2..c1f28bc89ec6 100644 --- a/arch/riscv/kernel/vector.c +++ b/arch/riscv/kernel/vector.c @@ -122,7 +122,8 @@ static inline void riscv_v_ctrl_set(struct task_struct *tsk, int cur, int nxt, ctrl |= VSTATE_CTRL_MAKE_NEXT(nxt); if (inherit) ctrl |= PR_RISCV_V_VSTATE_CTRL_INHERIT; - tsk->thread.vstate_ctrl = ctrl; + tsk->thread.vstate_ctrl &= ~PR_RISCV_V_VSTATE_CTRL_MASK; + tsk->thread.vstate_ctrl |= ctrl; } bool riscv_v_vstate_ctrl_user_allowed(void)
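To see why the read-modify-write matters, here is a worked example with made-up values; only PR_RISCV_V_VSTATE_CTRL_MASK is real, and the bits outside it are assumed to carry unrelated per-thread state.

	/* suppose the mask is 0x1f and vstate_ctrl currently holds 0xe3
	 * (0xe0 = hypothetical unrelated high bits, 0x03 = old control) */
	u32 old = 0xe3, ctrl = 0x0a;

	u32 clobbered = ctrl;			/* old code: 0x0a, high bits lost */
	u32 masked = (old & ~0x1f) | ctrl;	/* new code: 0xea, high bits kept */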
From patchwork Sat Dec 23 04:29:13 2023
From: Andy Chiu
To: linux-riscv@lists.infradead.org, palmer@dabbelt.com
Subject: [v8, 09/10] riscv: vector: use kmem_cache to manage vector context
Date: Sat, 23 Dec 2023 04:29:13 +0000
Message-Id: <20231223042914.18599-10-andy.chiu@sifive.com>
In-Reply-To: <20231223042914.18599-1-andy.chiu@sifive.com>

The allocation size of thread.vstate.datap is always riscv_v_vsize. So it is possible to use kmem_cache_* to manage the allocation. This gives users more information regarding the allocation of vector contexts via /proc/slabinfo. And it potentially reduces the latency of the first-use trap because of the allocation caches.
Signed-off-by: Andy Chiu --- Changelog v6: - new patch since v6 --- arch/riscv/include/asm/vector.h | 4 ++++ arch/riscv/kernel/process.c | 7 ++++++- arch/riscv/kernel/vector.c | 16 +++++++++++++++- 3 files changed, 25 insertions(+), 2 deletions(-) diff --git a/arch/riscv/include/asm/vector.h b/arch/riscv/include/asm/vector.h index c5a83c277583..0e6741dd9ef3 100644 --- a/arch/riscv/include/asm/vector.h +++ b/arch/riscv/include/asm/vector.h @@ -26,6 +26,8 @@ void kernel_vector_begin(void); void kernel_vector_end(void); void get_cpu_vector_context(void); void put_cpu_vector_context(void); +void riscv_v_thread_free(struct task_struct *tsk); +void __init riscv_v_setup_ctx_cache(void); static inline void riscv_v_ctx_cnt_add(u32 offset) { @@ -239,6 +241,8 @@ static inline bool riscv_v_vstate_ctrl_user_allowed(void) { return false; } #define __switch_to_vector(__prev, __next) do {} while (0) #define riscv_v_vstate_off(regs) do {} while (0) #define riscv_v_vstate_on(regs) do {} while (0) +#define riscv_v_thread_free(tsk) do {} while (0) +#define riscv_v_setup_ctx_cache() do {} while (0) #endif /* CONFIG_RISCV_ISA_V */ diff --git a/arch/riscv/kernel/process.c b/arch/riscv/kernel/process.c index 36993f408de4..862d59c3872e 100644 --- a/arch/riscv/kernel/process.c +++ b/arch/riscv/kernel/process.c @@ -179,7 +179,7 @@ void arch_release_task_struct(struct task_struct *tsk) { /* Free the vector context of datap. */ if (has_vector()) - kfree(tsk->thread.vstate.datap); + riscv_v_thread_free(tsk); } int arch_dup_task_struct(struct task_struct *dst, struct task_struct *src) @@ -228,3 +228,8 @@ int copy_thread(struct task_struct *p, const struct kernel_clone_args *args) p->thread.sp = (unsigned long)childregs; /* kernel sp */ return 0; } + +void __init arch_task_cache_init(void) +{ + riscv_v_setup_ctx_cache(); +} diff --git a/arch/riscv/kernel/vector.c b/arch/riscv/kernel/vector.c index 66e8c6ab09d2..1fe140e34557 100644 --- a/arch/riscv/kernel/vector.c +++ b/arch/riscv/kernel/vector.c @@ -21,6 +21,7 @@ #include static bool riscv_v_implicit_uacc = IS_ENABLED(CONFIG_RISCV_ISA_V_DEFAULT_ENABLE); +static struct kmem_cache *riscv_v_user_cachep; unsigned long riscv_v_vsize __read_mostly; EXPORT_SYMBOL_GPL(riscv_v_vsize); @@ -47,6 +48,13 @@ int riscv_v_setup_vsize(void) return 0; } +void __init riscv_v_setup_ctx_cache(void) +{ + riscv_v_user_cachep = kmem_cache_create_usercopy("riscv_vector_ctx", + riscv_v_vsize, 16, SLAB_PANIC, + 0, riscv_v_vsize, NULL); +} + static bool insn_is_vector(u32 insn_buf) { u32 opcode = insn_buf & __INSN_OPCODE_MASK; @@ -84,7 +92,7 @@ static int riscv_v_thread_zalloc(void) { void *datap; - datap = kzalloc(riscv_v_vsize, GFP_KERNEL); + datap = kmem_cache_zalloc(riscv_v_user_cachep, GFP_KERNEL); if (!datap) return -ENOMEM; @@ -94,6 +102,12 @@ static int riscv_v_thread_zalloc(void) return 0; } +void riscv_v_thread_free(struct task_struct *tsk) +{ + if (tsk->thread.vstate.datap) + kmem_cache_free(riscv_v_user_cachep, tsk->thread.vstate.datap); +} + #define VSTATE_CTRL_GET_CUR(x) ((x) & PR_RISCV_V_VSTATE_CTRL_CUR_MASK) #define VSTATE_CTRL_GET_NEXT(x) (((x) & PR_RISCV_V_VSTATE_CTRL_NEXT_MASK) >> 2) #define VSTATE_CTRL_MAKE_NEXT(x) (((x) << 2) & PR_RISCV_V_VSTATE_CTRL_NEXT_MASK)
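For reference, the kmem_cache_create_usercopy() call above whitelists the whole object (useroffset 0, usersize riscv_v_vsize) for copy_to_user()/copy_from_user(), which the ptrace and signal paths rely on under hardened usercopy. A minimal sketch of the same pattern, with a hypothetical cache name:

	static struct kmem_cache *demo_cachep;

	demo_cachep = kmem_cache_create_usercopy("demo_ctx",	/* name shown in /proc/slabinfo */
						 riscv_v_vsize, 16, SLAB_PANIC,
						 0, riscv_v_vsize, NULL);

	void *datap = kmem_cache_zalloc(demo_cachep, GFP_KERNEL);
	/* ... use datap as the per-thread V register save area ... */
	kmem_cache_free(demo_cachep, datap);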
From patchwork Sat Dec 23 04:29:14 2023
From: Andy Chiu
To: linux-riscv@lists.infradead.org, palmer@dabbelt.com
Subject: [v8, 10/10] riscv: vector: allow kernel-mode Vector with preemption
Date: Sat, 23 Dec 2023 04:29:14 +0000
Message-Id: <20231223042914.18599-11-andy.chiu@sifive.com>
In-Reply-To: <20231223042914.18599-1-andy.chiu@sifive.com>

Add kernel_vstate to keep track of kernel-mode Vector registers when a trap-introduced context switch happens. Also, provide riscv_v_flags to let the context save/restore routines track context status. Context tracking happens whenever the core starts its in-kernel Vector execution. An active (dirty) kernel task's V context is saved to memory whenever a trap-introduced context switch happens, or when a softirq that happens to nest on top of it uses Vector. Context restoring happens when execution transfers back to the original kernel context where it first enabled preempt_v.

Also, provide a config, CONFIG_RISCV_ISA_V_PREEMPTIVE, to give users an option to disable preemptible kernel-mode Vector at build time. Users with constrained memory may want to disable this config, as preemptible kernel-mode Vector needs extra space to track each thread's kernel-mode V context. Users may also want to disable it if all kernel-mode Vector code is time-sensitive and cannot tolerate context-switch overhead.

Signed-off-by: Andy Chiu --- Changelog v8: - fix -Wmissing-prototypes for functions with asmlinkage Changelog v6: - re-write patch to handle context nesting for softirqs - drop thread flag and track context instead in riscv_v_flags - refine some asm code and constrain it into C functions - preallocate v context for preempt_v - Return non-zero in riscv_v_start_kernel_context with non-preemptible kernel-mode Vector Changelog v4: - dropped from v4 Changelog v3: - Guard vstate_save with {get,set}_cpu_vector_context - Add comments on preventions of nesting V contexts - remove warnings in context switch when trap's reg is not present (Conor) - refactor code (Björn) Changelog v2: - fix build fail when compiling without RISCV_ISA_V (Conor) - 's/TIF_RISCV_V_KMV/TIF_RISCV_V_KERNEL_MODE' and add comment (Conor) - merge Kconfig patch into this one (Conor).
- 's/CONFIG_RISCV_ISA_V_PREEMPTIVE_KMV/CONFIG_RISCV_ISA_V_PREEMPTIVE/' (Conor) - fix some typos (Conor) - enclose assembly with RISCV_ISA_V_PREEMPTIVE. - change riscv_v_vstate_ctrl_config_kmv() to kernel_vector_allow_preemption() for better understanding. (Conor) - 's/riscv_v_kmv_preempitble/kernel_vector_preemptible/' --- arch/riscv/Kconfig | 14 +++ arch/riscv/include/asm/asm-prototypes.h | 5 + arch/riscv/include/asm/processor.h | 26 ++++- arch/riscv/include/asm/simd.h | 26 ++++- arch/riscv/include/asm/vector.h | 57 ++++++++++- arch/riscv/kernel/entry.S | 8 ++ arch/riscv/kernel/kernel_mode_vector.c | 124 +++++++++++++++++++++++- arch/riscv/kernel/process.c | 3 + arch/riscv/kernel/vector.c | 31 ++++-- 9 files changed, 273 insertions(+), 21 deletions(-) diff --git a/arch/riscv/Kconfig b/arch/riscv/Kconfig index cba53dcc2ae0..70603c486593 100644 --- a/arch/riscv/Kconfig +++ b/arch/riscv/Kconfig @@ -557,6 +557,20 @@ config RISCV_ISA_V_MEMMOVE_THRESHOLD Prefer using vectorized memmove() when the workload size exceeds this value. +config RISCV_ISA_V_PREEMPTIVE + bool "Run kernel-mode Vector with kernel preemption" + depends on PREEMPTION + depends on RISCV_ISA_V + default y + help + Usually, in-kernel SIMD routines are run with preemption disabled. + Functions which invoke long-running SIMD thus must yield the core's + vector unit to prevent blocking other tasks for too long. + + This config allows the kernel to run SIMD without explicitly disabling + preemption. Enabling this config will result in higher memory + consumption due to the allocation of per-task kernel Vector context. + config TOOLCHAIN_HAS_ZBB bool default y diff --git a/arch/riscv/include/asm/asm-prototypes.h b/arch/riscv/include/asm/asm-prototypes.h index be438932f321..cd627ec289f1 100644 --- a/arch/riscv/include/asm/asm-prototypes.h +++ b/arch/riscv/include/asm/asm-prototypes.h @@ -30,6 +30,11 @@ void xor_regs_5_(unsigned long bytes, unsigned long *__restrict p1, const unsigned long *__restrict p4, const unsigned long *__restrict p5); +#ifdef CONFIG_RISCV_ISA_V_PREEMPTIVE +asmlinkage void riscv_v_context_nesting_start(struct pt_regs *regs); +asmlinkage void riscv_v_context_nesting_end(struct pt_regs *regs); +#endif /* CONFIG_RISCV_ISA_V_PREEMPTIVE */ + #endif /* CONFIG_RISCV_ISA_V */ #define DECLARE_DO_ERROR_INFO(name) asmlinkage void name(struct pt_regs *regs) diff --git a/arch/riscv/include/asm/processor.h b/arch/riscv/include/asm/processor.h index 15781e2232e0..4de9124bcf4f 100644 --- a/arch/riscv/include/asm/processor.h +++ b/arch/riscv/include/asm/processor.h @@ -81,11 +81,32 @@ struct pt_regs; * activation of this state disables the preemption. On a non-RT kernel, it * also disable bh. Currently only 0 and 1 are valid value for this field. * Other values are reserved for future uses. + * - bits 8-15 are used for tracking preemptible kernel-mode Vector, when + * RISCV_ISA_V_PREEMPTIVE is set. Calling kernel_vector_begin() does not + * disable preemption if the thread's kernel_vstate.datap is allocated. + * Instead, the kernel adds 1 to this field. Then the trap entry/exit code + * knows if we are entering/exiting the context that owns preempt_v. + * - 0: the task is not using preempt_v + * - 1: the task is actively using, and owns, preempt_v + * - >1: the task was using preempt_v, but then took a trap within. Thus, + * the task does not own preempt_v. Any use of Vector will have to save + * preempt_v, if dirty, and fall back to non-preemptible kernel-mode + * Vector.
+ * - bit 30: The in-kernel preempt_v context is saved, and requires to be + * restored when returning to the context that owns the preempt_v. + * - bit 31: The in-kernel preempt_v context is dirty, as signaled by the + * trap entry code. Any context switch out of the current task needs to + * save it to the task's in-kernel V context. Also, any trap nesting + * on top of preempt_v that requests to use V needs a save. */ -#define RISCV_KERNEL_MODE_V_MASK 0xff +#define RISCV_KERNEL_MODE_V_MASK 0x000000ff +#define RISCV_PREEMPT_V_MASK 0x0000ff00 -#define RISCV_KERNEL_MODE_V 0x1 +#define RISCV_KERNEL_MODE_V 0x00000001 +#define RISCV_PREEMPT_V 0x00000100 +#define RISCV_PREEMPT_V_DIRTY 0x80000000 +#define RISCV_PREEMPT_V_NEED_RESTORE 0x40000000 /* CPU-specific state of a task */ struct thread_struct { @@ -99,6 +120,7 @@ struct thread_struct { u32 vstate_ctrl; struct __riscv_v_ext_state vstate; unsigned long align_ctl; + struct __riscv_v_ext_state kernel_vstate; }; /* Whitelist the fstate from the task_struct for hardened usercopy */ diff --git a/arch/riscv/include/asm/simd.h b/arch/riscv/include/asm/simd.h index 2f1e95ccb03c..7daccdcbdee8 100644 --- a/arch/riscv/include/asm/simd.h +++ b/arch/riscv/include/asm/simd.h @@ -12,6 +12,7 @@ #include #include #include +#include #include @@ -28,12 +29,27 @@ static __must_check inline bool may_use_simd(void) /* * RISCV_KERNEL_MODE_V is only set while preemption is disabled, * and is clear whenever preemption is enabled. - * - * Kernel-mode Vector temporarily disables bh. So we must not return - * true on irq_disabled(). Otherwise we would fail the lockdep check - * calling local_bh_enable() */ - return !in_hardirq() && !in_nmi() && !irqs_disabled() && !(riscv_v_ctx_cnt() & RISCV_KERNEL_MODE_V_MASK); + if (in_hardirq() || in_nmi()) + return false; + + /* + * Nesting is achieved in preempt_v by spreading the control for + * preemptible and non-preemptible kernel-mode Vector into two fields. + * Always try to match with preempt_v if kernel V-context exists. Then, + * fall back to checking non-preempt_v if nesting happens, or if the + * config is not set. + */ + if (IS_ENABLED(CONFIG_RISCV_ISA_V_PREEMPTIVE) && current->thread.kernel_vstate.datap) { + if (!riscv_preempt_v_started(current)) + return true; + } + /* + * Non-preemptible kernel-mode Vector temporarily disables bh. So we + * must not return true on irqs_disabled(). Otherwise we would fail the + * lockdep check calling local_bh_enable() + */ + return !irqs_disabled() && !(riscv_v_ctx_cnt() & RISCV_KERNEL_MODE_V_MASK); } #else /* !
CONFIG_RISCV_ISA_V */ diff --git a/arch/riscv/include/asm/vector.h b/arch/riscv/include/asm/vector.h index 0e6741dd9ef3..542eaf9227c3 100644 --- a/arch/riscv/include/asm/vector.h +++ b/arch/riscv/include/asm/vector.h @@ -28,6 +28,7 @@ void get_cpu_vector_context(void); void put_cpu_vector_context(void); void riscv_v_thread_free(struct task_struct *tsk); void __init riscv_v_setup_ctx_cache(void); +void riscv_v_thread_alloc(struct task_struct *tsk); static inline void riscv_v_ctx_cnt_add(u32 offset) { @@ -212,14 +213,63 @@ static inline void riscv_v_vstate_set_restore(struct task_struct *task, } } +#ifdef CONFIG_RISCV_ISA_V_PREEMPTIVE +static inline bool riscv_preempt_v_dirty(struct task_struct *task) +{ + u32 val = READ_ONCE(task->thread.riscv_v_flags); + + return !!(val & RISCV_PREEMPT_V_DIRTY); +} + +static inline bool riscv_preempt_v_restore(struct task_struct *task) +{ + u32 val = READ_ONCE(task->thread.riscv_v_flags); + + return !!(val & RISCV_PREEMPT_V_NEED_RESTORE); +} + +static inline void riscv_preempt_v_clear_dirty(struct task_struct *task) +{ + barrier(); + task->thread.riscv_v_flags &= ~RISCV_PREEMPT_V_DIRTY; +} + +static inline void riscv_preempt_v_set_restore(struct task_struct *task) +{ + barrier(); + task->thread.riscv_v_flags |= RISCV_PREEMPT_V_NEED_RESTORE; +} + +static inline bool riscv_preempt_v_started(struct task_struct *task) +{ + return !!(READ_ONCE(task->thread.riscv_v_flags) & RISCV_PREEMPT_V_MASK); +} +#else /* !CONFIG_RISCV_ISA_V_PREEMPTIVE */ +static inline bool riscv_preempt_v_dirty(struct task_struct *task) { return false; } +static inline bool riscv_preempt_v_restore(struct task_struct *task) { return false; } +static inline bool riscv_preempt_v_started(struct task_struct *task) { return false; } +#define riscv_preempt_v_clear_dirty(tsk) do {} while (0) +#define riscv_preempt_v_set_restore(tsk) do {} while (0) +#endif /* CONFIG_RISCV_ISA_V_PREEMPTIVE */ + static inline void __switch_to_vector(struct task_struct *prev, struct task_struct *next) { struct pt_regs *regs; - regs = task_pt_regs(prev); - riscv_v_vstate_save(&prev->thread.vstate, regs); - riscv_v_vstate_set_restore(next, task_pt_regs(next)); + if (riscv_preempt_v_dirty(prev)) { + __riscv_v_vstate_save(&prev->thread.kernel_vstate, + prev->thread.kernel_vstate.datap); + riscv_preempt_v_clear_dirty(prev); + } else { + regs = task_pt_regs(prev); + riscv_v_vstate_save(&prev->thread.vstate, regs); + } + + if (riscv_preempt_v_started(next)) + riscv_preempt_v_set_restore(next); + else + riscv_v_vstate_set_restore(next, task_pt_regs(next)); } void riscv_v_vstate_ctrl_init(struct task_struct *tsk); @@ -243,6 +293,7 @@ static inline bool riscv_v_vstate_ctrl_user_allowed(void) { return false; } #define riscv_v_vstate_on(regs) do {} while (0) #define riscv_v_thread_free(tsk) do {} while (0) #define riscv_v_setup_ctx_cache() do {} while (0) +#define riscv_v_thread_alloc(tsk) do {} while (0) #endif /* CONFIG_RISCV_ISA_V */ diff --git a/arch/riscv/kernel/entry.S b/arch/riscv/kernel/entry.S index 54ca4564a926..9d1a305d5508 100644 --- a/arch/riscv/kernel/entry.S +++ b/arch/riscv/kernel/entry.S @@ -83,6 +83,10 @@ SYM_CODE_START(handle_exception) /* Load the kernel shadow call stack pointer if coming from userspace */ scs_load_current_if_task_changed s5 +#ifdef CONFIG_RISCV_ISA_V_PREEMPTIVE + move a0, sp + call riscv_v_context_nesting_start +#endif move a0, sp /* pt_regs */ la ra, ret_from_exception @@ -138,6 +142,10 @@ SYM_CODE_START_NOALIGN(ret_from_exception) */ csrw CSR_SCRATCH, tp 1: +#ifdef 
CONFIG_RISCV_ISA_V_PREEMPTIVE + move a0, sp + call riscv_v_context_nesting_end +#endif REG_L a0, PT_STATUS(sp) /* * The current load reservation is effectively part of the processor's diff --git a/arch/riscv/kernel/kernel_mode_vector.c b/arch/riscv/kernel/kernel_mode_vector.c index 7350e975e094..75d6b00842b3 100644 --- a/arch/riscv/kernel/kernel_mode_vector.c +++ b/arch/riscv/kernel/kernel_mode_vector.c @@ -14,6 +14,9 @@ #include #include #include +#ifdef CONFIG_RISCV_ISA_V_PREEMPTIVE +#include +#endif /* * Claim ownership of the CPU vector context for use by the calling context. @@ -54,6 +57,111 @@ void put_cpu_vector_context(void) preempt_enable(); } +#ifdef CONFIG_RISCV_ISA_V_PREEMPTIVE +static inline void riscv_preempt_v_set_dirty(void) +{ + current->thread.riscv_v_flags |= RISCV_PREEMPT_V_DIRTY; +} + +static inline void riscv_preempt_v_reset_flags(void) +{ + current->thread.riscv_v_flags &= ~(RISCV_PREEMPT_V_DIRTY | RISCV_PREEMPT_V_NEED_RESTORE); +} + +static inline void riscv_preempt_v_depth_inc(void) +{ + riscv_v_ctx_cnt_add(RISCV_PREEMPT_V); +} + +static inline void riscv_preempt_v_depth_dec(void) +{ + riscv_v_ctx_cnt_sub(RISCV_PREEMPT_V); +} + +static inline u32 riscv_preempt_v_get_depth(void) +{ + return riscv_v_ctx_cnt() & RISCV_PREEMPT_V_MASK; +} + +#define PREEMPT_V_FIRST_DEPTH RISCV_PREEMPT_V +static int riscv_v_stop_kernel_context(void) +{ + if (riscv_preempt_v_get_depth() != PREEMPT_V_FIRST_DEPTH) + return 1; + + riscv_preempt_v_depth_dec(); + return 0; +} + +static int riscv_v_start_kernel_context(bool *is_nested) +{ + struct __riscv_v_ext_state *vstate = &current->thread.kernel_vstate; + + if (!vstate->datap) + return -ENOENT; + + if (riscv_preempt_v_started(current)) { + WARN_ON(riscv_preempt_v_get_depth() == PREEMPT_V_FIRST_DEPTH); + if (riscv_preempt_v_dirty(current)) { + get_cpu_vector_context(); + __riscv_v_vstate_save(vstate, vstate->datap); + riscv_preempt_v_clear_dirty(current); + put_cpu_vector_context(); + } + get_cpu_vector_context(); + riscv_preempt_v_set_restore(current); + *is_nested = true; + return 0; + } + + get_cpu_vector_context(); + riscv_v_vstate_save(&current->thread.vstate, task_pt_regs(current)); + put_cpu_vector_context(); + + riscv_preempt_v_depth_inc(); + return 0; +} + +/* low-level V context handling code, called with irq disabled */ +asmlinkage void riscv_v_context_nesting_start(struct pt_regs *regs) +{ + int depth; + + if (!riscv_preempt_v_started(current)) + return; + + depth = riscv_preempt_v_get_depth(); + if (depth == PREEMPT_V_FIRST_DEPTH && (regs->status & SR_VS) == SR_VS_DIRTY) + riscv_preempt_v_set_dirty(); + + riscv_preempt_v_depth_inc(); +} + +asmlinkage void riscv_v_context_nesting_end(struct pt_regs *regs) +{ + struct __riscv_v_ext_state *vstate = &current->thread.kernel_vstate; + u32 depth; + + lockdep_assert_irqs_disabled(); + + if (!riscv_preempt_v_started(current)) + return; + + riscv_preempt_v_depth_dec(); + depth = riscv_preempt_v_get_depth(); + if (depth == PREEMPT_V_FIRST_DEPTH) { + if (riscv_preempt_v_restore(current)) { + __riscv_v_vstate_restore(vstate, vstate->datap); + __riscv_v_vstate_clean(regs); + } + riscv_preempt_v_reset_flags(); + } +} +#else +#define riscv_v_start_kernel_context(nested) (-ENOENT) +#define riscv_v_stop_kernel_context() (-ENOENT) +#endif /* CONFIG_RISCV_ISA_V_PREEMPTIVE */ /* * kernel_vector_begin(): obtain the CPU vector registers for use by the calling * context @@ -69,14 +177,20 @@ void put_cpu_vector_context(void) */ void kernel_vector_begin(void) { + bool nested = false; + if (WARN_ON(!has_vector()))
return; BUG_ON(!may_use_simd()); - get_cpu_vector_context(); + if (riscv_v_start_kernel_context(&nested)) { + get_cpu_vector_context(); + riscv_v_vstate_save(&current->thread.vstate, task_pt_regs(current)); + } - riscv_v_vstate_save(&current->thread.vstate, task_pt_regs(current)); + if (!nested) + riscv_v_vstate_set_restore(current, task_pt_regs(current)); riscv_v_enable(); } @@ -96,10 +210,10 @@ void kernel_vector_end(void) if (WARN_ON(!has_vector())) return; - riscv_v_vstate_set_restore(current, task_pt_regs(current)); - riscv_v_disable(); - put_cpu_vector_context(); + if (riscv_v_stop_kernel_context()) { /* we should call this early */ + put_cpu_vector_context(); + } } EXPORT_SYMBOL_GPL(kernel_vector_end); diff --git a/arch/riscv/kernel/process.c b/arch/riscv/kernel/process.c index 862d59c3872e..92922dbd5b5c 100644 --- a/arch/riscv/kernel/process.c +++ b/arch/riscv/kernel/process.c @@ -188,6 +188,7 @@ int arch_dup_task_struct(struct task_struct *dst, struct task_struct *src) *dst = *src; /* clear entire V context, including datap for a new task */ memset(&dst->thread.vstate, 0, sizeof(struct __riscv_v_ext_state)); + memset(&dst->thread.kernel_vstate, 0, sizeof(struct __riscv_v_ext_state)); clear_tsk_thread_flag(dst, TIF_RISCV_V_DEFER_RESTORE); return 0; @@ -224,6 +225,8 @@ int copy_thread(struct task_struct *p, const struct kernel_clone_args *args) p->thread.s[0] = 0; } p->thread.riscv_v_flags = 0; + if (has_vector()) + riscv_v_thread_alloc(p); p->thread.ra = (unsigned long)ret_from_fork; p->thread.sp = (unsigned long)childregs; /* kernel sp */ return 0; diff --git a/arch/riscv/kernel/vector.c b/arch/riscv/kernel/vector.c index 1fe140e34557..f9769703fd39 100644 --- a/arch/riscv/kernel/vector.c +++ b/arch/riscv/kernel/vector.c @@ -22,6 +22,9 @@ static bool riscv_v_implicit_uacc = IS_ENABLED(CONFIG_RISCV_ISA_V_DEFAULT_ENABLE); static struct kmem_cache *riscv_v_user_cachep; +#ifdef CONFIG_RISCV_ISA_V_PREEMPTIVE +static struct kmem_cache *riscv_v_kernel_cachep; +#endif unsigned long riscv_v_vsize __read_mostly; EXPORT_SYMBOL_GPL(riscv_v_vsize); @@ -53,6 +56,11 @@ void __init riscv_v_setup_ctx_cache(void) riscv_v_user_cachep = kmem_cache_create_usercopy("riscv_vector_ctx", riscv_v_vsize, 16, SLAB_PANIC, 0, riscv_v_vsize, NULL); +#ifdef CONFIG_RISCV_ISA_V_PREEMPTIVE + riscv_v_kernel_cachep = kmem_cache_create("riscv_vector_kctx", + riscv_v_vsize, 16, + SLAB_PANIC, NULL); +#endif } static bool insn_is_vector(u32 insn_buf) @@ -88,24 +96,35 @@ return false; } -static int riscv_v_thread_zalloc(void) +static int riscv_v_thread_zalloc(struct kmem_cache *cache, + struct __riscv_v_ext_state *ctx) { void *datap; - datap = kmem_cache_zalloc(riscv_v_user_cachep, GFP_KERNEL); + datap = kmem_cache_zalloc(cache, GFP_KERNEL); if (!datap) return -ENOMEM; - current->thread.vstate.datap = datap; - memset(&current->thread.vstate, 0, offsetof(struct __riscv_v_ext_state, - datap)); + ctx->datap = datap; + memset(ctx, 0, offsetof(struct __riscv_v_ext_state, datap)); return 0; } +void riscv_v_thread_alloc(struct task_struct *tsk) +{ +#ifdef CONFIG_RISCV_ISA_V_PREEMPTIVE + riscv_v_thread_zalloc(riscv_v_kernel_cachep, &tsk->thread.kernel_vstate); +#endif +} + void riscv_v_thread_free(struct task_struct *tsk) { if (tsk->thread.vstate.datap) kmem_cache_free(riscv_v_user_cachep, tsk->thread.vstate.datap); +#ifdef CONFIG_RISCV_ISA_V_PREEMPTIVE + if (tsk->thread.kernel_vstate.datap) + kmem_cache_free(riscv_v_kernel_cachep, tsk->thread.kernel_vstate.datap); +#endif } #define VSTATE_CTRL_GET_CUR(x) ((x) &
PR_RISCV_V_VSTATE_CTRL_CUR_MASK) @@ -177,7 +196,7 @@ bool riscv_v_first_use_handler(struct pt_regs *regs) * context where VS has been off. So, try to allocate the user's V * context and resume execution. */ - if (riscv_v_thread_zalloc()) { + if (riscv_v_thread_zalloc(riscv_v_user_cachep, &current->thread.vstate)) { force_sig(SIGBUS); return true; }
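Taken together, a kernel-mode Vector user needs no new API under preempt_v: whether preemption stays enabled is decided inside kernel_vector_begin() by whether kernel_vstate.datap was preallocated for the task. A hedged sketch of a long-running caller, with hypothetical demo_* names:

	static void demo_long_simd_work(u8 *buf, size_t len)
	{
		kernel_vector_begin();	/* no preempt_disable() under preempt_v */
		while (len) {
			size_t chunk = min_t(size_t, len, SZ_4K);

			demo_vector_op(buf, chunk);	/* hypothetical V routine */
			buf += chunk;
			len -= chunk;
			/* may be preempted here; the trap entry/exit hooks
			 * track the preempt_v depth and save/restore the
			 * task's kernel_vstate as needed */
		}
		kernel_vector_end();
	}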