From patchwork Thu Oct 19 15:45:52 2023
X-Patchwork-Submitter: Andy Chiu <andy.chiu@sifive.com>
X-Patchwork-Id: 13429476
From: Andy Chiu <andy.chiu@sifive.com>
To: linux-riscv@lists.infradead.org, palmer@dabbelt.com
Cc: paul.walmsley@sifive.com, greentime.hu@sifive.com, guoren@linux.alibaba.com,
    bjorn@kernel.org, peterz@infradead.org, tglx@linutronix.de,
    Andy Chiu, Albert Ou, Heiko Stuebner, Vincent Chen, Conor Dooley,
    Charlie Jenkins, Guo Ren, Jisheng Zhang, Björn Töpel, Andrew Jones,
    Ley Foon Tan, Sia Jee Heng, Han-Kuan Chen, Andrew Bresticker,
    Fangrui Song, Nick Knight
Subject: [v3, 5/5] riscv: vector: allow kernel-mode Vector with preemption
Date: Thu, 19 Oct 2023 15:45:52 +0000
Message-Id: <20231019154552.23351-6-andy.chiu@sifive.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20231019154552.23351-1-andy.chiu@sifive.com>
References: <20231019154552.23351-1-andy.chiu@sifive.com>

Add kernel_vstate to keep track of kernel-mode Vector registers when a
trap-introduced context switch happens. Also, provide trap_pt_regs so the
context save/restore routines can reference status.VS at the point where
the trap took place.

The thread flag TIF_RISCV_V_KERNEL_MODE indicates whether a task is
running kernel-mode Vector with preemption enabled. When the flag is set,
the context switch routines save V-regs into kernel_vstate and restore
them from kernel_vstate immediately.

Apart from a task's preemption status, the ability to run preemptible
kernel-mode Vector is also controlled by the RISCV_V_VSTATE_CTRL_PREEMPTIBLE
mask in the task's thread.vstate_ctrl. This bit is cleared whenever a trap
takes place in kernel mode while executing preemptible Vector code.

Also, provide a config option, CONFIG_RISCV_ISA_V_PREEMPTIVE, so users can
disable preemptible kernel-mode Vector at build time. Users with
constrained memory may want to disable it, as preemptible kernel-mode
Vector needs extra space to track each thread's kernel-mode V-context.
Users may also want to disable it if all of their kernel-mode Vector code
is time-sensitive and cannot tolerate the context switch overhead.
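For example, a (hypothetical) kernel-mode Vector user under this scheme
could look like the sketch below; do_vector_copy() and its memcpy()
fallback are illustrative only, while may_use_simd(), kernel_vector_begin()
and kernel_vector_end() are the interfaces touched by this series:

	/*
	 * Hypothetical illustration, not part of this patch. With
	 * CONFIG_RISCV_ISA_V_PREEMPTIVE=y and RISCV_V_VSTATE_CTRL_PREEMPTIBLE
	 * set, the region between kernel_vector_begin() and
	 * kernel_vector_end() may now be preempted; the live V-regs are then
	 * saved into thread.kernel_vstate on a context switch.
	 */
	#include <linux/string.h>
	#include <asm/simd.h>
	#include <asm/vector.h>

	static void do_vector_copy(void *dst, const void *src, size_t len)
	{
		if (!may_use_simd()) {
			/* hardirq/NMI, or nested on top of a preemptible V context */
			memcpy(dst, src, len);
			return;
		}

		kernel_vector_begin();
		/* ... Vector-accelerated copy loop would go here ... */
		kernel_vector_end();
	}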
Signed-off-by: Andy Chiu <andy.chiu@sifive.com>
---
Changelog v3:
 - Guard vstate_save with {get,set}_cpu_vector_context
 - Add comments on prevention of nesting V contexts
 - remove warnings in context switch when trap's reg is not present (Conor)
 - refactor code (Björn)
Changelog v2:
 - fix build fail when compiling without RISCV_ISA_V (Conor)
 - 's/TIF_RISCV_V_KMV/TIF_RISCV_V_KERNEL_MODE' and add comment (Conor)
 - merge Kconfig patch into this one (Conor).
 - 's/CONFIG_RISCV_ISA_V_PREEMPTIVE_KMV/CONFIG_RISCV_ISA_V_PREEMPTIVE/' (Conor)
 - fix some typos (Conor)
 - enclose assembly with RISCV_ISA_V_PREEMPTIVE.
 - change riscv_v_vstate_ctrl_config_kmv() to kernel_vector_allow_preemption() for better understanding. (Conor)
 - 's/riscv_v_kmv_preempitble/kernel_vector_preemptible/'
---
 arch/riscv/Kconfig                     | 10 +++++
 arch/riscv/include/asm/processor.h     |  2 +
 arch/riscv/include/asm/simd.h          |  9 +++-
 arch/riscv/include/asm/thread_info.h   |  4 ++
 arch/riscv/include/asm/vector.h        | 25 +++++++++--
 arch/riscv/kernel/asm-offsets.c        |  2 +
 arch/riscv/kernel/entry.S              | 49 ++++++++++++++++++++++
 arch/riscv/kernel/kernel_mode_vector.c | 57 ++++++++++++++++++++++++--
 arch/riscv/kernel/process.c            |  8 +++-
 arch/riscv/kernel/vector.c             |  3 +-
 10 files changed, 159 insertions(+), 10 deletions(-)

diff --git a/arch/riscv/Kconfig b/arch/riscv/Kconfig
index d607ab0f7c6d..dc51164b8fd4 100644
--- a/arch/riscv/Kconfig
+++ b/arch/riscv/Kconfig
@@ -520,6 +520,16 @@ config RISCV_ISA_V_DEFAULT_ENABLE
 
 	  If you don't know what to do here, say Y.
 
+config RISCV_ISA_V_PREEMPTIVE
+	bool "Run kernel-mode Vector with kernel preemption"
+	depends on PREEMPTION
+	depends on RISCV_ISA_V
+	default y
+	help
+	  Ordinarily the kernel disables preemption before running in-kernel
+	  Vector code. This config frees the kernel from disabling preemption
+	  by adding memory on demand for tracking kernel's V-context.
+
 config TOOLCHAIN_HAS_ZBB
 	bool
 	default y

diff --git a/arch/riscv/include/asm/processor.h b/arch/riscv/include/asm/processor.h
index 3e23e1786d05..f9b85e37e624 100644
--- a/arch/riscv/include/asm/processor.h
+++ b/arch/riscv/include/asm/processor.h
@@ -82,6 +82,8 @@ struct thread_struct {
 	unsigned long bad_cause;
 	unsigned long vstate_ctrl;
 	struct __riscv_v_ext_state vstate;
+	struct pt_regs *trap_pt_regs;
+	struct __riscv_v_ext_state kernel_vstate;
 };
 
 /* Whitelist the fstate from the task_struct for hardened usercopy */

diff --git a/arch/riscv/include/asm/simd.h b/arch/riscv/include/asm/simd.h
index 0c5ba555b460..93d9015b4751 100644
--- a/arch/riscv/include/asm/simd.h
+++ b/arch/riscv/include/asm/simd.h
@@ -12,6 +12,7 @@
 #include 
 #include 
 #include 
+#include 
 
 #ifdef CONFIG_RISCV_ISA_V
 
@@ -33,8 +34,14 @@ static __must_check inline bool may_use_simd(void)
 	 * cannot change under our feet -- if it's set we cannot be
 	 * migrated, and if it's clear we cannot be migrated to a CPU
 	 * where it is set.
+	 *
+	 * The TIF_RISCV_V_KERNEL_MODE check here prevents us from nesting a
+	 * non-preemptible V context on top of a preemptible one. For example,
+	 * executing V in a softirq context is prevented if the core is
+	 * interrupted during the execution of preemptible V.
 	 */
-	return !in_hardirq() && !in_nmi() && !this_cpu_read(vector_context_busy);
+	return !in_hardirq() && !in_nmi() && !this_cpu_read(vector_context_busy) &&
+	       !test_thread_flag(TIF_RISCV_V_KERNEL_MODE);
 }
 
 #else /* ! CONFIG_RISCV_ISA_V */
diff --git a/arch/riscv/include/asm/thread_info.h b/arch/riscv/include/asm/thread_info.h
index b182f2d03e25..8797d520e8ef 100644
--- a/arch/riscv/include/asm/thread_info.h
+++ b/arch/riscv/include/asm/thread_info.h
@@ -94,6 +94,7 @@ int arch_dup_task_struct(struct task_struct *dst, struct task_struct *src);
 #define TIF_UPROBE		10	/* uprobe breakpoint or singlestep */
 #define TIF_32BIT		11	/* compat-mode 32bit process */
 #define TIF_RISCV_V_DEFER_RESTORE	12 /* restore Vector before returing to user */
+#define TIF_RISCV_V_KERNEL_MODE	13 /* kernel-mode Vector run with preemption-on */
 
 #define _TIF_NOTIFY_RESUME	(1 << TIF_NOTIFY_RESUME)
 #define _TIF_SIGPENDING		(1 << TIF_SIGPENDING)
@@ -101,9 +102,12 @@ int arch_dup_task_struct(struct task_struct *dst, struct task_struct *src);
 #define _TIF_NOTIFY_SIGNAL	(1 << TIF_NOTIFY_SIGNAL)
 #define _TIF_UPROBE		(1 << TIF_UPROBE)
 #define _TIF_RISCV_V_DEFER_RESTORE	(1 << TIF_RISCV_V_DEFER_RESTORE)
+#define _TIF_RISCV_V_KERNEL_MODE	(1 << TIF_RISCV_V_KERNEL_MODE)
 
 #define _TIF_WORK_MASK \
 	(_TIF_NOTIFY_RESUME | _TIF_SIGPENDING | _TIF_NEED_RESCHED | \
 	 _TIF_NOTIFY_SIGNAL | _TIF_UPROBE)
 
+#define RISCV_V_VSTATE_CTRL_PREEMPTIBLE	0x20
+
 #endif /* _ASM_RISCV_THREAD_INFO_H */

diff --git a/arch/riscv/include/asm/vector.h b/arch/riscv/include/asm/vector.h
index d356eac8c0b4..27bb49e97af8 100644
--- a/arch/riscv/include/asm/vector.h
+++ b/arch/riscv/include/asm/vector.h
@@ -198,9 +198,22 @@ static inline void __switch_to_vector(struct task_struct *prev,
 {
 	struct pt_regs *regs;
 
-	regs = task_pt_regs(prev);
-	riscv_v_vstate_save(&prev->thread.vstate, regs);
-	riscv_v_vstate_set_restore(next, task_pt_regs(next));
+	if (IS_ENABLED(CONFIG_RISCV_ISA_V_PREEMPTIVE) &&
+	    test_tsk_thread_flag(prev, TIF_RISCV_V_KERNEL_MODE)) {
+		regs = prev->thread.trap_pt_regs;
+		riscv_v_vstate_save(&prev->thread.kernel_vstate, regs);
+	} else {
+		regs = task_pt_regs(prev);
+		riscv_v_vstate_save(&prev->thread.vstate, regs);
+	}
+
+	if (IS_ENABLED(CONFIG_RISCV_ISA_V_PREEMPTIVE) &&
+	    test_tsk_thread_flag(next, TIF_RISCV_V_KERNEL_MODE)) {
+		regs = next->thread.trap_pt_regs;
+		riscv_v_vstate_restore(&next->thread.kernel_vstate, regs);
+	} else {
+		riscv_v_vstate_set_restore(next, task_pt_regs(next));
+	}
 }
 
 void riscv_v_vstate_ctrl_init(struct task_struct *tsk);
@@ -225,4 +238,10 @@ static inline bool riscv_v_vstate_ctrl_user_allowed(void) { return false; }
 
 #endif /* CONFIG_RISCV_ISA_V */
 
+#ifdef CONFIG_RISCV_ISA_V_PREEMPTIVE
+void kernel_vector_allow_preemption(void);
+#else
+#define kernel_vector_allow_preemption() do {} while (0)
+#endif
+
 #endif /* ! __ASM_RISCV_VECTOR_H */
diff --git a/arch/riscv/kernel/asm-offsets.c b/arch/riscv/kernel/asm-offsets.c
index d6a75aac1d27..4b062f7741b2 100644
--- a/arch/riscv/kernel/asm-offsets.c
+++ b/arch/riscv/kernel/asm-offsets.c
@@ -38,6 +38,8 @@ void asm_offsets(void)
 	OFFSET(TASK_TI_PREEMPT_COUNT, task_struct, thread_info.preempt_count);
 	OFFSET(TASK_TI_KERNEL_SP, task_struct, thread_info.kernel_sp);
 	OFFSET(TASK_TI_USER_SP, task_struct, thread_info.user_sp);
+	OFFSET(TASK_THREAD_TRAP_REGP, task_struct, thread.trap_pt_regs);
+	OFFSET(TASK_THREAD_VSTATE_CTRL, task_struct, thread.vstate_ctrl);
 
 	OFFSET(TASK_THREAD_F0, task_struct, thread.fstate.f[0]);
 	OFFSET(TASK_THREAD_F1, task_struct, thread.fstate.f[1]);

diff --git a/arch/riscv/kernel/entry.S b/arch/riscv/kernel/entry.S
index 143a2bb3e697..ec8baada608f 100644
--- a/arch/riscv/kernel/entry.S
+++ b/arch/riscv/kernel/entry.S
@@ -66,6 +66,33 @@ _save_context:
 	REG_S s4, PT_CAUSE(sp)
 	REG_S s5, PT_TP(sp)
 
+#ifdef CONFIG_RISCV_ISA_V_PREEMPTIVE
+	/*
+	 * Record the register set at the frame where in-kernel V registers are
+	 * last alive.
+	 */
+	REG_L s0, TASK_TI_FLAGS(tp)
+	li s1, 1 << TIF_RISCV_V_KERNEL_MODE
+	and s0, s0, s1
+	beqz s0, 1f
+	li s0, TASK_THREAD_TRAP_REGP
+	add s0, s0, tp
+	REG_L s1, (s0)
+	bnez s1, 1f
+	REG_S sp, (s0)
+	li s0, TASK_THREAD_VSTATE_CTRL
+	add s0, s0, tp
+	REG_L s1, (s0)
+	/*
+	 * Nesting preemptible Vector context is prevented by unsetting
+	 * RISCV_V_VSTATE_CTRL_PREEMPTIBLE here.
+	 */
+	li s2, ~RISCV_V_VSTATE_CTRL_PREEMPTIBLE
+	and s1, s1, s2
+	REG_S s1, (s0)
+1:
+#endif
+
 	/*
 	 * Set the scratch register to 0, so that if a recursive exception
 	 * occurs, the exception vector knows it came from the kernel
@@ -129,6 +156,28 @@ SYM_CODE_START_NOALIGN(ret_from_exception)
 	 */
 	csrw CSR_SCRATCH, tp
 1:
+#ifdef CONFIG_RISCV_ISA_V_PREEMPTIVE
+	/*
+	 * Clear tracking of the trap registers when we return to the frame
+	 * that uses kernel mode Vector.
+	 */
+	REG_L s0, TASK_TI_FLAGS(tp)
+	li s1, 1 << TIF_RISCV_V_KERNEL_MODE
+	and s0, s0, s1
+	beqz s0, 1f
+	li s0, TASK_THREAD_TRAP_REGP
+	add s0, s0, tp
+	REG_L s1, (s0)
+	bne s1, sp, 1f
+	REG_S x0, (s0)
+	li s0, TASK_THREAD_VSTATE_CTRL
+	add s0, s0, tp
+	REG_L s1, (s0)
+	ori s1, s1, RISCV_V_VSTATE_CTRL_PREEMPTIBLE
+	REG_S s1, (s0)
+1:
+#endif
+
 	REG_L a0, PT_STATUS(sp)
 	/*
 	 * The current load reservation is effectively part of the processor's

diff --git a/arch/riscv/kernel/kernel_mode_vector.c b/arch/riscv/kernel/kernel_mode_vector.c
index 2344817f8640..6203990476b3 100644
--- a/arch/riscv/kernel/kernel_mode_vector.c
+++ b/arch/riscv/kernel/kernel_mode_vector.c
@@ -10,6 +10,7 @@
 #include 
 #include 
 #include 
+#include 
 #include 
 #include 
@@ -48,6 +49,50 @@ void put_cpu_vector_context(void)
 	preempt_enable();
 }
 
+#ifdef CONFIG_RISCV_ISA_V_PREEMPTIVE
+void kernel_vector_allow_preemption(void)
+{
+	current->thread.vstate_ctrl |= RISCV_V_VSTATE_CTRL_PREEMPTIBLE;
+}
+
+static bool kernel_vector_preemptible(void)
+{
+	return !!(current->thread.vstate_ctrl & RISCV_V_VSTATE_CTRL_PREEMPTIBLE);
+}
+
+static int riscv_v_start_kernel_context(void)
+{
+	struct __riscv_v_ext_state *vstate;
+
+	if (!kernel_vector_preemptible())
+		return -EBUSY;
+
+	vstate = &current->thread.kernel_vstate;
+	if (!vstate->datap) {
+		vstate->datap = kmalloc(riscv_v_vsize, GFP_KERNEL);
+		if (!vstate->datap)
+			return -ENOMEM;
+	}
+
+	get_cpu_vector_context();
+	riscv_v_vstate_save(&current->thread.vstate, task_pt_regs(current));
+	put_cpu_vector_context();
+
+	current->thread.trap_pt_regs = NULL;
+	WARN_ON(test_and_set_thread_flag(TIF_RISCV_V_KERNEL_MODE));
+	return 0;
+}
+
+static void riscv_v_stop_kernel_context(void)
+{
+	WARN_ON(!test_and_clear_thread_flag(TIF_RISCV_V_KERNEL_MODE));
+	current->thread.trap_pt_regs = NULL;
+}
+#else
+#define riscv_v_start_kernel_context() (0)
+#define riscv_v_stop_kernel_context() do {} while (0)
+#endif /* CONFIG_RISCV_ISA_V_PREEMPTIVE */
+
 /*
  * kernel_vector_begin(): obtain the CPU vector registers for use by the calling
  * context
@@ -68,9 +113,10 @@ void kernel_vector_begin(void)
 
 	BUG_ON(!may_use_simd());
 
-	get_cpu_vector_context();
-
-	riscv_v_vstate_save(&current->thread.vstate, task_pt_regs(current));
+	if (!preemptible() || riscv_v_start_kernel_context()) {
+		get_cpu_vector_context();
+		riscv_v_vstate_save(&current->thread.vstate, task_pt_regs(current));
+	}
 
 	riscv_v_enable();
 }
@@ -94,6 +140,9 @@ void kernel_vector_end(void)
 
 	riscv_v_disable();
 
-	put_cpu_vector_context();
+	if (!test_thread_flag(TIF_RISCV_V_KERNEL_MODE))
+		put_cpu_vector_context();
+	else
+		riscv_v_stop_kernel_context();
 }
 EXPORT_SYMBOL_GPL(kernel_vector_end);

diff --git a/arch/riscv/kernel/process.c b/arch/riscv/kernel/process.c
index ec89e7edb6fd..18cb37c305ab 100644
--- a/arch/riscv/kernel/process.c
+++ b/arch/riscv/kernel/process.c
@@ -160,8 +160,11 @@ void flush_thread(void)
 
 void arch_release_task_struct(struct task_struct *tsk)
 {
 	/* Free the vector context of datap. */
-	if (has_vector())
+	if (has_vector()) {
 		kfree(tsk->thread.vstate.datap);
+		if (IS_ENABLED(CONFIG_RISCV_ISA_V_PREEMPTIVE))
+			kfree(tsk->thread.kernel_vstate.datap);
+	}
 }
 
@@ -170,7 +173,9 @@ int arch_dup_task_struct(struct task_struct *dst, struct task_struct *src)
 	*dst = *src;
 	/* clear entire V context, including datap for a new task */
 	memset(&dst->thread.vstate, 0, sizeof(struct __riscv_v_ext_state));
+	memset(&dst->thread.kernel_vstate, 0, sizeof(struct __riscv_v_ext_state));
 	clear_tsk_thread_flag(dst, TIF_RISCV_V_DEFER_RESTORE);
+	clear_tsk_thread_flag(dst, TIF_RISCV_V_KERNEL_MODE);
 
 	return 0;
 }
@@ -205,6 +210,7 @@ int copy_thread(struct task_struct *p, const struct kernel_clone_args *args)
 		childregs->a0 = 0; /* Return value of fork() */
 		p->thread.s[0] = 0;
 	}
+	kernel_vector_allow_preemption();
 	p->thread.ra = (unsigned long)ret_from_fork;
 	p->thread.sp = (unsigned long)childregs;	/* kernel sp */
 	return 0;

diff --git a/arch/riscv/kernel/vector.c b/arch/riscv/kernel/vector.c
index 9d583b760db4..42f227077ee5 100644
--- a/arch/riscv/kernel/vector.c
+++ b/arch/riscv/kernel/vector.c
@@ -122,7 +122,8 @@ static inline void riscv_v_ctrl_set(struct task_struct *tsk, int cur, int nxt,
 	ctrl |= VSTATE_CTRL_MAKE_NEXT(nxt);
 	if (inherit)
 		ctrl |= PR_RISCV_V_VSTATE_CTRL_INHERIT;
-	tsk->thread.vstate_ctrl = ctrl;
+	tsk->thread.vstate_ctrl &= ~PR_RISCV_V_VSTATE_CTRL_MASK;
+	tsk->thread.vstate_ctrl |= ctrl;
 }
 
 bool riscv_v_vstate_ctrl_user_allowed(void)
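
The vector.c change above is what keeps the new bit alive across prctl()
updates: RISCV_V_VSTATE_CTRL_PREEMPTIBLE (0x20) sits just above the
prctl-visible field, so riscv_v_ctrl_set() must clear only
PR_RISCV_V_VSTATE_CTRL_MASK before OR-ing in the new user-controlled value.
A rough sketch of the resulting layout (the PR_RISCV_V_VSTATE_CTRL_* values
are the existing uapi definitions, quoted here only for illustration):

	/*
	 * thread.vstate_ctrl after this patch (illustrative):
	 *
	 *   bits [1:0] PR_RISCV_V_VSTATE_CTRL_CUR_MASK   - current enablement
	 *   bits [3:2] PR_RISCV_V_VSTATE_CTRL_NEXT_MASK  - enablement after execve()
	 *   bit  [4]   PR_RISCV_V_VSTATE_CTRL_INHERIT
	 *   bit  [5]   RISCV_V_VSTATE_CTRL_PREEMPTIBLE   - kernel-internal, 0x20
	 *
	 * Only bits [4:0] (PR_RISCV_V_VSTATE_CTRL_MASK) are rewritten by
	 * riscv_v_ctrl_set(), so bit 5 survives a PR_RISCV_V_SET_CONTROL call.
	 */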