From patchwork Wed Sep 28 16:20:04 2022
X-Patchwork-Submitter: Jisheng Zhang
X-Patchwork-Id: 12992565
From: Jisheng Zhang
To: Paul Walmsley, Palmer Dabbelt, Albert Ou, Nathan Chancellor, Nick Desaulniers, Guo Ren
Cc: linux-riscv@lists.infradead.org, linux-kernel@vger.kernel.org, llvm@lists.linux.dev
Subject: [PATCH v2 1/4] riscv: remove extra level wrappers of trace_hardirqs_{on,off}
Date: Thu, 29 Sep 2022 00:20:04 +0800
Message-Id: <20220928162007.3791-2-jszhang@kernel.org>
In-Reply-To: <20220928162007.3791-1-jszhang@kernel.org>
References: <20220928162007.3791-1-jszhang@kernel.org>
Since riscv is converted to generic entry, there's no need for the extra
wrappers of trace_hardirqs_{on,off}.

Tested with llvm + irqsoff.

Signed-off-by: Jisheng Zhang
Reviewed-by: Guo Ren
---
 arch/riscv/kernel/Makefile | 2 --
 arch/riscv/kernel/trace_irq.c | 27 ---------------------------
 arch/riscv/kernel/trace_irq.h | 11 -----------
 3 files changed, 40 deletions(-)
 delete mode 100644 arch/riscv/kernel/trace_irq.c
 delete mode 100644 arch/riscv/kernel/trace_irq.h

diff --git a/arch/riscv/kernel/Makefile b/arch/riscv/kernel/Makefile
index 01da14e21019..11ee206cc235 100644
--- a/arch/riscv/kernel/Makefile
+++ b/arch/riscv/kernel/Makefile
@@ -69,8 +69,6 @@ obj-$(CONFIG_CPU_PM) += suspend_entry.o suspend.o
 obj-$(CONFIG_FUNCTION_TRACER) += mcount.o ftrace.o
 obj-$(CONFIG_DYNAMIC_FTRACE) += mcount-dyn.o
-obj-$(CONFIG_TRACE_IRQFLAGS) += trace_irq.o
-
 obj-$(CONFIG_PERF_EVENTS) += perf_callchain.o
 obj-$(CONFIG_HAVE_PERF_REGS) += perf_regs.o
 obj-$(CONFIG_RISCV_SBI) += sbi.o

diff --git a/arch/riscv/kernel/trace_irq.c b/arch/riscv/kernel/trace_irq.c
deleted file mode 100644
index 095ac976d7da..000000000000
--- a/arch/riscv/kernel/trace_irq.c
+++ /dev/null
@@ -1,27 +0,0 @@
-// SPDX-License-Identifier: GPL-2.0
-/*
- * Copyright (C) 2022 Changbin Du
- */
-
-#include
-#include
-#include "trace_irq.h"
-
-/*
- * trace_hardirqs_on/off require the caller to setup frame pointer properly.
- * Otherwise, CALLER_ADDR1 might trigger an pagging exception in kernel.
- * Here we add one extra level so they can be safely called by low
- * level entry code which $fp is used for other purpose.
- */
-
-void __trace_hardirqs_on(void)
-{
- trace_hardirqs_on();
-}
-NOKPROBE_SYMBOL(__trace_hardirqs_on);
-
-void __trace_hardirqs_off(void)
-{
- trace_hardirqs_off();
-}
-NOKPROBE_SYMBOL(__trace_hardirqs_off);

diff --git a/arch/riscv/kernel/trace_irq.h b/arch/riscv/kernel/trace_irq.h
deleted file mode 100644
index 99fe67377e5e..000000000000
--- a/arch/riscv/kernel/trace_irq.h
+++ /dev/null
@@ -1,11 +0,0 @@
-/* SPDX-License-Identifier: GPL-2.0 */
-/*
- * Copyright (C) 2022 Changbin Du
- */
-#ifndef __TRACE_IRQ_H
-#define __TRACE_IRQ_H
-
-void __trace_hardirqs_on(void);
-void __trace_hardirqs_off(void);
-
-#endif /* __TRACE_IRQ_H */
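As a side note on why the wrappers existed at all: trace_hardirqs_{on,off} may walk return addresses (CALLER_ADDR1), which needs a well-formed frame-pointer chain, so the old code inserted one extra C call level between the low-level assembly (where s0/$fp is used for other purposes) and the tracepoints; with generic entry the tracepoints are reached from C code, so the extra level is no longer needed. A minimal userspace sketch of the idea (illustrative only, not kernel code; every name below is made up):

#include <stdio.h>

/* stand-in for trace_hardirqs_on(): looks at its callers' frames; the real
 * kernel code walks one level further (CALLER_ADDR1), which is what makes a
 * proper frame chain mandatory */
static void needs_caller_frame(void)
{
	printf("called from %p\n", __builtin_return_address(0));
}

/* stand-in for the removed __trace_hardirqs_on() wrapper: guarantees at
 * least one ordinary C frame above needs_caller_frame() even if the real
 * caller is assembly that repurposes the frame-pointer register */
static void wrapper(void)
{
	needs_caller_frame();
}

int main(void)
{
	wrapper();
	return 0;
}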
From patchwork Wed Sep 28 16:20:05 2022
X-Patchwork-Submitter: Jisheng Zhang
X-Patchwork-Id: 12992566
From: Jisheng Zhang
To: Paul Walmsley, Palmer Dabbelt, Albert Ou, Nathan Chancellor, Nick Desaulniers, Guo Ren
Cc: linux-riscv@lists.infradead.org, linux-kernel@vger.kernel.org, llvm@lists.linux.dev
Subject: [PATCH v2 2/4] riscv: consolidate ret_from_kernel_thread into ret_from_fork
Date: Thu, 29 Sep 2022 00:20:05 +0800
Message-Id: <20220928162007.3791-3-jszhang@kernel.org>
In-Reply-To: <20220928162007.3791-1-jszhang@kernel.org>
References: <20220928162007.3791-1-jszhang@kernel.org>

ret_from_kernel_thread() behaves similarly to ret_from_fork(); the only
difference is whether fn(arg) is called. This can be achieved by testing
whether fn is NULL, i.e. whether s0 is 0.

Signed-off-by: Jisheng Zhang
---
 arch/riscv/kernel/entry.S | 11 +++--------
 arch/riscv/kernel/process.c | 5 ++---
 2 files changed, 5 insertions(+), 11 deletions(-)

diff --git a/arch/riscv/kernel/entry.S b/arch/riscv/kernel/entry.S
index 2207cf44a3bc..a3e1ed2fa2ac 100644
--- a/arch/riscv/kernel/entry.S
+++ b/arch/riscv/kernel/entry.S
@@ -323,20 +323,15 @@ END(handle_kernel_stack_overflow)
 
 ENTRY(ret_from_fork)
 call schedule_tail
- move a0, sp /* pt_regs */
- la ra, ret_from_exception
- tail syscall_exit_to_user_mode
-ENDPROC(ret_from_fork)
-
-ENTRY(ret_from_kernel_thread)
- call schedule_tail
+ beqz s0, 1f /* not from kernel thread */
 /* Call fn(arg) */
 move a0, s1
 jalr s0
+1:
 move a0, sp /* pt_regs */
 la ra, ret_from_exception
 tail syscall_exit_to_user_mode
-ENDPROC(ret_from_kernel_thread)
+ENDPROC(ret_from_fork)
 
 #ifdef CONFIG_IRQ_STACKS
 ENTRY(call_on_stack)

diff --git a/arch/riscv/kernel/process.c b/arch/riscv/kernel/process.c
index ceb9ebab6558..67e7cd123ceb 100644
--- a/arch/riscv/kernel/process.c
+++ b/arch/riscv/kernel/process.c
@@ -34,7 +34,6 @@ EXPORT_SYMBOL(__stack_chk_guard);
 #endif
 
 extern asmlinkage void ret_from_fork(void);
-extern asmlinkage void ret_from_kernel_thread(void);
 
 void arch_cpu_idle(void)
 {
@@ -172,7 +171,6 @@ int copy_thread(struct task_struct *p, const struct kernel_clone_args *args)
 /* Supervisor/Machine, irqs on: */
 childregs->status = SR_PP | SR_PIE;
 
- p->thread.ra = (unsigned long)ret_from_kernel_thread;
 p->thread.s[0] = (unsigned long)args->fn;
 p->thread.s[1] = (unsigned long)args->fn_arg;
 } else {
@@ -182,8 +180,9 @@ int copy_thread(struct task_struct *p, const struct kernel_clone_args *args)
 if (clone_flags & CLONE_SETTLS)
 childregs->tp = tls;
 childregs->a0 = 0; /* Return value of fork() */
- p->thread.ra = (unsigned long)ret_from_fork;
+ p->thread.s[0] = 0;
 }
+ p->thread.ra = (unsigned long)ret_from_fork;
 p->thread.sp = (unsigned long)childregs; /* kernel sp */
 return 0;
 }
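To make the consolidated flow easier to follow, here is a rough userspace model of the logic the unified ret_from_fork implements (illustrative only; fake_regs, ret_from_fork_model and kthread_fn are invented names, not kernel APIs): s0 carries fn (0 for user tasks), s1 carries fn_arg, and both paths share the same exit sequence.

#include <stdio.h>
#include <stddef.h>

typedef int (*thread_fn)(void *);

struct fake_regs { long a0; };          /* stand-in for struct pt_regs */

/* models the unified ret_from_fork tail */
static void ret_from_fork_model(thread_fn fn, void *arg, struct fake_regs *regs)
{
	/* schedule_tail() runs first in the kernel */
	if (fn)                 /* kernel thread: p->thread.s[0] != 0 */
		fn(arg);
	/* common tail: leave through syscall_exit_to_user_mode(regs) */
	printf("exit to user mode, a0 = %ld\n", regs->a0);
}

static int kthread_fn(void *arg)
{
	printf("kernel thread body runs with arg %p\n", arg);
	return 0;
}

int main(void)
{
	struct fake_regs regs = { .a0 = 0 };    /* child's fork() return value */

	ret_from_fork_model(kthread_fn, NULL, &regs);   /* kernel-thread path */
	ret_from_fork_model(NULL, NULL, &regs);         /* user-task path */
	return 0;
}

The branch can key off s0 because copy_thread() now always sets p->thread.s[0]: to args->fn for kernel threads and to 0 for user tasks.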
From patchwork Wed Sep 28 16:20:06 2022
X-Patchwork-Submitter: Jisheng Zhang
X-Patchwork-Id: 12992567
From: Jisheng Zhang
To: Paul Walmsley, Palmer Dabbelt, Albert Ou, Nathan Chancellor, Nick Desaulniers, Guo Ren
Cc: linux-riscv@lists.infradead.org, linux-kernel@vger.kernel.org, llvm@lists.linux.dev
Subject: [PATCH v2 3/4] riscv: fix race when vmap stack overflow and remove shadow_stack
Date: Thu, 29 Sep 2022 00:20:06 +0800
Message-Id: <20220928162007.3791-4-jszhang@kernel.org>
In-Reply-To: <20220928162007.3791-1-jszhang@kernel.org>
References: <20220928162007.3791-1-jszhang@kernel.org>

Currently, when a vmap stack overflow is detected, riscv first switches to
the so-called shadow stack, then uses this shadow stack to call
get_overflow_stack() to get the per-cpu overflow stack. However, there is
a race if two or more harts use the same shadow stack at the same time.

To solve this race, we rely on two facts:
1. The content of the kernel thread pointer, i.e. the "tp" register, can
still be obtained from the CSR_SCRATCH register, so we can clobber tp as
long as we restore it from CSR_SCRATCH later.

2. Once a vmap stack overflow happens, a panic is coming soon, so there is
no performance concern at all; we don't need to define the overflow stack
as a percpu variable and can simplify it into a pointer array which points
to allocated pages.

Thus we can use tp as a temporary register to get the cpu id and calculate
the offset into the overflow stack pointer array for each cpu, without any
shadow stack. The race condition is removed as a side effect.

NOTE: we could use a similar mechanism to give each cpu its own shadow
stack to fix the race condition, but if we can remove the shadow stack
usage entirely, why not.

Signed-off-by: Jisheng Zhang
Fixes: 31da94c25aea ("riscv: add VMAP_STACK overflow detection")
---
 arch/riscv/include/asm/asm-prototypes.h | 1 -
 arch/riscv/include/asm/thread_info.h | 4 +-
 arch/riscv/kernel/asm-offsets.c | 1 +
 arch/riscv/kernel/entry.S | 56 ++++---------------------
 arch/riscv/kernel/traps.c | 31 ++++++++------
 5 files changed, 29 insertions(+), 64 deletions(-)

diff --git a/arch/riscv/include/asm/asm-prototypes.h b/arch/riscv/include/asm/asm-prototypes.h
index ef386fcf3939..4a06fa0f6493 100644
--- a/arch/riscv/include/asm/asm-prototypes.h
+++ b/arch/riscv/include/asm/asm-prototypes.h
@@ -25,7 +25,6 @@ DECLARE_DO_ERROR_INFO(do_trap_ecall_s);
 DECLARE_DO_ERROR_INFO(do_trap_ecall_m);
 DECLARE_DO_ERROR_INFO(do_trap_break);
 
-asmlinkage unsigned long get_overflow_stack(void);
 asmlinkage void handle_bad_stack(struct pt_regs *regs);
 
 #endif /* _ASM_RISCV_PROTOTYPES_H */

diff --git a/arch/riscv/include/asm/thread_info.h b/arch/riscv/include/asm/thread_info.h
index c970d41dc4c6..c604a5212a73 100644
--- a/arch/riscv/include/asm/thread_info.h
+++ b/arch/riscv/include/asm/thread_info.h
@@ -28,14 +28,12 @@
 #define THREAD_SHIFT (PAGE_SHIFT + THREAD_SIZE_ORDER)
 #define OVERFLOW_STACK_SIZE SZ_4K
-#define SHADOW_OVERFLOW_STACK_SIZE (1024)
+#define OVERFLOW_STACK_SHIFT 12
 
 #define IRQ_STACK_SIZE THREAD_SIZE
 
 #ifndef __ASSEMBLY__
 
-extern long shadow_stack[SHADOW_OVERFLOW_STACK_SIZE / sizeof(long)];
-
 #include
 #include

diff --git a/arch/riscv/kernel/asm-offsets.c b/arch/riscv/kernel/asm-offsets.c
index df9444397908..62bf3bacc322 100644
--- a/arch/riscv/kernel/asm-offsets.c
+++ b/arch/riscv/kernel/asm-offsets.c
@@ -37,6 +37,7 @@ void asm_offsets(void)
 OFFSET(TASK_TI_PREEMPT_COUNT, task_struct, thread_info.preempt_count);
 OFFSET(TASK_TI_KERNEL_SP, task_struct, thread_info.kernel_sp);
 OFFSET(TASK_TI_USER_SP, task_struct, thread_info.user_sp);
+ OFFSET(TASK_TI_CPU, task_struct, thread_info.cpu);
 
 OFFSET(TASK_THREAD_F0, task_struct, thread.fstate.f[0]);
 OFFSET(TASK_THREAD_F1, task_struct, thread.fstate.f[1]);

diff --git a/arch/riscv/kernel/entry.S b/arch/riscv/kernel/entry.S
index a3e1ed2fa2ac..5a6171a90d81 100644
--- a/arch/riscv/kernel/entry.S
+++ b/arch/riscv/kernel/entry.S
@@ -223,54 +223,16 @@ END(ret_from_exception)
 
 #ifdef CONFIG_VMAP_STACK
 ENTRY(handle_kernel_stack_overflow)
- la sp, shadow_stack
- addi sp, sp, SHADOW_OVERFLOW_STACK_SIZE
-
- //save caller register to shadow stack
- addi sp, sp, -(PT_SIZE_ON_STACK)
- REG_S x1, PT_RA(sp)
- REG_S x5, PT_T0(sp)
- REG_S x6, PT_T1(sp)
- REG_S x7, PT_T2(sp)
- REG_S x10, PT_A0(sp)
- REG_S x11, PT_A1(sp)
- REG_S x12, PT_A2(sp)
- REG_S x13, PT_A3(sp)
- REG_S x14, PT_A4(sp)
- REG_S x15, PT_A5(sp)
- REG_S x16, PT_A6(sp)
- REG_S x17, PT_A7(sp)
- REG_S x28, PT_T3(sp)
- REG_S x29, PT_T4(sp)
- REG_S x30, PT_T5(sp)
- REG_S x31, PT_T6(sp)
-
- la ra, restore_caller_reg
- tail get_overflow_stack
-
-restore_caller_reg:
- //save per-cpu overflow stack
- REG_S a0, -8(sp)
- //restore caller register from shadow_stack
- REG_L x1, PT_RA(sp)
- REG_L x5, PT_T0(sp)
- REG_L x6, PT_T1(sp)
- REG_L x7, PT_T2(sp)
- REG_L x10, PT_A0(sp)
- REG_L x11, PT_A1(sp)
- REG_L x12, PT_A2(sp)
- REG_L x13, PT_A3(sp)
- REG_L x14, PT_A4(sp)
- REG_L x15, PT_A5(sp)
- REG_L x16, PT_A6(sp)
- REG_L x17, PT_A7(sp)
- REG_L x28, PT_T3(sp)
- REG_L x29, PT_T4(sp)
- REG_L x30, PT_T5(sp)
- REG_L x31, PT_T6(sp)
+ la sp, overflow_stack
+ /* use tp as tmp register since we can restore it from CSR_SCRATCH */
+ REG_L tp, TASK_TI_CPU(tp)
+ slli tp, tp, RISCV_LGPTR
+ add tp, sp, tp
+ REG_L sp, 0(tp)
+
+ /* restore tp */
+ csrr tp, CSR_SCRATCH
- //load per-cpu overflow stack
- REG_L sp, -8(sp)
 addi sp, sp, -(PT_SIZE_ON_STACK)
 
 //save context to overflow stack

diff --git a/arch/riscv/kernel/traps.c b/arch/riscv/kernel/traps.c
index 73f06cd149d9..b6c64f0fb70f 100644
--- a/arch/riscv/kernel/traps.c
+++ b/arch/riscv/kernel/traps.c
@@ -216,23 +216,12 @@ int is_valid_bugaddr(unsigned long pc)
 #endif /* CONFIG_GENERIC_BUG */
 
 #ifdef CONFIG_VMAP_STACK
-static DEFINE_PER_CPU(unsigned long [OVERFLOW_STACK_SIZE/sizeof(long)],
- overflow_stack)__aligned(16);
-/*
- * shadow stack, handled_ kernel_ stack_ overflow(in kernel/entry.S) is used
- * to get per-cpu overflow stack(get_overflow_stack).
- */
-long shadow_stack[SHADOW_OVERFLOW_STACK_SIZE/sizeof(long)];
-asmlinkage unsigned long get_overflow_stack(void)
-{
- return (unsigned long)this_cpu_ptr(overflow_stack) +
- OVERFLOW_STACK_SIZE;
-}
+void *overflow_stack[NR_CPUS] __ro_after_init __aligned(16);
 
 asmlinkage void handle_bad_stack(struct pt_regs *regs)
 {
 unsigned long tsk_stk = (unsigned long)current->stack;
- unsigned long ovf_stk = (unsigned long)this_cpu_ptr(overflow_stack);
+ unsigned long ovf_stk = (unsigned long)overflow_stack[raw_smp_processor_id()];
 
 console_verbose();
@@ -248,4 +237,20 @@ asmlinkage void handle_bad_stack(struct pt_regs *regs)
 for (;;)
 wait_for_interrupt();
 }
+
+static int __init alloc_overflow_stacks(void)
+{
+ u8 *s;
+ int cpu;
+
+ for_each_possible_cpu(cpu) {
+ s = (u8 *)__get_free_pages(GFP_KERNEL, get_order(OVERFLOW_STACK_SIZE));
+ if (WARN_ON(!s))
+ return -ENOMEM;
+ overflow_stack[cpu] = &s[OVERFLOW_STACK_SIZE];
+ printk("%px\n", overflow_stack[cpu]);
+ }
+ return 0;
+}
+early_initcall(alloc_overflow_stacks);
 #endif
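To make the new lookup concrete, here is a rough userspace model of what the rewritten handle_kernel_stack_overflow computes (illustrative only; thread_info_model and pick_overflow_sp are invented names, not kernel APIs):

#include <stdio.h>

#define NR_CPUS 4
#define OVERFLOW_STACK_SIZE 4096

/* one privately allocated overflow stack per possible cpu, filled at boot
 * by alloc_overflow_stacks() in the patch above */
static void *overflow_stack[NR_CPUS];

struct thread_info_model { int cpu; };  /* stand-in for struct thread_info */

/* mirrors: REG_L tp, TASK_TI_CPU(tp); slli tp, tp, RISCV_LGPTR;
 *          add tp, sp, tp; REG_L sp, 0(tp) */
static void *pick_overflow_sp(const struct thread_info_model *ti)
{
	return overflow_stack[ti->cpu];
}

int main(void)
{
	static char stacks[NR_CPUS][OVERFLOW_STACK_SIZE];
	struct thread_info_model ti = { .cpu = 2 };

	/* stacks grow down, so store the high end of each allocation */
	for (int cpu = 0; cpu < NR_CPUS; cpu++)
		overflow_stack[cpu] = &stacks[cpu][OVERFLOW_STACK_SIZE];

	printf("hart on cpu %d gets overflow sp %p\n",
	       ti.cpu, pick_overflow_sp(&ti));
	return 0;
}

Each hart indexes its own pre-allocated entry, so two harts overflowing at the same time no longer share a scratch area, and clobbering tp as an index register is safe because it is reloaded from CSR_SCRATCH immediately afterwards.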
From patchwork Wed Sep 28 16:20:07 2022
X-Patchwork-Submitter: Jisheng Zhang
X-Patchwork-Id: 12992568
From: Jisheng Zhang
To: Paul Walmsley, Palmer Dabbelt, Albert Ou, Nathan Chancellor, Nick Desaulniers, Guo Ren
Cc: linux-riscv@lists.infradead.org, linux-kernel@vger.kernel.org, llvm@lists.linux.dev
Subject: [PATCH v2 4/4] riscv: entry: consolidate general regs saving into save_gp
Date: Thu, 29 Sep 2022 00:20:07 +0800
Message-Id: <20220928162007.3791-5-jszhang@kernel.org>
In-Reply-To: <20220928162007.3791-1-jszhang@kernel.org>
References: <20220928162007.3791-1-jszhang@kernel.org>

Consolidate the saving/restoring of GPs (except ra, sp and tp) into the
save_gp/restore_gp macros.

No functional change intended.
Signed-off-by: Jisheng Zhang
---
 arch/riscv/include/asm/asm.h | 65 +++++++++++++++++++++++++
 arch/riscv/kernel/entry.S | 87 ++--------------------------------
 arch/riscv/kernel/mcount-dyn.S | 58 +----------------------
 3 files changed, 70 insertions(+), 140 deletions(-)

diff --git a/arch/riscv/include/asm/asm.h b/arch/riscv/include/asm/asm.h
index 1b471ff73178..2f3b49536e9d 100644
--- a/arch/riscv/include/asm/asm.h
+++ b/arch/riscv/include/asm/asm.h
@@ -68,6 +68,7 @@
 #endif
 
 #ifdef __ASSEMBLY__
+#include
 
 /* Common assembly source macros */
 
@@ -80,6 +81,70 @@
 .endr
 .endm
 
+ /* save all GPs except ra, sp and tp */
+ .macro save_gp
+ REG_S x3, PT_GP(sp)
+ REG_S x5, PT_T0(sp)
+ REG_S x6, PT_T1(sp)
+ REG_S x7, PT_T2(sp)
+ REG_S x8, PT_S0(sp)
+ REG_S x9, PT_S1(sp)
+ REG_S x10, PT_A0(sp)
+ REG_S x11, PT_A1(sp)
+ REG_S x12, PT_A2(sp)
+ REG_S x13, PT_A3(sp)
+ REG_S x14, PT_A4(sp)
+ REG_S x15, PT_A5(sp)
+ REG_S x16, PT_A6(sp)
+ REG_S x17, PT_A7(sp)
+ REG_S x18, PT_S2(sp)
+ REG_S x19, PT_S3(sp)
+ REG_S x20, PT_S4(sp)
+ REG_S x21, PT_S5(sp)
+ REG_S x22, PT_S6(sp)
+ REG_S x23, PT_S7(sp)
+ REG_S x24, PT_S8(sp)
+ REG_S x25, PT_S9(sp)
+ REG_S x26, PT_S10(sp)
+ REG_S x27, PT_S11(sp)
+ REG_S x28, PT_T3(sp)
+ REG_S x29, PT_T4(sp)
+ REG_S x30, PT_T5(sp)
+ REG_S x31, PT_T6(sp)
+ .endm
+
+ /* restore all GPs except ra, sp and tp */
+ .macro restore_gp
+ REG_L x3, PT_GP(sp)
+ REG_L x5, PT_T0(sp)
+ REG_L x6, PT_T1(sp)
+ REG_L x7, PT_T2(sp)
+ REG_L x8, PT_S0(sp)
+ REG_L x9, PT_S1(sp)
+ REG_L x10, PT_A0(sp)
+ REG_L x11, PT_A1(sp)
+ REG_L x12, PT_A2(sp)
+ REG_L x13, PT_A3(sp)
+ REG_L x14, PT_A4(sp)
+ REG_L x15, PT_A5(sp)
+ REG_L x16, PT_A6(sp)
+ REG_L x17, PT_A7(sp)
+ REG_L x18, PT_S2(sp)
+ REG_L x19, PT_S3(sp)
+ REG_L x20, PT_S4(sp)
+ REG_L x21, PT_S5(sp)
+ REG_L x22, PT_S6(sp)
+ REG_L x23, PT_S7(sp)
+ REG_L x24, PT_S8(sp)
+ REG_L x25, PT_S9(sp)
+ REG_L x26, PT_S10(sp)
+ REG_L x27, PT_S11(sp)
+ REG_L x28, PT_T3(sp)
+ REG_L x29, PT_T4(sp)
+ REG_L x30, PT_T5(sp)
+ REG_L x31, PT_T6(sp)
+ .endm
+
 #endif /* __ASSEMBLY__ */
 
 #endif /* _ASM_RISCV_ASM_H */

diff --git a/arch/riscv/kernel/entry.S b/arch/riscv/kernel/entry.S
index 5a6171a90d81..a90f17ed2822 100644
--- a/arch/riscv/kernel/entry.S
+++ b/arch/riscv/kernel/entry.S
@@ -40,34 +40,7 @@ _save_context:
 REG_L sp, TASK_TI_KERNEL_SP(tp)
 addi sp, sp, -(PT_SIZE_ON_STACK)
 REG_S x1, PT_RA(sp)
- REG_S x3, PT_GP(sp)
- REG_S x5, PT_T0(sp)
- REG_S x6, PT_T1(sp)
- REG_S x7, PT_T2(sp)
- REG_S x8, PT_S0(sp)
- REG_S x9, PT_S1(sp)
- REG_S x10, PT_A0(sp)
- REG_S x11, PT_A1(sp)
- REG_S x12, PT_A2(sp)
- REG_S x13, PT_A3(sp)
- REG_S x14, PT_A4(sp)
- REG_S x15, PT_A5(sp)
- REG_S x16, PT_A6(sp)
- REG_S x17, PT_A7(sp)
- REG_S x18, PT_S2(sp)
- REG_S x19, PT_S3(sp)
- REG_S x20, PT_S4(sp)
- REG_S x21, PT_S5(sp)
- REG_S x22, PT_S6(sp)
- REG_S x23, PT_S7(sp)
- REG_S x24, PT_S8(sp)
- REG_S x25, PT_S9(sp)
- REG_S x26, PT_S10(sp)
- REG_S x27, PT_S11(sp)
- REG_S x28, PT_T3(sp)
- REG_S x29, PT_T4(sp)
- REG_S x30, PT_T5(sp)
- REG_S x31, PT_T6(sp)
+ save_gp
 
 /*
 * Disable user-mode memory access as it should only be set in the
@@ -182,35 +155,8 @@ ENTRY(ret_from_exception)
 csrw CSR_STATUS, a0
 
 REG_L x1, PT_RA(sp)
- REG_L x3, PT_GP(sp)
 REG_L x4, PT_TP(sp)
- REG_L x5, PT_T0(sp)
- REG_L x6, PT_T1(sp)
- REG_L x7, PT_T2(sp)
- REG_L x8, PT_S0(sp)
- REG_L x9, PT_S1(sp)
- REG_L x10, PT_A0(sp)
- REG_L x11, PT_A1(sp)
- REG_L x12, PT_A2(sp)
- REG_L x13, PT_A3(sp)
- REG_L x14, PT_A4(sp)
- REG_L x15, PT_A5(sp)
- REG_L x16, PT_A6(sp)
- REG_L x17, PT_A7(sp)
- REG_L x18, PT_S2(sp)
- REG_L x19, PT_S3(sp)
- REG_L x20, PT_S4(sp)
- REG_L x21, PT_S5(sp)
- REG_L x22, PT_S6(sp)
- REG_L x23, PT_S7(sp)
- REG_L x24, PT_S8(sp)
- REG_L x25, PT_S9(sp)
- REG_L x26, PT_S10(sp)
- REG_L x27, PT_S11(sp)
- REG_L x28, PT_T3(sp)
- REG_L x29, PT_T4(sp)
- REG_L x30, PT_T5(sp)
- REG_L x31, PT_T6(sp)
+ restore_gp
 
 REG_L x2, PT_SP(sp)
 
@@ -237,34 +183,7 @@ ENTRY(handle_kernel_stack_overflow)
 
 //save context to overflow stack
 REG_S x1, PT_RA(sp)
- REG_S x3, PT_GP(sp)
- REG_S x5, PT_T0(sp)
- REG_S x6, PT_T1(sp)
- REG_S x7, PT_T2(sp)
- REG_S x8, PT_S0(sp)
- REG_S x9, PT_S1(sp)
- REG_S x10, PT_A0(sp)
- REG_S x11, PT_A1(sp)
- REG_S x12, PT_A2(sp)
- REG_S x13, PT_A3(sp)
- REG_S x14, PT_A4(sp)
- REG_S x15, PT_A5(sp)
- REG_S x16, PT_A6(sp)
- REG_S x17, PT_A7(sp)
- REG_S x18, PT_S2(sp)
- REG_S x19, PT_S3(sp)
- REG_S x20, PT_S4(sp)
- REG_S x21, PT_S5(sp)
- REG_S x22, PT_S6(sp)
- REG_S x23, PT_S7(sp)
- REG_S x24, PT_S8(sp)
- REG_S x25, PT_S9(sp)
- REG_S x26, PT_S10(sp)
- REG_S x27, PT_S11(sp)
- REG_S x28, PT_T3(sp)
- REG_S x29, PT_T4(sp)
- REG_S x30, PT_T5(sp)
- REG_S x31, PT_T6(sp)
+ save_gp
 
 REG_L s0, TASK_TI_KERNEL_SP(tp)
 csrr s1, CSR_STATUS

diff --git a/arch/riscv/kernel/mcount-dyn.S b/arch/riscv/kernel/mcount-dyn.S
index d171eca623b6..1b4b0aecf4f5 100644
--- a/arch/riscv/kernel/mcount-dyn.S
+++ b/arch/riscv/kernel/mcount-dyn.S
@@ -68,35 +68,8 @@
 REG_L x1, PT_EPC(sp)
 REG_S x2, PT_SP(sp)
- REG_S x3, PT_GP(sp)
 REG_S x4, PT_TP(sp)
- REG_S x5, PT_T0(sp)
- REG_S x6, PT_T1(sp)
- REG_S x7, PT_T2(sp)
- REG_S x8, PT_S0(sp)
- REG_S x9, PT_S1(sp)
- REG_S x10, PT_A0(sp)
- REG_S x11, PT_A1(sp)
- REG_S x12, PT_A2(sp)
- REG_S x13, PT_A3(sp)
- REG_S x14, PT_A4(sp)
- REG_S x15, PT_A5(sp)
- REG_S x16, PT_A6(sp)
- REG_S x17, PT_A7(sp)
- REG_S x18, PT_S2(sp)
- REG_S x19, PT_S3(sp)
- REG_S x20, PT_S4(sp)
- REG_S x21, PT_S5(sp)
- REG_S x22, PT_S6(sp)
- REG_S x23, PT_S7(sp)
- REG_S x24, PT_S8(sp)
- REG_S x25, PT_S9(sp)
- REG_S x26, PT_S10(sp)
- REG_S x27, PT_S11(sp)
- REG_S x28, PT_T3(sp)
- REG_S x29, PT_T4(sp)
- REG_S x30, PT_T5(sp)
- REG_S x31, PT_T6(sp)
+ save_gp
 .endm
 
 .macro RESTORE_ALL
@@ -106,35 +79,8 @@
 addi sp, sp, -PT_SIZE_ON_STACK
 REG_L x1, PT_EPC(sp)
 REG_L x2, PT_SP(sp)
- REG_L x3, PT_GP(sp)
 REG_L x4, PT_TP(sp)
- REG_L x5, PT_T0(sp)
- REG_L x6, PT_T1(sp)
- REG_L x7, PT_T2(sp)
- REG_L x8, PT_S0(sp)
- REG_L x9, PT_S1(sp)
- REG_L x10, PT_A0(sp)
- REG_L x11, PT_A1(sp)
- REG_L x12, PT_A2(sp)
- REG_L x13, PT_A3(sp)
- REG_L x14, PT_A4(sp)
- REG_L x15, PT_A5(sp)
- REG_L x16, PT_A6(sp)
- REG_L x17, PT_A7(sp)
- REG_L x18, PT_S2(sp)
- REG_L x19, PT_S3(sp)
- REG_L x20, PT_S4(sp)
- REG_L x21, PT_S5(sp)
- REG_L x22, PT_S6(sp)
- REG_L x23, PT_S7(sp)
- REG_L x24, PT_S8(sp)
- REG_L x25, PT_S9(sp)
- REG_L x26, PT_S10(sp)
- REG_L x27, PT_S11(sp)
- REG_L x28, PT_T3(sp)
- REG_L x29, PT_T4(sp)
- REG_L x30, PT_T5(sp)
- REG_L x31, PT_T6(sp)
+ restore_gp
 
 addi sp, sp, PT_SIZE_ON_STACK
 addi sp, sp, SZREG