From patchwork Wed Sep 27 22:47:59 2023
X-Patchwork-Submitter: Sami Tolvanen
X-Patchwork-Id: 13401735
Date: Wed, 27 Sep 2023 22:47:59 +0000
In-Reply-To: <20230927224757.1154247-8-samitolvanen@google.com>
Mime-Version: 1.0
References: <20230927224757.1154247-8-samitolvanen@google.com>
Message-ID: <20230927224757.1154247-9-samitolvanen@google.com>
Subject: [PATCH v4 1/6] riscv: VMAP_STACK overflow detection thread-safe
From: Sami Tolvanen
To: Paul Walmsley, Palmer Dabbelt, Albert Ou, Kees Cook
Cc: Clement Leger, Guo Ren, Deepak Gupta, Nathan Chancellor,
    Nick Desaulniers, Fangrui Song, linux-riscv@lists.infradead.org,
    llvm@lists.linux.dev, linux-kernel@vger.kernel.org, Jisheng Zhang,
    Sami Tolvanen

From: Deepak Gupta

commit 31da94c25aea ("riscv: add VMAP_STACK overflow detection") added
support for CONFIG_VMAP_STACK. If an overflow is detected, the CPU switches
to `shadow_stack` temporarily before finally switching to the per-cpu
`overflow_stack`.

If two CPUs/harts race and both overflow their kernel stacks, one or both
will end up corrupting the other's state because `shadow_stack` is not
per-cpu. This patch optimizes the overflow stack switch by picking the
per-cpu `overflow_stack` directly and gets rid of `shadow_stack`.

The changes in this patch are:

- Define an asm macro that obtains the address of a per-cpu symbol in a
  destination register.
- In entry.S, when an overflow is detected, locate the per-cpu overflow
  stack using that macro. Computing a per-cpu symbol requires a temporary
  register, so x31 is saved away into CSR_SCRATCH (CSR_SCRATCH is zero
  anyway, since we're in the kernel). A rough C sketch of the address
  computation follows below.

Please see the Links for additional relevant discussion and an alternative
solution.
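For illustration only (not part of the patch, and using a hypothetical
helper name), the stack pointer the new assembly establishes is the top of
the current hart's overflow stack, i.e. the value the removed
get_overflow_stack() helper used to return, with the per-cpu lookup spelled
out the same way the asm macro resolves it:

	/*
	 * Illustrative sketch only: the equivalent of
	 *   (unsigned long)this_cpu_ptr(overflow_stack) + OVERFLOW_STACK_SIZE
	 * computed by indexing __per_cpu_offset[] with the CPU number held in
	 * thread_info (exported to assembly as TASK_TI_CPU_NUM).
	 */
	static unsigned long overflow_stack_top(void)
	{
		unsigned long off = __per_cpu_offset[current_thread_info()->cpu];

		return (unsigned long)&overflow_stack + off + OVERFLOW_STACK_SIZE;
	}

In assembly the same lookup needs one scratch register, which is why x31 is
parked in CSR_SCRATCH around the computation.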
Tested by `echo EXHAUST_STACK > /sys/kernel/debug/provoke-crash/DIRECT`.
Kernel crash log below:

Insufficient stack space to handle exception!
Task stack:     [0xff20000010a98000..0xff20000010a9c000]
Overflow stack: [0xff600001f7d98370..0xff600001f7d99370]
CPU: 1 PID: 205 Comm: bash Not tainted 6.1.0-rc2-00001-g328a1f96f7b9 #34
Hardware name: riscv-virtio,qemu (DT)
epc : __memset+0x60/0xfc
 ra : recursive_loop+0x48/0xc6 [lkdtm]
epc : ffffffff808de0e4 ra : ffffffff0163a752 sp : ff20000010a97e80
 gp : ffffffff815c0330 tp : ff600000820ea280 t0 : ff20000010a97e88
 t1 : 000000000000002e t2 : 3233206874706564 s0 : ff20000010a982b0
 s1 : 0000000000000012 a0 : ff20000010a97e88 a1 : 0000000000000000
 a2 : 0000000000000400 a3 : ff20000010a98288 a4 : 0000000000000000
 a5 : 0000000000000000 a6 : fffffffffffe43f0 a7 : 00007fffffffffff
 s2 : ff20000010a97e88 s3 : ffffffff01644680 s4 : ff20000010a9be90
 s5 : ff600000842ba6c0 s6 : 00aaaaaac29e42b0 s7 : 00fffffff0aa3684
 s8 : 00aaaaaac2978040 s9 : 0000000000000065 s10: 00ffffff8a7cad10
 s11: 00ffffff8a76a4e0 t3 : ffffffff815dbaf4 t4 : ffffffff815dbaf4
 t5 : ffffffff815dbab8 t6 : ff20000010a9bb48
status: 0000000200000120 badaddr: ff20000010a97e88 cause: 000000000000000f
Kernel panic - not syncing: Kernel stack overflow
CPU: 1 PID: 205 Comm: bash Not tainted 6.1.0-rc2-00001-g328a1f96f7b9 #34
Hardware name: riscv-virtio,qemu (DT)
Call Trace:
[] dump_backtrace+0x30/0x38
[] show_stack+0x40/0x4c
[] dump_stack_lvl+0x44/0x5c
[] dump_stack+0x18/0x20
[] panic+0x126/0x2fe
[] walk_stackframe+0x0/0xf0
[] recursive_loop+0x48/0xc6 [lkdtm]
SMP: stopping secondary CPUs
---[ end Kernel panic - not syncing: Kernel stack overflow ]---

Cc: Guo Ren
Cc: Jisheng Zhang
Link: https://lore.kernel.org/linux-riscv/Y347B0x4VUNOd6V7@xhacker/T/#t
Link: https://lore.kernel.org/lkml/20221124094845.1907443-1-debug@rivosinc.com/
Signed-off-by: Deepak Gupta
Co-developed-by: Sami Tolvanen
Signed-off-by: Sami Tolvanen
Acked-by: Guo Ren
Tested-by: Nathan Chancellor
---
 arch/riscv/include/asm/asm-prototypes.h |  1 -
 arch/riscv/include/asm/asm.h            | 22 ++++++++
 arch/riscv/include/asm/thread_info.h    |  3 --
 arch/riscv/kernel/asm-offsets.c         |  1 +
 arch/riscv/kernel/entry.S               | 70 ++++---------------------
 arch/riscv/kernel/traps.c               | 36 +------------
 6 files changed, 34 insertions(+), 99 deletions(-)

diff --git a/arch/riscv/include/asm/asm-prototypes.h b/arch/riscv/include/asm/asm-prototypes.h
index 61ba8ed43d8f..36b955c762ba 100644
--- a/arch/riscv/include/asm/asm-prototypes.h
+++ b/arch/riscv/include/asm/asm-prototypes.h
@@ -25,7 +25,6 @@ DECLARE_DO_ERROR_INFO(do_trap_ecall_s);
 DECLARE_DO_ERROR_INFO(do_trap_ecall_m);
 DECLARE_DO_ERROR_INFO(do_trap_break);
 
-asmlinkage unsigned long get_overflow_stack(void);
 asmlinkage void handle_bad_stack(struct pt_regs *regs);
 asmlinkage void do_page_fault(struct pt_regs *regs);
 asmlinkage void do_irq(struct pt_regs *regs);
diff --git a/arch/riscv/include/asm/asm.h b/arch/riscv/include/asm/asm.h
index 114bbadaef41..bfb4c26f113c 100644
--- a/arch/riscv/include/asm/asm.h
+++ b/arch/riscv/include/asm/asm.h
@@ -82,6 +82,28 @@
 .endr
 .endm
 
+#ifdef CONFIG_SMP
+#ifdef CONFIG_32BIT
+#define PER_CPU_OFFSET_SHIFT 2
+#else
+#define PER_CPU_OFFSET_SHIFT 3
+#endif
+
+.macro asm_per_cpu dst sym tmp
+	REG_L \tmp, TASK_TI_CPU_NUM(tp)
+	slli  \tmp, \tmp, PER_CPU_OFFSET_SHIFT
+	la    \dst, __per_cpu_offset
+	add   \dst, \dst, \tmp
+	REG_L \tmp, 0(\dst)
+	la    \dst, \sym
+	add   \dst, \dst, \tmp
+.endm
+#else /* CONFIG_SMP */
+.macro asm_per_cpu dst sym tmp
+	la    \dst, \sym
+.endm
+#endif /* CONFIG_SMP */
+
 /* save all GPs except x1 ~ x5 */
 .macro save_from_x6_to_x31
 	REG_S x6,  PT_T1(sp)
diff --git a/arch/riscv/include/asm/thread_info.h b/arch/riscv/include/asm/thread_info.h
index 1833beb00489..d18ce0113ca1 100644
--- a/arch/riscv/include/asm/thread_info.h
+++ b/arch/riscv/include/asm/thread_info.h
@@ -34,9 +34,6 @@
 
 #ifndef __ASSEMBLY__
 
-extern long shadow_stack[SHADOW_OVERFLOW_STACK_SIZE / sizeof(long)];
-extern unsigned long spin_shadow_stack;
-
 #include
 #include
 
diff --git a/arch/riscv/kernel/asm-offsets.c b/arch/riscv/kernel/asm-offsets.c
index d6a75aac1d27..9f535d5de33f 100644
--- a/arch/riscv/kernel/asm-offsets.c
+++ b/arch/riscv/kernel/asm-offsets.c
@@ -39,6 +39,7 @@ void asm_offsets(void)
 	OFFSET(TASK_TI_KERNEL_SP, task_struct, thread_info.kernel_sp);
 	OFFSET(TASK_TI_USER_SP, task_struct, thread_info.user_sp);
+	OFFSET(TASK_TI_CPU_NUM, task_struct, thread_info.cpu);
 	OFFSET(TASK_THREAD_F0, task_struct, thread.fstate.f[0]);
 	OFFSET(TASK_THREAD_F1, task_struct, thread.fstate.f[1]);
 	OFFSET(TASK_THREAD_F2, task_struct, thread.fstate.f[2]);
diff --git a/arch/riscv/kernel/entry.S b/arch/riscv/kernel/entry.S
index 143a2bb3e697..3d11aa3af105 100644
--- a/arch/riscv/kernel/entry.S
+++ b/arch/riscv/kernel/entry.S
@@ -10,9 +10,11 @@
 #include
 #include
 #include
+#include
 #include
 #include
 #include
+#include
 
 SYM_CODE_START(handle_exception)
 	/*
@@ -170,67 +172,15 @@ SYM_CODE_END(ret_from_exception)
 
 #ifdef CONFIG_VMAP_STACK
 SYM_CODE_START_LOCAL(handle_kernel_stack_overflow)
-	/*
-	 * Takes the psuedo-spinlock for the shadow stack, in case multiple
-	 * harts are concurrently overflowing their kernel stacks. We could
-	 * store any value here, but since we're overflowing the kernel stack
-	 * already we only have SP to use as a scratch register. So we just
-	 * swap in the address of the spinlock, as that's definately non-zero.
-	 *
-	 * Pairs with a store_release in handle_bad_stack().
-	 */
-1:	la sp, spin_shadow_stack
-	REG_AMOSWAP_AQ sp, sp, (sp)
-	bnez sp, 1b
-
-	la sp, shadow_stack
-	addi sp, sp, SHADOW_OVERFLOW_STACK_SIZE
-
-	//save caller register to shadow stack
-	addi sp, sp, -(PT_SIZE_ON_STACK)
-	REG_S x1,  PT_RA(sp)
-	REG_S x5,  PT_T0(sp)
-	REG_S x6,  PT_T1(sp)
-	REG_S x7,  PT_T2(sp)
-	REG_S x10, PT_A0(sp)
-	REG_S x11, PT_A1(sp)
-	REG_S x12, PT_A2(sp)
-	REG_S x13, PT_A3(sp)
-	REG_S x14, PT_A4(sp)
-	REG_S x15, PT_A5(sp)
-	REG_S x16, PT_A6(sp)
-	REG_S x17, PT_A7(sp)
-	REG_S x28, PT_T3(sp)
-	REG_S x29, PT_T4(sp)
-	REG_S x30, PT_T5(sp)
-	REG_S x31, PT_T6(sp)
-
-	la ra, restore_caller_reg
-	tail get_overflow_stack
-
-restore_caller_reg:
-	//save per-cpu overflow stack
-	REG_S a0, -8(sp)
-	//restore caller register from shadow_stack
-	REG_L x1,  PT_RA(sp)
-	REG_L x5,  PT_T0(sp)
-	REG_L x6,  PT_T1(sp)
-	REG_L x7,  PT_T2(sp)
-	REG_L x10, PT_A0(sp)
-	REG_L x11, PT_A1(sp)
-	REG_L x12, PT_A2(sp)
-	REG_L x13, PT_A3(sp)
-	REG_L x14, PT_A4(sp)
-	REG_L x15, PT_A5(sp)
-	REG_L x16, PT_A6(sp)
-	REG_L x17, PT_A7(sp)
-	REG_L x28, PT_T3(sp)
-	REG_L x29, PT_T4(sp)
-	REG_L x30, PT_T5(sp)
-	REG_L x31, PT_T6(sp)
+	/* we reach here from kernel context, sscratch must be 0 */
+	csrrw x31, CSR_SCRATCH, x31
+	asm_per_cpu sp, overflow_stack, x31
+	li x31, OVERFLOW_STACK_SIZE
+	add sp, sp, x31
+	/* zero out x31 again and restore x31 */
+	xor x31, x31, x31
+	csrrw x31, CSR_SCRATCH, x31
 
-	//load per-cpu overflow stack
-	REG_L sp, -8(sp)
 	addi sp, sp, -(PT_SIZE_ON_STACK)
 
 	//save context to overflow stack
diff --git a/arch/riscv/kernel/traps.c b/arch/riscv/kernel/traps.c
index 19807c4d3805..0063a195deca 100644
--- a/arch/riscv/kernel/traps.c
+++ b/arch/riscv/kernel/traps.c
@@ -402,48 +402,14 @@ int is_valid_bugaddr(unsigned long pc)
 #endif /* CONFIG_GENERIC_BUG */
 
 #ifdef CONFIG_VMAP_STACK
-/*
- * Extra stack space that allows us to provide panic messages when the kernel
- * has overflowed its stack.
- */
-static DEFINE_PER_CPU(unsigned long [OVERFLOW_STACK_SIZE/sizeof(long)],
+DEFINE_PER_CPU(unsigned long [OVERFLOW_STACK_SIZE/sizeof(long)],
 		overflow_stack)__aligned(16);
-/*
- * A temporary stack for use by handle_kernel_stack_overflow. This is used so
- * we can call into C code to get the per-hart overflow stack. Usage of this
- * stack must be protected by spin_shadow_stack.
- */
-long shadow_stack[SHADOW_OVERFLOW_STACK_SIZE/sizeof(long)] __aligned(16);
-
-/*
- * A pseudo spinlock to protect the shadow stack from being used by multiple
- * harts concurrently. This isn't a real spinlock because the lock side must
- * be taken without a valid stack and only a single register, it's only taken
- * while in the process of panicing anyway so the performance and error
- * checking a proper spinlock gives us doesn't matter.
- */
-unsigned long spin_shadow_stack;
-
-asmlinkage unsigned long get_overflow_stack(void)
-{
-	return (unsigned long)this_cpu_ptr(overflow_stack) +
-		OVERFLOW_STACK_SIZE;
-}
 
 asmlinkage void handle_bad_stack(struct pt_regs *regs)
 {
 	unsigned long tsk_stk = (unsigned long)current->stack;
 	unsigned long ovf_stk = (unsigned long)this_cpu_ptr(overflow_stack);
 
-	/*
-	 * We're done with the shadow stack by this point, as we're on the
-	 * overflow stack. Tell any other concurrent overflowing harts that
-	 * they can proceed with panicing by releasing the pseudo-spinlock.
-	 *
-	 * This pairs with an amoswap.aq in handle_kernel_stack_overflow.
-	 */
-	smp_store_release(&spin_shadow_stack, 0);
-
 	console_verbose();
 
 	pr_emerg("Insufficient stack space to handle exception!\n");