From patchwork Tue Oct 8 22:36:52 2024
X-Patchwork-Submitter: Deepak Gupta
X-Patchwork-Id: 13827023
From: Deepak Gupta
Date: Tue, 08 Oct 2024 15:36:52 -0700
Subject: [PATCH v6 10/33] riscv: usercfi state for task and save/restore of
 CSR_SSP on trap entry/exit
X-Mailing-List: linux-kselftest@vger.kernel.org
Message-Id: <20241008-v5_user_cfi_series-v6-10-60d9fe073f37@rivosinc.com>
References: <20241008-v5_user_cfi_series-v6-0-60d9fe073f37@rivosinc.com>
In-Reply-To: <20241008-v5_user_cfi_series-v6-0-60d9fe073f37@rivosinc.com>
To: Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen,
 x86@kernel.org, "H. Peter Anvin", Andrew Morton, "Liam R. Howlett",
 Vlastimil Babka, Lorenzo Stoakes, Paul Walmsley, Palmer Dabbelt, Albert Ou,
 Conor Dooley, Rob Herring, Krzysztof Kozlowski, Arnd Bergmann,
 Christian Brauner, Peter Zijlstra, Oleg Nesterov, Eric Biederman, Kees Cook,
 Jonathan Corbet, Shuah Khan
Cc: linux-kernel@vger.kernel.org, linux-fsdevel@vger.kernel.org,
 linux-mm@kvack.org, linux-riscv@lists.infradead.org,
 devicetree@vger.kernel.org, linux-arch@vger.kernel.org,
 linux-doc@vger.kernel.org, linux-kselftest@vger.kernel.org,
 alistair.francis@wdc.com, richard.henderson@linaro.org, jim.shu@sifive.com,
 andybnac@gmail.com, kito.cheng@sifive.com, charlie@rivosinc.com,
 atishp@rivosinc.com, evan@rivosinc.com, cleger@rivosinc.com,
 alexghiti@rivosinc.com, samitolvanen@google.com, broonie@kernel.org,
 rick.p.edgecombe@intel.com, Deepak Gupta
X-Mailer: b4 0.14.0

Carves out space in the arch-specific thread_info struct for cfi status and
shadow stack in usermode on riscv.

This patch does the following:
- defines a new structure cfi_status with a status bit for the cfi feature
- defines the shadow stack pointer, base and size in the cfi_status structure
- defines offsets to the new member fields of thread_info in asm-offsets.c
- saves and restores the shadow stack pointer on trap entry (U --> S) and
  exit (S --> U)

Shadow stack save/restore is gated on feature availability and implemented
using alternatives. The CSR could be context switched in `switch_to` as well,
but as soon as kernel shadow stack support gets rolled in, the shadow stack
pointer will need to be switched at the trap entry/exit point (much like
`sp`). It can be argued that kernel shadow stack deployment may not be as
prevalent as user-mode use of this feature, but even a minimal deployment of
kernel shadow stack means it has to be supported. Hence the shadow stack
pointer is saved/restored in entry.S rather than in `switch_to.h`.
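For clarity, the trap entry/exit handling described above amounts to the
following C-like sketch. This is an illustrative editorial sketch only: the
function names are hypothetical, csr_swap()/csr_write() and SR_SPP come from
asm/csr.h, CSR_SSP and the user_cfi_state field are introduced earlier in
this series and in this patch, and the actual patch implements the logic in
assembly in entry.S, gated by ALTERNATIVE() on RISCV_ISA_EXT_ZICFISS and
CONFIG_RISCV_USER_CFI.

/* Illustrative sketch only -- not part of the patch. */
#include <asm/csr.h>      /* csr_swap(), csr_write(), SR_SPP, CSR_SSP */
#include <linux/sched.h>  /* current, task_thread_info() */

/* Trap entry (handle_exception): previous privilege mode is in sstatus.SPP. */
static inline void ssp_save_on_trap_entry(unsigned long status)
{
	/* Only harvest CSR_SSP if the trap came from U mode. */
	if (!(status & SR_SPP))
		/* Read and zero CSR_SSP in one swap, then stash it per task. */
		task_thread_info(current)->user_cfi_state.user_shdw_stk =
			csr_swap(CSR_SSP, 0);
}

/* Trap exit (ret_from_exception), when returning to U mode. */
static inline void ssp_restore_on_trap_exit(void)
{
	csr_write(CSR_SSP,
		  task_thread_info(current)->user_cfi_state.user_shdw_stk);
}

Placing these points in entry.S means the user shadow stack pointer is saved
and restored at the same boundaries as `sp`, rather than in `switch_to`.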
Signed-off-by: Deepak Gupta
Reviewed-by: Charlie Jenkins
---
 arch/riscv/include/asm/processor.h   |  1 +
 arch/riscv/include/asm/thread_info.h |  3 +++
 arch/riscv/include/asm/usercfi.h     | 24 ++++++++++++++++++++++++
 arch/riscv/kernel/asm-offsets.c      |  4 ++++
 arch/riscv/kernel/entry.S            | 26 ++++++++++++++++++++++++++
 5 files changed, 58 insertions(+)

diff --git a/arch/riscv/include/asm/processor.h b/arch/riscv/include/asm/processor.h
index aec3466a389c..5a8031384021 100644
--- a/arch/riscv/include/asm/processor.h
+++ b/arch/riscv/include/asm/processor.h
@@ -14,6 +14,7 @@
 #include
 #include
+#include <asm/usercfi.h>
 
 #define arch_get_mmap_end(addr, len, flags)			\
 ({								\
diff --git a/arch/riscv/include/asm/thread_info.h b/arch/riscv/include/asm/thread_info.h
index ebe52f96da34..12263cef7518 100644
--- a/arch/riscv/include/asm/thread_info.h
+++ b/arch/riscv/include/asm/thread_info.h
@@ -57,6 +57,9 @@ struct thread_info {
 	long			user_sp;	/* User stack pointer */
 	int			cpu;
 	unsigned long		syscall_work;	/* SYSCALL_WORK_ flags */
+#ifdef CONFIG_RISCV_USER_CFI
+	struct cfi_status	user_cfi_state;
+#endif
 #ifdef CONFIG_SHADOW_CALL_STACK
 	void			*scs_base;
 	void			*scs_sp;
diff --git a/arch/riscv/include/asm/usercfi.h b/arch/riscv/include/asm/usercfi.h
new file mode 100644
index 000000000000..4fa201b4fc4e
--- /dev/null
+++ b/arch/riscv/include/asm/usercfi.h
@@ -0,0 +1,24 @@
+/* SPDX-License-Identifier: GPL-2.0
+ * Copyright (C) 2024 Rivos, Inc.
+ * Deepak Gupta
+ */
+#ifndef _ASM_RISCV_USERCFI_H
+#define _ASM_RISCV_USERCFI_H
+
+#ifndef __ASSEMBLY__
+#include
+
+#ifdef CONFIG_RISCV_USER_CFI
+struct cfi_status {
+	unsigned long ubcfi_en : 1; /* Enable for backward cfi. */
+	unsigned long rsvd : ((sizeof(unsigned long) * 8) - 1);
+	unsigned long user_shdw_stk; /* Current user shadow stack pointer */
+	unsigned long shdw_stk_base; /* Base address of shadow stack */
+	unsigned long shdw_stk_size; /* size of shadow stack */
+};
+
+#endif /* CONFIG_RISCV_USER_CFI */
+
+#endif /* __ASSEMBLY__ */
+
+#endif /* _ASM_RISCV_USERCFI_H */
diff --git a/arch/riscv/kernel/asm-offsets.c b/arch/riscv/kernel/asm-offsets.c
index e94180ba432f..766bd33f10cb 100644
--- a/arch/riscv/kernel/asm-offsets.c
+++ b/arch/riscv/kernel/asm-offsets.c
@@ -52,6 +52,10 @@ void asm_offsets(void)
 #endif
 	OFFSET(TASK_TI_CPU_NUM, task_struct, thread_info.cpu);
+#ifdef CONFIG_RISCV_USER_CFI
+	OFFSET(TASK_TI_CFI_STATUS, task_struct, thread_info.user_cfi_state);
+	OFFSET(TASK_TI_USER_SSP, task_struct, thread_info.user_cfi_state.user_shdw_stk);
+#endif
 	OFFSET(TASK_THREAD_F0, task_struct, thread.fstate.f[0]);
 	OFFSET(TASK_THREAD_F1, task_struct, thread.fstate.f[1]);
 	OFFSET(TASK_THREAD_F2, task_struct, thread.fstate.f[2]);
diff --git a/arch/riscv/kernel/entry.S b/arch/riscv/kernel/entry.S
index c200d329d4bd..8f7f477517e3 100644
--- a/arch/riscv/kernel/entry.S
+++ b/arch/riscv/kernel/entry.S
@@ -147,6 +147,20 @@ SYM_CODE_START(handle_exception)
 	REG_L s0, TASK_TI_USER_SP(tp)
 	csrrc s1, CSR_STATUS, t0
+	/*
+	 * If previous mode was U, capture shadow stack pointer and save it away
+	 * Zero CSR_SSP at the same time for sanitization.
+	 */
+	ALTERNATIVE("nop; nop; nop; nop",
+		    __stringify(			\
+			andi s2, s1, SR_SPP;		\
+			bnez s2, skip_ssp_save;		\
+			csrrw s2, CSR_SSP, x0;		\
+			REG_S s2, TASK_TI_USER_SSP(tp); \
+			skip_ssp_save:),
+		    0,
+		    RISCV_ISA_EXT_ZICFISS,
+		    CONFIG_RISCV_USER_CFI)
 	csrr s2, CSR_EPC
 	csrr s3, CSR_TVAL
 	csrr s4, CSR_CAUSE
@@ -236,6 +250,18 @@ SYM_CODE_START_NOALIGN(ret_from_exception)
 	 * structures again.
 	 */
 	csrw CSR_SCRATCH, tp
+
+	/*
+	 * Going back to U mode, restore shadow stack pointer
+	 */
+	ALTERNATIVE("nop; nop",
+		    __stringify(			\
+			REG_L s3, TASK_TI_USER_SSP(tp); \
+			csrw CSR_SSP, s3),
+		    0,
+		    RISCV_ISA_EXT_ZICFISS,
+		    CONFIG_RISCV_USER_CFI)
+
 1:
 #ifdef CONFIG_RISCV_ISA_V_PREEMPTIVE
 	move	a0, sp