From patchwork Mon Apr 7 16:19:46 2025
X-Patchwork-Submitter: Jeremy Linton
X-Patchwork-Id: 14041243
From: Jeremy Linton
To: linux-trace-kernel@vger.kernel.org
Cc: linux-perf-users@vger.kernel.org, mhiramat@kernel.org, oleg@redhat.com,
    peterz@infradead.org, acme@kernel.org, namhyung@kernel.org,
    mark.rutland@arm.com, alexander.shishkin@linux.intel.com, jolsa@kernel.org,
    irogers@google.com, adrian.hunter@intel.com, kan.liang@linux.intel.com,
    thiago.bauermann@linaro.org, broonie@kernel.org, yury.khrustalev@arm.com,
    kristina.martsenko@arm.com, liaochang1@huawei.com, catalin.marinas@arm.com,
    will@kernel.org, linux-arm-kernel@lists.infradead.org,
    linux-kernel@vger.kernel.org, Jeremy Linton
Subject: [PATCH v2 1/6] arm64/gcs: task_gcs_el0_enable() should use passed task
Date: Mon, 7 Apr 2025 11:19:46 -0500
Message-ID: <20250407161951.560865-2-jeremy.linton@arm.com>
In-Reply-To: <20250407161951.560865-1-jeremy.linton@arm.com>
References: <20250407161951.560865-1-jeremy.linton@arm.com>

Mark Rutland noticed that the task parameter is ignored and 'current' is
used instead. Since 'current' is usually the task that gets passed, this
hasn't caused problems yet, but it likely will as the code gets more
testing.

However, once this is fixed it exposes a new bug in copy_thread_gcs(),
because the new task's gcs_el0_mode is checked before it has been set.
Move gcs_alloc_thread_stack() after the new task's gcs_el0_mode
initialization to avoid this.
Fixes: fc84bc5378a8 ("arm64/gcs: Context switch GCS state for EL0")
Signed-off-by: Jeremy Linton
Reviewed-by: Mark Brown
---
 arch/arm64/include/asm/gcs.h | 2 +-
 arch/arm64/kernel/process.c  | 6 +++---
 2 files changed, 4 insertions(+), 4 deletions(-)

diff --git a/arch/arm64/include/asm/gcs.h b/arch/arm64/include/asm/gcs.h
index f50660603ecf..5bc432234d3a 100644
--- a/arch/arm64/include/asm/gcs.h
+++ b/arch/arm64/include/asm/gcs.h
@@ -58,7 +58,7 @@ static inline u64 gcsss2(void)
 static inline bool task_gcs_el0_enabled(struct task_struct *task)
 {
-	return current->thread.gcs_el0_mode & PR_SHADOW_STACK_ENABLE;
+	return task->thread.gcs_el0_mode & PR_SHADOW_STACK_ENABLE;
 }

 void gcs_set_el0_mode(struct task_struct *task);

diff --git a/arch/arm64/kernel/process.c b/arch/arm64/kernel/process.c
index 42faebb7b712..68b7021cda77 100644
--- a/arch/arm64/kernel/process.c
+++ b/arch/arm64/kernel/process.c
@@ -305,13 +305,13 @@ static int copy_thread_gcs(struct task_struct *p,
 	p->thread.gcs_base = 0;
 	p->thread.gcs_size = 0;

+	p->thread.gcs_el0_mode = current->thread.gcs_el0_mode;
+	p->thread.gcs_el0_locked = current->thread.gcs_el0_locked;
+
 	gcs = gcs_alloc_thread_stack(p, args);
 	if (IS_ERR_VALUE(gcs))
 		return PTR_ERR((void *)gcs);

-	p->thread.gcs_el0_mode = current->thread.gcs_el0_mode;
-	p->thread.gcs_el0_locked = current->thread.gcs_el0_locked;
-
 	return 0;
 }
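To illustrate why honoring the task argument matters, here is a minimal
sketch (not part of the series) of a hypothetical caller that inspects a
task other than current; before this fix it would have silently reported
current's GCS state instead:

/*
 * Hypothetical caller, for illustration only: inspects the GCS state of
 * an arbitrary task (e.g. one being examined while stopped). With the
 * old implementation this returned current's state, not target's.
 */
static bool target_has_gcs_enabled(struct task_struct *target)
{
	return task_gcs_el0_enabled(target);
}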
From patchwork Mon Apr 7 16:19:47 2025
X-Patchwork-Submitter: Jeremy Linton
X-Patchwork-Id: 14041244
From: Jeremy Linton
To: linux-trace-kernel@vger.kernel.org
Subject: [PATCH v2 2/6] arm64: probes: Break ret out from bl/blr
Date: Mon, 7 Apr 2025 11:19:47 -0500
Message-ID: <20250407161951.560865-3-jeremy.linton@arm.com>
In-Reply-To: <20250407161951.560865-1-jeremy.linton@arm.com>
References: <20250407161951.560865-1-jeremy.linton@arm.com>

Prepare for GCS by breaking RET out into its own simulation function, so
the new behavior can be encapsulated independently of the branch
instructions.

Signed-off-by: Jeremy Linton
---
 arch/arm64/kernel/probes/decode-insn.c   |  7 ++++---
 arch/arm64/kernel/probes/simulate-insn.c | 10 +++++++++-
 arch/arm64/kernel/probes/simulate-insn.h |  3 ++-
 3 files changed, 15 insertions(+), 5 deletions(-)

diff --git a/arch/arm64/kernel/probes/decode-insn.c b/arch/arm64/kernel/probes/decode-insn.c
index 6438bf62e753..4137cc5ef031 100644
--- a/arch/arm64/kernel/probes/decode-insn.c
+++ b/arch/arm64/kernel/probes/decode-insn.c
@@ -108,9 +108,10 @@ arm_probe_decode_insn(u32 insn, struct arch_probe_insn *api)
		   aarch64_insn_is_bl(insn)) {
		api->handler = simulate_b_bl;
	} else if (aarch64_insn_is_br(insn) ||
-		   aarch64_insn_is_blr(insn) ||
-		   aarch64_insn_is_ret(insn)) {
-		api->handler = simulate_br_blr_ret;
+		   aarch64_insn_is_blr(insn)) {
+		api->handler = simulate_br_blr;
+	} else if (aarch64_insn_is_ret(insn)) {
+		api->handler = simulate_ret;
	} else {
		/*
		 * Instruction cannot be stepped out-of-line and we don't

diff --git a/arch/arm64/kernel/probes/simulate-insn.c b/arch/arm64/kernel/probes/simulate-insn.c
index 4c6d2d712fbd..09a0b36122d0 100644
--- a/arch/arm64/kernel/probes/simulate-insn.c
+++ b/arch/arm64/kernel/probes/simulate-insn.c
@@ -126,7 +126,7 @@ simulate_b_cond(u32 opcode, long addr, struct pt_regs *regs)
 }

 void __kprobes
-simulate_br_blr_ret(u32 opcode, long addr, struct pt_regs *regs)
+simulate_br_blr(u32 opcode, long addr, struct pt_regs *regs)
 {
	int xn = (opcode >> 5) & 0x1f;

@@ -138,6 +138,14 @@ simulate_br_blr_ret(u32 opcode, long addr, struct pt_regs *regs)
		set_x_reg(regs, 30, addr + 4);
 }

+void __kprobes
+simulate_ret(u32 opcode, long addr, struct pt_regs *regs)
+{
+	int xn = (opcode >> 5) & 0x1f;
+
+	instruction_pointer_set(regs, get_x_reg(regs, xn));
+}
+
 void __kprobes
 simulate_cbz_cbnz(u32 opcode, long addr, struct pt_regs *regs)
 {

diff --git a/arch/arm64/kernel/probes/simulate-insn.h b/arch/arm64/kernel/probes/simulate-insn.h
index efb2803ec943..9e772a292d56 100644
--- a/arch/arm64/kernel/probes/simulate-insn.h
+++ b/arch/arm64/kernel/probes/simulate-insn.h
@@ -11,7 +11,8 @@
 void simulate_adr_adrp(u32 opcode, long addr, struct pt_regs *regs);
 void simulate_b_bl(u32 opcode, long addr, struct pt_regs *regs);
 void simulate_b_cond(u32 opcode, long addr, struct pt_regs *regs);
-void simulate_br_blr_ret(u32 opcode, long addr, struct pt_regs *regs);
+void simulate_br_blr(u32 opcode, long addr, struct pt_regs *regs);
+void simulate_ret(u32 opcode, long addr, struct pt_regs *regs);
 void simulate_cbz_cbnz(u32 opcode, long addr, struct pt_regs *regs);
 void simulate_tbz_tbnz(u32 opcode, long addr, struct pt_regs *regs);
 void simulate_ldr_literal(u32 opcode, long addr, struct pt_regs *regs);
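Both handlers extract the target register number Xn from bits [9:5] of
the opcode; for RET the field defaults to x30. A small standalone C
sketch of that decoding (the 0xd65f03c0 encoding of a plain "ret" is
taken from the A64 ISA and is not part of the patch):

#include <stdint.h>
#include <stdio.h>

int main(void)
{
	uint32_t opcode = 0xd65f03c0;		/* plain "ret", i.e. ret x30 */
	int xn = (opcode >> 5) & 0x1f;		/* same field extraction as simulate_ret() */

	printf("ret returns via x%d\n", xn);	/* prints 30 */
	return 0;
}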
From patchwork Mon Apr 7 16:19:48 2025
X-Patchwork-Submitter: Jeremy Linton
X-Patchwork-Id: 14041245
From: Jeremy Linton
To: linux-trace-kernel@vger.kernel.org
Subject: [PATCH v2 3/6] arm64: uaccess: Add additional userspace GCS accessors
Date: Mon, 7 Apr 2025 11:19:48 -0500
Message-ID: <20250407161951.560865-4-jeremy.linton@arm.com>
In-Reply-To: <20250407161951.560865-1-jeremy.linton@arm.com>
References: <20250407161951.560865-1-jeremy.linton@arm.com>

Uprobes need more advanced read, push, and pop userspace GCS
functionality. Implement those features using the existing gcsstr() and
copy_from_user().

It's important to note that GCS pages can be read by normal
instructions, but the hardware validates that pages used by GCS-specific
operations have the GCS privilege set. We aren't validating this in
load_user_gcs() because doing so would require stabilizing the VMA over
the read, which may fault.
Signed-off-by: Jeremy Linton
---
 arch/arm64/include/asm/uaccess.h | 42 ++++++++++++++++++++++++++++++++
 1 file changed, 42 insertions(+)

diff --git a/arch/arm64/include/asm/uaccess.h b/arch/arm64/include/asm/uaccess.h
index 5b91803201ef..34a8b2cc8935 100644
--- a/arch/arm64/include/asm/uaccess.h
+++ b/arch/arm64/include/asm/uaccess.h
@@ -20,6 +20,7 @@
 #include
 #include
+#include
 #include
 #include
 #include
@@ -539,6 +540,47 @@ static inline void put_user_gcs(unsigned long val, unsigned long __user *addr,
	uaccess_ttbr0_disable();
 }

+static __always_inline unsigned long __must_check
+copy_from_user(void *to, const void __user *from, unsigned long n);
+
+/*
+ * Unlike put_user_gcs() above, the use of copy_from_user() may provide
+ * an opening for non GCS pages to be used to source data. Therefore this
+ * should only be used in contexts where that is acceptable.
+ */
+static inline u64 load_user_gcs(unsigned long __user *addr, int *err)
+{
+	unsigned long ret;
+	u64 load = 0;
+
+	gcsb_dsync();
+	ret = copy_from_user(&load, addr, sizeof(load));
+	if (ret != 0)
+		*err = ret;
+	return load;
+}
+
+static inline void push_user_gcs(unsigned long val, int *err)
+{
+	u64 gcspr = read_sysreg_s(SYS_GCSPR_EL0);
+
+	gcspr -= sizeof(u64);
+	put_user_gcs(val, (unsigned long __user *)gcspr, err);
+	if (!*err)
+		write_sysreg_s(gcspr, SYS_GCSPR_EL0);
+}
+
+static inline u64 pop_user_gcs(int *err)
+{
+	u64 gcspr = read_sysreg_s(SYS_GCSPR_EL0);
+	u64 read_val;
+
+	read_val = load_user_gcs((unsigned long __user *)gcspr, err);
+	if (!*err)
+		write_sysreg_s(gcspr + sizeof(u64), SYS_GCSPR_EL0);
+
+	return read_val;
+}
 #endif /* CONFIG_ARM64_GCS */
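As a mental model of how the new accessors move GCSPR_EL0 (push
pre-decrements by 8 and stores, pop loads and then post-increments by
8), here is a small standalone C toy; it is purely illustrative and not
tied to the kernel API:

#include <stdint.h>
#include <stdio.h>

int main(void)
{
	/* Toy model of a descending shadow stack and its stack pointer. */
	uint64_t gcs[4] = { 0 };
	uint64_t *gcspr = &gcs[4];		/* empty stack, grows downwards */

	*(--gcspr) = 0x400abcULL;		/* push_user_gcs(0x400abc, &err) */
	uint64_t popped = *gcspr++;		/* pop_user_gcs(&err)            */

	printf("popped 0x%llx\n", (unsigned long long)popped);
	return 0;
}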
From patchwork Mon Apr 7 16:19:49 2025
X-Patchwork-Submitter: Jeremy Linton
X-Patchwork-Id: 14041246
From: Jeremy Linton
To: linux-trace-kernel@vger.kernel.org
Subject: [PATCH v2 4/6] arm64: probes: Add GCS support to bl/blr/ret
Date: Mon, 7 Apr 2025 11:19:49 -0500
Message-ID: <20250407161951.560865-5-jeremy.linton@arm.com>
In-Reply-To: <20250407161951.560865-1-jeremy.linton@arm.com>
References: <20250407161951.560865-1-jeremy.linton@arm.com>

The arm64 probe simulation doesn't currently have logic in place to deal
with GCS, and this results in core dumps if probes are inserted at
control flow locations. Fix up bl, blr and ret to manipulate the shadow
stack as needed.

While we manipulate and validate the shadow stack correctly, the
hardware provides additional security by only allowing GCS operations
against pages which are marked to support GCS. For writing there is
gcssttr(), which enforces this, but there isn't an equivalent for
reading. This means that uprobe users should be aware that probing
control flow instructions which require reading the shadow stack
(e.g. ret) offers lower security guarantees than what is achieved
without the uprobe active.

Signed-off-by: Jeremy Linton
---
 arch/arm64/kernel/probes/simulate-insn.c | 28 ++++++++++++++++++++----
 1 file changed, 24 insertions(+), 4 deletions(-)

diff --git a/arch/arm64/kernel/probes/simulate-insn.c b/arch/arm64/kernel/probes/simulate-insn.c
index 09a0b36122d0..1fc9bb69b1eb 100644
--- a/arch/arm64/kernel/probes/simulate-insn.c
+++ b/arch/arm64/kernel/probes/simulate-insn.c
@@ -13,6 +13,7 @@
 #include

 #include "simulate-insn.h"
+#include "asm/gcs.h"

 #define bbl_displacement(insn)	\
	sign_extend32(((insn) & 0x3ffffff) << 2, 27)
@@ -49,6 +50,18 @@ static inline u32 get_w_reg(struct pt_regs *regs, int reg)
	return lower_32_bits(pt_regs_read_reg(regs, reg));
 }

+static inline void update_lr(struct pt_regs *regs, long addr)
+{
+	int err = 0;
+
+	if (user_mode(regs) && task_gcs_el0_enabled(current)) {
+		push_user_gcs(addr + 4, &err);
+		if (err)
+			force_sig(SIGSEGV);
+	}
+	procedure_link_pointer_set(regs, addr + 4);
+}
+
 static bool __kprobes check_cbz(u32 opcode, struct pt_regs *regs)
 {
	int xn = opcode & 0x1f;
@@ -107,9 +120,8 @@ simulate_b_bl(u32 opcode, long addr, struct pt_regs *regs)
 {
	int disp = bbl_displacement(opcode);

-	/* Link register is x30 */
	if (opcode & (1 << 31))
-		set_x_reg(regs, 30, addr + 4);
+		update_lr(regs, addr);

	instruction_pointer_set(regs, addr + disp);
 }
@@ -133,17 +145,25 @@ simulate_br_blr(u32 opcode, long addr, struct pt_regs *regs)
	/* update pc first in case we're doing a "blr lr" */
	instruction_pointer_set(regs, get_x_reg(regs, xn));

-	/* Link register is x30 */
	if (((opcode >> 21) & 0x3) == 1)
-		set_x_reg(regs, 30, addr + 4);
+		update_lr(regs, addr);
 }

 void __kprobes
 simulate_ret(u32 opcode, long addr, struct pt_regs *regs)
 {
+	u64 ret_addr;
+	int err = 0;
	int xn = (opcode >> 5) & 0x1f;

	instruction_pointer_set(regs, get_x_reg(regs, xn));
+
+	if (user_mode(regs) && task_gcs_el0_enabled(current)) {
+		ret_addr = pop_user_gcs(&err);
+		if (err || ret_addr != procedure_link_pointer(regs))
+			force_sig(SIGSEGV);
+	}
+
 }

 void __kprobes
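Only the linking forms (bl, blr) push a GCS entry, and the simulation
distinguishes them by the same opcode bits the hardware uses: bit 31
separates bl from b, and bits [22:21] == 01 separate blr from br. A
standalone C sketch of those checks, with instruction encodings assumed
from the A64 ISA rather than taken from the patch:

#include <stdint.h>
#include <stdio.h>

int main(void)
{
	uint32_t b   = 0x14000000;	/* b   #0  - no link           */
	uint32_t bl  = 0x94000000;	/* bl  #0  - links, pushes GCS */
	uint32_t br  = 0xd61f0120;	/* br  x9  - no link           */
	uint32_t blr = 0xd63f0120;	/* blr x9  - links, pushes GCS */

	printf("b links:   %d\n", !!(b  & (1u << 31)));		/* 0 */
	printf("bl links:  %d\n", !!(bl & (1u << 31)));		/* 1 */
	printf("br links:  %d\n", ((br  >> 21) & 0x3) == 1);	/* 0 */
	printf("blr links: %d\n", ((blr >> 21) & 0x3) == 1);	/* 1 */
	return 0;
}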
From patchwork Mon Apr 7 16:19:50 2025
X-Patchwork-Submitter: Jeremy Linton
X-Patchwork-Id: 14041247
From: Jeremy Linton
To: linux-trace-kernel@vger.kernel.org
Subject: [PATCH v2 5/6] arm64: uprobes: Add GCS support to uretprobes
Date: Mon, 7 Apr 2025 11:19:50 -0500
Message-ID: <20250407161951.560865-6-jeremy.linton@arm.com>
In-Reply-To: <20250407161951.560865-1-jeremy.linton@arm.com>
References: <20250407161951.560865-1-jeremy.linton@arm.com>

Ret probes work by changing the value in the link register at the probe
location so that execution returns to the probe rather than to the
calling routine. Thus the GCS needs to be updated with this address as
well. Since it's possible to insert probes at locations where the
current value of the LR doesn't match the GCS state, this needs to be
detected and handled in order to maintain the existing no-fault
behavior.
Co-developed-by: Steve Capper
Signed-off-by: Steve Capper
(updated to use the new GCS accessors, and to handle LR/GCS mismatches)
Signed-off-by: Jeremy Linton
---
 arch/arm64/kernel/probes/uprobes.c | 30 ++++++++++++++++++++++++++++++
 1 file changed, 30 insertions(+)

diff --git a/arch/arm64/kernel/probes/uprobes.c b/arch/arm64/kernel/probes/uprobes.c
index cb3d05af36e3..5e72409a255a 100644
--- a/arch/arm64/kernel/probes/uprobes.c
+++ b/arch/arm64/kernel/probes/uprobes.c
@@ -159,11 +159,41 @@ arch_uretprobe_hijack_return_addr(unsigned long trampoline_vaddr,
				  struct pt_regs *regs)
 {
	unsigned long orig_ret_vaddr;
+	unsigned long gcs_ret_vaddr;
+	int err = 0;
+	u64 gcspr;

	orig_ret_vaddr = procedure_link_pointer(regs);
+
+	if (task_gcs_el0_enabled(current)) {
+		gcspr = read_sysreg_s(SYS_GCSPR_EL0);
+		gcs_ret_vaddr = load_user_gcs((unsigned long __user *)gcspr, &err);
+		if (err) {
+			force_sig(SIGSEGV);
+			goto out;
+		}
+		/*
+		 * If the LR and GCS entry don't match, then some kind of PAC/control
+		 * flow happened. Likely because the user is attempting to retprobe
+		 * on something that isn't a function boundary or inside a leaf
+		 * function. Explicitly abort this retprobe because it will generate
+		 * a GCS exception.
+		 */
+		if (gcs_ret_vaddr != orig_ret_vaddr) {
+			orig_ret_vaddr = -1;
+			goto out;
+		}
+		put_user_gcs(trampoline_vaddr, (unsigned long __user *)gcspr, &err);
+		if (err) {
+			force_sig(SIGSEGV);
+			goto out;
+		}
+	}
+
	/* Replace the return addr with trampoline addr */
	procedure_link_pointer_set(regs, trampoline_vaddr);

+out:
	return orig_ret_vaddr;
 }
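With this in place, uretprobes can be exercised on GCS-enabled binaries
in the usual ways. As one hedged example, a return uprobe can be
registered through the tracefs uprobe_events interface from C; the event
name, target path and offset below are made up for illustration, and the
tracefs mount point may differ on a given system:

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
	/* Hypothetical target: the function at offset 0x1234 in /usr/bin/mytool. */
	const char *cmd = "r:gcs_demo/myfunc_ret /usr/bin/mytool:0x1234\n";
	int fd = open("/sys/kernel/tracing/uprobe_events", O_WRONLY | O_APPEND);

	if (fd < 0) {
		perror("open uprobe_events");
		return 1;
	}
	if (write(fd, cmd, strlen(cmd)) < 0)
		perror("register uretprobe");	/* 'r:' requests a return probe */
	close(fd);
	return 0;
}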
From patchwork Mon Apr 7 16:19:51 2025
X-Patchwork-Submitter: Jeremy Linton
X-Patchwork-Id: 14041248
From: Jeremy Linton
To: linux-trace-kernel@vger.kernel.org
Subject: [PATCH v2 6/6] arm64: Kconfig: Remove GCS restrictions on UPROBES
Date: Mon, 7 Apr 2025 11:19:51 -0500
Message-ID: <20250407161951.560865-7-jeremy.linton@arm.com>
In-Reply-To: <20250407161951.560865-1-jeremy.linton@arm.com>
References: <20250407161951.560865-1-jeremy.linton@arm.com>

Now that the uprobe paths have been made GCS compatible, drop the
Kconfig restriction.

Signed-off-by: Jeremy Linton
---
 arch/arm64/Kconfig | 1 -
 1 file changed, 1 deletion(-)

diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index a182295e6f08..b962a1321ebf 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -2236,7 +2236,6 @@ config ARM64_GCS
	default y
	select ARCH_HAS_USER_SHADOW_STACK
	select ARCH_USES_HIGH_VMA_FLAGS
-	depends on !UPROBES
	help
	  Guarded Control Stack (GCS) provides support for a separate
	  stack with restricted access which contains only return