From patchwork Tue Mar 14 16:20:43 2023
X-Patchwork-Submitter: Mark Rutland
X-Patchwork-Id: 13174721
From: Mark Rutland <mark.rutland@arm.com>
To: linux-arm-kernel@lists.infradead.org
Cc: amit.kachhap@arm.com, catalin.marinas@arm.com, james.morse@arm.com,
    kristina.martsenko@arm.com, mark.rutland@arm.com, will@kernel.org
Subject: [PATCH 2/3] arm64: use XPACLRI to strip PAC
Date: Tue, 14 Mar 2023 16:20:43 +0000
Message-Id: <20230314162044.888747-3-mark.rutland@arm.com>
In-Reply-To: <20230314162044.888747-1-mark.rutland@arm.com>
References: <20230314162044.888747-1-mark.rutland@arm.com>

Currently we strip the PAC from pointers using C code, which requires
generating bitmasks, and conditionally clearing/setting bits depending
on bit 55. We can do better by using XPACLRI directly.

When the logic was originally written to strip PACs from user pointers,
contemporary toolchains used for the kernel had assemblers which were
unaware of the PAC instructions. As stripping the PAC from userspace
pointers required unconditional clearing of a fixed set of bits (which
could be performed with a single instruction), it was simpler to
implement the masking in C than it was to make use of XPACI or XPACLRI.

When support for in-kernel pointer authentication was added, the
stripping logic was extended to cover TTBR1 pointers, requiring several
instructions to handle whether to clear/set bits dependent on bit 55 of
the pointer.
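As a worked illustration of that bit-55 handling (not part of the
patch: a standalone userspace sketch, where VA_BITS and the example
PAC bit patterns are assumptions standing in for the kernel's runtime
vabits_actual and real PACs):

  #include <stdint.h>
  #include <stdio.h>

  #define VA_BITS		48	/* assumed; the kernel uses vabits_actual */
  #define BIT_ULL(n)		(1ULL << (n))
  #define GENMASK_ULL(h, l)	(((~0ULL) >> (63 - (h))) & ((~0ULL) << (l)))

  /* As in <asm/compiler.h>: PAC bits sit between bit 54 and the VA size */
  #define user_pac_mask()	GENMASK_ULL(54, VA_BITS)
  #define kernel_pac_mask()	GENMASK_ULL(63, VA_BITS)

  /*
   * Bit 55 selects the TTBR: TTBR1 (kernel) pointers are ones-filled
   * above the VA bits, TTBR0 (user) pointers are zero-filled.
   */
  static uint64_t clear_pac(uint64_t ptr)
  {
          return (ptr & BIT_ULL(55)) ? (ptr | kernel_pac_mask())
                                     : (ptr & ~user_pac_mask());
  }

  int main(void)
  {
          /* user pointer, bit 55 clear: PAC bits 54:48 are zeroed */
          printf("%016llx\n", (unsigned long long)clear_pac(0x002d00000041fed0ULL));
          /* -> 000000000041fed0 */

          /* kernel pointer, bit 55 set: bits 63:48 are ones-filled */
          printf("%016llx\n", (unsigned long long)clear_pac(0xaaad800008041ed0ULL));
          /* -> ffff800008041ed0 */
          return 0;
  }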
These days, all supported toolchains have assemblers which are aware of
the XPACI and XPACLRI instructions, and contemporary compilers use
XPACLRI to strip the PAC in __builtin_return_address(). Do likewise
within the kernel, and use XPACLRI to strip PACs from pointers. This
lives in the HINT encoding space, and so can be used unconditionally.

Using XPACLRI saves a few instructions (especially where tracepoints
are enabled, as these can make heavy use of
__builtin_return_address()), and in theory should be lower overhead.

At the same time, I've split ptrauth_strip_insn_pac() into
ptrauth_strip_user_insn_pac() and ptrauth_strip_kernel_insn_pac()
helpers so that we can avoid unnecessary PAC stripping when pointer
authentication is not in use in userspace or kernel respectively.

The underlying xpaclri() macro uses inline assembly which clobbers x30.
The clobber causes the compiler to save/restore the original x30 value
in a frame record (protected with PACIASP and AUTIASP when in-kernel
authentication is enabled), so this does not provide a gadget to alter
the return address. Similarly this does not adversely affect unwinding
due to the presence of the frame record.

The ptrauth_user_pac_mask() and ptrauth_kernel_pac_mask() are exported
from the kernel in ptrace and core dumps, so these are retained. A
subsequent patch will move them out of <asm/compiler.h>.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Amit Daniel Kachhap <amit.kachhap@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: James Morse <james.morse@arm.com>
Cc: Kristina Martsenko <kristina.martsenko@arm.com>
Cc: Will Deacon <will@kernel.org>
---
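For anyone who wants to poke at the instruction outside the kernel, the
xpaclri() pattern in this patch can be exercised from userspace. A
minimal AArch64-only sketch, assuming an assembler which accepts the
XPACLRI mnemonic (illustration only; it omits the kernel's
ARM64_ASM_PREAMBLE):

  #include <stdint.h>
  #include <stdio.h>

  /*
   * Mirror of the patch's xpaclri() macro: XPACLRI operates only on
   * x30 (LR), hence the register variable pinned to x30.
   */
  static inline uint64_t xpaclri(uint64_t ptr)
  {
          register uint64_t lr asm("x30") = ptr;

          asm("xpaclri" : "+r" (lr));
          return lr;
  }

  int main(void)
  {
          /*
           * XPACLRI is in the HINT space, so on a core without pointer
           * authentication it executes as a NOP and the input pointer
           * is printed unchanged.
           */
          printf("%016llx\n", (unsigned long long)xpaclri((uint64_t)main));
          return 0;
  }

Because x30 appears as an output of the asm, the compiler saves and
restores the real return address in a frame record around it, which is
the behaviour described above.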
 arch/arm64/include/asm/compiler.h     | 32 +++++++++++++++++++++------
 arch/arm64/include/asm/pointer_auth.h |  6 ------
 arch/arm64/kernel/perf_callchain.c    |  2 +-
 arch/arm64/kernel/process.c           |  2 +-
 arch/arm64/kernel/stacktrace.c        |  2 +-
 5 files changed, 28 insertions(+), 16 deletions(-)

diff --git a/arch/arm64/include/asm/compiler.h b/arch/arm64/include/asm/compiler.h
index 9033a0b2f46a..2476914e59a1 100644
--- a/arch/arm64/include/asm/compiler.h
+++ b/arch/arm64/include/asm/compiler.h
@@ -15,15 +15,33 @@
 #define ptrauth_user_pac_mask()		GENMASK_ULL(54, vabits_actual)
 #define ptrauth_kernel_pac_mask()	GENMASK_ULL(63, vabits_actual)
 
-/* Valid for EL0 TTBR0 and EL1 TTBR1 instruction pointers */
-#define ptrauth_clear_pac(ptr)						\
-	((ptr & BIT_ULL(55)) ? (ptr | ptrauth_kernel_pac_mask()) :	\
-			       (ptr & ~ptrauth_user_pac_mask()))
+#define xpaclri(ptr)							\
+({									\
+	register unsigned long __xpaclri_ptr asm("x30") = (ptr);	\
+									\
+	asm(								\
+	ARM64_ASM_PREAMBLE						\
+	"	xpaclri\n"						\
+	: "+r" (__xpaclri_ptr));					\
+									\
+	__xpaclri_ptr;							\
+})
 
-#if defined(CONFIG_ARM64_PTR_AUTH_KERNEL) && \
-    !defined(CONFIG_BUILTIN_RETURN_ADDRESS_STRIPS_PAC)
+#ifdef CONFIG_ARM64_PTR_AUTH_KERNEL
+#define ptrauth_strip_kernel_insn_pac(ptr)	xpaclri(ptr)
+#else
+#define ptrauth_strip_kernel_insn_pac(ptr)	(ptr)
+#endif
+
+#ifdef CONFIG_ARM64_PTR_AUTH
+#define ptrauth_strip_user_insn_pac(ptr)	xpaclri(ptr)
+#else
+#define ptrauth_strip_user_insn_pac(ptr)	(ptr)
+#endif
+
+#if !defined(CONFIG_BUILTIN_RETURN_ADDRESS_STRIPS_PAC)
 #define __builtin_return_address(val)					\
-	(void *)(ptrauth_clear_pac((unsigned long)__builtin_return_address(val)))
+	(void *)(ptrauth_strip_kernel_insn_pac((unsigned long)__builtin_return_address(val)))
 #endif
 
 #endif /* __ASM_COMPILER_H */
diff --git a/arch/arm64/include/asm/pointer_auth.h b/arch/arm64/include/asm/pointer_auth.h
index efb098de3a84..b0665db86aa6 100644
--- a/arch/arm64/include/asm/pointer_auth.h
+++ b/arch/arm64/include/asm/pointer_auth.h
@@ -97,11 +97,6 @@ extern int ptrauth_set_enabled_keys(struct task_struct *tsk, unsigned long keys,
 				    unsigned long enabled);
 extern int ptrauth_get_enabled_keys(struct task_struct *tsk);
 
-static inline unsigned long ptrauth_strip_insn_pac(unsigned long ptr)
-{
-	return ptrauth_clear_pac(ptr);
-}
-
 static __always_inline void ptrauth_enable(void)
 {
 	if (!system_supports_address_auth())
@@ -133,7 +128,6 @@ static __always_inline void ptrauth_enable(void)
 #define ptrauth_prctl_reset_keys(tsk, arg)	(-EINVAL)
 #define ptrauth_set_enabled_keys(tsk, keys, enabled)	(-EINVAL)
 #define ptrauth_get_enabled_keys(tsk)	(-EINVAL)
-#define ptrauth_strip_insn_pac(lr)	(lr)
 #define ptrauth_suspend_exit()
 #define ptrauth_thread_init_user()
 #define ptrauth_thread_switch_user(tsk)
diff --git a/arch/arm64/kernel/perf_callchain.c b/arch/arm64/kernel/perf_callchain.c
index 65b196e3ca6c..6d157f32187b 100644
--- a/arch/arm64/kernel/perf_callchain.c
+++ b/arch/arm64/kernel/perf_callchain.c
@@ -38,7 +38,7 @@ user_backtrace(struct frame_tail __user *tail,
 	if (err)
 		return NULL;
 
-	lr = ptrauth_strip_insn_pac(buftail.lr);
+	lr = ptrauth_strip_user_insn_pac(buftail.lr);
 
 	perf_callchain_store(entry, lr);
 
diff --git a/arch/arm64/kernel/process.c b/arch/arm64/kernel/process.c
index 71d59b5abede..b5bed62483cb 100644
--- a/arch/arm64/kernel/process.c
+++ b/arch/arm64/kernel/process.c
@@ -217,7 +217,7 @@ void __show_regs(struct pt_regs *regs)
 
 	if (!user_mode(regs)) {
 		printk("pc : %pS\n", (void *)regs->pc);
-		printk("lr : %pS\n", (void *)ptrauth_strip_insn_pac(lr));
+		printk("lr : %pS\n", (void *)ptrauth_strip_kernel_insn_pac(lr));
 	} else {
 		printk("pc : %016llx\n", regs->pc);
 		printk("lr : %016llx\n", lr);
diff --git a/arch/arm64/kernel/stacktrace.c b/arch/arm64/kernel/stacktrace.c
index 83154303e682..fcc2d269787e 100644
--- a/arch/arm64/kernel/stacktrace.c
+++ b/arch/arm64/kernel/stacktrace.c
@@ -90,7 +90,7 @@ static int notrace unwind_next(struct unwind_state *state)
 	if (err)
 		return err;
 
-	state->pc = ptrauth_strip_insn_pac(state->pc);
+	state->pc = ptrauth_strip_kernel_insn_pac(state->pc);
 
 #ifdef CONFIG_FUNCTION_GRAPH_TRACER
 	if (tsk->ret_stack &&