From patchwork Tue Jul 9 16:01:59 2024
X-Patchwork-Id: 13728313
From: Linus Torvalds <torvalds@linux-foundation.org>
To: Mark Rutland
Cc: Linux ARM <linux-arm-kernel@lists.infradead.org>, Linus Torvalds <torvalds@linux-foundation.org>
Subject: [PATCH 1/3] arm64: start using 'asm goto' for get_user() when available
Date: Tue, 9 Jul 2024 09:01:59 -0700
Message-ID: <20240709161022.1035500-2-torvalds@linux-foundation.org>
In-Reply-To: <20240709161022.1035500-1-torvalds@linux-foundation.org>
References: <20240709161022.1035500-1-torvalds@linux-foundation.org>

This generates noticeably better code with compilers that support it,
since we don't need to test the error register etc,
the exception just jumps to the error handling directly.

Note that this also marks SW_TTBR0_PAN incompatible with KCSAN support,
since KCSAN wants to save and restore the user access state.

KCSAN and SW_TTBR0_PAN were probably always incompatible, but it only
became obvious when implementing the unsafe user access functions: until
then, the default empty user_access_save()/user_access_restore()
fallbacks were used.

Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
---
 arch/arm64/Kconfig               |   1 +
 arch/arm64/include/asm/uaccess.h | 121 +++++++++++++++++++++++++------
 arch/arm64/kernel/mte.c          |  12 ++-
 3 files changed, 104 insertions(+), 30 deletions(-)

diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index 5d91259ee7b5..b6e8920364de 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -1649,6 +1649,7 @@ config RODATA_FULL_DEFAULT_ENABLED
 
 config ARM64_SW_TTBR0_PAN
 	bool "Emulate Privileged Access Never using TTBR0_EL1 switching"
+	depends on !KCSAN
 	help
 	  Enabling this option prevents the kernel from accessing
 	  user-space memory directly by pointing TTBR0_EL1 to a reserved
diff --git a/arch/arm64/include/asm/uaccess.h b/arch/arm64/include/asm/uaccess.h
index 14be5000c5a0..3e721f73bcaf 100644
--- a/arch/arm64/include/asm/uaccess.h
+++ b/arch/arm64/include/asm/uaccess.h
@@ -184,29 +184,40 @@ static inline void __user *__uaccess_mask_ptr(const void __user *ptr)
  * The "__xxx_error" versions set the third argument to -EFAULT if an error
  * occurs, and leave it unchanged on success.
  */
-#define __get_mem_asm(load, reg, x, addr, err, type)			\
+#ifdef CONFIG_CC_HAS_ASM_GOTO_OUTPUT
+#define __get_mem_asm(load, reg, x, addr, label, type)			\
+	asm_goto_output(						\
+	"1:	" load "	" reg "0, [%1]\n"			\
+	_ASM_EXTABLE_##type##ACCESS_ERR(1b, %l2, %w0)			\
+	: "=r" (x)							\
+	: "r" (addr) : : label)
+#else
+#define __get_mem_asm(load, reg, x, addr, label, type) do {		\
+	int __gma_err = 0;						\
 	asm volatile(							\
 	"1:	" load "	" reg "1, [%2]\n"			\
 	"2:\n"								\
 	_ASM_EXTABLE_##type##ACCESS_ERR_ZERO(1b, 2b, %w0, %w1)		\
-	: "+r" (err), "=r" (x)						\
-	: "r" (addr))
+	: "+r" (__gma_err), "=r" (x)					\
+	: "r" (addr));							\
+	if (__gma_err) goto label; } while (0)
+#endif
 
-#define __raw_get_mem(ldr, x, ptr, err, type)					\
+#define __raw_get_mem(ldr, x, ptr, label, type)					\
 do {										\
 	unsigned long __gu_val;							\
 	switch (sizeof(*(ptr))) {						\
 	case 1:									\
-		__get_mem_asm(ldr "b", "%w", __gu_val, (ptr), (err), type);	\
+		__get_mem_asm(ldr "b", "%w", __gu_val, (ptr), label, type);	\
 		break;								\
 	case 2:									\
-		__get_mem_asm(ldr "h", "%w", __gu_val, (ptr), (err), type);	\
+		__get_mem_asm(ldr "h", "%w", __gu_val, (ptr), label, type);	\
 		break;								\
 	case 4:									\
-		__get_mem_asm(ldr, "%w", __gu_val, (ptr), (err), type);	\
+		__get_mem_asm(ldr, "%w", __gu_val, (ptr), label, type);	\
 		break;								\
 	case 8:									\
-		__get_mem_asm(ldr, "%x", __gu_val, (ptr), (err), type);	\
+		__get_mem_asm(ldr, "%x", __gu_val, (ptr), label, type);	\
 		break;								\
 	default:								\
 		BUILD_BUG();							\
@@ -219,27 +230,34 @@ do {									\
  * uaccess_ttbr0_disable(). As `x` and `ptr` could contain blocking functions,
  * we must evaluate these outside of the critical section.
  */
-#define __raw_get_user(x, ptr, err)					\
+#define __raw_get_user(x, ptr, label)					\
 do {									\
 	__typeof__(*(ptr)) __user *__rgu_ptr = (ptr);			\
 	__typeof__(x) __rgu_val;					\
 	__chk_user_ptr(ptr);						\
-									\
-	uaccess_ttbr0_enable();						\
-	__raw_get_mem("ldtr", __rgu_val, __rgu_ptr, err, U);		\
-	uaccess_ttbr0_disable();					\
-									\
-	(x) = __rgu_val;						\
+	do {								\
+		__label__ __rgu_failed;					\
+		uaccess_ttbr0_enable();					\
+		__raw_get_mem("ldtr", __rgu_val, __rgu_ptr, __rgu_failed, U); \
+		uaccess_ttbr0_disable();				\
+		(x) = __rgu_val;					\
+		break;							\
+	__rgu_failed:							\
+		uaccess_ttbr0_disable();				\
+		goto label;						\
+	} while (0);							\
 } while (0)
 
 #define __get_user_error(x, ptr, err)					\
 do {									\
+	__label__ __gu_failed;						\
 	__typeof__(*(ptr)) __user *__p = (ptr);				\
 	might_fault();							\
 	if (access_ok(__p, sizeof(*__p))) {				\
 		__p = uaccess_mask_ptr(__p);				\
-		__raw_get_user((x), __p, (err));			\
+		__raw_get_user((x), __p, __gu_failed);			\
 	} else {							\
+	__gu_failed:							\
 		(x) = (__force __typeof__(x))0; (err) = -EFAULT;	\
 	}								\
 } while (0)
@@ -262,15 +280,18 @@ do {									\
 do {									\
 	__typeof__(dst) __gkn_dst = (dst);				\
 	__typeof__(src) __gkn_src = (src);				\
-	int __gkn_err = 0;						\
+	do {								\
+		__label__ __gkn_label;					\
 									\
-	__mte_enable_tco_async();					\
-	__raw_get_mem("ldr", *((type *)(__gkn_dst)),			\
-		      (__force type *)(__gkn_src), __gkn_err, K);	\
-	__mte_disable_tco_async();					\
-									\
-	if (unlikely(__gkn_err))					\
+		__mte_enable_tco_async();				\
+		__raw_get_mem("ldr", *((type *)(__gkn_dst)),		\
+			      (__force type *)(__gkn_src), __gkn_label, K); \
+		__mte_disable_tco_async();				\
+		break;							\
+	__gkn_label:							\
+		__mte_disable_tco_async();				\
 		goto err_label;						\
+	} while (0);							\
 } while (0)
 
 #define __put_mem_asm(store, reg, x, addr, err, type)			\
@@ -381,6 +402,60 @@ extern unsigned long __must_check __arch_copy_to_user(void __user *to, const void *from, unsigned long n);
 	__actu_ret;							\
 })
 
+static __must_check __always_inline bool user_access_begin(const void __user *ptr, size_t len)
+{
+	if (unlikely(!access_ok(ptr,len)))
+		return 0;
+	uaccess_ttbr0_enable();
+	return 1;
+}
+#define user_access_begin(a,b)	user_access_begin(a,b)
+#define user_access_end()	uaccess_ttbr0_disable()
+
+/*
+ * The arm64 inline asms should learn abut asm goto, and we should
+ * teach user_access_begin() about address masking.
+ */
+#define unsafe_put_user(x, ptr, label) do {				\
+	int __upu_err = 0;						\
+	__raw_put_mem("sttr", x, uaccess_mask_ptr(ptr), __upu_err, U);	\
+	if (__upu_err) goto label;					\
+} while (0)
+
+#define unsafe_get_user(x, ptr, label)	\
+	__raw_get_mem("ldtr", x, uaccess_mask_ptr(ptr), label, U)
+
+/*
+ * KCSAN uses these to save and restore ttbr state.
+ * We do not support KCSAN with ARM64_SW_TTBR0_PAN, so
+ * they are no-ops.
+ */
+static inline unsigned long user_access_save(void) { return 0; }
+static inline void user_access_restore(unsigned long enabled) { }
+
+/*
+ * We want the unsafe accessors to always be inlined and use
+ * the error labels - thus the macro games.
+ */
+#define unsafe_copy_loop(dst, src, len, type, label)			\
+	while (len >= sizeof(type)) {					\
+		unsafe_put_user(*(type *)(src),(type __user *)(dst),label); \
+		dst += sizeof(type);					\
+		src += sizeof(type);					\
+		len -= sizeof(type);					\
+	}
+
+#define unsafe_copy_to_user(_dst,_src,_len,label)			\
+do {									\
+	char __user *__ucu_dst = (_dst);				\
+	const char *__ucu_src = (_src);					\
+	size_t __ucu_len = (_len);					\
+	unsafe_copy_loop(__ucu_dst, __ucu_src, __ucu_len, u64, label);	\
+	unsafe_copy_loop(__ucu_dst, __ucu_src, __ucu_len, u32, label);	\
+	unsafe_copy_loop(__ucu_dst, __ucu_src, __ucu_len, u16, label);	\
+	unsafe_copy_loop(__ucu_dst, __ucu_src, __ucu_len, u8, label);	\
+} while (0)
+
 #define INLINE_COPY_TO_USER
 #define INLINE_COPY_FROM_USER
diff --git a/arch/arm64/kernel/mte.c b/arch/arm64/kernel/mte.c
index dcdcccd40891..6174671be7c1 100644
--- a/arch/arm64/kernel/mte.c
+++ b/arch/arm64/kernel/mte.c
@@ -582,12 +582,9 @@ subsys_initcall(register_mte_tcf_preferred_sysctl);
 size_t mte_probe_user_range(const char __user *uaddr, size_t size)
 {
 	const char __user *end = uaddr + size;
-	int err = 0;
 	char val;
 
-	__raw_get_user(val, uaddr, err);
-	if (err)
-		return size;
+	__raw_get_user(val, uaddr, efault);
 
 	uaddr = PTR_ALIGN(uaddr, MTE_GRANULE_SIZE);
 	while (uaddr < end) {
@@ -595,12 +592,13 @@ size_t mte_probe_user_range(const char __user *uaddr, size_t size)
 		 * A read is sufficient for mte, the caller should have probed
 		 * for the pte write permission if required.
 		 */
-		__raw_get_user(val, uaddr, err);
-		if (err)
-			return end - uaddr;
+		__raw_get_user(val, uaddr, efault);
 		uaddr += MTE_GRANULE_SIZE;
 	}
 	(void)val;
 
 	return 0;
+
+efault:
+	return end - uaddr;
 }
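A minimal usage sketch (not part of the patch): this is how the unsafe
accessors wired up above are meant to be used by generic code such as
strncpy_from_user(). The wrapper function name is made up for
illustration; the APIs are the ones the patch provides. With the asm-goto
implementation, a faulting access branches straight to the 'efault' label
instead of setting an error register that has to be tested afterwards.

#include <linux/uaccess.h>

static int read_user_word(const unsigned long __user *uptr, unsigned long *out)
{
	unsigned long val;

	if (!user_access_begin(uptr, sizeof(*uptr)))
		return -EFAULT;

	/* A fault in the access jumps directly to the 'efault' label */
	unsafe_get_user(val, uptr, efault);
	user_access_end();

	*out = val;
	return 0;

efault:
	user_access_end();
	return -EFAULT;
}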
From patchwork Tue Jul 9 16:02:00 2024
X-Patchwork-Id: 13728312
From: Linus Torvalds <torvalds@linux-foundation.org>
To: Mark Rutland
Cc: Linux ARM <linux-arm-kernel@lists.infradead.org>, Linus Torvalds <torvalds@linux-foundation.org>
Subject: [PATCH 2/3] arm64: start using 'asm goto' for put_user()
Date: Tue, 9 Jul 2024 09:02:00 -0700
Message-ID: <20240709161022.1035500-3-torvalds@linux-foundation.org>
In-Reply-To: <20240709161022.1035500-1-torvalds@linux-foundation.org>
References: <20240709161022.1035500-1-torvalds@linux-foundation.org>

This generates noticeably better code since we don't need to test the
error register etc, the exception just jumps to the error handling
directly.

Unlike get_user(), there's no need to worry about old compilers: all
supported compilers support the regular non-output 'asm goto', as
pointed out by Nathan Chancellor.
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
---
 arch/arm64/include/asm/asm-extable.h |  3 ++
 arch/arm64/include/asm/uaccess.h     | 70 ++++++++++++++--------------
 2 files changed, 39 insertions(+), 34 deletions(-)

diff --git a/arch/arm64/include/asm/asm-extable.h b/arch/arm64/include/asm/asm-extable.h
index 980d1dd8e1a3..b8a5861dc7b7 100644
--- a/arch/arm64/include/asm/asm-extable.h
+++ b/arch/arm64/include/asm/asm-extable.h
@@ -112,6 +112,9 @@
 #define _ASM_EXTABLE_KACCESS_ERR(insn, fixup, err)			\
 	_ASM_EXTABLE_KACCESS_ERR_ZERO(insn, fixup, err, wzr)
 
+#define _ASM_EXTABLE_KACCESS(insn, fixup)				\
+	_ASM_EXTABLE_KACCESS_ERR_ZERO(insn, fixup, wzr, wzr)
+
 #define _ASM_EXTABLE_LOAD_UNALIGNED_ZEROPAD(insn, fixup, data, addr)	\
 	__DEFINE_ASM_GPR_NUMS						\
 	__ASM_EXTABLE_RAW(#insn, #fixup,				\
diff --git a/arch/arm64/include/asm/uaccess.h b/arch/arm64/include/asm/uaccess.h
index 3e721f73bcaf..28f665e0975a 100644
--- a/arch/arm64/include/asm/uaccess.h
+++ b/arch/arm64/include/asm/uaccess.h
@@ -294,29 +294,28 @@ do {									\
 	} while (0);							\
 } while (0)
 
-#define __put_mem_asm(store, reg, x, addr, err, type)			\
-	asm volatile(							\
-	"1:	" store "	" reg "1, [%2]\n"			\
+#define __put_mem_asm(store, reg, x, addr, label, type)			\
+	asm goto(							\
+	"1:	" store "	" reg "0, [%1]\n"			\
 	"2:\n"								\
-	_ASM_EXTABLE_##type##ACCESS_ERR(1b, 2b, %w0)			\
-	: "+r" (err)							\
-	: "rZ" (x), "r" (addr))
+	_ASM_EXTABLE_##type##ACCESS(1b, %l2)				\
+	: : "rZ" (x), "r" (addr) : : label)
 
-#define __raw_put_mem(str, x, ptr, err, type)					\
+#define __raw_put_mem(str, x, ptr, label, type)					\
 do {										\
 	__typeof__(*(ptr)) __pu_val = (x);					\
 	switch (sizeof(*(ptr))) {						\
 	case 1:									\
-		__put_mem_asm(str "b", "%w", __pu_val, (ptr), (err), type);	\
+		__put_mem_asm(str "b", "%w", __pu_val, (ptr), label, type);	\
 		break;								\
 	case 2:									\
-		__put_mem_asm(str "h", "%w", __pu_val, (ptr), (err), type);	\
+		__put_mem_asm(str "h", "%w", __pu_val, (ptr), label, type);	\
 		break;								\
 	case 4:									\
-		__put_mem_asm(str, "%w", __pu_val, (ptr), (err), type);	\
+		__put_mem_asm(str, "%w", __pu_val, (ptr), label, type);	\
 		break;								\
 	case 8:									\
-		__put_mem_asm(str, "%x", __pu_val, (ptr), (err), type);	\
+		__put_mem_asm(str, "%x", __pu_val, (ptr), label, type);	\
 		break;								\
 	default:								\
 		BUILD_BUG();							\
@@ -328,25 +327,34 @@ do {									\
  * uaccess_ttbr0_disable(). As `x` and `ptr` could contain blocking functions,
  * we must evaluate these outside of the critical section.
  */
-#define __raw_put_user(x, ptr, err)					\
+#define __raw_put_user(x, ptr, label)					\
 do {									\
+	__label__ __rpu_failed;						\
 	__typeof__(*(ptr)) __user *__rpu_ptr = (ptr);			\
 	__typeof__(*(ptr)) __rpu_val = (x);				\
 	__chk_user_ptr(__rpu_ptr);					\
 									\
-	uaccess_ttbr0_enable();						\
-	__raw_put_mem("sttr", __rpu_val, __rpu_ptr, err, U);		\
-	uaccess_ttbr0_disable();					\
+	do {								\
+		uaccess_ttbr0_enable();					\
+		__raw_put_mem("sttr", __rpu_val, __rpu_ptr, __rpu_failed, U); \
+		uaccess_ttbr0_disable();				\
+		break;							\
+	__rpu_failed:							\
+		uaccess_ttbr0_disable();				\
+		goto label;						\
+	} while (0);							\
 } while (0)
 
 #define __put_user_error(x, ptr, err)					\
 do {									\
+	__label__ __pu_failed;						\
 	__typeof__(*(ptr)) __user *__p = (ptr);				\
 	might_fault();							\
 	if (access_ok(__p, sizeof(*__p))) {				\
 		__p = uaccess_mask_ptr(__p);				\
-		__raw_put_user((x), __p, (err));			\
+		__raw_put_user((x), __p, __pu_failed);			\
 	} else {							\
+	__pu_failed:							\
 		(err) = -EFAULT;					\
 	}								\
 } while (0)
@@ -369,15 +377,18 @@ do {									\
 do {									\
 	__typeof__(dst) __pkn_dst = (dst);				\
 	__typeof__(src) __pkn_src = (src);				\
-	int __pkn_err = 0;						\
 									\
-	__mte_enable_tco_async();					\
-	__raw_put_mem("str", *((type *)(__pkn_src)),			\
-		      (__force type *)(__pkn_dst), __pkn_err, K);	\
-	__mte_disable_tco_async();					\
-									\
-	if (unlikely(__pkn_err))					\
+	do {								\
+		__label__ __pkn_err;					\
+		__mte_enable_tco_async();				\
+		__raw_put_mem("str", *((type *)(__pkn_src)),		\
+			      (__force type *)(__pkn_dst), __pkn_err, K); \
+		__mte_disable_tco_async();				\
+		break;							\
+	__pkn_err:							\
+		__mte_disable_tco_async();				\
 		goto err_label;						\
+	} while (0);							\
 } while(0)
 
 extern unsigned long __must_check __arch_copy_from_user(void *to, const void __user *from, unsigned long n);
@@ -411,17 +422,8 @@ static __must_check __always_inline bool user_access_begin(const void __user *ptr, size_t len)
 }
 #define user_access_begin(a,b)	user_access_begin(a,b)
 #define user_access_end()	uaccess_ttbr0_disable()
-
-/*
- * The arm64 inline asms should learn abut asm goto, and we should
- * teach user_access_begin() about address masking.
- */
-#define unsafe_put_user(x, ptr, label) do {				\
-	int __upu_err = 0;						\
-	__raw_put_mem("sttr", x, uaccess_mask_ptr(ptr), __upu_err, U);	\
-	if (__upu_err) goto label;					\
-} while (0)
-
+#define unsafe_put_user(x, ptr, label)	\
+	__raw_put_mem("sttr", x, uaccess_mask_ptr(ptr), label, U)
 #define unsafe_get_user(x, ptr, label)	\
 	__raw_get_mem("ldtr", x, uaccess_mask_ptr(ptr), label, U)
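An illustrative sketch (not from the patch) of the plain, non-output
'asm goto' construct that the new __put_mem_asm() relies on. The function
name is made up, and the "fault" is simulated with a cbz rather than the
real extable machinery; the point is just that the asm body can branch
directly to a C label, so the caller never tests an error register.

static inline int asm_goto_demo(unsigned long x)
{
	/* Branches to the C label 'failed' when x is zero */
	asm goto("cbz %0, %l[failed]" : : "r" (x) : : failed);
	return 0;

failed:
	return -1;
}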
From patchwork Tue Jul 9 16:02:01 2024
X-Patchwork-Id: 13728314
From: Linus Torvalds <torvalds@linux-foundation.org>
To: Mark Rutland
Cc: Linux ARM <linux-arm-kernel@lists.infradead.org>, Linus Torvalds <torvalds@linux-foundation.org>
Subject: [PATCH 3/3] arm64: access_ok() optimization
Date: Tue, 9 Jul 2024 09:02:01 -0700
Message-ID: <20240709161022.1035500-4-torvalds@linux-foundation.org>
In-Reply-To: <20240709161022.1035500-1-torvalds@linux-foundation.org>
References: <20240709161022.1035500-1-torvalds@linux-foundation.org>

The TBI setup on arm64 is very strange: the HW is set up to always do
TBI, but the kernel enforcement for system calls is purely a software
contract, and user space is supposed to mask off the top bits before the
system call. Except all the actual brk/mmap/etc() system calls then mask
the bits in kernel space anyway, and accept any TBI address.

This unifies things by making access_ok() ignore the tag bits too.

This is an ABI change, but the current situation is very odd, and the
change only makes the kernel more permissive, so it is unlikely to break
anything.

The way forward - for some possible future situation when people want to
use more bits - is probably to introduce a new "I actually want the full
64-bit address space" prctl. But we should make sure that the software
and hardware rules actually match at that point.

Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
---
 arch/arm64/include/asm/uaccess.h | 23 ++++++++++-------------
 1 file changed, 10 insertions(+), 13 deletions(-)

diff --git a/arch/arm64/include/asm/uaccess.h b/arch/arm64/include/asm/uaccess.h
index 28f665e0975a..1f21190d4db5 100644
--- a/arch/arm64/include/asm/uaccess.h
+++ b/arch/arm64/include/asm/uaccess.h
@@ -30,23 +30,20 @@ static inline int __access_ok(const void __user *ptr, unsigned long size);
 
 /*
  * Test whether a block of memory is a valid user space address.
- * Returns 1 if the range is valid, 0 otherwise.
  *
- * This is equivalent to the following test:
- * (u65)addr + (u65)size <= (u65)TASK_SIZE_MAX
+ * We only care that the address cannot reach the kernel mapping, and
+ * that an invalid address will fault.
  */
-static inline int access_ok(const void __user *addr, unsigned long size)
+static inline int access_ok(const void __user *p, unsigned long size)
 {
-	/*
-	 * Asynchronous I/O running in a kernel thread does not have the
-	 * TIF_TAGGED_ADDR flag of the process owning the mm, so always untag
-	 * the user address before checking.
-	 */
-	if (IS_ENABLED(CONFIG_ARM64_TAGGED_ADDR_ABI) &&
-	    (current->flags & PF_KTHREAD || test_thread_flag(TIF_TAGGED_ADDR)))
-		addr = untagged_addr(addr);
+	unsigned long addr = (unsigned long)p;
 
-	return likely(__access_ok(addr, size));
+	/* Only bit 55 of the address matters */
+	addr |= addr+size;
+	addr = (addr >> 55) & 1;
+	size >>= 55;
+
+	return !(addr | size);
 }
 
 #define access_ok access_ok
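A self-contained userspace sketch of the new access_ok() logic above, for
illustration only: the helper name and the test values are made up, but
the arithmetic mirrors the hunk. Only bit 55 of the start address, of the
end address, and an absurdly large size matter, so plain and TBI-tagged
user addresses pass while kernel addresses and ranges that carry into
bit 55 fail.

#include <assert.h>
#include <stdint.h>

static int access_ok_sketch(uint64_t addr, uint64_t size)
{
	/* Fold start and end of the range together, then test bit 55 */
	uint64_t a = addr | (addr + size);

	return !(((a >> 55) & 1) | (size >> 55));
}

int main(void)
{
	assert(access_ok_sketch(0x0000aaaa00001000ULL, 4096));  /* plain user address */
	assert(access_ok_sketch(0x3400aaaa00001000ULL, 4096));  /* TBI tag in bits 63:56 */
	assert(!access_ok_sketch(0xffff000000000000ULL, 4096)); /* kernel address: bit 55 set */
	assert(!access_ok_sketch(0x007fffffffffffffULL, 4096)); /* range carries into bit 55 */
	return 0;
}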