From patchwork Mon Jun 10 20:48:20 2024
X-Patchwork-Submitter: Linus Torvalds
X-Patchwork-Id: 13692423
From: Linus Torvalds
To: Peter Anvin, Ingo Molnar, Borislav Petkov, Thomas Gleixner,
	Rasmus Villemoes, Josh Poimboeuf, Catalin Marinas, Will Deacon
Cc: Linux Kernel Mailing List, the arch/x86 maintainers,
	linux-arm-kernel@lists.infradead.org, linux-arch, Linus Torvalds
Subject: [PATCH 6/7] arm64: start using 'asm goto' for put_user() when available
Date: Mon, 10 Jun 2024 13:48:20 -0700
Message-ID: <20240610204821.230388-7-torvalds@linux-foundation.org>
X-Mailer: git-send-email 2.45.1.209.gc6f12300df
In-Reply-To: <20240610204821.230388-1-torvalds@linux-foundation.org>
References: <20240610204821.230388-1-torvalds@linux-foundation.org>
MIME-Version: 1.0
"linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org This generates noticeably better code with compilers that support it, since we don't need to test the error register etc, the exception just jumps to the error handling directly. Signed-off-by: Linus Torvalds --- arch/arm64/include/asm/uaccess.h | 77 +++++++++++++++++++------------- 1 file changed, 46 insertions(+), 31 deletions(-) diff --git a/arch/arm64/include/asm/uaccess.h b/arch/arm64/include/asm/uaccess.h index 23c2edf517ed..4ab3938290ab 100644 --- a/arch/arm64/include/asm/uaccess.h +++ b/arch/arm64/include/asm/uaccess.h @@ -294,29 +294,41 @@ do { \ } while (0); \ } while (0) -#define __put_mem_asm(store, reg, x, addr, err, type) \ +#ifdef CONFIG_CC_HAS_ASM_GOTO +#define __put_mem_asm(store, reg, x, addr, label, type) \ + asm goto( \ + "1: " store " " reg "0, [%1]\n" \ + "2:\n" \ + _ASM_EXTABLE_##type##ACCESS_ZERO(1b, %l2) \ + : : "rZ" (x), "r" (addr) : : label) +#else +#define __put_mem_asm(store, reg, x, addr, label, type) do { \ + int __pma_err = 0; \ asm volatile( \ "1: " store " " reg "1, [%2]\n" \ "2:\n" \ _ASM_EXTABLE_##type##ACCESS_ERR(1b, 2b, %w0) \ - : "+r" (err) \ - : "rZ" (x), "r" (addr)) + : "+r" (__pma_err) \ + : "rZ" (x), "r" (addr)); \ + if (__pma_err) goto label; \ +} while (0) +#endif -#define __raw_put_mem(str, x, ptr, err, type) \ +#define __raw_put_mem(str, x, ptr, label, type) \ do { \ __typeof__(*(ptr)) __pu_val = (x); \ switch (sizeof(*(ptr))) { \ case 1: \ - __put_mem_asm(str "b", "%w", __pu_val, (ptr), (err), type); \ + __put_mem_asm(str "b", "%w", __pu_val, (ptr), label, type); \ break; \ case 2: \ - __put_mem_asm(str "h", "%w", __pu_val, (ptr), (err), type); \ + __put_mem_asm(str "h", "%w", __pu_val, (ptr), label, type); \ break; \ case 4: \ - __put_mem_asm(str, "%w", __pu_val, (ptr), (err), type); \ + __put_mem_asm(str, "%w", __pu_val, (ptr), label, type); \ break; \ case 8: \ - __put_mem_asm(str, "%x", __pu_val, (ptr), (err), type); \ + __put_mem_asm(str, "%x", __pu_val, (ptr), label, type); \ break; \ default: \ BUILD_BUG(); \ @@ -328,25 +340,34 @@ do { \ * uaccess_ttbr0_disable(). As `x` and `ptr` could contain blocking functions, * we must evaluate these outside of the critical section. 
  */
-#define __raw_put_user(x, ptr, err)					\
+#define __raw_put_user(x, ptr, label)					\
 do {									\
+	__label__ __rpu_failed;						\
 	__typeof__(*(ptr)) __user *__rpu_ptr = (ptr);			\
 	__typeof__(*(ptr)) __rpu_val = (x);				\
 	__chk_user_ptr(__rpu_ptr);					\
 									\
-	uaccess_ttbr0_enable();						\
-	__raw_put_mem("sttr", __rpu_val, __rpu_ptr, err, U);		\
-	uaccess_ttbr0_disable();					\
+	do {								\
+		uaccess_ttbr0_enable();					\
+		__raw_put_mem("sttr", __rpu_val, __rpu_ptr, __rpu_failed, U);	\
+		uaccess_ttbr0_disable();				\
+		break;							\
+	__rpu_failed:							\
+		uaccess_ttbr0_disable();				\
+		goto label;						\
+	} while (0);							\
 } while (0)
 
 #define __put_user_error(x, ptr, err)					\
 do {									\
+	__label__ __pu_failed;						\
 	__typeof__(*(ptr)) __user *__p = (ptr);				\
 	might_fault();							\
 	if (access_ok(__p, sizeof(*__p))) {				\
 		__p = uaccess_mask_ptr(__p);				\
-		__raw_put_user((x), __p, (err));			\
+		__raw_put_user((x), __p, __pu_failed);			\
 	} else {							\
+	__pu_failed:							\
 		(err) = -EFAULT;					\
 	}								\
 } while (0)
@@ -369,15 +390,18 @@ do {									\
 do {									\
 	__typeof__(dst) __pkn_dst = (dst);				\
 	__typeof__(src) __pkn_src = (src);				\
-	int __pkn_err = 0;						\
 									\
-	__mte_enable_tco_async();					\
-	__raw_put_mem("str", *((type *)(__pkn_src)),			\
-		      (__force type *)(__pkn_dst), __pkn_err, K);	\
-	__mte_disable_tco_async();					\
-									\
-	if (unlikely(__pkn_err))					\
+	do {								\
+		__label__ __pkn_err;					\
+		__mte_enable_tco_async();				\
+		__raw_put_mem("str", *((type *)(__pkn_src)),		\
+			      (__force type *)(__pkn_dst), __pkn_err, K);	\
+		__mte_disable_tco_async();				\
+		break;							\
+	__pkn_err:							\
+		__mte_disable_tco_async();				\
 		goto err_label;						\
+	} while (0);							\
 } while(0)
 
 extern unsigned long __must_check __arch_copy_from_user(void *to, const void __user *from, unsigned long n);
@@ -411,17 +435,8 @@ static __must_check __always_inline bool user_access_begin(const void __user *pt
 }
 #define user_access_begin(a,b)	user_access_begin(a,b)
 #define user_access_end()	uaccess_ttbr0_disable()
-
-/*
- * The arm64 inline asms should learn abut asm goto, and we should
- * teach user_access_begin() about address masking.
- */
-#define unsafe_put_user(x, ptr, label)	do {				\
-	int __upu_err = 0;						\
-	__raw_put_mem("sttr", x, uaccess_mask_ptr(ptr), __upu_err, U);	\
-	if (__upu_err) goto label;					\
-} while (0)
-
+#define unsafe_put_user(x, ptr, label) \
+	__raw_put_mem("sttr", x, uaccess_mask_ptr(ptr), label, U)
 #define unsafe_get_user(x, ptr, label)	\
 	__raw_get_mem("ldtr", x, uaccess_mask_ptr(ptr), label, U)
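
As a stand-alone illustration of the 'asm goto' pattern the patch above
switches to (not part of the patch itself): the store either succeeds and
falls through, or the exception-table fixup makes the fault handler branch
straight to the C label, so there is no error register left for the caller
to test.  The sketch below only mimics that shape in user space - the
function name, the label and the stubbed-out extable comment are made up
for illustration, and it assumes an AArch64 GCC/Clang with asm goto
support.

#include <stdio.h>

/*
 * Same structure as the asm goto __put_mem_asm() above: no output
 * operand, and the failure path is the C label in the goto-label list.
 * In the kernel the _ASM_EXTABLE_*() entry is what makes a fault land
 * on that label; user space has no such fixup, so the 'fault' path
 * here is never actually taken - it only demonstrates the shape.
 */
static int put_u32(unsigned int val, unsigned int *addr)
{
	asm goto(
	"1:	str	%w0, [%1]\n"
	/* kernel: _ASM_EXTABLE_UACCESS(1b, %l2) would go here */
	: : "rZ" (val), "r" (addr) : "memory" : fault);
	return 0;
fault:
	return -14;	/* what the kernel would report as -EFAULT */
}

int main(void)
{
	unsigned int x = 0;

	printf("put_u32() = %d, x = %u\n", put_u32(42, &x), x);
	return 0;
}

Compare this with the non-asm-goto fallback in the patch, where the asm
writes an error value that the C code then has to test and conditionally
branch on - that extra test and branch is exactly what the commit message
says goes away.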