From patchwork Tue Mar 14 15:36:57 2023
X-Patchwork-Submitter: Mark Rutland
X-Patchwork-Id: 13174659
From: Mark Rutland
To: linux-arm-kernel@lists.infradead.org
Cc: catalin.marinas@arm.com, mark.rutland@arm.com, peterz@infradead.org, robin.murphy@arm.com, will@kernel.org
Subject: [PATCH v2 1/4] arm64: atomics: lse: improve cmpxchg implementation
Date: Tue, 14 Mar 2023 15:36:57 +0000
Message-Id: <20230314153700.787701-2-mark.rutland@arm.com>
In-Reply-To: <20230314153700.787701-1-mark.rutland@arm.com>
References: <20230314153700.787701-1-mark.rutland@arm.com>

For historical reasons, the LSE implementation of cmpxchg*() hard-codes
the GPRs to use, and shuffles registers around with MOVs. This is no
longer necessary, and can be simplified.

When the LSE cmpxchg implementation was added in commit:

  c342f78217e822d2 ("arm64: cmpxchg: patch in lse instructions when supported by the CPU")
... the LL/SC implementation of cmpxchg() would be placed out-of-line,
and the in-line assembly for cmpxchg would default to:

	NOP
	BL
	NOP

The LL/SC implementation of each cmpxchg() function accepted arguments
as per AAPCS64 rules, so it was necessary to place the pointer in x0,
the old value in x1, and the new value in x2, and acquire the return
value from x0. The LL/SC implementation required a temporary register
(e.g. for the STXR status value). As the LL/SC implementation preserved
the old value, the LSE implementation does likewise.

Since commit:

  addfc38672c73efd ("arm64: atomics: avoid out-of-line ll/sc atomics")

... the LSE and LL/SC implementations of cmpxchg are inlined as separate
asm blocks, with another branch choosing between the two. Due to this,
it is no longer necessary for the LSE implementation to match the
register constraints of the LL/SC implementation. This was partially
dealt with by removing the hard-coded use of x30 in commit:

  3337cb5aea594e40 ("arm64: avoid using hard-coded registers for LSE atomics")

... but we didn't clean up the hard-coding of x0, x1, and x2.

This patch simplifies the LSE implementation of cmpxchg, removing the
register shuffling and directly clobbering the 'old' argument. This
gives the compiler greater freedom for register allocation, and avoids
redundant work.

The new constraints permit 'old' (Rs) and 'new' (Rt) to be allocated to
the same register when the initial values of the two are the same, e.g.
resulting in:

	CAS	X0, X0, [X1]

This is safe as Rs is only written back after the initial values of Rs
and Rt are consumed, and there are no UNPREDICTABLE behaviours to avoid
when Rs == Rt.

The new constraints also permit 'new' to be allocated to the zero
register, avoiding a MOV in a few cases. The same cannot be done for
'old' as it is both an input and output, and any caller of cmpxchg()
should care about the output value. Note that for CAS* the use of the
zero register never affects the ordering (while for SWP* the use of the
zero register for the 'old' value drops any ACQUIRE semantic).

Compared to v6.2-rc4, a defconfig vmlinux is ~116KiB smaller, though the
resulting Image is the same size due to internal alignment and padding:

  [mark@lakrids:~/src/linux]% ls -al vmlinux-*
  -rwxr-xr-x 1 mark mark 137269304 Jan 16 11:59 vmlinux-after
  -rwxr-xr-x 1 mark mark 137387936 Jan 16 10:54 vmlinux-before
  [mark@lakrids:~/src/linux]% ls -al Image-*
  -rw-r--r-- 1 mark mark 38711808 Jan 16 11:59 Image-after
  -rw-r--r-- 1 mark mark 38711808 Jan 16 10:54 Image-before

This patch does not touch cmpxchg_double*() as that requires contiguous
register pairs, and separate patches will replace it with cmpxchg128*().

There should be no functional change as a result of this patch.
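
As a rough standalone sketch of the constraint scheme described above
(the helper name here is hypothetical; the kernel's actual code is
generated by the __CMPXCHG_CASE() macro changed in the diff below), the
64-bit relaxed case might look like:

  /*
   * Illustrative sketch only: a 64-bit relaxed LSE cmpxchg using the new
   * constraints. 'old' is an input/output operand ("+r") which CAS
   * overwrites with the value observed in memory, and 'new' may be
   * allocated to the zero register ("rZ"). The ".arch_extension lse"
   * directive mirrors the kernel's __LSE_PREAMBLE.
   */
  static inline unsigned long cmpxchg64_relaxed_sketch(volatile unsigned long *ptr,
                                                       unsigned long old,
                                                       unsigned long new)
  {
          asm volatile(
          "       .arch_extension lse\n"
          "       cas     %[old], %[new], %[v]\n"
          : [v] "+Q" (*ptr),
            [old] "+r" (old)
          : [new] "rZ" (new));

          return old;
  }

With a compile-time zero 'new', the "rZ" alternative lets the compiler
pick XZR directly instead of first materializing zero in a GPR, and with
old == new the compiler may legitimately emit CAS with Rs == Rt.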
Signed-off-by: Mark Rutland
Cc: Catalin Marinas
Cc: Peter Zijlstra
Cc: Robin Murphy
Cc: Will Deacon
---
 arch/arm64/include/asm/atomic_lse.h | 17 +++++------------
 1 file changed, 5 insertions(+), 12 deletions(-)

diff --git a/arch/arm64/include/asm/atomic_lse.h b/arch/arm64/include/asm/atomic_lse.h
index a94d6dacc029..319958b95cfd 100644
--- a/arch/arm64/include/asm/atomic_lse.h
+++ b/arch/arm64/include/asm/atomic_lse.h
@@ -251,22 +251,15 @@ __lse__cmpxchg_case_##name##sz(volatile void *ptr,	\
 					      u##sz old,		\
 					      u##sz new)		\
 {									\
-	register unsigned long x0 asm ("x0") = (unsigned long)ptr;	\
-	register u##sz x1 asm ("x1") = old;				\
-	register u##sz x2 asm ("x2") = new;				\
-	unsigned long tmp;						\
-									\
 	asm volatile(							\
 	__LSE_PREAMBLE							\
-	"	mov	%" #w "[tmp], %" #w "[old]\n"			\
-	"	cas" #mb #sfx "\t%" #w "[tmp], %" #w "[new], %[v]\n"	\
-	"	mov	%" #w "[ret], %" #w "[tmp]"			\
-	: [ret] "+r" (x0), [v] "+Q" (*(u##sz *)ptr),			\
-	  [tmp] "=&r" (tmp)						\
-	: [old] "r" (x1), [new] "r" (x2)				\
+	"	cas" #mb #sfx "	%" #w "[old], %" #w "[new], %[v]\n"	\
+	: [v] "+Q" (*(u##sz *)ptr),					\
+	  [old] "+r" (old)						\
+	: [new] "rZ" (new)						\
 	: cl);								\
 									\
-	return x0;							\
+	return old;							\
 }

 __CMPXCHG_CASE(w, b,     ,  8,   )

From patchwork Tue Mar 14 15:36:58 2023
X-Patchwork-Submitter: Mark Rutland
X-Patchwork-Id: 13174662
From: Mark Rutland
To: linux-arm-kernel@lists.infradead.org
Cc: catalin.marinas@arm.com, mark.rutland@arm.com, peterz@infradead.org, robin.murphy@arm.com, will@kernel.org
Subject: [PATCH v2 2/4] arm64: uaccess: permit __smp_store_release() to use zero register
Date: Tue, 14 Mar 2023 15:36:58 +0000
Message-Id: <20230314153700.787701-3-mark.rutland@arm.com>
In-Reply-To: <20230314153700.787701-1-mark.rutland@arm.com>
References: <20230314153700.787701-1-mark.rutland@arm.com>

Currently the asm constraints for __smp_store_release() require that the
value is placed in a "real" GPR (i.e. one other than [XW]ZR or SP). This
means that for cases such as:

	__smp_store_release(ptr, 0)

... the compiler has to move '0' into a "real" GPR, e.g.

	mov	xN, #0
	stlr	xN, []

This is unfortunate, as using the zero register would require fewer
instructions and save a "real" GPR for other usage, allowing the
compiler to generate:

	stlr	xzr, []

Modify the asm constraints for __smp_store_release() to permit the use
of the zero register for the value.

There should be no functional change as a result of this patch.

Signed-off-by: Mark Rutland
Cc: Catalin Marinas
Cc: Robin Murphy
Cc: Will Deacon
---
 arch/arm64/include/asm/barrier.h | 10 +++++-----
 1 file changed, 5 insertions(+), 5 deletions(-)

diff --git a/arch/arm64/include/asm/barrier.h b/arch/arm64/include/asm/barrier.h
index 3dd8982a9ce3..cf2987464c18 100644
--- a/arch/arm64/include/asm/barrier.h
+++ b/arch/arm64/include/asm/barrier.h
@@ -131,25 +131,25 @@ do {						\
 	case 1:						\
 		asm volatile ("stlrb %w1, %0"		\
 				: "=Q" (*__p)		\
-				: "r" (*(__u8 *)__u.__c)	\
+				: "rZ" (*(__u8 *)__u.__c)	\
 				: "memory");		\
 		break;					\
 	case 2:						\
 		asm volatile ("stlrh %w1, %0"		\
 				: "=Q" (*__p)		\
-				: "r" (*(__u16 *)__u.__c)	\
+				: "rZ" (*(__u16 *)__u.__c)	\
 				: "memory");		\
 		break;					\
 	case 4:						\
 		asm volatile ("stlr %w1, %0"		\
 				: "=Q" (*__p)		\
-				: "r" (*(__u32 *)__u.__c)	\
+				: "rZ" (*(__u32 *)__u.__c)	\
 				: "memory");		\
 		break;					\
 	case 8:						\
-		asm volatile ("stlr %1, %0"		\
+		asm volatile ("stlr %x1, %0"		\
 				: "=Q" (*__p)		\
-				: "r" (*(__u64 *)__u.__c)	\
+				: "rZ" (*(__u64 *)__u.__c)	\
 				: "memory");		\
 		break;					\
 	}						\
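
As a standalone sketch of the effect of this change (the helper name is
hypothetical, and this is not the kernel's __smp_store_release() macro
above), a 64-bit store-release with the "rZ" constraint could look like:

  /*
   * Illustrative sketch: a 64-bit store-release using "rZ" for the value.
   * When 'val' is a compile-time zero, the compiler may use the zero
   * register (xzr) instead of materializing 0 in a scratch GPR first.
   */
  static inline void store_release_64_sketch(volatile unsigned long *p,
                                             unsigned long val)
  {
          asm volatile("stlr      %x1, %0"
                       : "=Q" (*p)
                       : "rZ" (val)
                       : "memory");
  }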
From patchwork Tue Mar 14 15:36:59 2023
X-Patchwork-Submitter: Mark Rutland
X-Patchwork-Id: 13174661
From: Mark Rutland
To: linux-arm-kernel@lists.infradead.org
Cc: catalin.marinas@arm.com, mark.rutland@arm.com, peterz@infradead.org, robin.murphy@arm.com, will@kernel.org
Subject: [PATCH v2 3/4] arm64: uaccess: permit put_{user,kernel} to use zero register
Date: Tue, 14 Mar 2023 15:36:59 +0000
Message-Id: <20230314153700.787701-4-mark.rutland@arm.com>
In-Reply-To: <20230314153700.787701-1-mark.rutland@arm.com>
References: <20230314153700.787701-1-mark.rutland@arm.com>

Currently the asm constraints for __put_mem_asm() require that the value
is placed in a "real" GPR (i.e. one other than [XW]ZR or SP). This means
that for cases such as:

	__put_user(0, addr)

... the compiler has to move '0' into a "real" GPR, e.g.

	mov	xN, #0
	sttr	xN, []

This is unfortunate, as using the zero register would require fewer
instructions and save a "real" GPR for other usage, allowing the
compiler to generate:

	sttr	xzr, []

Modify the asm constraints for __put_mem_asm() to permit the use of the
zero register for the value.

There should be no functional change as a result of this patch.
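
As a rough sketch only (hypothetical helper, not the kernel's
__put_mem_asm() macro, and omitting the exception-table handling that
the real code relies on), the unprivileged store with an "rZ" value
constraint would look like:

  /*
   * Simplified sketch of an unprivileged 64-bit store with "rZ" for the
   * value: storing a constant zero can use xzr directly rather than a
   * scratch GPR. The real kernel code additionally attaches an extable
   * entry so that faults are handled.
   */
  static inline void uput64_sketch(unsigned long *addr, unsigned long val)
  {
          asm volatile("sttr      %x0, [%1]"
                       :
                       : "rZ" (val), "r" (addr)
                       : "memory");
  }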
Signed-off-by: Mark Rutland
Cc: Catalin Marinas
Cc: Robin Murphy
Cc: Will Deacon
---
 arch/arm64/include/asm/uaccess.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/arm64/include/asm/uaccess.h b/arch/arm64/include/asm/uaccess.h
index 5c7b2f9d5913..4ee5aa7bd5a2 100644
--- a/arch/arm64/include/asm/uaccess.h
+++ b/arch/arm64/include/asm/uaccess.h
@@ -327,7 +327,7 @@ do {						\
 	"2:\n"						\
 	_ASM_EXTABLE_##type##ACCESS_ERR(1b, 2b, %w0)	\
 	: "+r" (err)					\
-	: "r" (x), "r" (addr))
+	: "rZ" (x), "r" (addr))

 #define __raw_put_mem(str, x, ptr, err, type)		\
 do {							\

From patchwork Tue Mar 14 15:37:00 2023
X-Patchwork-Submitter: Mark Rutland
X-Patchwork-Id: 13174663
From: Mark Rutland
To: linux-arm-kernel@lists.infradead.org
Cc: catalin.marinas@arm.com, mark.rutland@arm.com, peterz@infradead.org, robin.murphy@arm.com, will@kernel.org
Subject: [PATCH v2 4/4] arm64: uaccess: remove unnecessary earlyclobber
Date: Tue, 14 Mar 2023 15:37:00 +0000
Message-Id: <20230314153700.787701-5-mark.rutland@arm.com>
In-Reply-To: <20230314153700.787701-1-mark.rutland@arm.com>
References: <20230314153700.787701-1-mark.rutland@arm.com>
Currently the asm constraints for __get_mem_asm() mark the value
register as an earlyclobber operand. This means that the compiler can't
reuse the same register for both the address and value, even when the
value is not subsequently used.

There's no need for the value register to be marked as earlyclobber, as
it's only written to after the address register is consumed, even when
the access faults.

Remove the unnecessary earlyclobber.

There should be no functional change as a result of this patch.

Signed-off-by: Mark Rutland
Cc: Catalin Marinas
Cc: Robin Murphy
Cc: Will Deacon
---
 arch/arm64/include/asm/uaccess.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/arm64/include/asm/uaccess.h b/arch/arm64/include/asm/uaccess.h
index 4ee5aa7bd5a2..deaf4f8f0672 100644
--- a/arch/arm64/include/asm/uaccess.h
+++ b/arch/arm64/include/asm/uaccess.h
@@ -237,7 +237,7 @@ static inline void __user *__uaccess_mask_ptr(const void __user *ptr)
 	"1: " load " " reg "1, [%2]\n"				\
 	"2:\n"							\
 	_ASM_EXTABLE_##type##ACCESS_ERR_ZERO(1b, 2b, %w0, %w1)	\
-	: "+r" (err), "=&r" (x)					\
+	: "+r" (err), "=r" (x)					\
 	: "r" (addr))

 #define __raw_get_mem(ldr, x, ptr, err, type)			\
 do {								\
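
As a generic illustration of the earlyclobber point (hypothetical
helper, not the kernel's __get_mem_asm(), and without the exception
table handling), a plain "=r" output lets the compiler reuse the address
register for the loaded value, e.g. "ldr x0, [x0]":

  /*
   * Sketch of a load whose destination is written only after the address
   * register is consumed, so the output need not be earlyclobber.
   */
  static inline unsigned long get64_sketch(unsigned long *addr)
  {
          unsigned long val;

          asm volatile("ldr       %0, [%1]"
                       : "=r" (val)     /* "=&r" would forbid val sharing addr's register */
                       : "r" (addr));

          return val;
  }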