From patchwork Wed Oct  4 15:58:30 2017
X-Patchwork-Submitter: Josh Poimboeuf
X-Patchwork-Id: 9985075
From: Josh Poimboeuf
To: x86@kernel.org
Cc: Juergen Gross, Rusty Russell, xen-devel@lists.xenproject.org,
 Peter Zijlstra, Jiri Slaby, Boris Ostrovsky, Mike Galbraith,
 linux-kernel@vger.kernel.org, Sasha Levin, Chris Wright,
 Thomas Gleixner, Andy Lutomirski, "H. Peter Anvin", Borislav Petkov,
 live-patching@vger.kernel.org, Alok Kataria,
 virtualization@lists.linux-foundation.org, Linus Torvalds, Ingo Molnar
Date: Wed, 4 Oct 2017 10:58:30 -0500
Message-Id: <9e97ee7a68cab00993c7afa8e429fd8fe5a7015f.1507128293.git.jpoimboe@redhat.com>
Subject: [Xen-devel] [PATCH 09/13] x86/asm: Convert ALTERNATIVE*() assembler
 macros to preprocessor macros

The ALTERNATIVE() and ALTERNATIVE_2() macros are GNU assembler macros,
which makes them quite inflexible for future changes. Convert them to
preprocessor macros.

Signed-off-by: Josh Poimboeuf
Reviewed-by: Juergen Gross
---
 arch/x86/entry/entry_32.S                | 12 +++---
 arch/x86/entry/entry_64.S                | 10 ++---
 arch/x86/entry/entry_64_compat.S         |  8 ++--
 arch/x86/entry/vdso/vdso32/system_call.S | 10 ++---
 arch/x86/include/asm/alternative-asm.h   | 68 +++++++++++++++-----------------
 arch/x86/include/asm/smap.h              |  4 +-
 arch/x86/lib/copy_page_64.S              |  2 +-
 arch/x86/lib/memcpy_64.S                 |  4 +-
 arch/x86/lib/memmove_64.S                |  3 +-
 arch/x86/lib/memset_64.S                 |  4 +-
 10 files changed, 59 insertions(+), 66 deletions(-)

diff --git a/arch/x86/entry/entry_32.S b/arch/x86/entry/entry_32.S
index 21d1197779a4..338dc838a9a8 100644
--- a/arch/x86/entry/entry_32.S
+++ b/arch/x86/entry/entry_32.S
@@ -443,8 +443,8 @@ ENTRY(entry_SYSENTER_32)
 	movl	%esp, %eax
 	call	do_fast_syscall_32
 	/* XEN PV guests always use IRET path */
-	ALTERNATIVE "testl %eax, %eax; jz .Lsyscall_32_done", \
-		    "jmp .Lsyscall_32_done", X86_FEATURE_XENPV
+#define JMP_IF_IRET testl %eax, %eax; jz .Lsyscall_32_done
+	ALTERNATIVE(JMP_IF_IRET, jmp .Lsyscall_32_done, X86_FEATURE_XENPV)

 /* Opportunistic SYSEXIT */
 	TRACE_IRQS_ON			/* User mode traces as IRQs on. */
@@ -536,7 +536,7 @@ restore_all:
 	TRACE_IRQS_IRET
 .Lrestore_all_notrace:
 #ifdef CONFIG_X86_ESPFIX32
-	ALTERNATIVE	"jmp .Lrestore_nocheck", "", X86_BUG_ESPFIX
+	ALTERNATIVE(jmp .Lrestore_nocheck, , X86_BUG_ESPFIX)

 	movl	PT_EFLAGS(%esp), %eax		# mix EFLAGS, SS and CS
 	/*
@@ -692,9 +692,9 @@ ENTRY(simd_coprocessor_error)
 	pushl	$0
 #ifdef CONFIG_X86_INVD_BUG
 	/* AMD 486 bug: invd from userspace calls exception 19 instead of #GP */
-	ALTERNATIVE "pushl $do_general_protection",	\
-		    "pushl $do_simd_coprocessor_error", \
-		    X86_FEATURE_XMM
+	ALTERNATIVE(pushl $do_general_protection,
+		    pushl $do_simd_coprocessor_error,
+		    X86_FEATURE_XMM)
 #else
 	pushl	$do_simd_coprocessor_error
 #endif
diff --git a/arch/x86/entry/entry_64.S b/arch/x86/entry/entry_64.S
index c7c85724d7e0..49733c72619a 100644
--- a/arch/x86/entry/entry_64.S
+++ b/arch/x86/entry/entry_64.S
@@ -925,7 +925,7 @@ ENTRY(native_load_gs_index)
 	SWAPGS
 .Lgs_change:
 	movl	%edi, %gs
-2:	ALTERNATIVE "", "mfence", X86_BUG_SWAPGS_FENCE
+2:	ALTERNATIVE(, mfence, X86_BUG_SWAPGS_FENCE)
 	SWAPGS
 	popfq
 	FRAME_END
@@ -938,12 +938,8 @@ EXPORT_SYMBOL(native_load_gs_index)
 	/* running with kernelgs */
 bad_gs:
 	SWAPGS					/* switch back to user gs */
-.macro ZAP_GS
-	/* This can't be a string because the preprocessor needs to see it. */
-	movl $__USER_DS, %eax
-	movl %eax, %gs
-.endm
-	ALTERNATIVE "", "ZAP_GS", X86_BUG_NULL_SEG
+#define ZAP_GS movl $__USER_DS, %eax; movl %eax, %gs
+	ALTERNATIVE(, ZAP_GS, X86_BUG_NULL_SEG)
 	xorl	%eax, %eax
 	movl	%eax, %gs
 	jmp	2b
diff --git a/arch/x86/entry/entry_64_compat.S b/arch/x86/entry/entry_64_compat.S
index 4d9385529c39..16e82b5103b5 100644
--- a/arch/x86/entry/entry_64_compat.S
+++ b/arch/x86/entry/entry_64_compat.S
@@ -124,8 +124,8 @@ ENTRY(entry_SYSENTER_compat)
 	movq	%rsp, %rdi
 	call	do_fast_syscall_32
 	/* XEN PV guests always use IRET path */
-	ALTERNATIVE "testl %eax, %eax; jz .Lsyscall_32_done", \
-		    "jmp .Lsyscall_32_done", X86_FEATURE_XENPV
+#define JMP_IF_IRET testl %eax, %eax; jz .Lsyscall_32_done
+	ALTERNATIVE(JMP_IF_IRET, jmp .Lsyscall_32_done, X86_FEATURE_XENPV)
 	jmp	sysret32_from_system_call

 .Lsysenter_fix_flags:
@@ -224,8 +224,8 @@ GLOBAL(entry_SYSCALL_compat_after_hwframe)
 	movq	%rsp, %rdi
 	call	do_fast_syscall_32
 	/* XEN PV guests always use IRET path */
-	ALTERNATIVE "testl %eax, %eax; jz .Lsyscall_32_done", \
-		    "jmp .Lsyscall_32_done", X86_FEATURE_XENPV
+	ALTERNATIVE(JMP_IF_IRET,
+		    jmp .Lsyscall_32_done, X86_FEATURE_XENPV)

 /* Opportunistic SYSRET */
 sysret32_from_system_call:
diff --git a/arch/x86/entry/vdso/vdso32/system_call.S b/arch/x86/entry/vdso/vdso32/system_call.S
index ed4bc9731cbb..a0c5f9e8226c 100644
--- a/arch/x86/entry/vdso/vdso32/system_call.S
+++ b/arch/x86/entry/vdso/vdso32/system_call.S
@@ -48,15 +48,15 @@ __kernel_vsyscall:
 	CFI_ADJUST_CFA_OFFSET	4
 	CFI_REL_OFFSET		ebp, 0

-	#define SYSENTER_SEQUENCE	"movl %esp, %ebp; sysenter"
-	#define SYSCALL_SEQUENCE	"movl %ecx, %ebp; syscall"
+	#define SYSENTER_SEQUENCE	movl %esp, %ebp; sysenter
+	#define SYSCALL_SEQUENCE	movl %ecx, %ebp; syscall

 #ifdef CONFIG_X86_64
 	/* If SYSENTER (Intel) or SYSCALL32 (AMD) is available, use it. */
-	ALTERNATIVE_2 "", SYSENTER_SEQUENCE, X86_FEATURE_SYSENTER32, \
-		      SYSCALL_SEQUENCE, X86_FEATURE_SYSCALL32
+	ALTERNATIVE_2(, SYSENTER_SEQUENCE, X86_FEATURE_SYSENTER32,
+		      SYSCALL_SEQUENCE, X86_FEATURE_SYSCALL32)
 #else
-	ALTERNATIVE "", SYSENTER_SEQUENCE, X86_FEATURE_SEP
+	ALTERNATIVE(, SYSENTER_SEQUENCE, X86_FEATURE_SEP)
 #endif

 	/* Enter using int $0x80 */
diff --git a/arch/x86/include/asm/alternative-asm.h b/arch/x86/include/asm/alternative-asm.h
index e7636bac7372..60073947350d 100644
--- a/arch/x86/include/asm/alternative-asm.h
+++ b/arch/x86/include/asm/alternative-asm.h
@@ -39,23 +39,21 @@
  * @newinstr. ".skip" directive takes care of proper instruction padding
  * in case @newinstr is longer than @oldinstr.
  */
-.macro ALTERNATIVE oldinstr, newinstr, feature
-140:
-	\oldinstr
-141:
-	.skip -(((144f-143f)-(141b-140b)) > 0) * ((144f-143f)-(141b-140b)),0x90
-142:
-
-	.pushsection .altinstructions,"a"
-	altinstruction_entry 140b,143f,\feature,142b-140b,144f-143f,142b-141b
-	.popsection
-
-	.pushsection .altinstr_replacement,"ax"
-143:
-	\newinstr
-144:
+#define ALTERNATIVE(oldinstr, newinstr, feature)			\
+140:;									\
+	oldinstr;							\
+141:;									\
+	.skip -(((144f-143f)-(141b-140b)) > 0) *			\
+		((144f-143f)-(141b-140b)),0x90;				\
+142:;									\
+	.pushsection .altinstructions, "a";				\
+	altinstruction_entry 140b,143f,feature,142b-140b,144f-143f,142b-141b;\
+	.popsection;							\
+	.pushsection .altinstr_replacement, "ax";			\
+143:;									\
+	newinstr;							\
+144:;									\
 	.popsection
-.endm

 #define old_len			141b-140b
 #define new_len1		144f-143f
@@ -73,27 +71,25 @@
  * has @feature1, it replaces @oldinstr with @newinstr1. If CPU has
  * @feature2, it replaces @oldinstr with @feature2.
  */
-.macro ALTERNATIVE_2 oldinstr, newinstr1, feature1, newinstr2, feature2
-140:
-	\oldinstr
-141:
-	.skip -((alt_max_short(new_len1, new_len2) - (old_len)) > 0) * \
-		(alt_max_short(new_len1, new_len2) - (old_len)),0x90
-142:
-
-	.pushsection .altinstructions,"a"
-	altinstruction_entry 140b,143f,\feature1,142b-140b,144f-143f,142b-141b
-	altinstruction_entry 140b,144f,\feature2,142b-140b,145f-144f,142b-141b
-	.popsection
-
-	.pushsection .altinstr_replacement,"ax"
-143:
-	\newinstr1
-144:
-	\newinstr2
-145:
+#define ALTERNATIVE_2(oldinstr, newinstr1, feature1,			\
+		      newinstr2, feature2)				\
+140:;									\
+	oldinstr;							\
+141:;									\
+	.skip -((alt_max_short(new_len1, new_len2) - (old_len)) > 0) *	\
+		(alt_max_short(new_len1, new_len2) - (old_len)),0x90;	\
+142:;									\
+	.pushsection .altinstructions, "a";				\
+	altinstruction_entry 140b,143f,feature1,142b-140b,144f-143f,142b-141b; \
+	altinstruction_entry 140b,144f,feature2,142b-140b,145f-144f,142b-141b; \
+	.popsection;							\
+	.pushsection .altinstr_replacement, "ax";			\
+143:;									\
+	newinstr1;							\
+144:;									\
+	newinstr2;							\
+145:;									\
 	.popsection
-.endm

 #endif /* __ASSEMBLY__ */
diff --git a/arch/x86/include/asm/smap.h b/arch/x86/include/asm/smap.h
index db333300bd4b..b1264cff8906 100644
--- a/arch/x86/include/asm/smap.h
+++ b/arch/x86/include/asm/smap.h
@@ -28,10 +28,10 @@
 #ifdef CONFIG_X86_SMAP

 #define ASM_CLAC \
-	ALTERNATIVE "", __stringify(__ASM_CLAC), X86_FEATURE_SMAP
+	ALTERNATIVE(, __ASM_CLAC, X86_FEATURE_SMAP)

 #define ASM_STAC \
-	ALTERNATIVE "", __stringify(__ASM_STAC), X86_FEATURE_SMAP
+	ALTERNATIVE(, __ASM_STAC, X86_FEATURE_SMAP)

 #else /* CONFIG_X86_SMAP */
diff --git a/arch/x86/lib/copy_page_64.S b/arch/x86/lib/copy_page_64.S
index e8508156c99d..0611dee51760 100644
--- a/arch/x86/lib/copy_page_64.S
+++ b/arch/x86/lib/copy_page_64.S
@@ -13,7 +13,7 @@
  */
	ALIGN
 ENTRY(copy_page)
-	ALTERNATIVE "jmp copy_page_regs", "", X86_FEATURE_REP_GOOD
+	ALTERNATIVE(jmp copy_page_regs, , X86_FEATURE_REP_GOOD)
 	movl	$4096/8, %ecx
 	rep	movsq
 	ret
diff --git a/arch/x86/lib/memcpy_64.S b/arch/x86/lib/memcpy_64.S
index 9a53a06e5a3e..7ada0513864b 100644
--- a/arch/x86/lib/memcpy_64.S
+++ b/arch/x86/lib/memcpy_64.S
@@ -28,8 +28,8 @@
  */
 ENTRY(__memcpy)
 ENTRY(memcpy)
-	ALTERNATIVE_2 "jmp memcpy_orig", "", X86_FEATURE_REP_GOOD, \
-		      "jmp memcpy_erms", X86_FEATURE_ERMS
+	ALTERNATIVE_2(jmp memcpy_orig, , X86_FEATURE_REP_GOOD,
+		      jmp memcpy_erms, X86_FEATURE_ERMS)
 	movq	%rdi, %rax
 	movq	%rdx, %rcx
diff --git a/arch/x86/lib/memmove_64.S b/arch/x86/lib/memmove_64.S
index 15de86cd15b0..ca6c39effa2f 100644
--- a/arch/x86/lib/memmove_64.S
+++ b/arch/x86/lib/memmove_64.S
@@ -42,7 +42,8 @@ ENTRY(__memmove)
 	jg	2f

 .Lmemmove_begin_forward:
-	ALTERNATIVE "", "movq %rdx, %rcx; rep movsb; retq", X86_FEATURE_ERMS
+#define ERMS_MOVSB_RET movq %rdx, %rcx; rep movsb; retq
+	ALTERNATIVE(, ERMS_MOVSB_RET, X86_FEATURE_ERMS)

 	/*
	 * movsq instruction have many startup latency
diff --git a/arch/x86/lib/memset_64.S b/arch/x86/lib/memset_64.S
index 55b95db30a61..d86825a11724 100644
--- a/arch/x86/lib/memset_64.S
+++ b/arch/x86/lib/memset_64.S
@@ -26,8 +26,8 @@ ENTRY(__memset)
 	 *
 	 * Otherwise, use original memset function.
 	 */
-	ALTERNATIVE_2 "jmp memset_orig", "", X86_FEATURE_REP_GOOD, \
-		      "jmp memset_erms", X86_FEATURE_ERMS
+	ALTERNATIVE_2(jmp memset_orig, , X86_FEATURE_REP_GOOD,
+		      jmp memset_erms, X86_FEATURE_ERMS)
 	movq	%rdi,%r9
 	movq	%rdx,%rcx
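
For readers skimming the diff, the call-site change boils down to dropping
the quoted-string arguments the gas macro required. A minimal sketch of the
before/after, with both forms lifted from hunks in this patch:

	/* Before: gas macro, instruction sequences passed as strings */
	ALTERNATIVE "testl %eax, %eax; jz .Lsyscall_32_done", \
		    "jmp .Lsyscall_32_done", X86_FEATURE_XENPV

	/* After: preprocessor macro, bare instructions. A sequence that
	 * contains a comma (testl %eax, %eax) gets its own #define so the
	 * comma isn't parsed as a macro argument separator; the helper
	 * name expands only after argument collection. */
	#define JMP_IF_IRET testl %eax, %eax; jz .Lsyscall_32_done
	ALTERNATIVE(JMP_IF_IRET, jmp .Lsyscall_32_done, X86_FEATURE_XENPV)

An empty argument still means "no instruction" for that slot, as in the
entry_64.S hunk above:

	2:	ALTERNATIVE(, mfence, X86_BUG_SWAPGS_FENCE)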