From patchwork Mon Jul 27 04:31:29 2020
X-Patchwork-Submitter: Ricardo Neri
X-Patchwork-Id: 11686333
From: Ricardo Neri
To: Thomas Gleixner, Ingo Molnar, Borislav Petkov, Andy Lutomirski,
 x86@kernel.org
Cc: Dave Hansen, Tony Luck, Cathy Zhang, Fenghua Yu, "H. Peter Anvin",
 Kyung Min Park, "Peter Zijlstra (Intel)", "Ravi V. Shankar",
 Sean Christopherson, linux-kernel@vger.kernel.org, Ricardo Neri,
 linux-edac@vger.kernel.org
Subject: [PATCH 1/4] x86/cpufeatures: Add enumeration for SERIALIZE instruction
Date: Sun, 26 Jul 2020 21:31:29 -0700
Message-Id: <20200727043132.15082-2-ricardo.neri-calderon@linux.intel.com>
In-Reply-To: <20200727043132.15082-1-ricardo.neri-calderon@linux.intel.com>
References: <20200727043132.15082-1-ricardo.neri-calderon@linux.intel.com>

The Intel architecture defines a set of Serializing Instructions (a
detailed definition can be found in Vol. 3, Section 8.3 of the Intel
"main" manual, the SDM). However, these instructions do more than what is
required, have side effects, and/or may be rather invasive. Furthermore,
some of these instructions are only available in kernel mode or may cause
VM exits. Thus, software using these instructions only to serialize
execution (as defined in the manual) must handle the undesired side
effects.

As its name indicates, SERIALIZE is a new Intel architecture Serializing
Instruction. Crucially, it does not have any of the mentioned side
effects. Also, it does not cause a VM exit and can be used in user mode.
This new instruction is currently documented in the latest "extensions"
manual (ISE). It will appear in the "main" manual in the future.

Cc: Andy Lutomirski
Cc: Cathy Zhang
Cc: Fenghua Yu
Cc: "H. Peter Anvin"
Cc: Kyung Min Park
Cc: Peter Zijlstra
Cc: "Ravi V. Shankar"
Cc: Sean Christopherson
Cc: Tony Luck
Cc: linux-edac@vger.kernel.org
Cc: linux-kernel@vger.kernel.org
Acked-by: Dave Hansen
Reviewed-by: Tony Luck
Signed-off-by: Ricardo Neri
---
 arch/x86/include/asm/cpufeatures.h | 1 +
 1 file changed, 1 insertion(+)

diff --git a/arch/x86/include/asm/cpufeatures.h b/arch/x86/include/asm/cpufeatures.h
index 03390a1ef8e7..2901d5df4366 100644
--- a/arch/x86/include/asm/cpufeatures.h
+++ b/arch/x86/include/asm/cpufeatures.h
@@ -367,6 +367,7 @@
 #define X86_FEATURE_SRBDS_CTRL		(18*32+ 9) /* "" SRBDS mitigation MSR available */
 #define X86_FEATURE_MD_CLEAR		(18*32+10) /* VERW clears CPU buffers */
 #define X86_FEATURE_TSX_FORCE_ABORT	(18*32+13) /* "" TSX_FORCE_ABORT */
+#define X86_FEATURE_SERIALIZE		(18*32+14) /* SERIALIZE instruction */
 #define X86_FEATURE_PCONFIG		(18*32+18) /* Intel PCONFIG */
 #define X86_FEATURE_ARCH_LBR		(18*32+19) /* Intel ARCH LBR */
 #define X86_FEATURE_SPEC_CTRL		(18*32+26) /* "" Speculation Control (IBRS + IBPB) */
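[Editor's note: word 18 of Linux's cpufeatures table mirrors
CPUID.(EAX=7,ECX=0):EDX, so the (18*32+14) bit added above corresponds to
EDX bit 14 of CPUID leaf 7, sub-leaf 0. As a minimal sketch of what this
enumeration means in practice (an illustration, not part of the patch),
a user-space check for SERIALIZE might look like the following, assuming
a GCC/Clang toolchain that provides <cpuid.h>:

	#include <stdio.h>
	#include <cpuid.h>	/* GCC/Clang helper for the CPUID instruction */

	int main(void)
	{
		unsigned int eax, ebx, ecx, edx;

		/* Leaf 7, sub-leaf 0: structured extended feature flags. */
		if (!__get_cpuid_count(7, 0, &eax, &ebx, &ecx, &edx))
			return 1;	/* CPUID leaf 7 not supported */

		/* EDX bit 14 enumerates SERIALIZE, matching (18*32+14) above. */
		printf("SERIALIZE %ssupported\n",
		       (edx & (1U << 14)) ? "" : "not ");
		return 0;
	}
]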
From patchwork Mon Jul 27 04:31:30 2020
X-Patchwork-Submitter: Ricardo Neri
X-Patchwork-Id: 11686335
From: Ricardo Neri
To: Thomas Gleixner, Ingo Molnar, Borislav Petkov, Andy Lutomirski,
 x86@kernel.org
Cc: Dave Hansen, Tony Luck, Cathy Zhang, Fenghua Yu, "H. Peter Anvin",
 Kyung Min Park, "Peter Zijlstra (Intel)", "Ravi V. Shankar",
 Sean Christopherson, Dimitri Sivanich, linux-kernel@vger.kernel.org,
 Ricardo Neri, linux-edac@vger.kernel.org
Subject: [PATCH 2/4] x86/cpu: Relocate sync_core() to sync_core.h
Date: Sun, 26 Jul 2020 21:31:30 -0700
Message-Id: <20200727043132.15082-3-ricardo.neri-calderon@linux.intel.com>
In-Reply-To: <20200727043132.15082-1-ricardo.neri-calderon@linux.intel.com>
References: <20200727043132.15082-1-ricardo.neri-calderon@linux.intel.com>

Having sync_core() in processor.h is problematic because it is not
possible there to check for hardware capabilities via the *cpu_has()
family of macros: those macros themselves need the definitions in
processor.h. It is also more intuitive to have the function in
sync_core.h. This changeset makes no functional changes.

Cc: Andy Lutomirski
Cc: Cathy Zhang
Cc: Dave Hansen
Cc: Dimitri Sivanich
Cc: Fenghua Yu
Cc: "H. Peter Anvin"
Cc: Kyung Min Park
Cc: Peter Zijlstra
Cc: "Ravi V. Shankar"
Cc: Sean Christopherson
Cc: linux-edac@vger.kernel.org
Cc: linux-kernel@vger.kernel.org
Reviewed-by: Tony Luck
Signed-off-by: Ricardo Neri
---
 arch/x86/include/asm/processor.h    | 64 -----------------------------
 arch/x86/include/asm/sync_core.h    | 64 +++++++++++++++++++++++++++++
 arch/x86/kernel/alternative.c       |  1 +
 arch/x86/kernel/cpu/mce/core.c      |  1 +
 drivers/misc/sgi-gru/grufault.c     |  1 +
 drivers/misc/sgi-gru/gruhandles.c   |  1 +
 drivers/misc/sgi-gru/grukservices.c |  1 +
 7 files changed, 69 insertions(+), 64 deletions(-)
diff --git a/arch/x86/include/asm/processor.h b/arch/x86/include/asm/processor.h
index 2a1f7e1d7151..97143d87994c 100644
--- a/arch/x86/include/asm/processor.h
+++ b/arch/x86/include/asm/processor.h
@@ -676,70 +676,6 @@ static inline unsigned int cpuid_edx(unsigned int op)
 	return edx;
 }
 
-/*
- * This function forces the icache and prefetched instruction stream to
- * catch up with reality in two very specific cases:
- *
- * a) Text was modified using one virtual address and is about to be executed
- *    from the same physical page at a different virtual address.
- *
- * b) Text was modified on a different CPU, may subsequently be
- *    executed on this CPU, and you want to make sure the new version
- *    gets executed.  This generally means you're calling this in a IPI.
- *
- * If you're calling this for a different reason, you're probably doing
- * it wrong.
- */
-static inline void sync_core(void)
-{
-	/*
-	 * There are quite a few ways to do this.  IRET-to-self is nice
-	 * because it works on every CPU, at any CPL (so it's compatible
-	 * with paravirtualization), and it never exits to a hypervisor.
-	 * The only down sides are that it's a bit slow (it seems to be
-	 * a bit more than 2x slower than the fastest options) and that
-	 * it unmasks NMIs.  The "push %cs" is needed because, in
-	 * paravirtual environments, __KERNEL_CS may not be a valid CS
-	 * value when we do IRET directly.
-	 *
-	 * In case NMI unmasking or performance ever becomes a problem,
-	 * the next best option appears to be MOV-to-CR2 and an
-	 * unconditional jump.  That sequence also works on all CPUs,
-	 * but it will fault at CPL3 (i.e. Xen PV).
-	 *
-	 * CPUID is the conventional way, but it's nasty: it doesn't
-	 * exist on some 486-like CPUs, and it usually exits to a
-	 * hypervisor.
-	 *
-	 * Like all of Linux's memory ordering operations, this is a
-	 * compiler barrier as well.
-	 */
-#ifdef CONFIG_X86_32
-	asm volatile (
-		"pushfl\n\t"
-		"pushl %%cs\n\t"
-		"pushl $1f\n\t"
-		"iret\n\t"
-		"1:"
-		: ASM_CALL_CONSTRAINT : : "memory");
-#else
-	unsigned int tmp;
-
-	asm volatile (
-		"mov %%ss, %0\n\t"
-		"pushq %q0\n\t"
-		"pushq %%rsp\n\t"
-		"addq $8, (%%rsp)\n\t"
-		"pushfq\n\t"
-		"mov %%cs, %0\n\t"
-		"pushq %q0\n\t"
-		"pushq $1f\n\t"
-		"iretq\n\t"
-		"1:"
-		: "=&r" (tmp), ASM_CALL_CONSTRAINT : : "cc", "memory");
-#endif
-}
-
 extern void select_idle_routine(const struct cpuinfo_x86 *c);
 
 extern void amd_e400_c1e_apic_setup(void);

diff --git a/arch/x86/include/asm/sync_core.h b/arch/x86/include/asm/sync_core.h
index c67caafd3381..9c5573f2c333 100644
--- a/arch/x86/include/asm/sync_core.h
+++ b/arch/x86/include/asm/sync_core.h
@@ -6,6 +6,70 @@
 #include
 #include
 
+/*
+ * This function forces the icache and prefetched instruction stream to
+ * catch up with reality in two very specific cases:
+ *
+ * a) Text was modified using one virtual address and is about to be executed
+ *    from the same physical page at a different virtual address.
+ *
+ * b) Text was modified on a different CPU, may subsequently be
+ *    executed on this CPU, and you want to make sure the new version
+ *    gets executed.  This generally means you're calling this in a IPI.
+ *
+ * If you're calling this for a different reason, you're probably doing
+ * it wrong.
+ */
+static inline void sync_core(void)
+{
+	/*
+	 * There are quite a few ways to do this.  IRET-to-self is nice
+	 * because it works on every CPU, at any CPL (so it's compatible
+	 * with paravirtualization), and it never exits to a hypervisor.
+	 * The only down sides are that it's a bit slow (it seems to be
+	 * a bit more than 2x slower than the fastest options) and that
+	 * it unmasks NMIs.  The "push %cs" is needed because, in
+	 * paravirtual environments, __KERNEL_CS may not be a valid CS
+	 * value when we do IRET directly.
+	 *
+	 * In case NMI unmasking or performance ever becomes a problem,
+	 * the next best option appears to be MOV-to-CR2 and an
+	 * unconditional jump.  That sequence also works on all CPUs,
+	 * but it will fault at CPL3 (i.e. Xen PV).
+	 *
+	 * CPUID is the conventional way, but it's nasty: it doesn't
+	 * exist on some 486-like CPUs, and it usually exits to a
+	 * hypervisor.
+	 *
+	 * Like all of Linux's memory ordering operations, this is a
+	 * compiler barrier as well.
+	 */
+#ifdef CONFIG_X86_32
+	asm volatile (
+		"pushfl\n\t"
+		"pushl %%cs\n\t"
+		"pushl $1f\n\t"
+		"iret\n\t"
+		"1:"
+		: ASM_CALL_CONSTRAINT : : "memory");
+#else
+	unsigned int tmp;
+
+	asm volatile (
+		"mov %%ss, %0\n\t"
+		"pushq %q0\n\t"
+		"pushq %%rsp\n\t"
+		"addq $8, (%%rsp)\n\t"
+		"pushfq\n\t"
+		"mov %%cs, %0\n\t"
+		"pushq %q0\n\t"
+		"pushq $1f\n\t"
+		"iretq\n\t"
+		"1:"
+		: "=&r" (tmp), ASM_CALL_CONSTRAINT : : "cc", "memory");
+#endif
+}
+
 /*
  * Ensure that a core serializing instruction is issued before returning
  * to user-mode. x86 implements return to user-space through sysexit,
diff --git a/arch/x86/kernel/alternative.c b/arch/x86/kernel/alternative.c
index 20e07feb4064..3abc1316f91b 100644
--- a/arch/x86/kernel/alternative.c
+++ b/arch/x86/kernel/alternative.c
@@ -16,6 +16,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
 #include

diff --git a/arch/x86/kernel/cpu/mce/core.c b/arch/x86/kernel/cpu/mce/core.c
index 1a0139d9a34b..e76c1ddd35e7 100644
--- a/arch/x86/kernel/cpu/mce/core.c
+++ b/arch/x86/kernel/cpu/mce/core.c
@@ -42,6 +42,7 @@
 #include
 #include
 #include
+#include
 #include
 #include

diff --git a/drivers/misc/sgi-gru/grufault.c b/drivers/misc/sgi-gru/grufault.c
index b1521112dbbd..723825524ea0 100644
--- a/drivers/misc/sgi-gru/grufault.c
+++ b/drivers/misc/sgi-gru/grufault.c
@@ -20,6 +20,7 @@
 #include
 #include
 #include
+#include
 #include
 #include "gru.h"
 #include "grutables.h"

diff --git a/drivers/misc/sgi-gru/gruhandles.c b/drivers/misc/sgi-gru/gruhandles.c
index f7224f90f413..1d75d5e540bc 100644
--- a/drivers/misc/sgi-gru/gruhandles.c
+++ b/drivers/misc/sgi-gru/gruhandles.c
@@ -16,6 +16,7 @@
 #define GRU_OPERATION_TIMEOUT	(((cycles_t) local_cpu_data->itc_freq)*10)
 #define CLKS2NSEC(c) ((c) *1000000000 / local_cpu_data->itc_freq)
 #else
+#include
 #include
 #define GRU_OPERATION_TIMEOUT	((cycles_t) tsc_khz*10*1000)
 #define CLKS2NSEC(c) ((c) * 1000000 / tsc_khz)

diff --git a/drivers/misc/sgi-gru/grukservices.c b/drivers/misc/sgi-gru/grukservices.c
index 0197441a1eae..f6e600bfac5d 100644
--- a/drivers/misc/sgi-gru/grukservices.c
+++ b/drivers/misc/sgi-gru/grukservices.c
@@ -16,6 +16,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
 #include
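[Editor's note: use case (b) in the relocated comment (text modified on
another CPU) is the pattern behind several of the includes added above.
As a hedged sketch of that pattern, modeled on, but not copied verbatim
from, arch/x86/kernel/alternative.c (the caller's name here is
hypothetical), code that has just patched kernel text can IPI every CPU
and have each one serialize before the new instructions may run:

	#include <linux/smp.h>
	#include <asm/sync_core.h>

	static void do_sync_core(void *info)
	{
		/* Runs on each CPU in IPI context. */
		sync_core();
	}

	static void text_patch_sync_example(void)
	{
		/*
		 * IPI all CPUs and wait for completion: every CPU executes
		 * a core serializing sequence, so no stale prefetched
		 * instructions survive the text patch.
		 */
		on_each_cpu(do_sync_core, NULL, 1);
	}
]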
From patchwork Mon Jul 27 04:31:31 2020
X-Patchwork-Submitter: Ricardo Neri
X-Patchwork-Id: 11686329
From: Ricardo Neri
To: Thomas Gleixner, Ingo Molnar, Borislav Petkov, Andy Lutomirski,
 x86@kernel.org
Cc: Dave Hansen, Tony Luck, Cathy Zhang, Fenghua Yu, "H. Peter Anvin",
 Kyung Min Park, "Peter Zijlstra (Intel)", "Ravi V. Shankar",
 Sean Christopherson, linux-kernel@vger.kernel.org, Ricardo Neri,
 linux-edac@vger.kernel.org
Subject: [PATCH 3/4] x86/cpu: Refactor sync_core() for readability
Date: Sun, 26 Jul 2020 21:31:31 -0700
Message-Id: <20200727043132.15082-4-ricardo.neri-calderon@linux.intel.com>
In-Reply-To: <20200727043132.15082-1-ricardo.neri-calderon@linux.intel.com>
References: <20200727043132.15082-1-ricardo.neri-calderon@linux.intel.com>

Instead of having #ifdef/#endif blocks inside sync_core() for X86_64 and
X86_32, implement the new function iret_to_self() with two versions. In
this manner, avoid having to use even more #ifdef/#endif blocks when
adding support for SERIALIZE in sync_core().

Cc: Andy Lutomirski
Cc: Cathy Zhang
Cc: Dave Hansen
Cc: Fenghua Yu
Cc: "H. Peter Anvin"
Cc: Kyung Min Park
Cc: Peter Zijlstra
Cc: "Ravi V. Shankar"
Cc: Sean Christopherson
Cc: linux-edac@vger.kernel.org
Cc: linux-kernel@vger.kernel.org
Co-developed-by: Tony Luck
Signed-off-by: Tony Luck
Signed-off-by: Ricardo Neri
---
 arch/x86/include/asm/special_insns.h |  1 -
 arch/x86/include/asm/sync_core.h     | 56 ++++++++++++++++------------
 2 files changed, 32 insertions(+), 25 deletions(-)

diff --git a/arch/x86/include/asm/special_insns.h b/arch/x86/include/asm/special_insns.h
index eb8e781c4353..59a3e13204c3 100644
--- a/arch/x86/include/asm/special_insns.h
+++ b/arch/x86/include/asm/special_insns.h
@@ -234,7 +234,6 @@ static inline void clwb(volatile void *__p)
 
 #define nop() asm volatile ("nop")
 
-
 #endif /* __KERNEL__ */
 
 #endif /* _ASM_X86_SPECIAL_INSNS_H */

diff --git a/arch/x86/include/asm/sync_core.h b/arch/x86/include/asm/sync_core.h
index 9c5573f2c333..fdb5b356e59b 100644
--- a/arch/x86/include/asm/sync_core.h
+++ b/arch/x86/include/asm/sync_core.h
@@ -6,6 +6,37 @@
 #include
 #include
 
+#ifdef CONFIG_X86_32
+static inline void iret_to_self(void)
+{
+	asm volatile (
+		"pushfl\n\t"
+		"pushl %%cs\n\t"
+		"pushl $1f\n\t"
+		"iret\n\t"
+		"1:"
+		: ASM_CALL_CONSTRAINT : : "memory");
+}
+#else
+static inline void iret_to_self(void)
+{
+	unsigned int tmp;
+
+	asm volatile (
+		"mov %%ss, %0\n\t"
+		"pushq %q0\n\t"
+		"pushq %%rsp\n\t"
+		"addq $8, (%%rsp)\n\t"
+		"pushfq\n\t"
+		"mov %%cs, %0\n\t"
+		"pushq %q0\n\t"
+		"pushq $1f\n\t"
+		"iretq\n\t"
+		"1:"
+		: "=&r" (tmp), ASM_CALL_CONSTRAINT : : "cc", "memory");
+}
+#endif /* CONFIG_X86_32 */
+
 /*
  * This function forces the icache and prefetched instruction stream to
  * catch up with reality in two very specific cases:
@@ -44,30 +75,7 @@ static inline void sync_core(void)
 	 * Like all of Linux's memory ordering operations, this is a
 	 * compiler barrier as well.
 	 */
-#ifdef CONFIG_X86_32
-	asm volatile (
-		"pushfl\n\t"
-		"pushl %%cs\n\t"
-		"pushl $1f\n\t"
-		"iret\n\t"
-		"1:"
-		: ASM_CALL_CONSTRAINT : : "memory");
-#else
-	unsigned int tmp;
-
-	asm volatile (
-		"mov %%ss, %0\n\t"
-		"pushq %q0\n\t"
-		"pushq %%rsp\n\t"
-		"addq $8, (%%rsp)\n\t"
-		"pushfq\n\t"
-		"mov %%cs, %0\n\t"
-		"pushq %q0\n\t"
-		"pushq $1f\n\t"
-		"iretq\n\t"
-		"1:"
-		: "=&r" (tmp), ASM_CALL_CONSTRAINT : : "cc", "memory");
-#endif
+	iret_to_self();
 }
 
 /*
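[Editor's note: the fourth patch of this series, which actually wires
SERIALIZE into sync_core(), is not included in this excerpt. As a hedged
sketch of what the iret_to_self() refactor enables (the exact upstream
code may differ), the feature check can collapse to a single branch, with
the instruction emitted via its raw opcode since the SERIALIZE mnemonic
requires a recent assembler:

	static inline void sync_core(void)
	{
		/*
		 * SERIALIZE (opcode 0f 01 e8) serializes execution with no
		 * register side effects, causes no VM exit, and works at
		 * any CPL, so prefer it when the CPU enumerates it.
		 */
		if (static_cpu_has(X86_FEATURE_SERIALIZE)) {
			asm volatile(".byte 0x0f, 0x01, 0xe8" ::: "memory");
			return;
		}

		/* Fall back to the IRET-to-self trick refactored above. */
		iret_to_self();
	}
]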