From patchwork Wed Jul 11 11:29:14 2018
X-Patchwork-Submitter: Joerg Roedel
X-Patchwork-Id: 10519437
From: Joerg Roedel
To: Thomas Gleixner, Ingo Molnar, "H. Peter Anvin"
Cc: x86@kernel.org, linux-kernel@vger.kernel.org, linux-mm@kvack.org,
 Linus Torvalds, Andy Lutomirski, Dave Hansen, Josh Poimboeuf,
 Juergen Gross, Peter Zijlstra, Borislav Petkov, Jiri Kosina,
 Boris Ostrovsky, Brian Gerst, David Laight, Denys Vlasenko,
 Eduardo Valentin, Greg KH, Will Deacon, aliguori@amazon.com,
 daniel.gruss@iaik.tugraz.at, hughd@google.com, keescook@google.com,
 Andrea Arcangeli, Waiman Long, Pavel Machek, "David H. Gutteridge",
 jroedel@suse.de, joro@8bytes.org
Subject: [PATCH 07/39] x86/entry/32: Enter the kernel via trampoline stack
Date: Wed, 11 Jul 2018 13:29:14 +0200
Message-Id: <1531308586-29340-8-git-send-email-joro@8bytes.org>
X-Mailer: git-send-email 2.7.4
In-Reply-To: <1531308586-29340-1-git-send-email-joro@8bytes.org>
References: <1531308586-29340-1-git-send-email-joro@8bytes.org>

From: Joerg Roedel

Use the entry-stack as a trampoline to enter the kernel. The
entry-stack is already in the cpu_entry_area and will be mapped to
userspace when PTI is enabled.
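As a rough illustration of the pointer arithmetic the SWITCH_TO_KERNEL_STACK
macro in this patch performs, here is a user-space C sketch. It is not kernel
code: the 4096-byte entry-stack size and every name in it are illustrative
stand-ins (the real constants come from asm-offsets.c as SIZEOF_entry_stack
and MASK_entry_stack).

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Illustrative stand-ins; real values come from asm-offsets.c. */
#define SIZEOF_ENTRY_STACK 4096UL
#define MASK_ENTRY_STACK   (~(SIZEOF_ENTRY_STACK - 1))

/*
 * Models "andl $(MASK_entry_stack), %edi; addl $(SIZEOF_entry_stack), %edi":
 * round %esp down to the size-aligned base of the entry stack, then add the
 * size to reach its top (where the task-stack pointer is kept).
 */
static uintptr_t entry_stack_top(uintptr_t esp)
{
	return (esp & MASK_ENTRY_STACK) + SIZEOF_ENTRY_STACK;
}

/*
 * Models "subl %ecx, %edi; movl %edi, %esp; rep movsl": allocate the frame
 * on the task stack first, switch the stack pointer to it, then copy the
 * saved pt_regs over -- the ordering the macro's comment calls out as
 * NMI-safe, since an NMI arriving mid-copy already runs on the task stack.
 */
static uintptr_t copy_frame_to_task_stack(uintptr_t esp,
					  uintptr_t task_stack_top,
					  size_t nbytes)
{
	uintptr_t new_esp = task_stack_top - nbytes;

	memcpy((void *)new_esp, (const void *)esp, nbytes);
	return new_esp;
}
```

The mask trick assumes the entry stack is aligned to its own size, which is
what makes a single and/add pair sufficient to find its top from any pointer
inside it.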
Signed-off-by: Joerg Roedel
---
 arch/x86/entry/entry_32.S        | 136 +++++++++++++++++++++++++++++++--------
 arch/x86/include/asm/switch_to.h |   6 +-
 arch/x86/kernel/asm-offsets.c    |   1 +
 arch/x86/kernel/cpu/common.c     |   5 +-
 arch/x86/kernel/process.c        |   2 -
 arch/x86/kernel/process_32.c     |  10 +--
 6 files changed, 121 insertions(+), 39 deletions(-)

diff --git a/arch/x86/entry/entry_32.S b/arch/x86/entry/entry_32.S
index 61303fa..528db7d 100644
--- a/arch/x86/entry/entry_32.S
+++ b/arch/x86/entry/entry_32.S
@@ -154,25 +154,36 @@
 #endif /* CONFIG_X86_32_LAZY_GS */
 
-.macro SAVE_ALL pt_regs_ax=%eax
+.macro SAVE_ALL pt_regs_ax=%eax switch_stacks=0
 	cld
+	/* Push segment registers and %eax */
 	PUSH_GS
 	pushl	%fs
 	pushl	%es
 	pushl	%ds
 	pushl	\pt_regs_ax
+
+	/* Load kernel segments */
+	movl	$(__USER_DS), %eax
+	movl	%eax, %ds
+	movl	%eax, %es
+	movl	$(__KERNEL_PERCPU), %eax
+	movl	%eax, %fs
+	SET_KERNEL_GS %eax
+
+	/* Push integer registers and complete PT_REGS */
 	pushl	%ebp
 	pushl	%edi
 	pushl	%esi
 	pushl	%edx
 	pushl	%ecx
 	pushl	%ebx
-	movl	$(__USER_DS), %edx
-	movl	%edx, %ds
-	movl	%edx, %es
-	movl	$(__KERNEL_PERCPU), %edx
-	movl	%edx, %fs
-	SET_KERNEL_GS %edx
+
+	/* Switch to kernel stack if necessary */
+.if \switch_stacks > 0
+	SWITCH_TO_KERNEL_STACK
+.endif
+
 .endm
 
 /*
@@ -269,6 +280,72 @@
 .Lend_\@:
 #endif /* CONFIG_X86_ESPFIX32 */
 .endm
+
+
+/*
+ * Called with pt_regs fully populated and kernel segments loaded,
+ * so we can access PER_CPU and use the integer registers.
+ *
+ * We need to be very careful here with the %esp switch, because an NMI
+ * can happen everywhere. If the NMI handler finds itself on the
+ * entry-stack, it will overwrite the task-stack and everything we
+ * copied there. So allocate the stack-frame on the task-stack and
+ * switch to it before we do any copying.
+ */
+.macro SWITCH_TO_KERNEL_STACK
+
+	ALTERNATIVE     "", "jmp .Lend_\@", X86_FEATURE_XENPV
+
+	/* Are we on the entry stack? Bail out if not! */
+	movl	PER_CPU_VAR(cpu_entry_area), %edi
+	addl	$CPU_ENTRY_AREA_entry_stack, %edi
+	cmpl	%esp, %edi
+	jae	.Lend_\@
+
+	/* Load stack pointer into %esi and %edi */
+	movl	%esp, %esi
+	movl	%esi, %edi
+
+	/* Move %edi to the top of the entry stack */
+	andl	$(MASK_entry_stack), %edi
+	addl	$(SIZEOF_entry_stack), %edi
+
+	/* Load top of task-stack into %edi */
+	movl	TSS_entry_stack(%edi), %edi
+
+	/* Bytes to copy */
+	movl	$PTREGS_SIZE, %ecx
+
+#ifdef CONFIG_VM86
+	testl	$X86_EFLAGS_VM, PT_EFLAGS(%esi)
+	jz	.Lcopy_pt_regs_\@
+
+	/*
+	 * Stack-frame contains 4 additional segment registers when
+	 * coming from VM86 mode
+	 */
+	addl	$(4 * 4), %ecx
+
+.Lcopy_pt_regs_\@:
+#endif
+
+	/* Allocate frame on task-stack */
+	subl	%ecx, %edi
+
+	/* Switch to task-stack */
+	movl	%edi, %esp
+
+	/*
+	 * We are now on the task-stack and can safely copy over the
+	 * stack-frame
+	 */
+	shrl	$2, %ecx
+	cld
+	rep movsl
+
+.Lend_\@:
+.endm
+
 /*
  * %eax: prev task
  * %edx: next task
@@ -461,6 +538,7 @@ ENTRY(xen_sysenter_target)
  */
ENTRY(entry_SYSENTER_32)
 	movl	TSS_entry_stack(%esp), %esp
+
.Lsysenter_past_esp:
 	pushl	$__USER_DS		/* pt_regs->ss */
 	pushl	%ebp			/* pt_regs->sp (stashed in bp) */
@@ -469,7 +547,7 @@ ENTRY(entry_SYSENTER_32)
 	pushl	$__USER_CS		/* pt_regs->cs */
 	pushl	$0			/* pt_regs->ip = 0 (placeholder) */
 	pushl	%eax			/* pt_regs->orig_ax */
-	SAVE_ALL pt_regs_ax=$-ENOSYS	/* save rest */
+	SAVE_ALL pt_regs_ax=$-ENOSYS	/* save rest, stack already switched */
 
 	/*
 	 * SYSENTER doesn't filter flags, so we need to clear NT, AC
@@ -580,7 +658,8 @@ ENDPROC(entry_SYSENTER_32)
ENTRY(entry_INT80_32)
 	ASM_CLAC
 	pushl	%eax			/* pt_regs->orig_ax */
-	SAVE_ALL pt_regs_ax=$-ENOSYS	/* save rest */
+
+	SAVE_ALL pt_regs_ax=$-ENOSYS switch_stacks=1	/* save rest */
 
 	/*
 	 * User mode is traced as though IRQs are on, and the interrupt gate
@@ -677,7 +756,8 @@ END(irq_entries_start)
common_interrupt:
 	ASM_CLAC
 	addl	$-0x80, (%esp)		/* Adjust vector into the [-256, -1] range */
-	SAVE_ALL
+
+	SAVE_ALL switch_stacks=1
 	ENCODE_FRAME_POINTER
 	TRACE_IRQS_OFF
 	movl	%esp, %eax
@@ -685,16 +765,16 @@ common_interrupt:
 	jmp	ret_from_intr
ENDPROC(common_interrupt)
 
-#define BUILD_INTERRUPT3(name, nr, fn)	\
-ENTRY(name)				\
-	ASM_CLAC;			\
-	pushl	$~(nr);			\
-	SAVE_ALL;			\
-	ENCODE_FRAME_POINTER;		\
-	TRACE_IRQS_OFF \
-	movl	%esp, %eax;		\
-	call	fn;			\
-	jmp	ret_from_intr;		\
+#define BUILD_INTERRUPT3(name, nr, fn)		\
+ENTRY(name)					\
+	ASM_CLAC;				\
+	pushl	$~(nr);				\
+	SAVE_ALL switch_stacks=1;		\
+	ENCODE_FRAME_POINTER;			\
+	TRACE_IRQS_OFF \
+	movl	%esp, %eax;			\
+	call	fn;				\
+	jmp	ret_from_intr;			\
ENDPROC(name)
 
 #define BUILD_INTERRUPT(name, nr)		\
@@ -926,16 +1006,20 @@ common_exception:
 	pushl	%es
 	pushl	%ds
 	pushl	%eax
+	movl	$(__USER_DS), %eax
+	movl	%eax, %ds
+	movl	%eax, %es
+	movl	$(__KERNEL_PERCPU), %eax
+	movl	%eax, %fs
 	pushl	%ebp
 	pushl	%edi
 	pushl	%esi
 	pushl	%edx
 	pushl	%ecx
 	pushl	%ebx
+	SWITCH_TO_KERNEL_STACK
 	ENCODE_FRAME_POINTER
 	cld
-	movl	$(__KERNEL_PERCPU), %ecx
-	movl	%ecx, %fs
 	UNWIND_ESPFIX_STACK
 	GS_TO_REG %ecx
 	movl	PT_GS(%esp), %edi	# get the function address
@@ -943,9 +1027,6 @@ common_exception:
 	movl	$-1, PT_ORIG_EAX(%esp)	# no syscall to restart
 	REG_TO_PTGS %ecx
 	SET_KERNEL_GS %ecx
-	movl	$(__USER_DS), %ecx
-	movl	%ecx, %ds
-	movl	%ecx, %es
 	TRACE_IRQS_OFF
 	movl	%esp, %eax		# pt_regs pointer
 	CALL_NOSPEC %edi
@@ -964,6 +1045,7 @@ ENTRY(debug)
 	 */
 	ASM_CLAC
 	pushl	$-1				# mark this as an int
+
 	SAVE_ALL
 	ENCODE_FRAME_POINTER
 	xorl	%edx, %edx			# error code 0
@@ -999,6 +1081,7 @@ END(debug)
 */
ENTRY(nmi)
 	ASM_CLAC
+
 #ifdef CONFIG_X86_ESPFIX32
 	pushl	%eax
 	movl	%ss, %eax
@@ -1066,7 +1149,8 @@ END(nmi)
ENTRY(int3)
 	ASM_CLAC
 	pushl	$-1				# mark this as an int
-	SAVE_ALL
+
+	SAVE_ALL switch_stacks=1
 	ENCODE_FRAME_POINTER
 	TRACE_IRQS_OFF
 	xorl	%edx, %edx			# zero error code
diff --git a/arch/x86/include/asm/switch_to.h b/arch/x86/include/asm/switch_to.h
index eb5f799..20e5f7ab 100644
--- a/arch/x86/include/asm/switch_to.h
+++ b/arch/x86/include/asm/switch_to.h
@@ -89,13 +89,9 @@ static inline void refresh_sysenter_cs(struct thread_struct *thread)
 /* This is used when switching tasks or entering/exiting vm86 mode. */
 static inline void update_sp0(struct task_struct *task)
 {
-	/* On x86_64, sp0 always points to the entry trampoline stack, which is constant: */
-#ifdef CONFIG_X86_32
-	load_sp0(task->thread.sp0);
-#else
+	/* sp0 always points to the entry trampoline stack, which is constant: */
 	if (static_cpu_has(X86_FEATURE_XENPV))
 		load_sp0(task_top_of_stack(task));
-#endif
 }
 
 #endif /* _ASM_X86_SWITCH_TO_H */
diff --git a/arch/x86/kernel/asm-offsets.c b/arch/x86/kernel/asm-offsets.c
index a1e1628..01de31d 100644
--- a/arch/x86/kernel/asm-offsets.c
+++ b/arch/x86/kernel/asm-offsets.c
@@ -103,6 +103,7 @@ void common(void) {
 	OFFSET(CPU_ENTRY_AREA_entry_trampoline, cpu_entry_area, entry_trampoline);
 	OFFSET(CPU_ENTRY_AREA_entry_stack, cpu_entry_area, entry_stack_page);
 	DEFINE(SIZEOF_entry_stack, sizeof(struct entry_stack));
+	DEFINE(MASK_entry_stack, (~(sizeof(struct entry_stack) - 1)));
 
 	/* Offset for sp0 and sp1 into the tss_struct */
 	OFFSET(TSS_sp0, tss_struct, x86_tss.sp0);
diff --git a/arch/x86/kernel/cpu/common.c b/arch/x86/kernel/cpu/common.c
index eb4cb3e..43a927e 100644
--- a/arch/x86/kernel/cpu/common.c
+++ b/arch/x86/kernel/cpu/common.c
@@ -1804,11 +1804,12 @@ void cpu_init(void)
 	enter_lazy_tlb(&init_mm, curr);
 
 	/*
-	 * Initialize the TSS. Don't bother initializing sp0, as the initial
-	 * task never enters user mode.
+	 * Initialize the TSS. sp0 points to the entry trampoline stack
+	 * regardless of what task is running.
 	 */
 	set_tss_desc(cpu, &get_cpu_entry_area(cpu)->tss.x86_tss);
 	load_TR_desc();
+	load_sp0((unsigned long)(cpu_entry_stack(cpu) + 1));
 
 	load_mm_ldt(&init_mm);
diff --git a/arch/x86/kernel/process.c b/arch/x86/kernel/process.c
index 30ca2d1..c93fcfd 100644
--- a/arch/x86/kernel/process.c
+++ b/arch/x86/kernel/process.c
@@ -57,14 +57,12 @@ __visible DEFINE_PER_CPU_PAGE_ALIGNED(struct tss_struct, cpu_tss_rw) = {
 	 */
 	.sp0 = (1UL << (BITS_PER_LONG-1)) + 1,
 
-#ifdef CONFIG_X86_64
 	/*
 	 * .sp1 is cpu_current_top_of_stack. The init task never
 	 * runs user code, but cpu_current_top_of_stack should still
 	 * be well defined before the first context switch.
 	 */
 	.sp1 = TOP_OF_INIT_STACK,
-#endif
 
 #ifdef CONFIG_X86_32
 	.ss0 = __KERNEL_DS,
diff --git a/arch/x86/kernel/process_32.c b/arch/x86/kernel/process_32.c
index ec62cc7..04bbf93 100644
--- a/arch/x86/kernel/process_32.c
+++ b/arch/x86/kernel/process_32.c
@@ -287,10 +287,12 @@ __switch_to(struct task_struct *prev_p, struct task_struct *next_p)
 	 */
 	update_sp0(next_p);
 	refresh_sysenter_cs(next);
-	this_cpu_write(cpu_current_top_of_stack,
-		       (unsigned long)task_stack_page(next_p) +
-		       THREAD_SIZE);
-	/* SYSENTER reads the task-stack from tss.sp1 */
+	this_cpu_write(cpu_current_top_of_stack, task_top_of_stack(next_p));
+	/*
+	 * TODO: Find a way to let cpu_current_top_of_stack point to
+	 * cpu_tss_rw.x86_tss.sp1. Doing so now results in stack corruption with
+	 * iret exceptions.
+	 */
 	this_cpu_write(cpu_tss_rw.x86_tss.sp1, next_p->thread.sp0);
 
 	/*
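A side note on the `load_sp0((unsigned long)(cpu_entry_stack(cpu) + 1))` line
added to cpu_init(): it relies on C pointer arithmetic, where `+ 1` on a
struct pointer advances by the struct's full size, yielding the address one
past the end of the entry stack, i.e. its top, which is what sp0 must hold
since the stack grows down. A user-space sketch, with an illustrative
stand-in for `struct entry_stack` (the real definition and size live in the
kernel headers):

```c
#include <assert.h>

/* Illustrative stand-in; the real struct entry_stack is defined elsewhere. */
struct entry_stack {
	unsigned long words[512];
};

/*
 * Models load_sp0((unsigned long)(cpu_entry_stack(cpu) + 1)): "+ 1" on a
 * struct pointer steps over sizeof(struct entry_stack) bytes, so the result
 * is the top of the entry stack.
 */
static unsigned long entry_stack_sp0(struct entry_stack *es)
{
	return (unsigned long)(es + 1);
}
```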