From patchwork Wed Jul 18 09:40:49 2018
X-Patchwork-Submitter: Joerg Roedel <joro@8bytes.org>
X-Patchwork-Id: 10531839
From: Joerg Roedel <joro@8bytes.org>
To: Thomas Gleixner, Ingo Molnar, H. Peter Anvin
Cc: x86@kernel.org, linux-kernel@vger.kernel.org, linux-mm@kvack.org,
    Linus Torvalds, Andy Lutomirski, Dave Hansen, Josh Poimboeuf,
    Juergen Gross, Peter Zijlstra, Borislav Petkov, Jiri Kosina,
    Boris Ostrovsky, Brian Gerst, David Laight, Denys Vlasenko,
    Eduardo Valentin, Greg KH, Will Deacon, aliguori@amazon.com,
    daniel.gruss@iaik.tugraz.at, hughd@google.com, keescook@google.com,
    Andrea Arcangeli, Waiman Long, Pavel Machek, David H. Gutteridge,
    jroedel@suse.de, joro@8bytes.org
Gutteridge" , jroedel@suse.de, joro@8bytes.org Subject: [PATCH 12/39] x86/entry/32: Add PTI cr3 switch to non-NMI entry/exit points Date: Wed, 18 Jul 2018 11:40:49 +0200 Message-Id: <1531906876-13451-13-git-send-email-joro@8bytes.org> X-Mailer: git-send-email 2.7.4 In-Reply-To: <1531906876-13451-1-git-send-email-joro@8bytes.org> References: <1531906876-13451-1-git-send-email-joro@8bytes.org> X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: X-Virus-Scanned: ClamAV using ClamSMTP From: Joerg Roedel Add unconditional cr3 switches between user and kernel cr3 to all non-NMI entry and exit points. Signed-off-by: Joerg Roedel --- arch/x86/entry/entry_32.S | 86 ++++++++++++++++++++++++++++++++++++++++++++--- 1 file changed, 82 insertions(+), 4 deletions(-) diff --git a/arch/x86/entry/entry_32.S b/arch/x86/entry/entry_32.S index dbf7d61..60b28df 100644 --- a/arch/x86/entry/entry_32.S +++ b/arch/x86/entry/entry_32.S @@ -77,6 +77,8 @@ #endif .endm +#define PTI_SWITCH_MASK (1 << PAGE_SHIFT) + /* * User gs save/restore * @@ -154,6 +156,33 @@ #endif /* CONFIG_X86_32_LAZY_GS */ +/* Unconditionally switch to user cr3 */ +.macro SWITCH_TO_USER_CR3 scratch_reg:req + ALTERNATIVE "jmp .Lend_\@", "", X86_FEATURE_PTI + + movl %cr3, \scratch_reg + orl $PTI_SWITCH_MASK, \scratch_reg + movl \scratch_reg, %cr3 +.Lend_\@: +.endm + +/* + * Switch to kernel cr3 if not already loaded and return current cr3 in + * \scratch_reg + */ +.macro SWITCH_TO_KERNEL_CR3 scratch_reg:req + ALTERNATIVE "jmp .Lend_\@", "", X86_FEATURE_PTI + movl %cr3, \scratch_reg + /* Test if we are already on kernel CR3 */ + testl $PTI_SWITCH_MASK, \scratch_reg + jz .Lend_\@ + andl $(~PTI_SWITCH_MASK), \scratch_reg + movl \scratch_reg, %cr3 + /* Return original CR3 in \scratch_reg */ + orl $PTI_SWITCH_MASK, \scratch_reg +.Lend_\@: +.endm + .macro SAVE_ALL pt_regs_ax=%eax switch_stacks=0 cld PUSH_GS @@ -283,7 +312,6 @@ #endif /* CONFIG_X86_ESPFIX32 */ .endm - /* * Called with pt_regs fully populated and kernel segments loaded, * so we can access PER_CPU and use the integer registers. @@ -296,11 +324,19 @@ */ #define CS_FROM_ENTRY_STACK (1 << 31) +#define CS_FROM_USER_CR3 (1 << 30) .macro SWITCH_TO_KERNEL_STACK ALTERNATIVE "", "jmp .Lend_\@", X86_FEATURE_XENPV + SWITCH_TO_KERNEL_CR3 scratch_reg=%eax + + /* + * %eax now contains the entry cr3 and we carry it forward in + * that register for the time this macro runs + */ + /* Are we on the entry stack? Bail out if not! */ movl PER_CPU_VAR(cpu_entry_area), %ecx addl $CPU_ENTRY_AREA_entry_stack + SIZEOF_entry_stack, %ecx @@ -370,7 +406,8 @@ * but switch back to the entry-stack again when we approach * iret and return to the interrupted code-path. This usually * happens when we hit an exception while restoring user-space - * segment registers on the way back to user-space. + * segment registers on the way back to user-space or when the + * sysenter handler runs with eflags.tf set. * * When we switch to the task-stack here, we can't trust the * contents of the entry-stack anymore, as the exception handler @@ -387,6 +424,7 @@ * * %esi: Entry-Stack pointer (same as %esp) * %edi: Top of the task stack + * %eax: CR3 on kernel entry */ /* Calculate number of bytes on the entry stack in %ecx */ @@ -403,6 +441,14 @@ orl $CS_FROM_ENTRY_STACK, PT_CS(%esp) /* + * Test the cr3 used to enter the kernel and add a marker + * so that we can switch back to it before iret. 
diff --git a/arch/x86/entry/entry_32.S b/arch/x86/entry/entry_32.S
index dbf7d61..60b28df 100644
--- a/arch/x86/entry/entry_32.S
+++ b/arch/x86/entry/entry_32.S
@@ -77,6 +77,8 @@
 #endif
 .endm
 
+#define PTI_SWITCH_MASK	(1 << PAGE_SHIFT)
+
 /*
  * User gs save/restore
  *
@@ -154,6 +156,33 @@
 
 #endif /* CONFIG_X86_32_LAZY_GS */
 
+/* Unconditionally switch to user cr3 */
+.macro SWITCH_TO_USER_CR3 scratch_reg:req
+	ALTERNATIVE "jmp .Lend_\@", "", X86_FEATURE_PTI
+
+	movl	%cr3, \scratch_reg
+	orl	$PTI_SWITCH_MASK, \scratch_reg
+	movl	\scratch_reg, %cr3
+.Lend_\@:
+.endm
+
+/*
+ * Switch to kernel cr3 if not already loaded and return current cr3 in
+ * \scratch_reg
+ */
+.macro SWITCH_TO_KERNEL_CR3 scratch_reg:req
+	ALTERNATIVE "jmp .Lend_\@", "", X86_FEATURE_PTI
+	movl	%cr3, \scratch_reg
+	/* Test if we are already on kernel CR3 */
+	testl	$PTI_SWITCH_MASK, \scratch_reg
+	jz	.Lend_\@
+	andl	$(~PTI_SWITCH_MASK), \scratch_reg
+	movl	\scratch_reg, %cr3
+	/* Return original CR3 in \scratch_reg */
+	orl	$PTI_SWITCH_MASK, \scratch_reg
+.Lend_\@:
+.endm
+
 .macro SAVE_ALL pt_regs_ax=%eax switch_stacks=0
 	cld
 	PUSH_GS
@@ -283,7 +312,6 @@
 #endif /* CONFIG_X86_ESPFIX32 */
 .endm
 
-
 /*
  * Called with pt_regs fully populated and kernel segments loaded,
  * so we can access PER_CPU and use the integer registers.
@@ -296,11 +324,19 @@
  */
 #define CS_FROM_ENTRY_STACK	(1 << 31)
+#define CS_FROM_USER_CR3	(1 << 30)
 
 .macro SWITCH_TO_KERNEL_STACK
 
 	ALTERNATIVE "", "jmp .Lend_\@", X86_FEATURE_XENPV
 
+	SWITCH_TO_KERNEL_CR3 scratch_reg=%eax
+
+	/*
+	 * %eax now contains the entry cr3 and we carry it forward in
+	 * that register for the time this macro runs
+	 */
+
 	/* Are we on the entry stack? Bail out if not! */
 	movl	PER_CPU_VAR(cpu_entry_area), %ecx
 	addl	$CPU_ENTRY_AREA_entry_stack + SIZEOF_entry_stack, %ecx
@@ -370,7 +406,8 @@
 	 * but switch back to the entry-stack again when we approach
 	 * iret and return to the interrupted code-path. This usually
 	 * happens when we hit an exception while restoring user-space
-	 * segment registers on the way back to user-space.
+	 * segment registers on the way back to user-space or when the
+	 * sysenter handler runs with eflags.tf set.
 	 *
 	 * When we switch to the task-stack here, we can't trust the
 	 * contents of the entry-stack anymore, as the exception handler
@@ -387,6 +424,7 @@
 	 *
 	 * %esi: Entry-Stack pointer (same as %esp)
 	 * %edi: Top of the task stack
+	 * %eax: CR3 on kernel entry
 	 */
 
 	/* Calculate number of bytes on the entry stack in %ecx */
@@ -403,6 +441,14 @@
 	orl	$CS_FROM_ENTRY_STACK, PT_CS(%esp)
 
 	/*
+	 * Test the cr3 used to enter the kernel and add a marker
+	 * so that we can switch back to it before iret.
+	 */
+	testl	$PTI_SWITCH_MASK, %eax
+	jz	.Lcopy_pt_regs_\@
+	orl	$CS_FROM_USER_CR3, PT_CS(%esp)
+
+	/*
 	 * %esi and %edi are unchanged, %ecx contains the number of
 	 * bytes to copy. The code at .Lcopy_pt_regs_\@ will allocate
 	 * the stack-frame on task-stack and copy everything over
@@ -468,7 +514,7 @@
 
 /*
  * This macro handles the case when we return to kernel-mode on the iret
- * path and have to switch back to the entry stack.
+ * path and have to switch back to the entry stack and/or user-cr3
  *
  * See the comments below the .Lentry_from_kernel_\@ label in the
  * SWITCH_TO_KERNEL_STACK macro for more details.
@@ -514,6 +560,18 @@
 	/* Safe to switch to entry-stack now */
 	movl	%ebx, %esp
 
+	/*
+	 * We came from entry-stack and need to check if we also need to
+	 * switch back to user cr3.
+	 */
+	testl	$CS_FROM_USER_CR3, PT_CS(%esp)
+	jz	.Lend_\@
+
+	/* Clear marker from stack-frame */
+	andl	$(~CS_FROM_USER_CR3), PT_CS(%esp)
+
+	SWITCH_TO_USER_CR3 scratch_reg=%eax
+
 .Lend_\@:
 .endm
 /*
@@ -707,7 +765,20 @@ ENTRY(xen_sysenter_target)
  * 0(%ebp) arg6
  */
 ENTRY(entry_SYSENTER_32)
+	/*
+	 * On entry-stack with all userspace-regs live - save and
+	 * restore eflags and %eax to use it as scratch-reg for the cr3
+	 * switch.
+	 */
+	pushfl
+	pushl	%eax
+	SWITCH_TO_KERNEL_CR3 scratch_reg=%eax
+	popl	%eax
+	popfl
+
+	/* Stack empty again, switch to task stack */
 	movl	TSS_entry2task_stack(%esp), %esp
+
 .Lsysenter_past_esp:
 	pushl	$__USER_DS		/* pt_regs->ss */
 	pushl	%ebp			/* pt_regs->sp (stashed in bp) */
@@ -786,6 +857,9 @@ ENTRY(entry_SYSENTER_32)
 	/* Switch to entry stack */
 	movl	%eax, %esp
 
+	/* Now ready to switch the cr3 */
+	SWITCH_TO_USER_CR3 scratch_reg=%eax
+
 	/*
 	 * Restore all flags except IF. (We restore IF separately because
 	 * STI gives a one-instruction window in which we won't be interrupted,
@@ -866,7 +940,11 @@ restore_all:
 .Lrestore_all_notrace:
 	CHECK_AND_APPLY_ESPFIX
 .Lrestore_nocheck:
-	RESTORE_REGS 4			# skip orig_eax/error_code
+	/* Switch back to user CR3 */
+	SWITCH_TO_USER_CR3 scratch_reg=%eax
+
+	/* Restore user state */
+	RESTORE_REGS pop=4		# skip orig_eax/error_code
 .Lirq_return:
 	/*
 	 * ARCH_HAS_MEMBARRIER_SYNC_CORE rely on IRET core serialization
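One more illustrative sketch, again not kernel code: the saved CS slot
in pt_regs is 32 bits wide but a CS selector only occupies the low 16
bits, so the patch parks CS_FROM_USER_CR3 in bit 30 (next to the
existing CS_FROM_ENTRY_STACK in bit 31). The iret path then tests and
clears the marker, like the testl/andl pair in the macro above. The
helper name is made up and 0x73 merely stands in for a plausible
32-bit user code-segment selector:

	#include <stdbool.h>
	#include <stdint.h>
	#include <stdio.h>

	#define CS_FROM_ENTRY_STACK	(1U << 31)	/* returning to entry stack */
	#define CS_FROM_USER_CR3	(1U << 30)	/* entered on user cr3 */

	/* Mirrors the testl/andl pair: check the marker, then strip it. */
	static bool test_and_clear_user_cr3_marker(uint32_t *saved_cs)
	{
		bool from_user_cr3 = *saved_cs & CS_FROM_USER_CR3;

		*saved_cs &= ~CS_FROM_USER_CR3;	/* leave a clean selector for iret */
		return from_user_cr3;
	}

	int main(void)
	{
		uint32_t cs = 0x73 | CS_FROM_USER_CR3;	/* user CS + marker */

		if (test_and_clear_user_cr3_marker(&cs))
			printf("would SWITCH_TO_USER_CR3; cs is now %#x\n", cs);
		return 0;
	}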