From patchwork Wed Jul 11 11:29:46 2018
X-Patchwork-Submitter: Joerg Roedel
X-Patchwork-Id: 10519529
From: Joerg Roedel <joro@8bytes.org>
To: Thomas Gleixner, Ingo Molnar, "H. Peter Anvin"
Cc: x86@kernel.org, linux-kernel@vger.kernel.org, linux-mm@kvack.org,
    Linus Torvalds, Andy Lutomirski, Dave Hansen, Josh Poimboeuf,
    Juergen Gross, Peter Zijlstra, Borislav Petkov, Jiri Kosina,
    Boris Ostrovsky, Brian Gerst, David Laight, Denys Vlasenko,
    Eduardo Valentin, Greg KH, Will Deacon, aliguori@amazon.com,
    daniel.gruss@iaik.tugraz.at, hughd@google.com, keescook@google.com,
    Andrea Arcangeli, Waiman Long, Pavel Machek, "David H. Gutteridge",
    jroedel@suse.de, joro@8bytes.org
Subject: [PATCH 39/39] x86/entry/32: Add debug code to check entry/exit cr3
Date: Wed, 11 Jul 2018 13:29:46 +0200
Message-Id: <1531308586-29340-40-git-send-email-joro@8bytes.org>
X-Mailer: git-send-email 2.7.4
In-Reply-To: <1531308586-29340-1-git-send-email-joro@8bytes.org>
References: <1531308586-29340-1-git-send-email-joro@8bytes.org>

From: Joerg Roedel

Add a config option that enables code to check that the kernel is
entered and left with the correct cr3. This is needed because there is
no NX protection of user-addresses in the kernel-cr3 on x86-32, so this
type of bug would otherwise go unnoticed.

Signed-off-by: Joerg Roedel
---
 arch/x86/Kconfig.debug    | 12 ++++++++++++
 arch/x86/entry/entry_32.S | 43 +++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 55 insertions(+)

diff --git a/arch/x86/Kconfig.debug b/arch/x86/Kconfig.debug
index c6dd1d9..6eaca2d 100644
--- a/arch/x86/Kconfig.debug
+++ b/arch/x86/Kconfig.debug
@@ -340,6 +340,18 @@ config X86_DEBUG_FPU
 
 	  If unsure, say N.
 
+config X86_DEBUG_ENTRY_CR3
+	bool "Debug CR3 for Kernel entry/exit"
+	depends on X86_32 && PAGE_TABLE_ISOLATION
+	help
+	  Add instructions to the x86-32 entry code to check whether the kernel
+	  is entered and left with the correct CR3. When PTI is enabled, this
+	  checks whether we enter the kernel with the user-space cr3 when
+	  coming from user-mode and if we leave with user-cr3 back to
+	  user-space.
+
+	  If unsure, say N.
+
 config PUNIT_ATOM_DEBUG
 	tristate "ATOM Punit debug driver"
 	depends on PCI
diff --git a/arch/x86/entry/entry_32.S b/arch/x86/entry/entry_32.S
index a368583..d8d9a54 100644
--- a/arch/x86/entry/entry_32.S
+++ b/arch/x86/entry/entry_32.S
@@ -166,6 +166,24 @@
 .Lend_\@:
 .endm
 
+.macro BUG_IF_WRONG_CR3 no_user_check=0
+#ifdef CONFIG_X86_DEBUG_ENTRY_CR3
+	ALTERNATIVE "jmp .Lend_\@", "", X86_FEATURE_PTI
+	.if \no_user_check == 0
+	/* coming from usermode? */
+	testl	$SEGMENT_RPL_MASK, PT_CS(%esp)
+	jz	.Lend_\@
+	.endif
+	/* On user-cr3? */
+	movl	%cr3, %eax
+	testl	$PTI_SWITCH_MASK, %eax
+	jnz	.Lend_\@
+	/* From userspace with kernel cr3 - BUG */
+	ud2
+.Lend_\@:
+#endif
+.endm
+
 /*
  * Switch to kernel cr3 if not already loaded and return current cr3 in
  * \scratch_reg
@@ -218,6 +236,8 @@
 .macro SAVE_ALL_NMI cr3_reg:req
 	SAVE_ALL
 
+	BUG_IF_WRONG_CR3
+
 	/*
 	 * Now switch the CR3 when PTI is enabled.
 	 *
@@ -229,6 +249,7 @@
 .Lend_\@:
 .endm
 
+
 /*
  * This is a sneaky trick to help the unwinder find pt_regs on the stack. The
  * frame pointer is replaced with an encoded pointer to pt_regs. The encoding
@@ -292,6 +313,8 @@
 
 .Lswitched_\@:
 
+	BUG_IF_WRONG_CR3
+
 	RESTORE_REGS pop=\pop
 .endm
 
@@ -362,6 +385,8 @@
 
 	ALTERNATIVE	"", "jmp .Lend_\@", X86_FEATURE_XENPV
 
+	BUG_IF_WRONG_CR3
+
 	SWITCH_TO_KERNEL_CR3 scratch_reg=%eax
 
 	/*
@@ -803,6 +828,7 @@ ENTRY(entry_SYSENTER_32)
 	 */
 	pushfl
 	pushl	%eax
+	BUG_IF_WRONG_CR3 no_user_check=1
 	SWITCH_TO_KERNEL_CR3 scratch_reg=%eax
 	popl	%eax
 	popfl
@@ -897,6 +923,7 @@ ENTRY(entry_SYSENTER_32)
 	 * whereas POPF does not.)
 	 */
 	btrl	$X86_EFLAGS_IF_BIT, (%esp)
+	BUG_IF_WRONG_CR3 no_user_check=1
 	popfl
 	popl	%eax
 
@@ -974,6 +1001,8 @@ restore_all:
 	/* Switch back to user CR3 */
 	SWITCH_TO_USER_CR3 scratch_reg=%eax
 
+	BUG_IF_WRONG_CR3
+
 	/* Restore user state */
 	RESTORE_REGS pop=4			# skip orig_eax/error_code
.Lirq_return:
@@ -987,6 +1016,7 @@ restore_all:
 restore_all_kernel:
 	TRACE_IRQS_IRET
 	PARANOID_EXIT_TO_KERNEL_MODE
+	BUG_IF_WRONG_CR3
 	RESTORE_REGS 4
 	jmp	.Lirq_return
 
@@ -994,6 +1024,19 @@ restore_all_kernel:
 ENTRY(iret_exc	)
 	pushl	$0				# no error code
 	pushl	$do_iret_error
+
+#ifdef CONFIG_X86_DEBUG_ENTRY_CR3
+	/*
+	 * The stack-frame here is the one that iret faulted on, so it's a
+	 * return-to-user frame. We are on kernel-cr3 because we come here from
+	 * the fixup code. This confuses the CR3 checker, so switch to user-cr3
+	 * as the checker expects it.
+	 */
+	pushl	%eax
+	SWITCH_TO_USER_CR3 scratch_reg=%eax
+	popl	%eax
+#endif
+
 	jmp	common_exception
 .previous
 	_ASM_EXTABLE(.Lirq_return, iret_exc)