From patchwork Tue Nov 13 13:07:23 2018
X-Patchwork-Submitter: Nadav Amit
X-Patchwork-Id: 10681479
From: Nadav Amit <namit@vmware.com>
To: Ingo Molnar
Cc: "H. Peter Anvin", Thomas Gleixner, Borislav Petkov, Dave Hansen,
    Peter Zijlstra, Andy Lutomirski, Kees Cook, Nadav Amit
Subject: [PATCH v5 03/10] x86/mm: temporary mm struct
Date: Tue, 13 Nov 2018 05:07:23 -0800
Message-ID: <20181113130730.44844-4-namit@vmware.com>
In-Reply-To: <20181113130730.44844-1-namit@vmware.com>
References: <20181113130730.44844-1-namit@vmware.com>

From: Andy Lutomirski

Sometimes we want to set temporary page-table entries (PTEs) on one of
the cores, without allowing other cores to use these mappings, even
speculatively. There are two benefits of doing so:

(1) Security: if sensitive PTEs are set, a temporary mm prevents their
    use on other cores. This hardens security, as it prevents exploiting
    a dangling pointer to overwrite sensitive data using the sensitive
    PTE.

(2) Avoiding TLB shootdowns: the PTEs do not need to be flushed from
    remote page-tables.

To achieve this, a temporary mm_struct can be used. Mappings which are
private to this mm can be set in the userspace part of the
address-space. Interrupts must be disabled for the whole time in which
the temporary mm is loaded.

The first use-case for temporary PTEs, which will follow, is poking the
kernel text.
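For illustration, the intended usage pattern is sketched below. This is
only a sketch: poking_mm, poking_addr, opcode and len are hypothetical
placeholders for text-poking state that a later patch in this series
introduces; they are not part of this patch.

	temporary_mm_state_t prev;
	unsigned long flags;

	/* IRQs must stay disabled while the temporary mm is loaded. */
	local_irq_save(flags);
	prev = use_temporary_mm(poking_mm);

	/* The mapping is visible only on this core; write through it. */
	memcpy((void *)poking_addr, opcode, len);

	unuse_temporary_mm(prev);
	local_irq_restore(flags);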
[ Commit message was written by Nadav ]

Cc: Kees Cook
Cc: Peter Zijlstra
Cc: Dave Hansen
Reviewed-by: Masami Hiramatsu
Tested-by: Masami Hiramatsu
Signed-off-by: Andy Lutomirski
Signed-off-by: Nadav Amit
---
 arch/x86/include/asm/mmu_context.h | 32 ++++++++++++++++++++++++++++++
 1 file changed, 32 insertions(+)

diff --git a/arch/x86/include/asm/mmu_context.h b/arch/x86/include/asm/mmu_context.h
index 0ca50611e8ce..0141b7fa6d01 100644
--- a/arch/x86/include/asm/mmu_context.h
+++ b/arch/x86/include/asm/mmu_context.h
@@ -338,4 +338,36 @@ static inline unsigned long __get_current_cr3_fast(void)
 	return cr3;
 }
 
+typedef struct {
+	struct mm_struct *prev;
+} temporary_mm_state_t;
+
+/*
+ * Using a temporary mm allows setting temporary mappings that are not
+ * accessible by other cores. Such mappings are needed to perform sensitive
+ * memory writes that override the kernel memory protections (e.g., W^X),
+ * without exposing the temporary page-table mappings that are required for
+ * these write operations to other cores.
+ *
+ * Context: The temporary mm needs to be used exclusively by a single core.
+ *          To harden security, IRQs must be disabled while the temporary mm
+ *          is loaded, thereby preventing interrupt handler bugs from
+ *          overriding the kernel memory protections.
+ */
+static inline temporary_mm_state_t use_temporary_mm(struct mm_struct *mm)
+{
+	temporary_mm_state_t state;
+
+	lockdep_assert_irqs_disabled();
+	state.prev = this_cpu_read(cpu_tlbstate.loaded_mm);
+	switch_mm_irqs_off(NULL, mm, current);
+	return state;
+}
+
+static inline void unuse_temporary_mm(temporary_mm_state_t prev)
+{
+	lockdep_assert_irqs_disabled();
+	switch_mm_irqs_off(NULL, prev.prev, current);
+}
+
 #endif /* _ASM_X86_MMU_CONTEXT_H */
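A note on the implementation: use_temporary_mm() snapshots
cpu_tlbstate.loaded_mm rather than current->mm, since in lazy-TLB mode
the mm that is actually loaded on the core can differ from the current
task's mm.

As a rough sketch of how a caller might provision such a temporary mm
and install a private PTE in its userspace range, see below. This is a
sketch under stated assumptions, not part of this patch: it relies on
the generic mm helpers mm_alloc() and get_locked_pte(), and vaddr and
page are hypothetical inputs; the concrete setup for text poking lands
in a later patch of this series.

	struct mm_struct *tmp_mm = mm_alloc();	/* empty user address space */
	spinlock_t *ptl;
	pte_t *ptep;

	/* Map 'page' at a userspace address private to tmp_mm. */
	ptep = get_locked_pte(tmp_mm, vaddr, &ptl);
	set_pte_at(tmp_mm, vaddr, ptep, mk_pte(page, PAGE_KERNEL));
	pte_unmap_unlock(ptep, ptl);

Since the PTE lives only in tmp_mm's page tables, tearing it down does
not require a TLB shootdown on other cores; a local flush after
unuse_temporary_mm() suffices.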