From patchwork Wed Jul 11 11:29:42 2018
X-Patchwork-Submitter: Joerg Roedel
X-Patchwork-Id: 10519523
From: Joerg Roedel
To: Thomas Gleixner, Ingo Molnar, "H. Peter Anvin"
Cc: x86@kernel.org, linux-kernel@vger.kernel.org, linux-mm@kvack.org,
    Linus Torvalds, Andy Lutomirski, Dave Hansen, Josh Poimboeuf,
    Juergen Gross, Peter Zijlstra, Borislav Petkov, Jiri Kosina,
    Boris Ostrovsky, Brian Gerst, David Laight, Denys Vlasenko,
    Eduardo Valentin, Greg KH, Will Deacon, aliguori@amazon.com,
    daniel.gruss@iaik.tugraz.at, hughd@google.com, keescook@google.com,
    Andrea Arcangeli, Waiman Long, Pavel Machek,
Gutteridge" , jroedel@suse.de, joro@8bytes.org Subject: [PATCH 35/39] x86/ldt: Split out sanity check in map_ldt_struct() Date: Wed, 11 Jul 2018 13:29:42 +0200 Message-Id: <1531308586-29340-36-git-send-email-joro@8bytes.org> X-Mailer: git-send-email 2.7.4 In-Reply-To: <1531308586-29340-1-git-send-email-joro@8bytes.org> References: <1531308586-29340-1-git-send-email-joro@8bytes.org> X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: X-Virus-Scanned: ClamAV using ClamSMTP From: Joerg Roedel This splits out the mapping sanity check and the actual mapping of the LDT to user-space from the map_ldt_struct() function in a way so that it is re-usable for PAE paging. Signed-off-by: Joerg Roedel Reviewed-by: Andy Lutomirski --- arch/x86/kernel/ldt.c | 82 ++++++++++++++++++++++++++++++++++++--------------- 1 file changed, 58 insertions(+), 24 deletions(-) diff --git a/arch/x86/kernel/ldt.c b/arch/x86/kernel/ldt.c index e921b3d..69af9a0 100644 --- a/arch/x86/kernel/ldt.c +++ b/arch/x86/kernel/ldt.c @@ -100,6 +100,49 @@ static struct ldt_struct *alloc_ldt_struct(unsigned int num_entries) return new_ldt; } +#ifdef CONFIG_PAGE_TABLE_ISOLATION + +static void do_sanity_check(struct mm_struct *mm, + bool had_kernel_mapping, + bool had_user_mapping) +{ + if (mm->context.ldt) { + /* + * We already had an LDT. The top-level entry should already + * have been allocated and synchronized with the usermode + * tables. + */ + WARN_ON(!had_kernel_mapping); + if (static_cpu_has(X86_FEATURE_PTI)) + WARN_ON(!had_user_mapping); + } else { + /* + * This is the first time we're mapping an LDT for this process. + * Sync the pgd to the usermode tables. + */ + WARN_ON(had_kernel_mapping); + if (static_cpu_has(X86_FEATURE_PTI)) + WARN_ON(had_user_mapping); + } +} + +static void map_ldt_struct_to_user(struct mm_struct *mm) +{ + pgd_t *pgd = pgd_offset(mm, LDT_BASE_ADDR); + + if (static_cpu_has(X86_FEATURE_PTI) && !mm->context.ldt) + set_pgd(kernel_to_user_pgdp(pgd), *pgd); +} + +static void sanity_check_ldt_mapping(struct mm_struct *mm) +{ + pgd_t *pgd = pgd_offset(mm, LDT_BASE_ADDR); + bool had_kernel = (pgd->pgd != 0); + bool had_user = (kernel_to_user_pgdp(pgd)->pgd != 0); + + do_sanity_check(mm, had_kernel, had_user); +} + /* * If PTI is enabled, this maps the LDT into the kernelmode and * usermode tables for the given mm. @@ -115,9 +158,8 @@ static struct ldt_struct *alloc_ldt_struct(unsigned int num_entries) static int map_ldt_struct(struct mm_struct *mm, struct ldt_struct *ldt, int slot) { -#ifdef CONFIG_PAGE_TABLE_ISOLATION - bool is_vmalloc, had_top_level_entry; unsigned long va; + bool is_vmalloc; spinlock_t *ptl; pgd_t *pgd; int i; @@ -131,13 +173,15 @@ map_ldt_struct(struct mm_struct *mm, struct ldt_struct *ldt, int slot) */ WARN_ON(ldt->slot != -1); + /* Check if the current mappings are sane */ + sanity_check_ldt_mapping(mm); + /* * Did we already have the top level entry allocated? We can't * use pgd_none() for this because it doens't do anything on * 4-level page table kernels. */ pgd = pgd_offset(mm, LDT_BASE_ADDR); - had_top_level_entry = (pgd->pgd != 0); is_vmalloc = is_vmalloc_addr(ldt->entries); @@ -172,35 +216,25 @@ map_ldt_struct(struct mm_struct *mm, struct ldt_struct *ldt, int slot) pte_unmap_unlock(ptep, ptl); } - if (mm->context.ldt) { - /* - * We already had an LDT. The top-level entry should already - * have been allocated and synchronized with the usermode - * tables. 
-		 */
-		WARN_ON(!had_top_level_entry);
-		if (static_cpu_has(X86_FEATURE_PTI))
-			WARN_ON(!kernel_to_user_pgdp(pgd)->pgd);
-	} else {
-		/*
-		 * This is the first time we're mapping an LDT for this process.
-		 * Sync the pgd to the usermode tables.
-		 */
-		WARN_ON(had_top_level_entry);
-		if (static_cpu_has(X86_FEATURE_PTI)) {
-			WARN_ON(kernel_to_user_pgdp(pgd)->pgd);
-			set_pgd(kernel_to_user_pgdp(pgd), *pgd);
-		}
-	}
+	/* Propagate LDT mapping to the user page-table */
+	map_ldt_struct_to_user(mm);
 
 	va = (unsigned long)ldt_slot_va(slot);
 	flush_tlb_mm_range(mm, va, va + LDT_SLOT_STRIDE, 0);
 
 	ldt->slot = slot;
-#endif
 	return 0;
 }
 
+#else /* !CONFIG_PAGE_TABLE_ISOLATION */
+
+static int
+map_ldt_struct(struct mm_struct *mm, struct ldt_struct *ldt, int slot)
+{
+	return 0;
+}
+#endif /* CONFIG_PAGE_TABLE_ISOLATION */
+
 static void free_ldt_pgtables(struct mm_struct *mm)
 {
 #ifdef CONFIG_PAGE_TABLE_ISOLATION
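
For readers outside the x86 tree: the map_ldt_struct() path touched above runs
whenever a task installs an LDT entry through the modify_ldt(2) syscall
(write_ldt() allocates the ldt_struct and then maps it). A minimal user-space
sketch that exercises this path follows; it is not part of the patch, and the
descriptor values are illustrative only.

#include <asm/ldt.h>        /* struct user_desc */
#include <sys/syscall.h>    /* SYS_modify_ldt */
#include <unistd.h>
#include <string.h>
#include <stdio.h>

int main(void)
{
	struct user_desc desc;

	/* Illustrative flat 32-bit data segment in LDT slot 0. */
	memset(&desc, 0, sizeof(desc));
	desc.entry_number   = 0;
	desc.base_addr      = 0;
	desc.limit          = 0xfffff;
	desc.seg_32bit      = 1;
	desc.limit_in_pages = 1;
	desc.useable        = 1;

	/*
	 * func 0x11 writes an LDT entry; there is no glibc wrapper, so use
	 * syscall(2) directly. With PTI enabled this is what ends up calling
	 * map_ldt_struct() in the kernel.
	 */
	if (syscall(SYS_modify_ldt, 0x11, &desc, sizeof(desc)) != 0) {
		perror("modify_ldt");
		return 1;
	}

	printf("LDT entry installed\n");
	return 0;
}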