From patchwork Tue Jun 21 16:09:13 2016
From: David Vrabel <david.vrabel@citrix.com>
To: xen-devel@lists.xenproject.org
Date: Tue, 21 Jun 2016 17:09:13 +0100
Message-ID: <1466525353-27751-1-git-send-email-david.vrabel@citrix.com>
Cc: Juergen Gross, Boris Ostrovsky, David Vrabel
Subject: [Xen-devel] [PATCHv2] x86/xen: avoid m2p lookup when setting early page table entries
List-Id: Xen developer discussion

When page table entries are set using xen_set_pte_init() during early
boot there is no page fault
handler that could handle a fault when performing an M2P lookup.

In 64-bit guests (usually dom0), early_ioremap() would fault in
xen_set_pte_init() because the M2P lookup itself faults: the MFN is in
MMIO space and thus not mapped in the M2P.  This lookup is done to see
if the PFN is in the range used for the initial page table pages, so
that the PTE may be set as read-only.

The M2P lookup can be avoided by moving the check (and the clearing of
RW) earlier, when the PFN is still available.

Signed-off-by: David Vrabel
Tested-by: Kevin Moraga
---
v2:
- Remove __init annotation from xen_make_pte_init() since
  PV_CALLEE_SAVE_REGS_THUNK always puts the thunk in .text.
- mask_rw_pte() -> mask_rw_pteval() for x86-64.
---
 arch/x86/xen/mmu.c | 28 +++++++++++++++++++++-------
 1 file changed, 21 insertions(+), 7 deletions(-)

diff --git a/arch/x86/xen/mmu.c b/arch/x86/xen/mmu.c
index 478a2de..e47bc19 100644
--- a/arch/x86/xen/mmu.c
+++ b/arch/x86/xen/mmu.c
@@ -1562,7 +1562,7 @@ static pte_t __init mask_rw_pte(pte_t *ptep, pte_t pte)
 	return pte;
 }
 #else /* CONFIG_X86_64 */
-static pte_t __init mask_rw_pte(pte_t *ptep, pte_t pte)
+static pteval_t __init mask_rw_pte(pteval_t pte)
 {
 	unsigned long pfn;
 
@@ -1577,10 +1577,10 @@ static pte_t __init mask_rw_pte(pte_t *ptep, pte_t pte)
 	 * page tables for mapping the p2m list, too, and page tables MUST be
 	 * mapped read-only.
 	 */
-	pfn = pte_pfn(pte);
+	pfn = (pte & PTE_PFN_MASK) >> PAGE_SHIFT;
 	if (pfn >= xen_start_info->first_p2m_pfn &&
 	    pfn < xen_start_info->first_p2m_pfn + xen_start_info->nr_p2m_frames)
-		pte = __pte_ma(pte_val_ma(pte) & ~_PAGE_RW);
+		pte &= ~_PAGE_RW;
 
 	return pte;
 }
@@ -1600,13 +1600,26 @@ static pte_t __init mask_rw_pte(pte_t *ptep, pte_t pte)
 	 * so always write the PTE directly and rely on Xen trapping and
 	 * emulating any updates as necessary.
 	 */
+__visible pte_t xen_make_pte_init(pteval_t pte)
+{
+#ifdef CONFIG_X86_64
+	pte = mask_rw_pte(pte);
+#endif
+	pte = pte_pfn_to_mfn(pte);
+
+	if ((pte & PTE_PFN_MASK) >> PAGE_SHIFT == INVALID_P2M_ENTRY)
+		pte = 0;
+
+	return native_make_pte(pte);
+}
+PV_CALLEE_SAVE_REGS_THUNK(xen_make_pte_init);
+
 static void __init xen_set_pte_init(pte_t *ptep, pte_t pte)
 {
+#ifdef CONFIG_X86_32
 	if (pte_mfn(pte) != INVALID_P2M_ENTRY)
 		pte = mask_rw_pte(ptep, pte);
-	else
-		pte = __pte_ma(0);
-
+#endif
 	native_set_pte(ptep, pte);
 }
 
@@ -2407,6 +2420,7 @@ static void __init xen_post_allocator_init(void)
 	pv_mmu_ops.alloc_pud = xen_alloc_pud;
 	pv_mmu_ops.release_pud = xen_release_pud;
 #endif
+	pv_mmu_ops.make_pte = PV_CALLEE_SAVE(xen_make_pte);
 
 #ifdef CONFIG_X86_64
 	pv_mmu_ops.write_cr3 = &xen_write_cr3;
@@ -2455,7 +2469,7 @@ static const struct pv_mmu_ops xen_mmu_ops __initconst = {
 	.pte_val = PV_CALLEE_SAVE(xen_pte_val),
 	.pgd_val = PV_CALLEE_SAVE(xen_pgd_val),
 
-	.make_pte = PV_CALLEE_SAVE(xen_make_pte),
+	.make_pte = PV_CALLEE_SAVE(xen_make_pte_init),
 	.make_pgd = PV_CALLEE_SAVE(xen_make_pgd),
 
 #ifdef CONFIG_X86_PAE
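The reordering the patch describes can be illustrated outside the kernel.
Below is a minimal, self-contained C sketch with toy p2m/m2p tables and
hypothetical helper names (ro_check_old, ro_check_new -- none of these are
kernel symbols), showing why testing the PFN before the pfn-to-mfn
conversion avoids the M2P lookup that has no entry for MMIO frames:

```c
#include <stdint.h>

/* Toy model (hypothetical values): an 8-frame guest. */
#define NFRAMES  8
#define MMIO_MFN 100u
#define INVALID  (~0u)

/* p2m: guest pfn -> machine mfn; pfn 5 maps an MMIO mfn. */
static const uint32_t p2m[NFRAMES] = { 10, 11, 12, 13, 14, MMIO_MFN, 16, 17 };

/* m2p: machine mfn -> pfn.  The MMIO mfn is deliberately absent,
 * mirroring the missing M2P entry the patch description talks about. */
static uint32_t m2p_lookup(uint32_t mfn)
{
    for (uint32_t pfn = 0; pfn < NFRAMES; pfn++)
        if (p2m[pfn] == mfn && mfn != MMIO_MFN)
            return pfn;
    return INVALID;            /* stands in for the faulting access */
}

/* Old order: convert to an mfn first, then use the m2p to recover the
 * pfn for the read-only range check -- fails for MMIO mfns. */
static int ro_check_old(uint32_t pfn, uint32_t ro_start, uint32_t ro_end)
{
    uint32_t mfn  = p2m[pfn];
    uint32_t back = m2p_lookup(mfn);
    if (back == INVALID)
        return -1;             /* "fault": no m2p entry for this mfn */
    return back >= ro_start && back < ro_end;
}

/* New order (the idea behind the patch): test the pfn while it is
 * still available, before the pfn->mfn conversion. */
static int ro_check_new(uint32_t pfn, uint32_t ro_start, uint32_t ro_end)
{
    int ro = pfn >= ro_start && pfn < ro_end;
    (void)p2m[pfn];            /* conversion happens after the check */
    return ro;
}
```

With a read-only range of pfns [0, 4), the old ordering cannot classify
pfn 5 at all (its mfn has no m2p entry), while the new ordering answers
directly from the pfn and never consults the m2p.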