From patchwork Tue May 17 15:11:00 2016
X-Patchwork-Submitter: David Vrabel
X-Patchwork-Id: 9113691
To: David Vrabel , Juergen Gross , Jan Beulich
References: <572FC2F9.3060200@riseup.net> <57307DD402000078000E9622@prv-mh.provo.novell.com> <5730A447.3010505@riseup.net> <5730CE7E02000078000E9A8A@prv-mh.provo.novell.com> <5730BD86.6080407@riseup.net> <5730C5A4.9040008@oracle.com> <5730C73E.8030007@riseup.net> <5730D994.6050100@oracle.com> <5731A87502000078000E9D8C@prv-mh.provo.novell.com> <5731E4A5.30906@oracle.com> <5732050602000078000EA230@prv-mh.provo.novell.com> <5731FBEA.4010202@suse.com> <57321BFA02000078000EA3C2@suse.com> <573201AE.1010306@suse.com> <57320DD3.3030805@oracle.com> <5732C7D0.4050907@suse.com> <5732EEBF02000078000EA613@suse.com> <5732D893.7000503@suse.com> <5733068D.5050604@citrix.com>
From: David Vrabel
Message-ID: <573B3484.1080603@citrix.com>
Date: Tue, 17 May 2016 16:11:00 +0100
MIME-Version: 1.0
In-Reply-To: <5733068D.5050604@citrix.com>
Cc: Kevin Moraga , Boris Ostrovsky , xen-devel@lists.xen.org
Subject: Re: [Xen-devel] crash on boot with 4.6.1 on fedora 24
List-Id: Xen developer discussion

On 11/05/16 11:16, David Vrabel wrote:
>
> Why don't we get the RW bits correct when making the pteval when we
> already have the pfn, instead of trying to fix it up afterwards?

Kevin, can you try this patch.

David

8<-----------------

x86/xen: avoid m2p lookup when setting early page table entries

When page table entries are set using xen_set_pte_init() during early
boot, there is no page fault handler that could handle a fault caused
by an M2P lookup.

In a 64-bit guest (usually dom0), early_ioremap() would fault in
xen_set_pte_init() because the M2P lookup faults: the MFN is in MMIO
space and not mapped in the M2P.  This lookup is done to see if the PFN
is in the range used for the initial page table pages, so that the PTE
may be set as read-only.

The M2P lookup can be avoided by moving the check (and the clearing of
RW) earlier, while the PFN is still available.

[ Not entirely happy with this as the 32/64 bit paths diverge even
  more.  Is there some way to unify them instead? ]

Signed-off-by: David Vrabel
---
 arch/x86/xen/mmu.c | 28 +++++++++++++++++++++-------
 1 file changed, 21 insertions(+), 7 deletions(-)

diff --git a/arch/x86/xen/mmu.c b/arch/x86/xen/mmu.c
index 478a2de..897fad4 100644
--- a/arch/x86/xen/mmu.c
+++ b/arch/x86/xen/mmu.c
@@ -1562,7 +1562,7 @@ static pte_t __init mask_rw_pte(pte_t *ptep, pte_t pte)
 	return pte;
 }
 #else /* CONFIG_X86_64 */
-static pte_t __init mask_rw_pte(pte_t *ptep, pte_t pte)
+static pteval_t __init mask_rw_pte(pteval_t pte)
 {
 	unsigned long pfn;
 
@@ -1577,10 +1577,10 @@ static pte_t __init mask_rw_pte(pte_t *ptep, pte_t pte)
 	 * page tables for mapping the p2m list, too, and page tables MUST be
 	 * mapped read-only.
 	 */
-	pfn = pte_pfn(pte);
+	pfn = (pte & PTE_PFN_MASK) >> PAGE_SHIFT;
 	if (pfn >= xen_start_info->first_p2m_pfn &&
 	    pfn < xen_start_info->first_p2m_pfn + xen_start_info->nr_p2m_frames)
-		pte = __pte_ma(pte_val_ma(pte) & ~_PAGE_RW);
+		pte &= ~_PAGE_RW;
 
 	return pte;
 }
@@ -1600,13 +1600,26 @@ static pte_t __init mask_rw_pte(pte_t *ptep, pte_t pte)
 	 * so always write the PTE directly and rely on Xen trapping and
 	 * emulating any updates as necessary.
 	 */
+__visible __init pte_t xen_make_pte_init(pteval_t pte)
+{
+#ifdef CONFIG_X86_64
+	pte = mask_rw_pte(pte);
+#endif
+	pte = pte_pfn_to_mfn(pte);
+
+	if ((pte & PTE_PFN_MASK) >> PAGE_SHIFT == INVALID_P2M_ENTRY)
+		pte = 0;
+
+	return native_make_pte(pte);
+}
+PV_CALLEE_SAVE_REGS_THUNK(xen_make_pte_init);
+
 static void __init xen_set_pte_init(pte_t *ptep, pte_t pte)
 {
+#ifdef CONFIG_X86_32
 	if (pte_mfn(pte) != INVALID_P2M_ENTRY)
 		pte = mask_rw_pte(ptep, pte);
-	else
-		pte = __pte_ma(0);
-
+#endif
 	native_set_pte(ptep, pte);
 }
 
@@ -2407,6 +2420,7 @@ static void __init xen_post_allocator_init(void)
 	pv_mmu_ops.alloc_pud = xen_alloc_pud;
 	pv_mmu_ops.release_pud = xen_release_pud;
 #endif
+	pv_mmu_ops.make_pte = PV_CALLEE_SAVE(xen_make_pte);
 
 #ifdef CONFIG_X86_64
 	pv_mmu_ops.write_cr3 = &xen_write_cr3;
@@ -2455,7 +2469,7 @@ static const struct pv_mmu_ops xen_mmu_ops __initconst = {
 	.pte_val = PV_CALLEE_SAVE(xen_pte_val),
 	.pgd_val = PV_CALLEE_SAVE(xen_pgd_val),
 
-	.make_pte = PV_CALLEE_SAVE(xen_make_pte),
+	.make_pte = PV_CALLEE_SAVE(xen_make_pte_init),
 
 	.make_pgd = PV_CALLEE_SAVE(xen_make_pgd),
 #ifdef CONFIG_X86_PAE
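To make the reordering in the patch concrete, here is a minimal, self-contained C model of the new flow (the constants, the toy `p2m[]` table, and the helper names are simplified stand-ins, not the kernel's real definitions): the RW bit is cleared while the value still holds a PFN, and only afterwards is the PFN translated to an MFN, so the early boot path never has to consult the M2P.

```c
#include <assert.h>
#include <stdint.h>

/* Simplified stand-ins for the kernel's PTE constants (illustrative only). */
#define PAGE_SHIFT        12
#define PTE_PFN_MASK      0x000ffffffffff000ULL
#define _PAGE_RW          0x2ULL
#define INVALID_P2M_ENTRY (~0ULL)

typedef uint64_t pteval_t;

/* Toy p2m table for the model: pfn -> mfn. */
static uint64_t p2m[256];

static uint64_t pfn_to_mfn(uint64_t pfn)
{
	return p2m[pfn];
}

/* Clear RW while the value still holds a PFN: a plain range check,
 * no M2P lookup required. */
static pteval_t mask_rw_pte(pteval_t pte, uint64_t first_p2m_pfn,
			    uint64_t nr_p2m_frames)
{
	uint64_t pfn = (pte & PTE_PFN_MASK) >> PAGE_SHIFT;

	if (pfn >= first_p2m_pfn && pfn < first_p2m_pfn + nr_p2m_frames)
		pte &= ~_PAGE_RW;
	return pte;
}

/* Model of xen_make_pte_init(): mask first, translate pfn -> mfn second,
 * and return an empty PTE when there is no mapping. */
static pteval_t make_pte_init(pteval_t pte, uint64_t first_p2m_pfn,
			      uint64_t nr_p2m_frames)
{
	uint64_t pfn, mfn;

	pte = mask_rw_pte(pte, first_p2m_pfn, nr_p2m_frames);

	pfn = (pte & PTE_PFN_MASK) >> PAGE_SHIFT;
	mfn = pfn_to_mfn(pfn);
	if (mfn == INVALID_P2M_ENTRY)
		return 0;

	return (pte & ~PTE_PFN_MASK) | (mfn << PAGE_SHIFT);
}
```

In the kernel patch itself the translation is done by pte_pfn_to_mfn() and the invalid case is detected after translation; the model keeps only the ordering that matters here: check-and-mask on the PFN first, lookup second.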