From patchwork Mon Mar 14 15:12:34 2016
From: "Jan Beulich" <JBeulich@suse.com>
To: "xen-devel" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper, Keir Fraser
Date: Mon, 14 Mar 2016 09:12:34 -0600
Message-Id: <56E6E2F202000078000DC1A9@prv-mh.provo.novell.com>
Subject: [Xen-devel] [PATCH] x86: partially revert use of 2M mappings for hypervisor image

As explained by Andrew in
http://lists.xenproject.org/archives/html/xen-devel/2016-03/msg01380.html,
that change makes the uncompressed xen.gz image too large for certain
boot environments.

Therefore this change makes some of the effects of commits cf393624ee
("x86: use 2M superpages for text/data/bss mappings") and 53aa3dde17
("x86: unilaterally remove .init mappings") conditional, restoring the
previous code as an alternative where necessary. This way xen.efi can
still benefit from the new mechanisms, as it is unaffected by said
limitations.

Signed-off-by: Jan Beulich
---
The first, neater attempt (making the __2M_* symbols weak) failed:
- older gcc doesn't access the weak symbols through .got
- GOTPCREL relocations get treated just like PCREL ones by ld when
  linking xen.efi
--- a/xen/arch/x86/setup.c
+++ b/xen/arch/x86/setup.c
@@ -497,6 +497,17 @@ static void __init kexec_reserve_area(st
 #endif
 }
 
+static inline bool_t using_2M_mapping(void)
+{
+    return !l1_table_offset((unsigned long)__2M_text_end) &&
+           !l1_table_offset((unsigned long)__2M_rodata_start) &&
+           !l1_table_offset((unsigned long)__2M_rodata_end) &&
+           !l1_table_offset((unsigned long)__2M_init_start) &&
+           !l1_table_offset((unsigned long)__2M_init_end) &&
+           !l1_table_offset((unsigned long)__2M_rwdata_start) &&
+           !l1_table_offset((unsigned long)__2M_rwdata_end);
+}
+
 static void noinline init_done(void)
 {
     void *va;
@@ -509,10 +520,19 @@ static void noinline init_done(void)
     for ( va = __init_begin; va < _p(__init_end); va += PAGE_SIZE )
         clear_page(va);
 
-    /* Destroy Xen's mappings, and reuse the pages. */
-    destroy_xen_mappings((unsigned long)&__2M_init_start,
-                         (unsigned long)&__2M_init_end);
-    init_xenheap_pages(__pa(__2M_init_start), __pa(__2M_init_end));
+    if ( using_2M_mapping() )
+    {
+        /* Destroy Xen's mappings, and reuse the pages. */
+        destroy_xen_mappings((unsigned long)&__2M_init_start,
+                             (unsigned long)&__2M_init_end);
+        init_xenheap_pages(__pa(__2M_init_start), __pa(__2M_init_end));
+    }
+    else
+    {
+        destroy_xen_mappings((unsigned long)&__init_begin,
+                             (unsigned long)&__init_end);
+        init_xenheap_pages(__pa(__init_begin), __pa(__init_end));
+    }
 
     printk("Freed %ldkB init memory.\n", (long)(__init_end-__init_begin)>>10);
 
@@ -922,6 +942,8 @@ void __init noreturn __start_xen(unsigne
          * Undo the temporary-hooking of the l1_identmap.  __2M_text_start
          * is contained in this PTE.
          */
+        BUG_ON(l2_table_offset((unsigned long)_erodata) ==
+               l2_table_offset((unsigned long)_stext));
         *pl2e++ = l2e_from_pfn(xen_phys_start >> PAGE_SHIFT,
                                PAGE_HYPERVISOR_RX | _PAGE_PSE);
         for ( i = 1; i < L2_PAGETABLE_ENTRIES; i++, pl2e++ )
@@ -931,6 +953,13 @@ void __init noreturn __start_xen(unsigne
             if ( !(l2e_get_flags(*pl2e) & _PAGE_PRESENT) )
                 continue;
 
+            if ( !using_2M_mapping() )
+            {
+                *pl2e = l2e_from_intpte(l2e_get_intpte(*pl2e) +
+                                        xen_phys_start);
+                continue;
+            }
+
             if ( i < l2_table_offset((unsigned long)&__2M_text_end) )
             {
                 flags = PAGE_HYPERVISOR_RX | _PAGE_PSE;
--- a/xen/arch/x86/xen.lds.S
+++ b/xen/arch/x86/xen.lds.S
@@ -53,11 +53,14 @@ SECTIONS
     _etext = .;             /* End of text section */
   } :text = 0x9090
 
+#ifdef EFI
   . = ALIGN(MB(2));
+#endif
   __2M_text_end = .;
 
   __2M_rodata_start = .;   /* Start of 2M superpages, mapped RO. */
   .rodata : {
+       _srodata = .;
        /* Bug frames table */
        . = ALIGN(4);
        __start_bug_frames = .;
@@ -79,9 +82,12 @@ SECTIONS
        *(.lockprofile.data)
        __lock_profile_end = .;
 #endif
+       _erodata = .;
   } :text
 
+#ifdef EFI
   . = ALIGN(MB(2));
+#endif
   __2M_rodata_end = .;
 
   __2M_init_start = .;     /* Start of 2M superpages, mapped RWX (boot only). */
@@ -148,7 +154,9 @@ SECTIONS
   . = ALIGN(PAGE_SIZE);
   __init_end = .;
 
+#ifdef EFI
   . = ALIGN(MB(2));
+#endif
   __2M_init_end = .;
 
   __2M_rwdata_start = .;   /* Start of 2M superpages, mapped RW. */
@@ -200,7 +208,9 @@ SECTIONS
   } :text
   _end = . ;
 
+#ifdef EFI
   . = ALIGN(MB(2));
+#endif
   __2M_rwdata_end = .;
 
 #ifdef EFI
@@ -250,6 +260,7 @@ ASSERT(kexec_reloc_size - kexec_reloc <=
 #endif
 
 ASSERT(IS_ALIGNED(__2M_text_start,   MB(2)), "__2M_text_start misaligned")
+#ifdef EFI
 ASSERT(IS_ALIGNED(__2M_text_end,     MB(2)), "__2M_text_end misaligned")
 ASSERT(IS_ALIGNED(__2M_rodata_start, MB(2)), "__2M_rodata_start misaligned")
 ASSERT(IS_ALIGNED(__2M_rodata_end,   MB(2)), "__2M_rodata_end misaligned")
@@ -257,6 +268,7 @@ ASSERT(IS_ALIGNED(__2M_init_start,   MB(
 ASSERT(IS_ALIGNED(__2M_init_end,     MB(2)), "__2M_init_end misaligned")
 ASSERT(IS_ALIGNED(__2M_rwdata_start, MB(2)), "__2M_rwdata_start misaligned")
 ASSERT(IS_ALIGNED(__2M_rwdata_end,   MB(2)), "__2M_rwdata_end misaligned")
+#endif
 
 ASSERT(IS_ALIGNED(cpu0_stack, STACK_SIZE), "cpu0_stack misaligned")