From patchwork Fri May 31 09:35:02 2019
X-Patchwork-Submitter: Jan Beulich
X-Patchwork-Id: 10969839
Message-Id: <5CF0F5460200007800233DA8@prv1-mh.provo.novell.com>
In-Reply-To: <5CF0F33A0200007800233D8F@prv1-mh.provo.novell.com>
Date: Fri, 31 May 2019 03:35:02 -0600
From: "Jan Beulich"
To: "xen-devel"
Cc: Stefano Stabellini, Wei Liu, Konrad Rzeszutek Wilk, George Dunlap,
    Andrew Cooper, Ian Jackson, Tim Deegan, Julien Grall
Subject: [Xen-devel] [PATCH 1/2] adjust special domain creation (and call it
 earlier on x86)

Split out this mostly arch-independent code into a common-code helper
function. (This does away with Arm's arch_init_memory() altogether.)

On x86 this needs to happen before acpi_boot_init(): Commit 9fa94e1058
("x86/ACPI: also parse AMD IOMMU tables early") only appeared to work
fine - it is really broken, and avoids crashing on non-EFI AMD systems
only because a mapping of linear address 0 exists during early boot.
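For illustration, a NULL dom_xen makes any access to one of its members
fault at that member's offset, which is why the cr2 value in the report
below is so small. A minimal standalone sketch, using a made-up struct
layout rather than Xen's real struct domain:

#include <stddef.h>
#include <stdio.h>

/* Made-up layout: a list head placed 0x220 bytes into the structure,
 * standing in for the pdev_list member reached via pci_hide_device(). */
struct fake_domain {
    char pad[0x220];
    struct { void *next, *prev; } pdev_list;
};

int main(void)
{
    struct fake_domain *dom_xen = NULL;   /* special domain not created yet */

    /* Reading dom_xen->pdev_list.next would touch linear address 0x220,
     * matching cr2 in the fault report. */
    printf("fault address would be %#zx\n",
           offsetof(struct fake_domain, pdev_list));
    (void)dom_xen;
    return 0;
}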
On EFI there is:

Early fatal page fault at e008:ffff82d08024d58e (cr2=0000000000000220, ec=0000)
----[ Xen-4.13-unstable  x86_64  debug=y   Not tainted ]----
CPU:    0
RIP:    e008:[] pci.c#_pci_hide_device+0x17/0x3a
RFLAGS: 0000000000010046   CONTEXT: hypervisor
rax: 0000000000000000   rbx: 0000000000006000   rcx: 0000000000000000
rdx: ffff83104f2ee9b0   rsi: ffff82e0209e5d48   rdi: ffff83104f2ee9a0
rbp: ffff82d08081fce0   rsp: ffff82d08081fcb8   r8:  0000000000000000
r9:  8000000000000000   r10: 0180000000000000   r11: 7fffffffffffffff
r12: ffff83104f2ee9a0   r13: 0000000000000002   r14: ffff83104f2ee4b0
r15: 0000000000000064   cr0: 0000000080050033   cr4: 00000000000000a0
cr3: 000000009f614000   cr2: 0000000000000220
fsb: 0000000000000000   gsb: 0000000000000000   gss: 0000000000000000
ds: 0000   es: 0000   fs: 0000   gs: 0000   ss: 0000   cs: e008
Xen code around (pci.c#_pci_hide_device+0x17/0x3a):
 48 89 47 38 48 8d 57 10 <48> 8b 88 20 02 00 00 48 89 51 08 48 89 4f 10 48
Xen stack trace from rsp=ffff82d08081fcb8:
   [...]
Xen call trace:
   [] pci.c#_pci_hide_device+0x17/0x3a
   [] pci_ro_device+...
   [] amd_iommu_detect_one_acpi+0x161/0x249
   [] iommu_acpi.c#detect_iommu_acpi+0xb5/0xe7
   [] acpi_table_parse+0x61/0x90
   [] amd_iommu_detect_acpi+0x17/0x19
   [] acpi_ivrs_init+0x20/0x5b
   [] acpi_boot_init+0x301/0x30f
   [] __start_xen+0x1daf/0x28a2

Pagetable walk from 0000000000000220:
 L4[0x000] = 000000009f44f063 ffffffffffffffff
 L3[0x000] = 000000009f44b063 ffffffffffffffff
 L2[0x000] = 0000000000000000 ffffffffffffffff

****************************************
Panic on CPU 0:
FATAL TRAP: vector = 14 (page fault)
[error_code=0000] , IN INTERRUPT CONTEXT
****************************************

Of course the bug would nevertheless have led to post-boot crashes as
soon as the list actually got traversed.

Signed-off-by: Jan Beulich

--- a/xen/arch/arm/mm.c
+++ b/xen/arch/arm/mm.c
@@ -42,8 +42,6 @@
 #include
 #include
 
-struct domain *dom_xen, *dom_io, *dom_cow;
-
 /* Override macros from asm/page.h to make them work with mfn_t */
 #undef virt_to_mfn
 #define virt_to_mfn(va) _mfn(__virt_to_mfn(va))
@@ -513,32 +511,6 @@ void flush_page_to_ram(unsigned long mfn
     invalidate_icache();
 }
 
-void __init arch_init_memory(void)
-{
-    /*
-     * Initialise our DOMID_XEN domain.
-     * Any Xen-heap pages that we will allow to be mapped will have
-     * their domain field set to dom_xen.
-     */
-    dom_xen = domain_create(DOMID_XEN, NULL, false);
-    BUG_ON(IS_ERR(dom_xen));
-
-    /*
-     * Initialise our DOMID_IO domain.
-     * This domain owns I/O pages that are within the range of the page_info
-     * array. Mappings occur at the priv of the caller.
-     */
-    dom_io = domain_create(DOMID_IO, NULL, false);
-    BUG_ON(IS_ERR(dom_io));
-
-    /*
-     * Initialise our COW domain.
-     * This domain owns sharable pages.
-     */
-    dom_cow = domain_create(DOMID_COW, NULL, false);
-    BUG_ON(IS_ERR(dom_cow));
-}
-
 static inline lpae_t pte_of_xenaddr(vaddr_t va)
 {
     paddr_t ma = va + phys_offset;
--- a/xen/arch/arm/setup.c
+++ b/xen/arch/arm/setup.c
@@ -846,7 +846,7 @@ void __init start_xen(unsigned long boot
 
     rcu_init();
 
-    arch_init_memory();
+    setup_special_domains();
 
     local_irq_enable();
     local_abort_enable();
--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ -160,9 +160,6 @@ l1_pgentry_t __section(".bss.page_aligne
 
 paddr_t __read_mostly mem_hotplug;
 
-/* Private domain structs for DOMID_XEN and DOMID_IO. */
-struct domain *dom_xen, *dom_io, *dom_cow;
-
 /* Frame table size in pages. */
 unsigned long max_page;
 unsigned long total_pages;
@@ -283,32 +280,6 @@ void __init arch_init_memory(void)
           _PAGE_DIRTY | _PAGE_AVAIL | _PAGE_AVAIL_HIGH | _PAGE_NX);
 
     /*
-     * Initialise our DOMID_XEN domain.
-     * Any Xen-heap pages that we will allow to be mapped will have
-     * their domain field set to dom_xen.
-     * Hidden PCI devices will also be associated with this domain
-     * (but be [partly] controlled by Dom0 nevertheless).
-     */
-    dom_xen = domain_create(DOMID_XEN, NULL, false);
-    BUG_ON(IS_ERR(dom_xen));
-    INIT_LIST_HEAD(&dom_xen->arch.pdev_list);
-
-    /*
-     * Initialise our DOMID_IO domain.
-     * This domain owns I/O pages that are within the range of the page_info
-     * array. Mappings occur at the priv of the caller.
-     */
-    dom_io = domain_create(DOMID_IO, NULL, false);
-    BUG_ON(IS_ERR(dom_io));
-
-    /*
-     * Initialise our COW domain.
-     * This domain owns sharable pages.
-     */
-    dom_cow = domain_create(DOMID_COW, NULL, false);
-    BUG_ON(IS_ERR(dom_cow));
-
-    /*
      * First 1MB of RAM is historically marked as I/O.
      * Note that apart from IO Xen also uses the low 1MB to store the AP boot
      * trampoline and boot information metadata. Due to this always special
--- a/xen/arch/x86/setup.c
+++ b/xen/arch/x86/setup.c
@@ -1533,6 +1533,8 @@ void __init noreturn __start_xen(unsigne
     mmio_ro_ranges = rangeset_new(NULL, "r/o mmio ranges",
                                   RANGESETF_prettyprint_hex);
 
+    setup_special_domains();
+
     acpi_boot_init();
 
     if ( smp_found_config )
--- a/xen/common/domain.c
+++ b/xen/common/domain.c
@@ -71,6 +71,11 @@ domid_t hardware_domid __read_mostly;
 integer_param("hardware_dom", hardware_domid);
 #endif
 
+/* Private domain structs for DOMID_XEN, DOMID_IO, etc. */
+struct domain *__read_mostly dom_xen;
+struct domain *__read_mostly dom_io;
+struct domain *__read_mostly dom_cow;
+
 struct vcpu *idle_vcpu[NR_CPUS] __read_mostly;
 
 vcpu_info_t dummy_vcpu_info;
@@ -516,6 +521,36 @@ struct domain *domain_create(domid_t dom
     return ERR_PTR(err);
 }
 
+void __init setup_special_domains(void)
+{
+    /*
+     * Initialise our DOMID_XEN domain.
+     * Any Xen-heap pages that we will allow to be mapped will have
+     * their domain field set to dom_xen.
+     * Hidden PCI devices will also be associated with this domain
+     * (but be [partly] controlled by Dom0 nevertheless).
+     */
+    dom_xen = domain_create(DOMID_XEN, NULL, false);
+    BUG_ON(IS_ERR(dom_xen));
+#ifdef CONFIG_HAS_PCI
+    INIT_LIST_HEAD(&dom_xen->arch.pdev_list);
+#endif
+
+    /*
+     * Initialise our DOMID_IO domain.
+     * This domain owns I/O pages that are within the range of the page_info
+     * array. Mappings occur at the priv of the caller.
+     */
+    dom_io = domain_create(DOMID_IO, NULL, false);
+    BUG_ON(IS_ERR(dom_io));
+
+    /*
+     * Initialise our COW domain.
+     * This domain owns sharable pages.
+     */
+    dom_cow = domain_create(DOMID_COW, NULL, false);
+    BUG_ON(IS_ERR(dom_cow));
+}
 
 void domain_update_node_affinity(struct domain *d)
 {
--- a/xen/include/asm-arm/mm.h
+++ b/xen/include/asm-arm/mm.h
@@ -334,8 +334,6 @@ long arch_memory_op(int op, XEN_GUEST_HA
 
 unsigned long domain_get_maximum_gpfn(struct domain *d);
 
-extern struct domain *dom_xen, *dom_io, *dom_cow;
-
 #define memguard_guard_stack(_p) ((void)0)
 #define memguard_guard_range(_p,_l) ((void)0)
 #define memguard_unguard_range(_p,_l) ((void)0)
--- a/xen/include/asm-arm/setup.h
+++ b/xen/include/asm-arm/setup.h
@@ -77,8 +77,6 @@ extern struct bootinfo bootinfo;
 
 extern domid_t max_init_domid;
 
-void arch_init_memory(void);
-
 void copy_from_paddr(void *dst, paddr_t paddr, unsigned long len);
 
 size_t estimate_efi_size(int mem_nr_banks);
--- a/xen/include/asm-x86/mm.h
+++ b/xen/include/asm-x86/mm.h
@@ -595,8 +595,6 @@ unsigned int domain_clamp_alloc_bitsize(
 
 unsigned long domain_get_maximum_gpfn(struct domain *d);
 
-extern struct domain *dom_xen, *dom_io, *dom_cow; /* for vmcoreinfo */
-
 /* Definition of an mm lock: spinlock with extra fields for debugging */
 typedef struct mm_lock {
     spinlock_t lock;
--- a/xen/include/xen/domain.h
+++ b/xen/include/xen/domain.h
@@ -5,6 +5,7 @@
 #include
 #include
+
 #include
 #include
@@ -22,6 +23,8 @@ struct vcpu *alloc_dom0_vcpu0(struct dom
 int vcpu_reset(struct vcpu *);
 int vcpu_up(struct vcpu *v);
 
+void setup_special_domains(void);
+
 struct xen_domctl_getdomaininfo;
 void getdomaininfo(struct domain *d, struct xen_domctl_getdomaininfo *info);
 void arch_get_domain_info(const struct domain *d,
--- a/xen/include/xen/mm.h
+++ b/xen/include/xen/mm.h
@@ -642,6 +642,9 @@ static inline void filtered_flush_tlb_ma
     }
 }
 
+/* Private domain structs for DOMID_XEN, DOMID_IO, etc. */
+extern struct domain *dom_xen, *dom_io, *dom_cow;
+
 enum XENSHARE_flags {
     SHARE_rw,
     SHARE_ro,

From patchwork Fri May 31 09:35:44 2019
X-Patchwork-Submitter: Jan Beulich
X-Patchwork-Id: 10969841
Message-Id: <5CF0F5700200007800233DB4@prv1-mh.provo.novell.com>
In-Reply-To: <5CF0F33A0200007800233D8F@prv1-mh.provo.novell.com>
Date: Fri, 31 May 2019 03:35:44 -0600
From: "Jan Beulich"
To: "xen-devel"
Cc: Stefano Stabellini, Wei Liu, Konrad Rzeszutek Wilk, George Dunlap,
    Andrew Cooper, Ian Jackson, Tim Deegan, Julien Grall, Tamas K Lengyel
Subject: [Xen-devel] [PATCH 2/2] dom_cow is needed for mem-sharing only

A couple of adjustments are needed to code checking for dom_cow, but
since there are pretty few of those it is probably better to adjust
them than to set up and keep around a never-used domain.

Take the opportunity and tighten a BUG_ON() in emul-priv-op.c:read_cr().
(Arguably this perhaps shouldn't be a BUG_ON() in the first place.)

Signed-off-by: Jan Beulich
---
While for now this avoids creating the domain on Arm only, Tamas's
patch switching to CONFIG_MEM_SHARING will make x86 leverage this too.
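For illustration, the idea the hunks below rely on can be shown in a
standalone sketch: once the COW domain is compiled out, dom_cow becomes
a compile-time NULL, so every check of the form
"dom_cow && owner == dom_cow" folds away. The names below are made up
(HAS_MEM_SHARING stands in for CONFIG_HAS_MEM_SHARING, and struct domain
is reduced to a stub):

#include <stdio.h>
#include <stddef.h>

struct domain { int domain_id; };

/* Leave HAS_MEM_SHARING undefined to model a build without
 * memory-sharing support. */
#ifdef HAS_MEM_SHARING
static struct domain cow_domain = { .domain_id = -1 };
static struct domain *dom_cow = &cow_domain;  /* would be created at boot */
#else
# define dom_cow NULL                         /* domain never created */
#endif

/* Mirrors the adjusted checks: with dom_cow defined to NULL the whole
 * expression is a compile-time constant 0 and the branch disappears. */
static int owned_by_cow_domain(const struct domain *owner)
{
    return dom_cow && owner == dom_cow;
}

int main(void)
{
    struct domain guest = { .domain_id = 1 };

    printf("owned by COW domain: %d\n", owned_by_cow_domain(&guest));
    return 0;
}

This is also why adjusting the few existing checks is cheaper than
creating (and keeping around) a domain that would never be used.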
--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ -967,8 +967,8 @@ get_page_from_l1e(
         return flip;
     }
 
-    if ( unlikely( (real_pg_owner != pg_owner) &&
-                   (real_pg_owner != dom_cow) ) )
+    if ( unlikely((real_pg_owner != pg_owner) &&
+                  (!dom_cow || (real_pg_owner != dom_cow))) )
     {
         /*
          * Let privileged domains transfer the right to map their target
--- a/xen/arch/x86/mm/p2m.c
+++ b/xen/arch/x86/mm/p2m.c
@@ -568,7 +568,8 @@ struct page_info *p2m_get_page_from_gfn(
         }
         else if ( !get_page(page, p2m->domain) &&
                   /* Page could be shared */
-                  (!p2m_is_shared(*t) || !get_page(page, dom_cow)) )
+                  (!dom_cow || !p2m_is_shared(*t) ||
+                   !get_page(page, dom_cow)) )
             page = NULL;
     }
     p2m_read_unlock(p2m);
@@ -941,7 +942,8 @@ guest_physmap_add_entry(struct domain *d
     /* Then, look for m->p mappings for this range and deal with them */
     for ( i = 0; i < (1UL << page_order); i++ )
     {
-        if ( page_get_owner(mfn_to_page(mfn_add(mfn, i))) == dom_cow )
+        if ( dom_cow &&
+             page_get_owner(mfn_to_page(mfn_add(mfn, i))) == dom_cow )
         {
             /* This is no way to add a shared page to your physmap! */
             gdprintk(XENLOG_ERR, "Adding shared mfn %lx directly to dom%d physmap not allowed.\n",
--- a/xen/arch/x86/pv/emul-priv-op.c
+++ b/xen/arch/x86/pv/emul-priv-op.c
@@ -723,8 +723,8 @@ static int read_cr(unsigned int reg, uns
             unmap_domain_page(pl4e);
             *val = compat_pfn_to_cr3(mfn_to_gmfn(currd, mfn_x(mfn)));
         }
-        /* PTs should not be shared */
-        BUG_ON(page_get_owner(mfn_to_page(mfn)) == dom_cow);
+        /* PTs should be owned by their domains */
+        BUG_ON(page_get_owner(mfn_to_page(mfn)) != currd);
         return X86EMUL_OKAY;
     }
 }
--- a/xen/common/domain.c
+++ b/xen/common/domain.c
@@ -74,7 +74,9 @@ integer_param("hardware_dom", hardware_d
 /* Private domain structs for DOMID_XEN, DOMID_IO, etc. */
 struct domain *__read_mostly dom_xen;
 struct domain *__read_mostly dom_io;
+#ifdef CONFIG_HAS_MEM_SHARING
 struct domain *__read_mostly dom_cow;
+#endif
 
 struct vcpu *idle_vcpu[NR_CPUS] __read_mostly;
@@ -544,12 +546,14 @@ void __init setup_special_domains(void)
     dom_io = domain_create(DOMID_IO, NULL, false);
     BUG_ON(IS_ERR(dom_io));
 
+#ifdef CONFIG_HAS_MEM_SHARING
     /*
      * Initialise our COW domain.
      * This domain owns sharable pages.
      */
     dom_cow = domain_create(DOMID_COW, NULL, false);
     BUG_ON(IS_ERR(dom_cow));
+#endif
 }
 
 void domain_update_node_affinity(struct domain *d)
--- a/xen/common/grant_table.c
+++ b/xen/common/grant_table.c
@@ -1095,7 +1095,7 @@ map_grant_ref(
             host_map_created = true;
         }
     }
-    else if ( owner == rd || owner == dom_cow )
+    else if ( owner == rd || (dom_cow && owner == dom_cow) )
     {
         if ( (op->flags & GNTMAP_device_map) && !(op->flags & GNTMAP_readonly) )
         {
--- a/xen/include/xen/mm.h
+++ b/xen/include/xen/mm.h
@@ -644,6 +644,9 @@ static inline void filtered_flush_tlb_ma
 
 /* Private domain structs for DOMID_XEN, DOMID_IO, etc. */
 extern struct domain *dom_xen, *dom_io, *dom_cow;
+#ifndef CONFIG_HAS_MEM_SHARING
+# define dom_cow NULL
+#endif
 
 enum XENSHARE_flags {
     SHARE_rw,