From patchwork Thu Sep 26 09:46:19 2019
X-Patchwork-Submitter: "Xia, Hongyan"
X-Patchwork-Id: 11162163
Date: Thu, 26 Sep 2019 10:46:19 +0100
Message-ID: <28a37c34184073178dfc096729179b44b06baa1c.1569489002.git.hongyax@amazon.com>
X-Mailer: git-send-email 2.17.1
Subject: [Xen-devel] [RFC PATCH 56/84] x86/mm: drop _new suffix for page table APIs
List-Id: Xen developer discussion
Cc: Andrew Cooper, Wei Liu, Jan Beulich, Roger Pau Monné

From: Wei Liu
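The rename is purely mechanical. Every caller touched below follows the same
allocate/map/use/unmap/free lifecycle; as a minimal sketch of the post-rename
pattern (identifiers taken from this patch, error handling abbreviated, not a
self-contained program since these are Xen-internal APIs):

    /* Sketch only: Xen-internal page table APIs as renamed by this patch. */
    mfn_t mfn = alloc_xen_pagetable();              /* was alloc_xen_pagetable_new() */

    if ( !mfn_eq(mfn, INVALID_MFN) )
    {
        /* Map the MFN to get a usable virtual address. */
        l3_pgentry_t *l3t = map_xen_pagetable(mfn); /* was map_xen_pagetable_new() */

        /* ... read or write page table entries through l3t ... */

        UNMAP_XEN_PAGETABLE(l3t);                   /* was UNMAP_XEN_PAGETABLE_NEW() */
        free_xen_pagetable(mfn);                    /* was free_xen_pagetable_new() */
    }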
Signed-off-by: Wei Liu
---
 xen/arch/x86/domain.c        |   4 +-
 xen/arch/x86/domain_page.c   |   2 +-
 xen/arch/x86/efi/runtime.h   |   4 +-
 xen/arch/x86/mm.c            | 164 +++++++++++++++++------------------
 xen/arch/x86/pv/dom0_build.c |  28 +++---
 xen/arch/x86/pv/shim.c       |  12 +--
 xen/arch/x86/setup.c         |   8 +-
 xen/arch/x86/smpboot.c       |  74 ++++++++--------
 xen/arch/x86/x86_64/mm.c     | 136 ++++++++++++++---------------
 xen/common/efi/boot.c        |  42 ++++-----
 xen/include/asm-x86/mm.h     |  18 ++--
 11 files changed, 246 insertions(+), 246 deletions(-)

diff --git a/xen/arch/x86/domain.c b/xen/arch/x86/domain.c
index a11b05ea5a..75e89b81bf 100644
--- a/xen/arch/x86/domain.c
+++ b/xen/arch/x86/domain.c
@@ -1588,11 +1588,11 @@ void paravirt_ctxt_switch_to(struct vcpu *v)
         root_pgentry_t *rpt;

         mapcache_override_current(INVALID_VCPU);
-        rpt = map_xen_pagetable_new(rpt_mfn);
+        rpt = map_xen_pagetable(rpt_mfn);
         rpt[root_table_offset(PERDOMAIN_VIRT_START)] =
             l4e_from_page(v->domain->arch.perdomain_l3_pg,
                           __PAGE_HYPERVISOR_RW);
-        UNMAP_XEN_PAGETABLE_NEW(rpt);
+        UNMAP_XEN_PAGETABLE(rpt);
         mapcache_override_current(NULL);
     }

diff --git a/xen/arch/x86/domain_page.c b/xen/arch/x86/domain_page.c
index cfcffd35f3..9ea74b456c 100644
--- a/xen/arch/x86/domain_page.c
+++ b/xen/arch/x86/domain_page.c
@@ -343,7 +343,7 @@ mfn_t domain_page_map_to_mfn(const void *ptr)
         l1_pgentry_t *pl1e = virt_to_xen_l1e(va);
         BUG_ON(!pl1e);
         l1e = *pl1e;
-        UNMAP_XEN_PAGETABLE_NEW(pl1e);
+        UNMAP_XEN_PAGETABLE(pl1e);
     }
     else
     {

diff --git a/xen/arch/x86/efi/runtime.h b/xen/arch/x86/efi/runtime.h
index 277d237953..ca15c5aab7 100644
--- a/xen/arch/x86/efi/runtime.h
+++ b/xen/arch/x86/efi/runtime.h
@@ -10,9 +10,9 @@ void efi_update_l4_pgtable(unsigned int l4idx, l4_pgentry_t l4e)
     {
         l4_pgentry_t *l4t;

-        l4t = map_xen_pagetable_new(efi_l4_mfn);
+        l4t = map_xen_pagetable(efi_l4_mfn);
         l4e_write(l4t + l4idx, l4e);
-        UNMAP_XEN_PAGETABLE_NEW(l4t);
+        UNMAP_XEN_PAGETABLE(l4t);
     }
 }
 #endif

diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
index
8e33c8f4fe..b2b2edbed1 100644 --- a/xen/arch/x86/mm.c +++ b/xen/arch/x86/mm.c @@ -353,22 +353,22 @@ void __init arch_init_memory(void) ASSERT(root_pgt_pv_xen_slots < ROOT_PAGETABLE_PV_XEN_SLOTS); if ( l4_table_offset(split_va) == l4_table_offset(split_va - 1) ) { - mfn_t l3tab_mfn = alloc_xen_pagetable_new(); + mfn_t l3tab_mfn = alloc_xen_pagetable(); if ( !mfn_eq(l3tab_mfn, INVALID_MFN) ) { l3_pgentry_t *l3idle = - map_xen_pagetable_new( + map_xen_pagetable( l4e_get_mfn(idle_pg_table[l4_table_offset(split_va)])); - l3_pgentry_t *l3tab = map_xen_pagetable_new(l3tab_mfn); + l3_pgentry_t *l3tab = map_xen_pagetable(l3tab_mfn); for ( i = 0; i < l3_table_offset(split_va); ++i ) l3tab[i] = l3idle[i]; for ( ; i < L3_PAGETABLE_ENTRIES; ++i ) l3tab[i] = l3e_empty(); split_l4e = l4e_from_mfn(l3tab_mfn, __PAGE_HYPERVISOR_RW); - UNMAP_XEN_PAGETABLE_NEW(l3idle); - UNMAP_XEN_PAGETABLE_NEW(l3tab); + UNMAP_XEN_PAGETABLE(l3idle); + UNMAP_XEN_PAGETABLE(l3tab); } else ++root_pgt_pv_xen_slots; @@ -4850,7 +4850,7 @@ int mmcfg_intercept_write( return X86EMUL_OKAY; } -mfn_t alloc_xen_pagetable_new(void) +mfn_t alloc_xen_pagetable(void) { if ( system_state != SYS_STATE_early_boot ) { @@ -4863,20 +4863,20 @@ mfn_t alloc_xen_pagetable_new(void) return alloc_boot_pages(1, 1); } -void *map_xen_pagetable_new(mfn_t mfn) +void *map_xen_pagetable(mfn_t mfn) { return map_domain_page(mfn); } /* v can point to an entry within a table or be NULL */ -void unmap_xen_pagetable_new(void *v) +void unmap_xen_pagetable(void *v) { if ( v ) unmap_domain_page((const void *)((unsigned long)v & PAGE_MASK)); } /* mfn can be INVALID_MFN */ -void free_xen_pagetable_new(mfn_t mfn) +void free_xen_pagetable(mfn_t mfn) { if ( system_state != SYS_STATE_early_boot && !mfn_eq(mfn, INVALID_MFN) ) free_domheap_page(mfn_to_page(mfn)); @@ -4900,11 +4900,11 @@ static l3_pgentry_t *virt_to_xen_l3e(unsigned long v) l3_pgentry_t *l3t; mfn_t mfn; - mfn = alloc_xen_pagetable_new(); + mfn = alloc_xen_pagetable(); if ( mfn_eq(mfn, 
INVALID_MFN) ) goto out; - l3t = map_xen_pagetable_new(mfn); + l3t = map_xen_pagetable(mfn); if ( locking ) spin_lock(&map_pgdir_lock); @@ -4924,15 +4924,15 @@ static l3_pgentry_t *virt_to_xen_l3e(unsigned long v) { ASSERT(!pl3e); ASSERT(!mfn_eq(mfn, INVALID_MFN)); - UNMAP_XEN_PAGETABLE_NEW(l3t); - free_xen_pagetable_new(mfn); + UNMAP_XEN_PAGETABLE(l3t); + free_xen_pagetable(mfn); } } if ( !pl3e ) { ASSERT(l4e_get_flags(*pl4e) & _PAGE_PRESENT); - pl3e = (l3_pgentry_t *)map_xen_pagetable_new(l4e_get_mfn(*pl4e)) + pl3e = (l3_pgentry_t *)map_xen_pagetable(l4e_get_mfn(*pl4e)) + l3_table_offset(v); } @@ -4959,11 +4959,11 @@ static l2_pgentry_t *virt_to_xen_l2e(unsigned long v) l2_pgentry_t *l2t; mfn_t mfn; - mfn = alloc_xen_pagetable_new(); + mfn = alloc_xen_pagetable(); if ( mfn_eq(mfn, INVALID_MFN) ) goto out; - l2t = map_xen_pagetable_new(mfn); + l2t = map_xen_pagetable(mfn); if ( locking ) spin_lock(&map_pgdir_lock); @@ -4981,8 +4981,8 @@ static l2_pgentry_t *virt_to_xen_l2e(unsigned long v) { ASSERT(!pl2e); ASSERT(!mfn_eq(mfn, INVALID_MFN)); - UNMAP_XEN_PAGETABLE_NEW(l2t); - free_xen_pagetable_new(mfn); + UNMAP_XEN_PAGETABLE(l2t); + free_xen_pagetable(mfn); } } @@ -4991,12 +4991,12 @@ static l2_pgentry_t *virt_to_xen_l2e(unsigned long v) if ( !pl2e ) { ASSERT(l3e_get_flags(*pl3e) & _PAGE_PRESENT); - pl2e = (l2_pgentry_t *)map_xen_pagetable_new(l3e_get_mfn(*pl3e)) + pl2e = (l2_pgentry_t *)map_xen_pagetable(l3e_get_mfn(*pl3e)) + l2_table_offset(v); } out: - UNMAP_XEN_PAGETABLE_NEW(pl3e); + UNMAP_XEN_PAGETABLE(pl3e); return pl2e; } @@ -5015,11 +5015,11 @@ l1_pgentry_t *virt_to_xen_l1e(unsigned long v) l1_pgentry_t *l1t; mfn_t mfn; - mfn = alloc_xen_pagetable_new(); + mfn = alloc_xen_pagetable(); if ( mfn_eq(mfn, INVALID_MFN) ) goto out; - l1t = map_xen_pagetable_new(mfn); + l1t = map_xen_pagetable(mfn); if ( locking ) spin_lock(&map_pgdir_lock); @@ -5037,8 +5037,8 @@ l1_pgentry_t *virt_to_xen_l1e(unsigned long v) { ASSERT(!pl1e); ASSERT(!mfn_eq(mfn, INVALID_MFN)); - 
UNMAP_XEN_PAGETABLE_NEW(l1t); - free_xen_pagetable_new(mfn); + UNMAP_XEN_PAGETABLE(l1t); + free_xen_pagetable(mfn); } } @@ -5047,12 +5047,12 @@ l1_pgentry_t *virt_to_xen_l1e(unsigned long v) if ( !pl1e ) { ASSERT(l2e_get_flags(*pl2e) & _PAGE_PRESENT); - pl1e = (l1_pgentry_t *)map_xen_pagetable_new(l2e_get_mfn(*pl2e)) + pl1e = (l1_pgentry_t *)map_xen_pagetable(l2e_get_mfn(*pl2e)) + l1_table_offset(v); } out: - UNMAP_XEN_PAGETABLE_NEW(pl2e); + UNMAP_XEN_PAGETABLE(pl2e); return pl1e; } @@ -5131,7 +5131,7 @@ int map_pages_to_xen( l2_pgentry_t *l2t; mfn_t l2t_mfn = l3e_get_mfn(ol3e); - l2t = map_xen_pagetable_new(l2t_mfn); + l2t = map_xen_pagetable(l2t_mfn); for ( i = 0; i < L2_PAGETABLE_ENTRIES; i++ ) { @@ -5146,10 +5146,10 @@ int map_pages_to_xen( l1_pgentry_t *l1t; mfn_t l1t_mfn = l2e_get_mfn(ol2e); - l1t = map_xen_pagetable_new(l1t_mfn); + l1t = map_xen_pagetable(l1t_mfn); for ( j = 0; j < L1_PAGETABLE_ENTRIES; j++ ) flush_flags(l1e_get_flags(l1t[j])); - UNMAP_XEN_PAGETABLE_NEW(l1t); + UNMAP_XEN_PAGETABLE(l1t); } } flush_area(virt, flush_flags); @@ -5158,9 +5158,9 @@ int map_pages_to_xen( ol2e = l2t[i]; if ( (l2e_get_flags(ol2e) & _PAGE_PRESENT) && !(l2e_get_flags(ol2e) & _PAGE_PSE) ) - free_xen_pagetable_new(l2e_get_mfn(ol2e)); + free_xen_pagetable(l2e_get_mfn(ol2e)); } - free_xen_pagetable_new(l2t_mfn); + free_xen_pagetable(l2t_mfn); } } @@ -5199,14 +5199,14 @@ int map_pages_to_xen( goto end_of_loop; } - l2t_mfn = alloc_xen_pagetable_new(); + l2t_mfn = alloc_xen_pagetable(); if ( mfn_eq(l2t_mfn, INVALID_MFN) ) { ASSERT(rc == -ENOMEM); goto out; } - l2t = map_xen_pagetable_new(l2t_mfn); + l2t = map_xen_pagetable(l2t_mfn); for ( i = 0; i < L2_PAGETABLE_ENTRIES; i++ ) l2e_write(l2t + i, @@ -5224,7 +5224,7 @@ int map_pages_to_xen( { l3e_write_atomic(pl3e, l3e_from_mfn(l2t_mfn, __PAGE_HYPERVISOR)); - UNMAP_XEN_PAGETABLE_NEW(l2t); + UNMAP_XEN_PAGETABLE(l2t); l2t = NULL; } if ( locking ) @@ -5232,8 +5232,8 @@ int map_pages_to_xen( flush_area(virt, flush_flags); if ( l2t 
) { - UNMAP_XEN_PAGETABLE_NEW(l2t); - free_xen_pagetable_new(l2t_mfn); + UNMAP_XEN_PAGETABLE(l2t); + free_xen_pagetable(l2t_mfn); } } @@ -5268,12 +5268,12 @@ int map_pages_to_xen( l1_pgentry_t *l1t; mfn_t l1t_mfn = l2e_get_mfn(ol2e); - l1t = map_xen_pagetable_new(l1t_mfn); + l1t = map_xen_pagetable(l1t_mfn); for ( i = 0; i < L1_PAGETABLE_ENTRIES; i++ ) flush_flags(l1e_get_flags(l1t[i])); flush_area(virt, flush_flags); - UNMAP_XEN_PAGETABLE_NEW(l1t); - free_xen_pagetable_new(l1t_mfn); + UNMAP_XEN_PAGETABLE(l1t); + free_xen_pagetable(l1t_mfn); } } @@ -5294,7 +5294,7 @@ int map_pages_to_xen( ASSERT(rc == -ENOMEM); goto out; } - UNMAP_XEN_PAGETABLE_NEW(pl1e); + UNMAP_XEN_PAGETABLE(pl1e); } else if ( l2e_get_flags(*pl2e) & _PAGE_PSE ) { @@ -5321,14 +5321,14 @@ int map_pages_to_xen( goto check_l3; } - l1t_mfn = alloc_xen_pagetable_new(); + l1t_mfn = alloc_xen_pagetable(); if ( mfn_eq(l1t_mfn, INVALID_MFN) ) { ASSERT(rc == -ENOMEM); goto out; } - l1t = map_xen_pagetable_new(l1t_mfn); + l1t = map_xen_pagetable(l1t_mfn); for ( i = 0; i < L1_PAGETABLE_ENTRIES; i++ ) l1e_write(&l1t[i], @@ -5345,7 +5345,7 @@ int map_pages_to_xen( { l2e_write_atomic(pl2e, l2e_from_mfn(l1t_mfn, __PAGE_HYPERVISOR)); - UNMAP_XEN_PAGETABLE_NEW(l1t); + UNMAP_XEN_PAGETABLE(l1t); l1t = NULL; } if ( locking ) @@ -5353,16 +5353,16 @@ int map_pages_to_xen( flush_area(virt, flush_flags); if ( l1t ) { - UNMAP_XEN_PAGETABLE_NEW(l1t); - free_xen_pagetable_new(l1t_mfn); + UNMAP_XEN_PAGETABLE(l1t); + free_xen_pagetable(l1t_mfn); } } - pl1e = map_xen_pagetable_new(l2e_get_mfn((*pl2e))); + pl1e = map_xen_pagetable(l2e_get_mfn((*pl2e))); pl1e += l1_table_offset(virt); ol1e = *pl1e; l1e_write_atomic(pl1e, l1e_from_mfn(mfn, flags)); - UNMAP_XEN_PAGETABLE_NEW(pl1e); + UNMAP_XEN_PAGETABLE(pl1e); if ( (l1e_get_flags(ol1e) & _PAGE_PRESENT) ) { unsigned int flush_flags = FLUSH_TLB | FLUSH_ORDER(0); @@ -5408,14 +5408,14 @@ int map_pages_to_xen( } l1t_mfn = l2e_get_mfn(ol2e); - l1t = map_xen_pagetable_new(l1t_mfn); + l1t 
= map_xen_pagetable(l1t_mfn); base_mfn = l1e_get_pfn(l1t[0]) & ~(L1_PAGETABLE_ENTRIES - 1); for ( i = 0; i < L1_PAGETABLE_ENTRIES; i++ ) if ( (l1e_get_pfn(l1t[i]) != (base_mfn + i)) || (l1e_get_flags(l1t[i]) != flags) ) break; - UNMAP_XEN_PAGETABLE_NEW(l1t); + UNMAP_XEN_PAGETABLE(l1t); if ( i == L1_PAGETABLE_ENTRIES ) { l2e_write_atomic(pl2e, l2e_from_pfn(base_mfn, @@ -5425,7 +5425,7 @@ int map_pages_to_xen( flush_area(virt - PAGE_SIZE, FLUSH_TLB_GLOBAL | FLUSH_ORDER(PAGETABLE_ORDER)); - free_xen_pagetable_new(l1t_mfn); + free_xen_pagetable(l1t_mfn); } else if ( locking ) spin_unlock(&map_pgdir_lock); @@ -5460,7 +5460,7 @@ int map_pages_to_xen( } l2t_mfn = l3e_get_mfn(ol3e); - l2t = map_xen_pagetable_new(l2t_mfn); + l2t = map_xen_pagetable(l2t_mfn); base_mfn = l2e_get_pfn(l2t[0]) & ~(L2_PAGETABLE_ENTRIES * L1_PAGETABLE_ENTRIES - 1); @@ -5469,7 +5469,7 @@ int map_pages_to_xen( (base_mfn + (i << PAGETABLE_ORDER))) || (l2e_get_flags(l2t[i]) != l1f_to_lNf(flags)) ) break; - UNMAP_XEN_PAGETABLE_NEW(l2t); + UNMAP_XEN_PAGETABLE(l2t); if ( i == L2_PAGETABLE_ENTRIES ) { l3e_write_atomic(pl3e, l3e_from_pfn(base_mfn, @@ -5479,15 +5479,15 @@ int map_pages_to_xen( flush_area(virt - PAGE_SIZE, FLUSH_TLB_GLOBAL | FLUSH_ORDER(2*PAGETABLE_ORDER)); - free_xen_pagetable_new(l2t_mfn); + free_xen_pagetable(l2t_mfn); } else if ( locking ) spin_unlock(&map_pgdir_lock); } end_of_loop: - UNMAP_XEN_PAGETABLE_NEW(pl1e); - UNMAP_XEN_PAGETABLE_NEW(pl2e); - UNMAP_XEN_PAGETABLE_NEW(pl3e); + UNMAP_XEN_PAGETABLE(pl1e); + UNMAP_XEN_PAGETABLE(pl2e); + UNMAP_XEN_PAGETABLE(pl3e); } #undef flush_flags @@ -5495,9 +5495,9 @@ int map_pages_to_xen( rc = 0; out: - UNMAP_XEN_PAGETABLE_NEW(pl1e); - UNMAP_XEN_PAGETABLE_NEW(pl2e); - UNMAP_XEN_PAGETABLE_NEW(pl3e); + UNMAP_XEN_PAGETABLE(pl1e); + UNMAP_XEN_PAGETABLE(pl2e); + UNMAP_XEN_PAGETABLE(pl3e); return rc; } @@ -5568,14 +5568,14 @@ int modify_xen_mappings(unsigned long s, unsigned long e, unsigned int nf) } /* PAGE1GB: shatter the superpage and fall through. 
*/ - mfn = alloc_xen_pagetable_new(); + mfn = alloc_xen_pagetable(); if ( mfn_eq(mfn, INVALID_MFN) ) { ASSERT(rc == -ENOMEM); goto out; } - l2t = map_xen_pagetable_new(mfn); + l2t = map_xen_pagetable(mfn); for ( i = 0; i < L2_PAGETABLE_ENTRIES; i++ ) l2e_write(l2t + i, @@ -5588,15 +5588,15 @@ int modify_xen_mappings(unsigned long s, unsigned long e, unsigned int nf) (l3e_get_flags(*pl3e) & _PAGE_PSE) ) { l3e_write_atomic(pl3e, l3e_from_mfn(mfn, __PAGE_HYPERVISOR)); - UNMAP_XEN_PAGETABLE_NEW(l2t); + UNMAP_XEN_PAGETABLE(l2t); l2t = NULL; } if ( locking ) spin_unlock(&map_pgdir_lock); if ( l2t ) { - UNMAP_XEN_PAGETABLE_NEW(l2t); - free_xen_pagetable_new(mfn); + UNMAP_XEN_PAGETABLE(l2t); + free_xen_pagetable(mfn); } } @@ -5604,7 +5604,7 @@ int modify_xen_mappings(unsigned long s, unsigned long e, unsigned int nf) * The L3 entry has been verified to be present, and we've dealt with * 1G pages as well, so the L2 table cannot require allocation. */ - pl2e = map_xen_pagetable_new(l3e_get_mfn(*pl3e)); + pl2e = map_xen_pagetable(l3e_get_mfn(*pl3e)); pl2e += l2_table_offset(v); if ( !(l2e_get_flags(*pl2e) & _PAGE_PRESENT) ) @@ -5636,14 +5636,14 @@ int modify_xen_mappings(unsigned long s, unsigned long e, unsigned int nf) mfn_t mfn; /* PSE: shatter the superpage and try again. 
*/ - mfn = alloc_xen_pagetable_new(); + mfn = alloc_xen_pagetable(); if ( mfn_eq(mfn, INVALID_MFN) ) { ASSERT(rc == -ENOMEM); goto out; } - l1t = map_xen_pagetable_new(mfn); + l1t = map_xen_pagetable(mfn); for ( i = 0; i < L1_PAGETABLE_ENTRIES; i++ ) l1e_write(&l1t[i], @@ -5656,15 +5656,15 @@ int modify_xen_mappings(unsigned long s, unsigned long e, unsigned int nf) { l2e_write_atomic(pl2e, l2e_from_mfn(mfn, __PAGE_HYPERVISOR)); - UNMAP_XEN_PAGETABLE_NEW(l1t); + UNMAP_XEN_PAGETABLE(l1t); l1t = NULL; } if ( locking ) spin_unlock(&map_pgdir_lock); if ( l1t ) { - UNMAP_XEN_PAGETABLE_NEW(l1t); - free_xen_pagetable_new(mfn); + UNMAP_XEN_PAGETABLE(l1t); + free_xen_pagetable(mfn); } } } @@ -5678,7 +5678,7 @@ int modify_xen_mappings(unsigned long s, unsigned long e, unsigned int nf) * present, and we've dealt with 2M pages as well, so the L1 table * cannot require allocation. */ - pl1e = map_xen_pagetable_new(l2e_get_mfn(*pl2e)); + pl1e = map_xen_pagetable(l2e_get_mfn(*pl2e)); pl1e += l1_table_offset(v); /* Confirm the caller isn't trying to create new mappings. */ @@ -5690,7 +5690,7 @@ int modify_xen_mappings(unsigned long s, unsigned long e, unsigned int nf) (l1e_get_flags(*pl1e) & ~FLAGS_MASK) | nf); l1e_write_atomic(pl1e, nl1e); - UNMAP_XEN_PAGETABLE_NEW(pl1e); + UNMAP_XEN_PAGETABLE(pl1e); v += PAGE_SIZE; /* @@ -5721,11 +5721,11 @@ int modify_xen_mappings(unsigned long s, unsigned long e, unsigned int nf) } l1t_mfn = l2e_get_mfn(*pl2e); - l1t = map_xen_pagetable_new(l1t_mfn); + l1t = map_xen_pagetable(l1t_mfn); for ( i = 0; i < L1_PAGETABLE_ENTRIES; i++ ) if ( l1e_get_intpte(l1t[i]) != 0 ) break; - UNMAP_XEN_PAGETABLE_NEW(l1t); + UNMAP_XEN_PAGETABLE(l1t); if ( i == L1_PAGETABLE_ENTRIES ) { /* Empty: zap the L2E and free the L1 page. 
*/ @@ -5733,7 +5733,7 @@ int modify_xen_mappings(unsigned long s, unsigned long e, unsigned int nf) if ( locking ) spin_unlock(&map_pgdir_lock); flush_area(NULL, FLUSH_TLB_GLOBAL); /* flush before free */ - free_xen_pagetable_new(l1t_mfn); + free_xen_pagetable(l1t_mfn); } else if ( locking ) spin_unlock(&map_pgdir_lock); @@ -5767,11 +5767,11 @@ int modify_xen_mappings(unsigned long s, unsigned long e, unsigned int nf) mfn_t l2t_mfn; l2t_mfn = l3e_get_mfn(*pl3e); - l2t = map_xen_pagetable_new(l2t_mfn); + l2t = map_xen_pagetable(l2t_mfn); for ( i = 0; i < L2_PAGETABLE_ENTRIES; i++ ) if ( l2e_get_intpte(l2t[i]) != 0 ) break; - UNMAP_XEN_PAGETABLE_NEW(l2t); + UNMAP_XEN_PAGETABLE(l2t); if ( i == L2_PAGETABLE_ENTRIES ) { /* Empty: zap the L3E and free the L2 page. */ @@ -5779,14 +5779,14 @@ int modify_xen_mappings(unsigned long s, unsigned long e, unsigned int nf) if ( locking ) spin_unlock(&map_pgdir_lock); flush_area(NULL, FLUSH_TLB_GLOBAL); /* flush before free */ - free_xen_pagetable_new(l2t_mfn); + free_xen_pagetable(l2t_mfn); } else if ( locking ) spin_unlock(&map_pgdir_lock); } end_of_loop: - UNMAP_XEN_PAGETABLE_NEW(pl2e); - UNMAP_XEN_PAGETABLE_NEW(pl3e); + UNMAP_XEN_PAGETABLE(pl2e); + UNMAP_XEN_PAGETABLE(pl3e); } flush_area(NULL, FLUSH_TLB_GLOBAL); @@ -5795,8 +5795,8 @@ int modify_xen_mappings(unsigned long s, unsigned long e, unsigned int nf) rc = 0; out: - UNMAP_XEN_PAGETABLE_NEW(pl2e); - UNMAP_XEN_PAGETABLE_NEW(pl3e); + UNMAP_XEN_PAGETABLE(pl2e); + UNMAP_XEN_PAGETABLE(pl3e); return rc; } diff --git a/xen/arch/x86/pv/dom0_build.c b/xen/arch/x86/pv/dom0_build.c index 39cb68f7da..02d7f1c27c 100644 --- a/xen/arch/x86/pv/dom0_build.c +++ b/xen/arch/x86/pv/dom0_build.c @@ -55,11 +55,11 @@ static __init void mark_pv_pt_pages_rdonly(struct domain *d, l1_pgentry_t *pl1e, *l1t; pl4e = l4start + l4_table_offset(vpt_start); - l3t = map_xen_pagetable_new(l4e_get_mfn(*pl4e)); + l3t = map_xen_pagetable(l4e_get_mfn(*pl4e)); pl3e = l3t + l3_table_offset(vpt_start); - l2t = 
map_xen_pagetable_new(l3e_get_mfn(*pl3e)); + l2t = map_xen_pagetable(l3e_get_mfn(*pl3e)); pl2e = l2t + l2_table_offset(vpt_start); - l1t = map_xen_pagetable_new(l2e_get_mfn(*pl2e)); + l1t = map_xen_pagetable(l2e_get_mfn(*pl2e)); pl1e = l1t + l1_table_offset(vpt_start); for ( count = 0; count < nr_pt_pages; count++ ) { @@ -86,22 +86,22 @@ static __init void mark_pv_pt_pages_rdonly(struct domain *d, { if ( !((unsigned long)++pl3e & (PAGE_SIZE - 1)) ) { - UNMAP_XEN_PAGETABLE_NEW(l3t); - l3t = map_xen_pagetable_new(l4e_get_mfn(*++pl4e)); + UNMAP_XEN_PAGETABLE(l3t); + l3t = map_xen_pagetable(l4e_get_mfn(*++pl4e)); pl3e = l3t; } - UNMAP_XEN_PAGETABLE_NEW(l2t); - l2t = map_xen_pagetable_new(l3e_get_mfn(*pl3e)); + UNMAP_XEN_PAGETABLE(l2t); + l2t = map_xen_pagetable(l3e_get_mfn(*pl3e)); pl2e = l2t; } - UNMAP_XEN_PAGETABLE_NEW(l1t); - l1t = map_xen_pagetable_new(l2e_get_mfn(*pl2e)); + UNMAP_XEN_PAGETABLE(l1t); + l1t = map_xen_pagetable(l2e_get_mfn(*pl2e)); pl1e = l1t; } } - UNMAP_XEN_PAGETABLE_NEW(l1t); - UNMAP_XEN_PAGETABLE_NEW(l2t); - UNMAP_XEN_PAGETABLE_NEW(l3t); + UNMAP_XEN_PAGETABLE(l1t); + UNMAP_XEN_PAGETABLE(l2t); + UNMAP_XEN_PAGETABLE(l3t); } static __init void setup_pv_physmap(struct domain *d, unsigned long pgtbl_pfn, @@ -695,9 +695,9 @@ int __init dom0_construct_pv(struct domain *d, l3e_get_page(*l3tab)->u.inuse.type_info |= PGT_pae_xen_l2; } - l2t = map_xen_pagetable_new(l3e_get_mfn(l3start[3])); + l2t = map_xen_pagetable(l3e_get_mfn(l3start[3])); init_xen_pae_l2_slots(l2t, d); - UNMAP_XEN_PAGETABLE_NEW(l2t); + UNMAP_XEN_PAGETABLE(l2t); } /* Pages that are part of page tables must be read only. 
*/ diff --git a/xen/arch/x86/pv/shim.c b/xen/arch/x86/pv/shim.c index cf638fa965..09c7766ec5 100644 --- a/xen/arch/x86/pv/shim.c +++ b/xen/arch/x86/pv/shim.c @@ -171,11 +171,11 @@ static void __init replace_va_mapping(struct domain *d, l4_pgentry_t *l4start, l2_pgentry_t *pl2e; l1_pgentry_t *pl1e; - pl3e = map_xen_pagetable_new(l4e_get_mfn(*pl4e)); + pl3e = map_xen_pagetable(l4e_get_mfn(*pl4e)); pl3e += l3_table_offset(va); - pl2e = map_xen_pagetable_new(l3e_get_mfn(*pl3e)); + pl2e = map_xen_pagetable(l3e_get_mfn(*pl3e)); pl2e += l2_table_offset(va); - pl1e = map_xen_pagetable_new(l2e_get_mfn(*pl2e)); + pl1e = map_xen_pagetable(l2e_get_mfn(*pl2e)); pl1e += l1_table_offset(va); put_page_and_type(mfn_to_page(l1e_get_mfn(*pl1e))); @@ -183,9 +183,9 @@ static void __init replace_va_mapping(struct domain *d, l4_pgentry_t *l4start, *pl1e = l1e_from_mfn(mfn, (!is_pv_32bit_domain(d) ? L1_PROT : COMPAT_L1_PROT)); - UNMAP_XEN_PAGETABLE_NEW(pl1e); - UNMAP_XEN_PAGETABLE_NEW(pl2e); - UNMAP_XEN_PAGETABLE_NEW(pl3e); + UNMAP_XEN_PAGETABLE(pl1e); + UNMAP_XEN_PAGETABLE(pl2e); + UNMAP_XEN_PAGETABLE(pl3e); } static void evtchn_reserve(struct domain *d, unsigned int port) diff --git a/xen/arch/x86/setup.c b/xen/arch/x86/setup.c index 1c90559288..e964c032f6 100644 --- a/xen/arch/x86/setup.c +++ b/xen/arch/x86/setup.c @@ -1101,7 +1101,7 @@ void __init noreturn __start_xen(unsigned long mbi_p) continue; *pl4e = l4e_from_intpte(l4e_get_intpte(*pl4e) + xen_phys_start); - pl3e = l3t = map_xen_pagetable_new(l4e_get_mfn(*pl4e)); + pl3e = l3t = map_xen_pagetable(l4e_get_mfn(*pl4e)); for ( j = 0; j < L3_PAGETABLE_ENTRIES; j++, pl3e++ ) { l2_pgentry_t *l2t; @@ -1113,7 +1113,7 @@ void __init noreturn __start_xen(unsigned long mbi_p) continue; *pl3e = l3e_from_intpte(l3e_get_intpte(*pl3e) + xen_phys_start); - pl2e = l2t = map_xen_pagetable_new(l3e_get_mfn(*pl3e)); + pl2e = l2t = map_xen_pagetable(l3e_get_mfn(*pl3e)); for ( k = 0; k < L2_PAGETABLE_ENTRIES; k++, pl2e++ ) { /* Not present, PSE, or 
already relocated? */ @@ -1124,9 +1124,9 @@ void __init noreturn __start_xen(unsigned long mbi_p) *pl2e = l2e_from_intpte(l2e_get_intpte(*pl2e) + xen_phys_start); } - UNMAP_XEN_PAGETABLE_NEW(l2t); + UNMAP_XEN_PAGETABLE(l2t); } - UNMAP_XEN_PAGETABLE_NEW(l3t); + UNMAP_XEN_PAGETABLE(l3t); } /* The only data mappings to be relocated are in the Xen area. */ diff --git a/xen/arch/x86/smpboot.c b/xen/arch/x86/smpboot.c index d657ac0108..53f9173f37 100644 --- a/xen/arch/x86/smpboot.c +++ b/xen/arch/x86/smpboot.c @@ -689,7 +689,7 @@ static int clone_mapping(const void *ptr, root_pgentry_t *rpt) goto out; } - pl3e = map_xen_pagetable_new( + pl3e = map_xen_pagetable( l4e_get_mfn(idle_pg_table[root_table_offset(linear)])); pl3e += l3_table_offset(linear); @@ -703,7 +703,7 @@ static int clone_mapping(const void *ptr, root_pgentry_t *rpt) } else { - pl2e = map_xen_pagetable_new(l3e_get_mfn(*pl3e)); + pl2e = map_xen_pagetable(l3e_get_mfn(*pl3e)); pl2e += l2_table_offset(linear); flags = l2e_get_flags(*pl2e); ASSERT(flags & _PAGE_PRESENT); @@ -715,7 +715,7 @@ static int clone_mapping(const void *ptr, root_pgentry_t *rpt) } else { - pl1e = map_xen_pagetable_new(l2e_get_mfn(*pl2e)); + pl1e = map_xen_pagetable(l2e_get_mfn(*pl2e)); pl1e += l1_table_offset(linear); flags = l1e_get_flags(*pl1e); if ( !(flags & _PAGE_PRESENT) ) @@ -727,13 +727,13 @@ static int clone_mapping(const void *ptr, root_pgentry_t *rpt) } } - UNMAP_XEN_PAGETABLE_NEW(pl1e); - UNMAP_XEN_PAGETABLE_NEW(pl2e); - UNMAP_XEN_PAGETABLE_NEW(pl3e); + UNMAP_XEN_PAGETABLE(pl1e); + UNMAP_XEN_PAGETABLE(pl2e); + UNMAP_XEN_PAGETABLE(pl3e); if ( !(root_get_flags(rpt[root_table_offset(linear)]) & _PAGE_PRESENT) ) { - mfn_t l3t_mfn = alloc_xen_pagetable_new(); + mfn_t l3t_mfn = alloc_xen_pagetable(); if ( mfn_eq(l3t_mfn, INVALID_MFN) ) { @@ -741,20 +741,20 @@ static int clone_mapping(const void *ptr, root_pgentry_t *rpt) goto out; } - pl3e = map_xen_pagetable_new(l3t_mfn); + pl3e = map_xen_pagetable(l3t_mfn); clear_page(pl3e); 
l4e_write(&rpt[root_table_offset(linear)], l4e_from_mfn(l3t_mfn, __PAGE_HYPERVISOR)); } else - pl3e = map_xen_pagetable_new( + pl3e = map_xen_pagetable( l4e_get_mfn(rpt[root_table_offset(linear)])); pl3e += l3_table_offset(linear); if ( !(l3e_get_flags(*pl3e) & _PAGE_PRESENT) ) { - mfn_t l2t_mfn = alloc_xen_pagetable_new(); + mfn_t l2t_mfn = alloc_xen_pagetable(); if ( mfn_eq(l2t_mfn, INVALID_MFN) ) { @@ -762,21 +762,21 @@ static int clone_mapping(const void *ptr, root_pgentry_t *rpt) goto out; } - pl2e = map_xen_pagetable_new(l2t_mfn); + pl2e = map_xen_pagetable(l2t_mfn); clear_page(pl2e); l3e_write(pl3e, l3e_from_mfn(l2t_mfn, __PAGE_HYPERVISOR)); } else { ASSERT(!(l3e_get_flags(*pl3e) & _PAGE_PSE)); - pl2e = map_xen_pagetable_new(l3e_get_mfn(*pl3e)); + pl2e = map_xen_pagetable(l3e_get_mfn(*pl3e)); } pl2e += l2_table_offset(linear); if ( !(l2e_get_flags(*pl2e) & _PAGE_PRESENT) ) { - mfn_t l1t_mfn = alloc_xen_pagetable_new(); + mfn_t l1t_mfn = alloc_xen_pagetable(); if ( mfn_eq(l1t_mfn, INVALID_MFN) ) { @@ -784,14 +784,14 @@ static int clone_mapping(const void *ptr, root_pgentry_t *rpt) goto out; } - pl1e = map_xen_pagetable_new(l1t_mfn); + pl1e = map_xen_pagetable(l1t_mfn); clear_page(pl1e); l2e_write(pl2e, l2e_from_mfn(l1t_mfn, __PAGE_HYPERVISOR)); } else { ASSERT(!(l2e_get_flags(*pl2e) & _PAGE_PSE)); - pl1e = map_xen_pagetable_new(l2e_get_mfn(*pl2e)); + pl1e = map_xen_pagetable(l2e_get_mfn(*pl2e)); } pl1e += l1_table_offset(linear); @@ -807,9 +807,9 @@ static int clone_mapping(const void *ptr, root_pgentry_t *rpt) rc = 0; out: - UNMAP_XEN_PAGETABLE_NEW(pl1e); - UNMAP_XEN_PAGETABLE_NEW(pl2e); - UNMAP_XEN_PAGETABLE_NEW(pl3e); + UNMAP_XEN_PAGETABLE(pl1e); + UNMAP_XEN_PAGETABLE(pl2e); + UNMAP_XEN_PAGETABLE(pl3e); return rc; } @@ -832,14 +832,14 @@ static int setup_cpu_root_pgt(unsigned int cpu) goto out; } - rpt_mfn = alloc_xen_pagetable_new(); + rpt_mfn = alloc_xen_pagetable(); if ( mfn_eq(rpt_mfn, INVALID_MFN) ) { rc = -ENOMEM; goto out; } - rpt = 
map_xen_pagetable_new(rpt_mfn);
+    rpt = map_xen_pagetable(rpt_mfn);
     clear_page(rpt);
 
     per_cpu(root_pgt_mfn, cpu) = rpt_mfn;
@@ -884,7 +884,7 @@ static int setup_cpu_root_pgt(unsigned int cpu)
         rc = clone_mapping((void *)per_cpu(stubs.addr, cpu), rpt);
 
  out:
-    UNMAP_XEN_PAGETABLE_NEW(rpt);
+    UNMAP_XEN_PAGETABLE(rpt);
 
     return rc;
 }
@@ -900,7 +900,7 @@ static void cleanup_cpu_root_pgt(unsigned int cpu)
 
     per_cpu(root_pgt_mfn, cpu) = INVALID_MFN;
 
-    rpt = map_xen_pagetable_new(rpt_mfn);
+    rpt = map_xen_pagetable(rpt_mfn);
 
     for ( r = root_table_offset(DIRECTMAP_VIRT_START);
           r < root_table_offset(HYPERVISOR_VIRT_END); ++r )
@@ -913,7 +913,7 @@ static void cleanup_cpu_root_pgt(unsigned int cpu)
             continue;
 
         l3t_mfn = l4e_get_mfn(rpt[r]);
-        l3t = map_xen_pagetable_new(l3t_mfn);
+        l3t = map_xen_pagetable(l3t_mfn);
 
         for ( i3 = 0; i3 < L3_PAGETABLE_ENTRIES; ++i3 )
         {
@@ -926,7 +926,7 @@ static void cleanup_cpu_root_pgt(unsigned int cpu)
             ASSERT(!(l3e_get_flags(l3t[i3]) & _PAGE_PSE));
 
             l2t_mfn = l3e_get_mfn(l3t[i3]);
-            l2t = map_xen_pagetable_new(l2t_mfn);
+            l2t = map_xen_pagetable(l2t_mfn);
 
             for ( i2 = 0; i2 < L2_PAGETABLE_ENTRIES; ++i2 )
             {
@@ -934,34 +934,34 @@ static void cleanup_cpu_root_pgt(unsigned int cpu)
                     continue;
 
                 ASSERT(!(l2e_get_flags(l2t[i2]) & _PAGE_PSE));
-                free_xen_pagetable_new(l2e_get_mfn(l2t[i2]));
+                free_xen_pagetable(l2e_get_mfn(l2t[i2]));
             }
 
-            UNMAP_XEN_PAGETABLE_NEW(l2t);
-            free_xen_pagetable_new(l2t_mfn);
+            UNMAP_XEN_PAGETABLE(l2t);
+            free_xen_pagetable(l2t_mfn);
         }
 
-        UNMAP_XEN_PAGETABLE_NEW(l3t);
-        free_xen_pagetable_new(l3t_mfn);
+        UNMAP_XEN_PAGETABLE(l3t);
+        free_xen_pagetable(l3t_mfn);
     }
 
-    UNMAP_XEN_PAGETABLE_NEW(rpt);
-    free_xen_pagetable_new(rpt_mfn);
+    UNMAP_XEN_PAGETABLE(rpt);
+    free_xen_pagetable(rpt_mfn);
 
     /* Also zap the stub mapping for this CPU. */
     if ( stub_linear )
     {
-        l3_pgentry_t *l3t = map_xen_pagetable_new(l4e_get_mfn(common_pgt));
-        l2_pgentry_t *l2t = map_xen_pagetable_new(
+        l3_pgentry_t *l3t = map_xen_pagetable(l4e_get_mfn(common_pgt));
+        l2_pgentry_t *l2t = map_xen_pagetable(
             l3e_get_mfn(l3t[l3_table_offset(stub_linear)]));
-        l1_pgentry_t *l1t = map_xen_pagetable_new(
+        l1_pgentry_t *l1t = map_xen_pagetable(
             l2e_get_mfn(l2t[l2_table_offset(stub_linear)]));
 
         l1t[l1_table_offset(stub_linear)] = l1e_empty();
-        UNMAP_XEN_PAGETABLE_NEW(l1t);
-        UNMAP_XEN_PAGETABLE_NEW(l2t);
-        UNMAP_XEN_PAGETABLE_NEW(l3t);
+        UNMAP_XEN_PAGETABLE(l1t);
+        UNMAP_XEN_PAGETABLE(l2t);
+        UNMAP_XEN_PAGETABLE(l3t);
     }
 }
diff --git a/xen/arch/x86/x86_64/mm.c b/xen/arch/x86/x86_64/mm.c
index 6f37bc4c15..37e8d59e5d 100644
--- a/xen/arch/x86/x86_64/mm.c
+++ b/xen/arch/x86/x86_64/mm.c
@@ -134,7 +134,7 @@ static int m2p_mapped(unsigned long spfn)
     int rc = M2P_NO_MAPPED;
 
     va = RO_MPT_VIRT_START + spfn * sizeof(*machine_to_phys_mapping);
-    l3_ro_mpt = map_xen_pagetable_new(
+    l3_ro_mpt = map_xen_pagetable(
         l4e_get_mfn(idle_pg_table[l4_table_offset(va)]));
 
     switch ( l3e_get_flags(l3_ro_mpt[l3_table_offset(va)]) &
@@ -150,15 +150,15 @@ static int m2p_mapped(unsigned long spfn)
         rc = M2P_NO_MAPPED;
         goto out;
     }
-    l2_ro_mpt = map_xen_pagetable_new(
+    l2_ro_mpt = map_xen_pagetable(
         l3e_get_mfn(l3_ro_mpt[l3_table_offset(va)]));
 
     if (l2e_get_flags(l2_ro_mpt[l2_table_offset(va)]) & _PAGE_PRESENT)
         rc = M2P_2M_MAPPED;
 
  out:
-    UNMAP_XEN_PAGETABLE_NEW(l2_ro_mpt);
-    UNMAP_XEN_PAGETABLE_NEW(l3_ro_mpt);
+    UNMAP_XEN_PAGETABLE(l2_ro_mpt);
+    UNMAP_XEN_PAGETABLE(l3_ro_mpt);
     return rc;
 }
 
@@ -176,10 +176,10 @@ static int share_hotadd_m2p_table(struct mem_hotadd_info *info)
         {
             n = L2_PAGETABLE_ENTRIES * L1_PAGETABLE_ENTRIES;
 
-            l3t = map_xen_pagetable_new(
+            l3t = map_xen_pagetable(
                 l4e_get_mfn(idle_pg_table[l4_table_offset(v)]));
             l3e = l3t[l3_table_offset(v)];
-            UNMAP_XEN_PAGETABLE_NEW(l3t);
+            UNMAP_XEN_PAGETABLE(l3t);
             if ( !(l3e_get_flags(l3e) & _PAGE_PRESENT) )
                 continue;
@@ -187,9 +187,9 @@ static int share_hotadd_m2p_table(struct mem_hotadd_info *info)
             {
                 n = L1_PAGETABLE_ENTRIES;
-                l2t = map_xen_pagetable_new(l3e_get_mfn(l3e));
+                l2t = map_xen_pagetable(l3e_get_mfn(l3e));
                 l2e = l2t[l2_table_offset(v)];
-                UNMAP_XEN_PAGETABLE_NEW(l2t);
+                UNMAP_XEN_PAGETABLE(l2t);
                 if ( !(l2e_get_flags(l2e) & _PAGE_PRESENT) )
                     continue;
@@ -211,17 +211,17 @@ static int share_hotadd_m2p_table(struct mem_hotadd_info *info)
           v != RDWR_COMPAT_MPT_VIRT_END;
           v += 1 << L2_PAGETABLE_SHIFT )
     {
-        l3t = map_xen_pagetable_new(
+        l3t = map_xen_pagetable(
             l4e_get_mfn(idle_pg_table[l4_table_offset(v)]));
         l3e = l3t[l3_table_offset(v)];
-        UNMAP_XEN_PAGETABLE_NEW(l3t);
+        UNMAP_XEN_PAGETABLE(l3t);
         if ( !(l3e_get_flags(l3e) & _PAGE_PRESENT) )
             continue;
-        l2t = map_xen_pagetable_new(l3e_get_mfn(l3e));
+        l2t = map_xen_pagetable(l3e_get_mfn(l3e));
         l2e = l2t[l2_table_offset(v)];
-        UNMAP_XEN_PAGETABLE_NEW(l2t);
+        UNMAP_XEN_PAGETABLE(l2t);
         if ( !(l2e_get_flags(l2e) & _PAGE_PRESENT) )
             continue;
@@ -252,12 +252,12 @@ static void destroy_compat_m2p_mapping(struct mem_hotadd_info *info)
     if ( emap > ((RDWR_COMPAT_MPT_VIRT_END - RDWR_COMPAT_MPT_VIRT_START) >> 2) )
         emap = (RDWR_COMPAT_MPT_VIRT_END - RDWR_COMPAT_MPT_VIRT_START) >> 2;
 
-    l3_ro_mpt = map_xen_pagetable_new(
+    l3_ro_mpt = map_xen_pagetable(
         l4e_get_mfn(idle_pg_table[l4_table_offset(HIRO_COMPAT_MPT_VIRT_START)]));
 
     ASSERT(l3e_get_flags(l3_ro_mpt[l3_table_offset(HIRO_COMPAT_MPT_VIRT_START)]) & _PAGE_PRESENT);
 
-    l2_ro_mpt = map_xen_pagetable_new(
+    l2_ro_mpt = map_xen_pagetable(
         l3e_get_mfn(l3_ro_mpt[l3_table_offset(HIRO_COMPAT_MPT_VIRT_START)]));
 
     for ( i = smap; i < emap; )
@@ -280,8 +280,8 @@ static void destroy_compat_m2p_mapping(struct mem_hotadd_info *info)
         i += 1UL << (L2_PAGETABLE_SHIFT - 2);
     }
 
-    UNMAP_XEN_PAGETABLE_NEW(l2_ro_mpt);
-    UNMAP_XEN_PAGETABLE_NEW(l3_ro_mpt);
+    UNMAP_XEN_PAGETABLE(l2_ro_mpt);
+    UNMAP_XEN_PAGETABLE(l3_ro_mpt);
 
     return;
 }
 
@@ -292,7 +292,7 @@ static void destroy_m2p_mapping(struct mem_hotadd_info *info)
     unsigned long i, va, rwva;
     unsigned long smap = info->spfn, emap = info->epfn;
 
-    l3_ro_mpt = map_xen_pagetable_new(
+    l3_ro_mpt = map_xen_pagetable(
         l4e_get_mfn(idle_pg_table[l4_table_offset(RO_MPT_VIRT_START)]));
 
     /*
@@ -315,13 +315,13 @@ static void destroy_m2p_mapping(struct mem_hotadd_info *info)
             continue;
         }
 
-        l2_ro_mpt = map_xen_pagetable_new(
+        l2_ro_mpt = map_xen_pagetable(
             l3e_get_mfn(l3_ro_mpt[l3_table_offset(va)]));
         if (!(l2e_get_flags(l2_ro_mpt[l2_table_offset(va)]) & _PAGE_PRESENT))
         {
             i = ( i & ~((1UL << (L2_PAGETABLE_SHIFT - 3)) - 1)) +
                 (1UL << (L2_PAGETABLE_SHIFT - 3)) ;
-            UNMAP_XEN_PAGETABLE_NEW(l2_ro_mpt);
+            UNMAP_XEN_PAGETABLE(l2_ro_mpt);
             continue;
         }
 
@@ -332,17 +332,17 @@ static void destroy_m2p_mapping(struct mem_hotadd_info *info)
             destroy_xen_mappings(rwva, rwva + (1UL << L2_PAGETABLE_SHIFT));
 
-            l2t = map_xen_pagetable_new(
+            l2t = map_xen_pagetable(
                 l3e_get_mfn(l3_ro_mpt[l3_table_offset(va)]));
             l2e_write(&l2t[l2_table_offset(va)], l2e_empty());
-            UNMAP_XEN_PAGETABLE_NEW(l2t);
+            UNMAP_XEN_PAGETABLE(l2t);
         }
 
         i = ( i & ~((1UL << (L2_PAGETABLE_SHIFT - 3)) - 1)) +
               (1UL << (L2_PAGETABLE_SHIFT - 3));
-        UNMAP_XEN_PAGETABLE_NEW(l2_ro_mpt);
+        UNMAP_XEN_PAGETABLE(l2_ro_mpt);
     }
 
-    UNMAP_XEN_PAGETABLE_NEW(l3_ro_mpt);
+    UNMAP_XEN_PAGETABLE(l3_ro_mpt);
 
     destroy_compat_m2p_mapping(info);
 
@@ -382,12 +382,12 @@ static int setup_compat_m2p_table(struct mem_hotadd_info *info)
     va = HIRO_COMPAT_MPT_VIRT_START +
          smap * sizeof(*compat_machine_to_phys_mapping);
-    l3_ro_mpt = map_xen_pagetable_new(
+    l3_ro_mpt = map_xen_pagetable(
         l4e_get_mfn(idle_pg_table[l4_table_offset(va)]));
 
     ASSERT(l3e_get_flags(l3_ro_mpt[l3_table_offset(va)]) & _PAGE_PRESENT);
 
-    l2_ro_mpt = map_xen_pagetable_new(
+    l2_ro_mpt = map_xen_pagetable(
         l3e_get_mfn(l3_ro_mpt[l3_table_offset(va)]));
 
 #define MFN(x) (((x) << L2_PAGETABLE_SHIFT) / sizeof(unsigned int))
@@ -427,8 +427,8 @@ static int setup_compat_m2p_table(struct mem_hotadd_info *info)
 #undef CNT
 #undef MFN
 
-    UNMAP_XEN_PAGETABLE_NEW(l2_ro_mpt);
-    UNMAP_XEN_PAGETABLE_NEW(l3_ro_mpt);
+    UNMAP_XEN_PAGETABLE(l2_ro_mpt);
+    UNMAP_XEN_PAGETABLE(l3_ro_mpt);
     return err;
 }
 
@@ -449,7 +449,7 @@ static int setup_m2p_table(struct mem_hotadd_info *info)
            & _PAGE_PRESENT);
     l3_ro_mpt_mfn = l4e_get_mfn(idle_pg_table[l4_table_offset(
         RO_MPT_VIRT_START)]);
-    l3_ro_mpt = map_xen_pagetable_new(l3_ro_mpt_mfn);
+    l3_ro_mpt = map_xen_pagetable(l3_ro_mpt_mfn);
 
     smap = (info->spfn & (~((1UL << (L2_PAGETABLE_SHIFT - 3)) -1)));
     emap = ((info->epfn + ((1UL << (L2_PAGETABLE_SHIFT - 3)) - 1 )) &
@@ -505,23 +505,23 @@ static int setup_m2p_table(struct mem_hotadd_info *info)
             if ( l3e_get_flags(l3_ro_mpt[l3_table_offset(va)]) & _PAGE_PRESENT )
             {
-                UNMAP_XEN_PAGETABLE_NEW(l2_ro_mpt);
+                UNMAP_XEN_PAGETABLE(l2_ro_mpt);
                 l2_ro_mpt_mfn = l3e_get_mfn(l3_ro_mpt[l3_table_offset(va)]);
-                l2_ro_mpt = map_xen_pagetable_new(l2_ro_mpt_mfn);
+                l2_ro_mpt = map_xen_pagetable(l2_ro_mpt_mfn);
                 ASSERT(l2_ro_mpt);
                 pl2e = l2_ro_mpt + l2_table_offset(va);
             }
             else
             {
-                UNMAP_XEN_PAGETABLE_NEW(l2_ro_mpt);
-                l2_ro_mpt_mfn = alloc_xen_pagetable_new();
+                UNMAP_XEN_PAGETABLE(l2_ro_mpt);
+                l2_ro_mpt_mfn = alloc_xen_pagetable();
                 if ( mfn_eq(l2_ro_mpt_mfn, INVALID_MFN) )
                 {
                     ret = -ENOMEM;
                     goto error;
                 }
 
-                l2_ro_mpt = map_xen_pagetable_new(l2_ro_mpt_mfn);
+                l2_ro_mpt = map_xen_pagetable(l2_ro_mpt_mfn);
                 clear_page(l2_ro_mpt);
                 l3e_write(&l3_ro_mpt[l3_table_offset(va)],
                           l3e_from_mfn(l2_ro_mpt_mfn,
@@ -541,8 +541,8 @@ static int setup_m2p_table(struct mem_hotadd_info *info)
         ret = setup_compat_m2p_table(info);
 error:
-    UNMAP_XEN_PAGETABLE_NEW(l2_ro_mpt);
-    UNMAP_XEN_PAGETABLE_NEW(l3_ro_mpt);
+    UNMAP_XEN_PAGETABLE(l2_ro_mpt);
+    UNMAP_XEN_PAGETABLE(l3_ro_mpt);
     return ret;
 }
 
@@ -569,23 +569,23 @@ void __init paging_init(void)
             l3_pgentry_t *pl3t;
             mfn_t mfn;
 
-            mfn = alloc_xen_pagetable_new();
+            mfn = alloc_xen_pagetable();
             if ( mfn_eq(mfn, INVALID_MFN) )
                 goto nomem;
 
-            pl3t = map_xen_pagetable_new(mfn);
+            pl3t = map_xen_pagetable(mfn);
             clear_page(pl3t);
             l4e_write(&idle_pg_table[l4_table_offset(va)],
                       l4e_from_mfn(mfn, __PAGE_HYPERVISOR_RW));
-            UNMAP_XEN_PAGETABLE_NEW(pl3t);
+            UNMAP_XEN_PAGETABLE(pl3t);
         }
     }
 
     /* Create user-accessible L2 directory to map the MPT for guests. */
-    l3_ro_mpt_mfn = alloc_xen_pagetable_new();
+    l3_ro_mpt_mfn = alloc_xen_pagetable();
     if ( mfn_eq(l3_ro_mpt_mfn, INVALID_MFN) )
         goto nomem;
 
-    l3_ro_mpt = map_xen_pagetable_new(l3_ro_mpt_mfn);
+    l3_ro_mpt = map_xen_pagetable(l3_ro_mpt_mfn);
     clear_page(l3_ro_mpt);
     l4e_write(&idle_pg_table[l4_table_offset(RO_MPT_VIRT_START)],
               l4e_from_mfn(l3_ro_mpt_mfn, __PAGE_HYPERVISOR_RO | _PAGE_USER));
 
@@ -675,13 +675,13 @@ void __init paging_init(void)
              * Unmap l2_ro_mpt, which could've been mapped in previous
              * iteration.
              */
-            unmap_xen_pagetable_new(l2_ro_mpt);
+            unmap_xen_pagetable(l2_ro_mpt);
 
-            l2_ro_mpt_mfn = alloc_xen_pagetable_new();
+            l2_ro_mpt_mfn = alloc_xen_pagetable();
             if ( mfn_eq(l2_ro_mpt_mfn, INVALID_MFN) )
                 goto nomem;
 
-            l2_ro_mpt = map_xen_pagetable_new(l2_ro_mpt_mfn);
+            l2_ro_mpt = map_xen_pagetable(l2_ro_mpt_mfn);
             clear_page(l2_ro_mpt);
             l3e_write(&l3_ro_mpt[l3_table_offset(va)],
                       l3e_from_mfn(l2_ro_mpt_mfn,
@@ -697,8 +697,8 @@ void __init paging_init(void)
     }
 #undef CNT
 #undef MFN
-    UNMAP_XEN_PAGETABLE_NEW(l2_ro_mpt);
-    UNMAP_XEN_PAGETABLE_NEW(l3_ro_mpt);
+    UNMAP_XEN_PAGETABLE(l2_ro_mpt);
+    UNMAP_XEN_PAGETABLE(l3_ro_mpt);
 
     /* Create user-accessible L2 directory to map the MPT for compat guests. */
     BUILD_BUG_ON(l4_table_offset(RDWR_MPT_VIRT_START) !=
@@ -706,12 +706,12 @@ void __init paging_init(void)
     l3_ro_mpt_mfn = l4e_get_mfn(idle_pg_table[l4_table_offset(
         HIRO_COMPAT_MPT_VIRT_START)]);
-    l3_ro_mpt = map_xen_pagetable_new(l3_ro_mpt_mfn);
+    l3_ro_mpt = map_xen_pagetable(l3_ro_mpt_mfn);
 
-    l2_ro_mpt_mfn = alloc_xen_pagetable_new();
+    l2_ro_mpt_mfn = alloc_xen_pagetable();
     if ( mfn_eq(l2_ro_mpt_mfn, INVALID_MFN) )
         goto nomem;
 
-    l2_ro_mpt = map_xen_pagetable_new(l2_ro_mpt_mfn);
+    l2_ro_mpt = map_xen_pagetable(l2_ro_mpt_mfn);
     compat_idle_pg_table_l2 = l2_ro_mpt;
     clear_page(l2_ro_mpt);
     l3e_write(&l3_ro_mpt[l3_table_offset(HIRO_COMPAT_MPT_VIRT_START)],
@@ -757,8 +757,8 @@ void __init paging_init(void)
 #undef CNT
 #undef MFN
 
-    UNMAP_XEN_PAGETABLE_NEW(l2_ro_mpt);
-    UNMAP_XEN_PAGETABLE_NEW(l3_ro_mpt);
+    UNMAP_XEN_PAGETABLE(l2_ro_mpt);
+    UNMAP_XEN_PAGETABLE(l3_ro_mpt);
 
     machine_to_phys_mapping_valid = 1;
 
@@ -816,10 +816,10 @@ static void cleanup_frame_table(struct mem_hotadd_info *info)
 
     while (sva < eva)
     {
-        l3t = map_xen_pagetable_new(
+        l3t = map_xen_pagetable(
             l4e_get_mfn(idle_pg_table[l4_table_offset(sva)]));
         l3e = l3t[l3_table_offset(sva)];
-        UNMAP_XEN_PAGETABLE_NEW(l3t);
+        UNMAP_XEN_PAGETABLE(l3t);
         if ( !(l3e_get_flags(l3e) & _PAGE_PRESENT) ||
              (l3e_get_flags(l3e) & _PAGE_PSE) )
         {
@@ -828,9 +828,9 @@ static void cleanup_frame_table(struct mem_hotadd_info *info)
             continue;
         }
 
-        l2t = map_xen_pagetable_new(l3e_get_mfn(l3e));
+        l2t = map_xen_pagetable(l3e_get_mfn(l3e));
         l2e = l2t[l2_table_offset(sva)];
-        UNMAP_XEN_PAGETABLE_NEW(l2t);
+        UNMAP_XEN_PAGETABLE(l2t);
         ASSERT(l2e_get_flags(l2e) & _PAGE_PRESENT);
 
         if ( (l2e_get_flags(l2e) & (_PAGE_PRESENT | _PAGE_PSE)) ==
@@ -848,10 +848,10 @@ static void cleanup_frame_table(struct mem_hotadd_info *info)
 
 #ifndef NDEBUG
             {
-                l1_pgentry_t *l1t = map_xen_pagetable_new(l2e_get_mfn(l2e));
+                l1_pgentry_t *l1t = map_xen_pagetable(l2e_get_mfn(l2e));
                 ASSERT(l1e_get_flags(l1t[l1_table_offset(sva)]) &
                        _PAGE_PRESENT);
-                UNMAP_XEN_PAGETABLE_NEW(l1t);
+                UNMAP_XEN_PAGETABLE(l1t);
             }
 #endif
             sva = (sva & ~((1UL << PAGE_SHIFT) - 1)) +
@@ -942,10 +942,10 @@ void __init subarch_init_memory(void)
         {
             n = L2_PAGETABLE_ENTRIES * L1_PAGETABLE_ENTRIES;
 
-            l3t = map_xen_pagetable_new(
+            l3t = map_xen_pagetable(
                 l4e_get_mfn(idle_pg_table[l4_table_offset(v)]));
             l3e = l3t[l3_table_offset(v)];
-            UNMAP_XEN_PAGETABLE_NEW(l3t);
+            UNMAP_XEN_PAGETABLE(l3t);
             if ( !(l3e_get_flags(l3e) & _PAGE_PRESENT) )
                 continue;
@@ -953,9 +953,9 @@ void __init subarch_init_memory(void)
             {
                 n = L1_PAGETABLE_ENTRIES;
-                l2t = map_xen_pagetable_new(l3e_get_mfn(l3e));
+                l2t = map_xen_pagetable(l3e_get_mfn(l3e));
                 l2e = l2t[l2_table_offset(v)];
-                UNMAP_XEN_PAGETABLE_NEW(l2t);
+                UNMAP_XEN_PAGETABLE(l2t);
                 if ( !(l2e_get_flags(l2e) & _PAGE_PRESENT) )
                     continue;
@@ -975,17 +975,17 @@ void __init subarch_init_memory(void)
           v != RDWR_COMPAT_MPT_VIRT_END;
           v += 1 << L2_PAGETABLE_SHIFT )
     {
-        l3t = map_xen_pagetable_new(
+        l3t = map_xen_pagetable(
             l4e_get_mfn(idle_pg_table[l4_table_offset(v)]));
         l3e = l3t[l3_table_offset(v)];
-        UNMAP_XEN_PAGETABLE_NEW(l3t);
+        UNMAP_XEN_PAGETABLE(l3t);
         if ( !(l3e_get_flags(l3e) & _PAGE_PRESENT) )
             continue;
-        l2t = map_xen_pagetable_new(l3e_get_mfn(l3e));
+        l2t = map_xen_pagetable(l3e_get_mfn(l3e));
         l2e = l2t[l2_table_offset(v)];
-        UNMAP_XEN_PAGETABLE_NEW(l2t);
+        UNMAP_XEN_PAGETABLE(l2t);
         if ( !(l2e_get_flags(l2e) & _PAGE_PRESENT) )
             continue;
@@ -1036,18 +1036,18 @@ long subarch_memory_op(unsigned long cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
               (v < (unsigned long)(machine_to_phys_mapping + max_page));
               i++, v += 1UL << L2_PAGETABLE_SHIFT )
         {
-            l3t = map_xen_pagetable_new(
+            l3t = map_xen_pagetable(
                 l4e_get_mfn(idle_pg_table[l4_table_offset(v)]));
             l3e = l3t[l3_table_offset(v)];
-            UNMAP_XEN_PAGETABLE_NEW(l3t);
+            UNMAP_XEN_PAGETABLE(l3t);
             if ( !(l3e_get_flags(l3e) & _PAGE_PRESENT) )
                 mfn = last_mfn;
             else if ( !(l3e_get_flags(l3e) & _PAGE_PSE) )
             {
-                l2t = map_xen_pagetable_new(l3e_get_mfn(l3e));
+                l2t = map_xen_pagetable(l3e_get_mfn(l3e));
                 l2e = l2t[l2_table_offset(v)];
-                UNMAP_XEN_PAGETABLE_NEW(l2t);
+                UNMAP_XEN_PAGETABLE(l2t);
                 if ( l2e_get_flags(l2e) & _PAGE_PRESENT )
                     mfn = l2e_get_pfn(l2e);
                 else
diff --git a/xen/common/efi/boot.c b/xen/common/efi/boot.c
index f55d6a6d76..d47067c998 100644
--- a/xen/common/efi/boot.c
+++ b/xen/common/efi/boot.c
@@ -1443,20 +1443,20 @@ static __init void copy_mapping(l4_pgentry_t *l4,
         {
             mfn_t l3t_mfn;
 
-            l3t_mfn = alloc_xen_pagetable_new();
+            l3t_mfn = alloc_xen_pagetable();
             BUG_ON(mfn_eq(l3t_mfn, INVALID_MFN));
-            l3dst = map_xen_pagetable_new(l3t_mfn);
+            l3dst = map_xen_pagetable(l3t_mfn);
             clear_page(l3dst);
             l4[l4_table_offset(mfn << PAGE_SHIFT)] =
                 l4e_from_mfn(l3t_mfn, __PAGE_HYPERVISOR);
         }
         else
-            l3dst = map_xen_pagetable_new(l4e_get_mfn(l4e));
-        l3src = map_xen_pagetable_new(
+            l3dst = map_xen_pagetable(l4e_get_mfn(l4e));
+        l3src = map_xen_pagetable(
             l4e_get_mfn(idle_pg_table[l4_table_offset(va)]));
         l3dst[l3_table_offset(mfn << PAGE_SHIFT)] = l3src[l3_table_offset(va)];
-        UNMAP_XEN_PAGETABLE_NEW(l3src);
-        UNMAP_XEN_PAGETABLE_NEW(l3dst);
+        UNMAP_XEN_PAGETABLE(l3src);
+        UNMAP_XEN_PAGETABLE(l3dst);
     }
 }
 
@@ -1604,9 +1604,9 @@ void __init efi_init_memory(void)
                    mdesc_ver, efi_memmap);
 #else
     /* Set up 1:1 page tables to do runtime calls in "physical" mode. */
-    efi_l4_mfn = alloc_xen_pagetable_new();
+    efi_l4_mfn = alloc_xen_pagetable();
     BUG_ON(mfn_eq(efi_l4_mfn, INVALID_MFN));
-    efi_l4_pgtable = map_xen_pagetable_new(efi_l4_mfn);
+    efi_l4_pgtable = map_xen_pagetable(efi_l4_mfn);
     clear_page(efi_l4_pgtable);
 
     copy_mapping(efi_l4_pgtable, 0, max_page, ram_range_valid);
@@ -1641,31 +1641,31 @@ void __init efi_init_memory(void)
         {
             mfn_t l3t_mfn;
 
-            l3t_mfn = alloc_xen_pagetable_new();
+            l3t_mfn = alloc_xen_pagetable();
             BUG_ON(mfn_eq(l3t_mfn, INVALID_MFN));
-            pl3e = map_xen_pagetable_new(l3t_mfn);
+            pl3e = map_xen_pagetable(l3t_mfn);
             clear_page(pl3e);
             efi_l4_pgtable[l4_table_offset(addr)] =
                 l4e_from_mfn(l3t_mfn, __PAGE_HYPERVISOR);
         }
         else
-            pl3e = map_xen_pagetable_new(l4e_get_mfn(l4e));
+            pl3e = map_xen_pagetable(l4e_get_mfn(l4e));
         pl3e += l3_table_offset(addr);
         if ( !(l3e_get_flags(*pl3e) & _PAGE_PRESENT) )
         {
             mfn_t l2t_mfn;
 
-            l2t_mfn = alloc_xen_pagetable_new();
+            l2t_mfn = alloc_xen_pagetable();
             BUG_ON(mfn_eq(l2t_mfn, INVALID_MFN));
-            pl2e = map_xen_pagetable_new(l2t_mfn);
+            pl2e = map_xen_pagetable(l2t_mfn);
             clear_page(pl2e);
             *pl3e = l3e_from_mfn(l2t_mfn, __PAGE_HYPERVISOR);
         }
         else
         {
             BUG_ON(l3e_get_flags(*pl3e) & _PAGE_PSE);
-            pl2e = map_xen_pagetable_new(l3e_get_mfn(*pl3e));
+            pl2e = map_xen_pagetable(l3e_get_mfn(*pl3e));
         }
         pl2e += l2_table_offset(addr);
@@ -1673,16 +1673,16 @@ void __init efi_init_memory(void)
         {
             mfn_t l1t_mfn;
 
-            l1t_mfn = alloc_xen_pagetable_new();
+            l1t_mfn = alloc_xen_pagetable();
             BUG_ON(mfn_eq(l1t_mfn, INVALID_MFN));
-            l1t = map_xen_pagetable_new(l1t_mfn);
+            l1t = map_xen_pagetable(l1t_mfn);
             clear_page(l1t);
             *pl2e = l2e_from_mfn(l1t_mfn, __PAGE_HYPERVISOR);
         }
         else
         {
             BUG_ON(l2e_get_flags(*pl2e) & _PAGE_PSE);
-            l1t = map_xen_pagetable_new(l2e_get_mfn(*pl2e));
+            l1t = map_xen_pagetable(l2e_get_mfn(*pl2e));
         }
         for ( i = l1_table_offset(addr);
               i < L1_PAGETABLE_ENTRIES && extra->smfn < extra->emfn;
@@ -1695,9 +1695,9 @@ void __init efi_init_memory(void)
                 xfree(extra);
             }
 
-        UNMAP_XEN_PAGETABLE_NEW(l1t);
-        UNMAP_XEN_PAGETABLE_NEW(pl2e);
-        UNMAP_XEN_PAGETABLE_NEW(pl3e);
+        UNMAP_XEN_PAGETABLE(l1t);
+        UNMAP_XEN_PAGETABLE(pl2e);
+        UNMAP_XEN_PAGETABLE(pl3e);
     }
 
     /* Insert Xen mappings. */
@@ -1706,7 +1706,7 @@ void __init efi_init_memory(void)
         efi_l4_pgtable[i] = idle_pg_table[i];
 #endif
 
-    UNMAP_XEN_PAGETABLE_NEW(efi_l4_pgtable);
+    UNMAP_XEN_PAGETABLE(efi_l4_pgtable);
 }
 #endif
diff --git a/xen/include/asm-x86/mm.h b/xen/include/asm-x86/mm.h
index 4fb79ab8f0..a4b3c9b7af 100644
--- a/xen/include/asm-x86/mm.h
+++ b/xen/include/asm-x86/mm.h
@@ -631,15 +631,15 @@ int arch_acquire_resource(struct domain *d, unsigned int type,
                           unsigned int nr_frames, xen_pfn_t mfn_list[]);
 
 /* Allocator functions for Xen pagetables. */
-mfn_t alloc_xen_pagetable_new(void);
-void *map_xen_pagetable_new(mfn_t mfn);
-void unmap_xen_pagetable_new(void *v);
-void free_xen_pagetable_new(mfn_t mfn);
-
-#define UNMAP_XEN_PAGETABLE_NEW(ptr)    \
-    do {                                \
-        unmap_xen_pagetable_new((ptr)); \
-        (ptr) = NULL;                   \
+mfn_t alloc_xen_pagetable(void);
+void *map_xen_pagetable(mfn_t mfn);
+void unmap_xen_pagetable(void *v);
+void free_xen_pagetable(mfn_t mfn);
+
+#define UNMAP_XEN_PAGETABLE(ptr)    \
+    do {                            \
+        unmap_xen_pagetable((ptr)); \
+        (ptr) = NULL;               \
     } while (0)
 
 l1_pgentry_t *virt_to_xen_l1e(unsigned long v);