From patchwork Wed Apr 27 13:46:44 2016
X-Patchwork-Submitter: Konrad Rzeszutek Wilk
X-Patchwork-Id: 8957571
Date: Wed, 27 Apr 2016 09:46:44 -0400
From: Konrad Rzeszutek Wilk
To: Jan Beulich
Cc: Stefano Stabellini, Keir Fraser, andrew.cooper3@citrix.com,
    Ian Jackson, Tim Deegan, mpohlack@amazon.de, ross.lagerwall@citrix.com,
    Julien Grall, sasha.levin@oracle.com, xen-devel@lists.xenproject.org
Subject: Re: [Xen-devel] [PATCH v9 08/27] arm/x86/vmap: Add v[z|m]alloc_xen
 and vm_init_type
Message-ID: <20160427134644.GA26384@x230.dumpdata.com>
References: <1461598514-5440-1-git-send-email-konrad.wilk@oracle.com>
 <1461598514-5440-9-git-send-email-konrad.wilk@oracle.com>
 <571F637C02000078000E5C2E@prv-mh.provo.novell.com>
 <20160427023837.GC26540@localhost.localdomain>
 <5720827802000078000E6296@prv-mh.provo.novell.com>
In-Reply-To: <5720827802000078000E6296@prv-mh.provo.novell.com>

On Wed, Apr 27, 2016 at 01:12:24AM -0600, Jan Beulich wrote:
> >>> On 27.04.16 at 04:38, wrote:
> >> With vm_alloc() getting removed, vm_free() should get removed
> >> here too. And with that, vm_alloc_type() and vm_free_type() can
> >> then just become vm_alloc() and vm_free() respectively (as static
> >> internal functions).
> >
> > Please take a look at this inline one:
>
> Better, and it can have my ack, but it's still doing more changes than
> really needed:
>
> > +static void vunmap_pages(const void *va, unsigned int pages)
> > +{
> > +#ifndef _PAGE_NONE
> > +    unsigned long addr = (unsigned long)va;
> > +
> > +    destroy_xen_mappings(addr, addr + PAGE_SIZE * pages);
> > +#else /* Avoid tearing down intermediate page tables. */
> > +    map_pages_to_xen((unsigned long)va, 0, pages, _PAGE_NONE);
> > +#endif
> > +    vm_free(va);
> > +}
>
> There's no real reason to break this out and move up here - the
> two callers other than vunmap() could easily continue to call
> vunmap().
> The more that you do not similarly leverage knowing
> the type here already (all callers of vunmap_pages() already
> know the type, and hence could pass it here).

/me nods.

> > +void vunmap(const void *va)
> > +{
> > +    enum vmap_region type = VMAP_DEFAULT;
>
> If vunmap_pages() was to stay, and was to continue to not have a
> type parameter, this local variable is pointless.
>
> > @@ -266,16 +308,32 @@ void *vzalloc(size_t size)
> >      return p;
> >  }
> >
> > +void *vzalloc(size_t size)
> > +{
> > +    return vzalloc_type(size, VMAP_DEFAULT);
> > +}
> > +
> > +void *vzalloc_xen(size_t size)
> > +{
> > +    return vzalloc_type(size, VMAP_XEN);
> > +}
>
> I didn't look at your replies to the later patches yet, but considering
> my reply to the one using vzalloc_xen() I wonder whether in fact
> you still need this flavor (and hence vzalloc_type()).

/me nods. Then this should be perfect:

From cef95bc0682f94ca5e61609211c4787491212acf Mon Sep 17 00:00:00 2001
From: Konrad Rzeszutek Wilk
Date: Tue, 26 Apr 2016 14:03:06 -0400
Subject: [PATCH] arm/x86/vmap: Add vmalloc_xen and vm_init_type

For those users who want to use virtual addresses that are in the
hypervisor's code/data region address space, these two new functions
(vm_init_type and vmalloc_xen) allow that.

Implementation-wise, the vmap API now keeps track of two virtual
address regions:

 a) VMAP_VIRT_START
 b) Any provided virtual address space (which needs a start and an end).

Region a) is the default, and the existing behavior for users of
vmalloc, vmap, etc. is unchanged. If, however, one wishes to use
region b), one only has to call vm_init_type to initialize it and
vmalloc_xen to allocate from it (vfree and vunmap are capable of
searching both address spaces).

This allows users (such as xSplice) to provide their own mechanism to
change the page flags, and also to use virtual addresses closer to the
hypervisor virtual addresses (at least on x86), without having to deal
with the allocation of pages.

For an example of a user, see the patch titled "xsplice: Implement
payload loading", where we parse the payload's ELF relocations - which
are defined to be signed 32-bit on x86 (the maximum displacement hence
is 2GB of virtual space; on ARM32 it is 128MB). The displacement
between the hypervisor virtual addresses and the vmalloc area (on x86)
is more than 32 bits - which means that ELF relocations would truncate
the 33rd and 34th bits. Hence this alternate API.

We also add extra checks in case the b) range has not been
initialized.

Part of this patch also removes the 'vm_alloc' and 'vm_free'
declarations from the header, as there are no external users of them.

Signed-off-by: Konrad Rzeszutek Wilk
Suggested-by: Jan Beulich
Acked-by: Julien Grall [ARM]
Reviewed-by: Jan Beulich
---
Cc: Ian Jackson
Cc: Jan Beulich
Cc: Keir Fraser
Cc: Tim Deegan
Cc: Stefano Stabellini
Cc: Julien Grall

v4: New patch.
v5: Update per Jan's comments.
v6: Drop the stray parentheses on typedefs.
    Ditch the vunmap callback.
    Stash away the virtual addresses in lists.
    Ditch the vmap callback. Just provide virtual address.
    Ditch the vmalloc_range. Require users of alternative virtual
    addresses to call vmap_init_type first.
v7: Don't expose the vmalloc_type and such. Instead provide a wrapper
    called vmalloc_xen for those. Rename the enum, change one of the
    names. Moved the vunmap_type around in the .c file so we don't
    have to declare it in the header.
v9: Remove the vunmap_xen, removed vm_alloc from the header.
    Add vzalloc_xen.
v10: Properly ASSERT on ranges.
     Make vm_free and vunmap automatically detect the right va space.
     Remove vm_free from the header.
     Rename vm_alloc_type and vm_free_type to vm_alloc and vm_free
     respectively.
v10 - inline patch set in v8:
     Ditch the vzalloc_xen.
     Squash vunmap and vunmap_pages together. Move back to original
     position.
     Drop vzalloc_type and only expose vzalloc.
---
 xen/arch/arm/kernel.c  |   2 +-
 xen/arch/arm/mm.c      |   2 +-
 xen/arch/x86/mm.c      |   2 +-
 xen/common/vmap.c      | 169 ++++++++++++++++++++++++++++++-------------------
 xen/drivers/acpi/osl.c |   2 +-
 xen/include/xen/vmap.h |  21 ++++--
 6 files changed, 125 insertions(+), 73 deletions(-)

diff --git a/xen/arch/arm/kernel.c b/xen/arch/arm/kernel.c
index 61808ac..9871bd9 100644
--- a/xen/arch/arm/kernel.c
+++ b/xen/arch/arm/kernel.c
@@ -299,7 +299,7 @@ static __init int kernel_decompress(struct bootmodule *mod)
         return -ENOMEM;
     }
     mfn = _mfn(page_to_mfn(pages));
-    output = __vmap(&mfn, 1 << kernel_order_out, 1, 1, PAGE_HYPERVISOR);
+    output = __vmap(&mfn, 1 << kernel_order_out, 1, 1, PAGE_HYPERVISOR, VMAP_DEFAULT);
 
     rc = perform_gunzip(output, input, size);
     clean_dcache_va_range(output, output_size);
diff --git a/xen/arch/arm/mm.c b/xen/arch/arm/mm.c
index 7065c3e..94ea054 100644
--- a/xen/arch/arm/mm.c
+++ b/xen/arch/arm/mm.c
@@ -807,7 +807,7 @@ void *ioremap_attr(paddr_t pa, size_t len, unsigned int attributes)
     mfn_t mfn = _mfn(PFN_DOWN(pa));
     unsigned int offs = pa & (PAGE_SIZE - 1);
     unsigned int nr = PFN_UP(offs + len);
-    void *ptr = __vmap(&mfn, nr, 1, 1, attributes);
+    void *ptr = __vmap(&mfn, nr, 1, 1, attributes, VMAP_DEFAULT);
 
     if ( ptr == NULL )
         return NULL;
diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
index a42097f..2bb920b 100644
--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ -6179,7 +6179,7 @@ void __iomem *ioremap(paddr_t pa, size_t len)
         unsigned int offs = pa & (PAGE_SIZE - 1);
         unsigned int nr = PFN_UP(offs + len);
 
-        va = __vmap(&mfn, nr, 1, 1, PAGE_HYPERVISOR_NOCACHE) + offs;
+        va = __vmap(&mfn, nr, 1, 1, PAGE_HYPERVISOR_NOCACHE, VMAP_DEFAULT) + offs;
     }
 
     return (void __force __iomem *)va;
diff --git a/xen/common/vmap.c b/xen/common/vmap.c
index 134eda0..2393df1 100644
--- a/xen/common/vmap.c
+++ b/xen/common/vmap.c
@@ -10,40 +10,43 @@
 #include <asm/page.h>
 
 static DEFINE_SPINLOCK(vm_lock);
-static void *__read_mostly vm_base;
-#define vm_bitmap ((unsigned long *)vm_base)
+static void *__read_mostly vm_base[VMAP_REGION_NR];
+#define vm_bitmap(x) ((unsigned long *)vm_base[x])
 /* highest allocated bit in the bitmap */
-static unsigned int __read_mostly vm_top;
+static unsigned int __read_mostly vm_top[VMAP_REGION_NR];
 /* total number of bits in the bitmap */
-static unsigned int __read_mostly vm_end;
+static unsigned int __read_mostly vm_end[VMAP_REGION_NR];
 /* lowest known clear bit in the bitmap */
-static unsigned int vm_low;
+static unsigned int vm_low[VMAP_REGION_NR];
 
-void __init vm_init(void)
+void __init vm_init_type(enum vmap_region type, void *start, void *end)
 {
     unsigned int i, nr;
     unsigned long va;
 
-    vm_base = (void *)VMAP_VIRT_START;
-    vm_end = PFN_DOWN(arch_vmap_virt_end() - vm_base);
-    vm_low = PFN_UP((vm_end + 7) / 8);
-    nr = PFN_UP((vm_low + 7) / 8);
-    vm_top = nr * PAGE_SIZE * 8;
+    ASSERT(!vm_base[type]);
 
-    for ( i = 0, va = (unsigned long)vm_bitmap; i < nr; ++i, va += PAGE_SIZE )
+    vm_base[type] = start;
+    vm_end[type] = PFN_DOWN(end - start);
+    vm_low[type] = PFN_UP((vm_end[type] + 7) / 8);
+    nr = PFN_UP((vm_low[type] + 7) / 8);
+    vm_top[type] = nr * PAGE_SIZE * 8;
+
+    for ( i = 0, va = (unsigned long)vm_bitmap(type); i < nr; ++i, va += PAGE_SIZE )
     {
         struct page_info *pg = alloc_domheap_page(NULL, 0);
 
         map_pages_to_xen(va, page_to_mfn(pg), 1, PAGE_HYPERVISOR);
         clear_page((void *)va);
     }
-    bitmap_fill(vm_bitmap, vm_low);
+    bitmap_fill(vm_bitmap(type), vm_low[type]);
 
     /* Populate page tables for the bitmap if necessary. */
-    populate_pt_range(va, 0, vm_low - nr);
+    populate_pt_range(va, 0, vm_low[type] - nr);
 }
 
-void *vm_alloc(unsigned int nr, unsigned int align)
+static void *vm_alloc(unsigned int nr, unsigned int align,
+                      enum vmap_region t)
 {
     unsigned int start, bit;
 
@@ -52,27 +55,31 @@ void *vm_alloc(unsigned int nr, unsigned int align)
     else if ( align & (align - 1) )
         align &= -align;
 
+    ASSERT((t >= VMAP_DEFAULT) && (t < VMAP_REGION_NR));
+    if ( !vm_base[t] )
+        return NULL;
+
     spin_lock(&vm_lock);
     for ( ; ; )
     {
         struct page_info *pg;
 
-        ASSERT(vm_low == vm_top || !test_bit(vm_low, vm_bitmap));
-        for ( start = vm_low; start < vm_top; )
+        ASSERT(vm_low[t] == vm_top[t] || !test_bit(vm_low[t], vm_bitmap(t)));
+        for ( start = vm_low[t]; start < vm_top[t]; )
         {
-            bit = find_next_bit(vm_bitmap, vm_top, start + 1);
-            if ( bit > vm_top )
-                bit = vm_top;
+            bit = find_next_bit(vm_bitmap(t), vm_top[t], start + 1);
+            if ( bit > vm_top[t] )
+                bit = vm_top[t];
             /*
              * Note that this skips the first bit, making the
              * corresponding page a guard one.
              */
             start = (start + align) & ~(align - 1);
-            if ( bit < vm_top )
+            if ( bit < vm_top[t] )
             {
                 if ( start + nr < bit )
                     break;
-                start = find_next_zero_bit(vm_bitmap, vm_top, bit + 1);
+                start = find_next_zero_bit(vm_bitmap(t), vm_top[t], bit + 1);
             }
             else
             {
@@ -82,12 +89,12 @@ void *vm_alloc(unsigned int nr, unsigned int align)
             }
         }
 
-        if ( start < vm_top )
+        if ( start < vm_top[t] )
             break;
 
         spin_unlock(&vm_lock);
 
-        if ( vm_top >= vm_end )
+        if ( vm_top[t] >= vm_end[t] )
             return NULL;
 
         pg = alloc_domheap_page(NULL, 0);
@@ -96,23 +103,23 @@ void *vm_alloc(unsigned int nr, unsigned int align)
 
         spin_lock(&vm_lock);
 
-        if ( start >= vm_top )
+        if ( start >= vm_top[t] )
         {
-            unsigned long va = (unsigned long)vm_bitmap + vm_top / 8;
+            unsigned long va = (unsigned long)vm_bitmap(t) + vm_top[t] / 8;
 
             if ( !map_pages_to_xen(va, page_to_mfn(pg), 1, PAGE_HYPERVISOR) )
             {
                 clear_page((void *)va);
-                vm_top += PAGE_SIZE * 8;
-                if ( vm_top > vm_end )
-                    vm_top = vm_end;
+                vm_top[t] += PAGE_SIZE * 8;
+                if ( vm_top[t] > vm_end[t] )
+                    vm_top[t] = vm_end[t];
                 continue;
             }
         }
 
         free_domheap_page(pg);
 
-        if ( start >= vm_top )
+        if ( start >= vm_top[t] )
         {
             spin_unlock(&vm_lock);
             return NULL;
@@ -120,47 +127,58 @@ void *vm_alloc(unsigned int nr, unsigned int align)
     }
 
     for ( bit = start; bit < start + nr; ++bit )
-        __set_bit(bit, vm_bitmap);
-    if ( bit < vm_top )
-        ASSERT(!test_bit(bit, vm_bitmap));
+        __set_bit(bit, vm_bitmap(t));
+    if ( bit < vm_top[t] )
+        ASSERT(!test_bit(bit, vm_bitmap(t)));
     else
-        ASSERT(bit == vm_top);
-    if ( start <= vm_low + 2 )
-        vm_low = bit;
+        ASSERT(bit == vm_top[t]);
+    if ( start <= vm_low[t] + 2 )
+        vm_low[t] = bit;
     spin_unlock(&vm_lock);
 
-    return vm_base + start * PAGE_SIZE;
+    return vm_base[t] + start * PAGE_SIZE;
 }
 
-static unsigned int vm_index(const void *va)
+static unsigned int vm_index(const void *va, enum vmap_region type)
 {
     unsigned long addr = (unsigned long)va & ~(PAGE_SIZE - 1);
     unsigned int idx;
+    unsigned long start = (unsigned long)vm_base[type];
+
+    if ( !start )
+        return 0;
 
-    if ( addr < VMAP_VIRT_START + (vm_end / 8) ||
-         addr >= VMAP_VIRT_START + vm_top * PAGE_SIZE )
+    if ( addr < start + (vm_end[type] / 8) ||
+         addr >= start + vm_top[type] * PAGE_SIZE )
         return 0;
 
-    idx = PFN_DOWN(va - vm_base);
-    return !test_bit(idx - 1, vm_bitmap) &&
-           test_bit(idx, vm_bitmap) ? idx : 0;
+    idx = PFN_DOWN(va - vm_base[type]);
+    return !test_bit(idx - 1, vm_bitmap(type)) &&
+           test_bit(idx, vm_bitmap(type)) ? idx : 0;
 }
 
-static unsigned int vm_size(const void *va)
+static unsigned int vm_size(const void *va, enum vmap_region type)
 {
-    unsigned int start = vm_index(va), end;
+    unsigned int start = vm_index(va, type), end;
 
     if ( !start )
         return 0;
 
-    end = find_next_zero_bit(vm_bitmap, vm_top, start + 1);
+    end = find_next_zero_bit(vm_bitmap(type), vm_top[type], start + 1);
 
-    return min(end, vm_top) - start;
+    return min(end, vm_top[type]) - start;
 }
 
-void vm_free(const void *va)
+static void vm_free(const void *va)
 {
-    unsigned int bit = vm_index(va);
+    enum vmap_region type = VMAP_DEFAULT;
+    unsigned int bit = vm_index(va, type);
+
+    if ( !bit )
+    {
+        type = VMAP_XEN;
+        bit = vm_index(va, type);
+    }
 
     if ( !bit )
     {
@@ -169,22 +187,23 @@ void vm_free(const void *va)
     }
 
     spin_lock(&vm_lock);
-    if ( bit < vm_low )
+    if ( bit < vm_low[type] )
     {
-        vm_low = bit - 1;
-        while ( !test_bit(vm_low - 1, vm_bitmap) )
-            --vm_low;
+        vm_low[type] = bit - 1;
+        while ( !test_bit(vm_low[type] - 1, vm_bitmap(type)) )
+            --vm_low[type];
     }
-    while ( __test_and_clear_bit(bit, vm_bitmap) )
-        if ( ++bit == vm_top )
+    while ( __test_and_clear_bit(bit, vm_bitmap(type)) )
+        if ( ++bit == vm_top[type] )
             break;
     spin_unlock(&vm_lock);
 }
 
 void *__vmap(const mfn_t *mfn, unsigned int granularity,
-             unsigned int nr, unsigned int align, unsigned int flags)
+             unsigned int nr, unsigned int align, unsigned int flags,
+             enum vmap_region type)
 {
-    void *va = vm_alloc(nr * granularity, align);
+    void *va = vm_alloc(nr * granularity, align, type);
     unsigned long cur = (unsigned long)va;
 
     for ( ; va && nr--; ++mfn, cur += PAGE_SIZE * granularity )
@@ -201,22 +220,28 @@ void *__vmap(const mfn_t *mfn, unsigned int granularity,
 
 void *vmap(const mfn_t *mfn, unsigned int nr)
 {
-    return __vmap(mfn, 1, nr, 1, PAGE_HYPERVISOR);
+    return __vmap(mfn, 1, nr, 1, PAGE_HYPERVISOR, VMAP_DEFAULT);
 }
 
 void vunmap(const void *va)
 {
 #ifndef _PAGE_NONE
     unsigned long addr = (unsigned long)va;
+#endif
+    unsigned int pages = vm_size(va, VMAP_DEFAULT);
+
+    if ( !pages )
+        pages = vm_size(va, VMAP_XEN);
 
-    destroy_xen_mappings(addr, addr + PAGE_SIZE * vm_size(va));
+#ifndef _PAGE_NONE
+    destroy_xen_mappings(addr, addr + PAGE_SIZE * pages);
 #else /* Avoid tearing down intermediate page tables. */
-    map_pages_to_xen((unsigned long)va, 0, vm_size(va), _PAGE_NONE);
+    map_pages_to_xen((unsigned long)va, 0, pages, _PAGE_NONE);
 #endif
     vm_free(va);
 }
 
-void *vmalloc(size_t size)
+static void *vmalloc_type(size_t size, enum vmap_region type)
 {
     mfn_t *mfn;
     size_t pages, i;
@@ -238,7 +263,7 @@ void *vmalloc(size_t size)
         mfn[i] = _mfn(page_to_mfn(pg));
     }
 
-    va = vmap(mfn, pages);
+    va = __vmap(mfn, 1, pages, 1, PAGE_HYPERVISOR, type);
     if ( va == NULL )
         goto error;
 
@@ -252,9 +277,19 @@ void *vmalloc(size_t size)
     return NULL;
 }
 
+void *vmalloc(size_t size)
+{
+    return vmalloc_type(size, VMAP_DEFAULT);
+}
+
+void *vmalloc_xen(size_t size)
+{
+    return vmalloc_type(size, VMAP_XEN);
+}
+
 void *vzalloc(size_t size)
 {
-    void *p = vmalloc(size);
+    void *p = vmalloc_type(size, VMAP_DEFAULT);
     int i;
 
     if ( p == NULL )
@@ -271,11 +306,17 @@ void vfree(void *va)
     unsigned int i, pages;
     struct page_info *pg;
     PAGE_LIST_HEAD(pg_list);
+    enum vmap_region type = VMAP_DEFAULT;
 
     if ( !va )
         return;
 
-    pages = vm_size(va);
+    pages = vm_size(va, type);
+    if ( !pages )
+    {
+        type = VMAP_XEN;
+        pages = vm_size(va, type);
+    }
     ASSERT(pages);
 
     for ( i = 0; i < pages; i++ )
diff --git a/xen/drivers/acpi/osl.c b/xen/drivers/acpi/osl.c
index 8a28d87..9a49029 100644
--- a/xen/drivers/acpi/osl.c
+++ b/xen/drivers/acpi/osl.c
@@ -97,7 +97,7 @@ acpi_os_map_memory(acpi_physical_address phys, acpi_size size)
 		if (IS_ENABLED(CONFIG_X86) && !((phys + size - 1) >> 20))
 			return __va(phys);
 		return __vmap(&mfn, PFN_UP(offs + size), 1, 1,
-			      ACPI_MAP_MEM_ATTR) + offs;
+			      ACPI_MAP_MEM_ATTR, VMAP_DEFAULT) + offs;
 	}
 	return __acpi_map_table(phys, size);
 }
diff --git a/xen/include/xen/vmap.h b/xen/include/xen/vmap.h
index 5671ac8..369560e 100644
--- a/xen/include/xen/vmap.h
+++ b/xen/include/xen/vmap.h
@@ -4,14 +4,22 @@
 #include <xen/mm.h>
 #include <asm/page.h>
 
-void *vm_alloc(unsigned int nr, unsigned int align);
-void vm_free(const void *);
+enum vmap_region {
+    VMAP_DEFAULT,
+    VMAP_XEN,
+    VMAP_REGION_NR,
+};
 
-void *__vmap(const mfn_t *mfn, unsigned int granularity,
-             unsigned int nr, unsigned int align, unsigned int flags);
+void vm_init_type(enum vmap_region type, void *start, void *end);
+
+void *__vmap(const mfn_t *mfn, unsigned int granularity, unsigned int nr,
+             unsigned int align, unsigned int flags, enum vmap_region);
 void *vmap(const mfn_t *mfn, unsigned int nr);
 void vunmap(const void *);
+
 void *vmalloc(size_t size);
+void *vmalloc_xen(size_t size);
+
 void *vzalloc(size_t size);
 void vfree(void *va);
 
@@ -24,7 +32,10 @@ static inline void iounmap(void __iomem *va)
     vunmap((void *)(addr & PAGE_MASK));
 }
 
-void vm_init(void);
 void *arch_vmap_virt_end(void);
+static inline void vm_init(void)
+{
+    vm_init_type(VMAP_DEFAULT, (void *)VMAP_VIRT_START, arch_vmap_virt_end());
+}
 
 #endif /* __XEN_VMAP_H__ */
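
P.S. For anyone following along, the resulting calling convention is short.
Below is a minimal usage sketch, not part of the patch: EXAMPLE_VIRT_START
and EXAMPLE_VIRT_END are made-up placeholders for whatever range a caller
reserves near the hypervisor image, the way the follow-up patch "xsplice:
Implement payload loading" does with its own range.

/*
 * Usage sketch only - not part of the patch. EXAMPLE_VIRT_START and
 * EXAMPLE_VIRT_END are hypothetical stand-ins for a caller-reserved
 * range near the hypervisor's code/data virtual addresses.
 */
#include <xen/vmap.h>

static void __init example_region_init(void)
{
    /*
     * One-time setup of the alternative region. Until this runs,
     * vmalloc_xen() simply returns NULL (vm_alloc() checks vm_base[t]).
     */
    vm_init_type(VMAP_XEN, (void *)EXAMPLE_VIRT_START,
                 (void *)EXAMPLE_VIRT_END);
}

static void *example_payload_alloc(size_t size)
{
    /* Backing pages come from the domheap; the mapping lands in VMAP_XEN. */
    void *p = vmalloc_xen(size);

    /*
     * The returned address is close enough to the hypervisor image that
     * signed 32-bit ELF relocations (on x86) do not truncate.
     */
    return p;
}

static void example_payload_free(void *p)
{
    /* vfree()/vunmap() probe both regions, so no type argument is needed. */
    vfree(p);
}

The only ordering requirement is that vm_init_type(VMAP_XEN, ...) runs before
the first vmalloc_xen() call; until then vmalloc_xen() fails gracefully with
NULL thanks to the !vm_base[t] check in vm_alloc().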