From patchwork Tue Nov 12 22:27:31 2013
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Laura Abbott
X-Patchwork-Id: 3176421
From: Laura Abbott
To: linux-arm-kernel@lists.infradead.org, linux-mm@kvack.org
Cc: Laura Abbott, Russell King, Neeti Desai, Kyungmin Park
Subject: [RFC PATCHv2 3/4] mm/vmalloc.c: Allow lowmem to be tracked in vmalloc
Date: Tue, 12 Nov 2013 14:27:31 -0800
Message-Id: <1384295252-31778-4-git-send-email-lauraa@codeaurora.org>
In-Reply-To: <1384295252-31778-1-git-send-email-lauraa@codeaurora.org>
References: <1384295252-31778-1-git-send-email-lauraa@codeaurora.org>
X-Mailer: git-send-email 1.7.8.3

vmalloc is currently assumed to be a completely separate address space
from the lowmem region. While this may be true in the general case,
there are some instances where intermixing lowmem and vmalloc virtual
space provides gains. One example is needing to steal a large chunk of
physical lowmem for another purpose outside the system's usage. Rather
than waste the precious lowmem space on a 32-bit system, we can allow
the virtual holes created by the physical holes to be used by vmalloc
for virtual addressing. Track lowmem allocations in vmalloc to allow
mixing of lowmem and vmalloc.
Signed-off-by: Laura Abbott
Signed-off-by: Neeti Desai
---
 include/linux/mm.h      |    6 ++++++
 include/linux/vmalloc.h |    1 +
 mm/Kconfig              |   11 +++++++++++
 mm/vmalloc.c            |   35 +++++++++++++++++++++++++++++++++++
 4 files changed, 53 insertions(+), 0 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index f022460..f2da420 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -308,6 +308,10 @@ unsigned long vmalloc_to_pfn(const void *addr);
  * On nommu, vmalloc/vfree wrap through kmalloc/kfree directly, so there
  * is no special casing required.
  */
+
+#ifdef CONFIG_ENABLE_VMALLOC_SAVING
+extern int is_vmalloc_addr(const void *x);
+#else
 static inline int is_vmalloc_addr(const void *x)
 {
 #ifdef CONFIG_MMU
@@ -318,6 +322,8 @@ static inline int is_vmalloc_addr(const void *x)
 	return 0;
 #endif
 }
+#endif
+
 #ifdef CONFIG_MMU
 extern int is_vmalloc_or_module_addr(const void *x);
 #else
diff --git a/include/linux/vmalloc.h b/include/linux/vmalloc.h
index 4b8a891..e0c8c49 100644
--- a/include/linux/vmalloc.h
+++ b/include/linux/vmalloc.h
@@ -16,6 +16,7 @@ struct vm_area_struct;	/* vma defining user mapping in mm_types.h */
 #define VM_USERMAP		0x00000008	/* suitable for remap_vmalloc_range */
 #define VM_VPAGES		0x00000010	/* buffer for pages was vmalloc'ed */
 #define VM_UNINITIALIZED	0x00000020	/* vm_struct is not fully initialized */
+#define VM_LOWMEM		0x00000040	/* Tracking of direct mapped lowmem */
 /* bits [20..32] reserved for arch specific ioremap internals */

 /*
diff --git a/mm/Kconfig b/mm/Kconfig
index 8028dcc..b3c459d 100644
--- a/mm/Kconfig
+++ b/mm/Kconfig
@@ -519,3 +519,14 @@ config MEM_SOFT_DIRTY
 	  it can be cleared by hands.

 	  See Documentation/vm/soft-dirty.txt for more details.
+
+config ENABLE_VMALLOC_SAVING
+	bool "Intermix lowmem and vmalloc virtual space"
+	depends on ARCH_TRACKS_VMALLOC
+	help
+	  Some memory layouts on embedded systems steal large amounts
+	  of lowmem physical memory for purposes outside of the kernel.
+	  Rather than waste the physical and virtual space, allow the
+	  kernel to use the virtual space as vmalloc space.
+
+	  If unsure, say N.
diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index 13a5495..2ec9ac7 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -282,6 +282,38 @@ static unsigned long cached_align;

 static unsigned long vmap_area_pcpu_hole;

+#ifdef CONFIG_ENABLE_VMALLOC_SAVING
+int is_vmalloc_addr(const void *x)
+{
+	struct vmap_area *va;
+	int ret = 0;
+
+	spin_lock(&vmap_area_lock);
+	list_for_each_entry(va, &vmap_area_list, list) {
+		if (va->flags & (VM_LAZY_FREE | VM_LAZY_FREEING))
+			continue;
+
+		if (!(va->flags & VM_VM_AREA))
+			continue;
+
+		if (va->vm == NULL)
+			continue;
+
+		if (va->vm->flags & VM_LOWMEM)
+			continue;
+
+		if ((unsigned long)x >= va->va_start &&
+				(unsigned long)x < va->va_end) {
+			ret = 1;
+			break;
+		}
+	}
+	spin_unlock(&vmap_area_lock);
+	return ret;
+}
+EXPORT_SYMBOL(is_vmalloc_addr);
+#endif
+
 static struct vmap_area *__find_vmap_area(unsigned long addr)
 {
 	struct rb_node *n = vmap_area_root.rb_node;
@@ -2628,6 +2660,9 @@ static int s_show(struct seq_file *m, void *p)
 	if (v->flags & VM_VPAGES)
 		seq_printf(m, " vpages");

+	if (v->flags & VM_LOWMEM)
+		seq_printf(m, " lowmem");
+
 	show_numa_info(m, v);
 	seq_putc(m, '\n');
 	return 0;