From patchwork Tue Oct  1 09:56:31 2013
X-Patchwork-Submitter: Yanfei Zhang
X-Patchwork-Id: 2969231
Message-ID: <524A9C4F.70306@cn.fujitsu.com>
Date: Tue, 01 Oct 2013 17:56:31 +0800
From: Zhang Yanfei
To: robert.moore@intel.com, lv.zheng@intel.com, "Rafael J. Wysocki",
 Len Brown, Thomas Gleixner, Ingo Molnar, "H. Peter Anvin", Andrew Morton,
 Tejun Heo, Thomas Renninger, Yinghai Lu, Jiang Liu, Wen Congyang,
 Lai Jiangshan, Yasuaki Ishimatsu, Taku Izumi, Mel Gorman, Minchan Kim,
 "mina86@mina86.com", gong.chen@linux.intel.com,
 vasilis.liaskovitis@profitbricks.com, Rik van Riel, prarit@redhat.com,
 Toshi Kani
CC: Zhang Yanfei, "x86@kernel.org", "linux-kernel@vger.kernel.org",
 Linux MM, linux-acpi@vger.kernel.org, Tang Chen, imtangchen@gmail.com,
 Zhang Yanfei
Subject: [PATCH -mm 8/8] x86, numa, acpi, memory-hotplug: Make movable_node
 have higher priority
References: <524A991D.3050005@cn.fujitsu.com>
In-Reply-To: <524A991D.3050005@cn.fujitsu.com>

From: Tang Chen

Arranging hotpluggable memory as ZONE_MOVABLE will degrade NUMA
performance, because the kernel cannot use movable memory for its own
allocations.
Users who don't use memory hotplug and don't want to lose NUMA
performance need a way to disable this functionality, so we improve
the movable_node boot option.

If users specify the original movablecore=nn@ss boot option, the
kernel will arrange [ss, ss+nn) as ZONE_MOVABLE. The kernelcore=nn@ss
boot option is similar, except that it specifies ZONE_NORMAL ranges.

Now, if users specify "movable_node" on the kernel command line, the
kernel will arrange the hotpluggable memory described in the SRAT as
ZONE_MOVABLE, and all other movablecore=nn@ss and kernelcore=nn@ss
options will be ignored.

Users who don't want this simply specify nothing, and the kernel acts
as before.

Signed-off-by: Tang Chen
Signed-off-by: Zhang Yanfei
Reviewed-by: Wanpeng Li
---
 include/linux/memblock.h       |  1 +
 include/linux/memory_hotplug.h |  5 +++++
 mm/memblock.c                  |  5 +++++
 mm/memory_hotplug.c            |  3 +++
 mm/page_alloc.c                | 30 ++++++++++++++++++++++++++++--
 5 files changed, 42 insertions(+), 2 deletions(-)

diff --git a/include/linux/memblock.h b/include/linux/memblock.h
index b6f149f..046f22a 100644
--- a/include/linux/memblock.h
+++ b/include/linux/memblock.h
@@ -65,6 +65,7 @@ int memblock_reserve(phys_addr_t base, phys_addr_t size);
 void memblock_trim_memory(phys_addr_t align);
 int memblock_mark_hotplug(phys_addr_t base, phys_addr_t size);
 int memblock_clear_hotplug(phys_addr_t base, phys_addr_t size);
+bool memblock_is_hotpluggable(struct memblock_region *region);
 
 #ifdef CONFIG_HAVE_MEMBLOCK_NODE_MAP
 int memblock_search_pfn_nid(unsigned long pfn, unsigned long *start_pfn,
diff --git a/include/linux/memory_hotplug.h b/include/linux/memory_hotplug.h
index dd38e62..b469513 100644
--- a/include/linux/memory_hotplug.h
+++ b/include/linux/memory_hotplug.h
@@ -33,6 +33,11 @@ enum {
 	ONLINE_MOVABLE,
 };
 
+#ifdef CONFIG_MOVABLE_NODE
+/* Enable/disable SRAT in movable_node boot option */
+extern bool movable_node_enable_srat;
+#endif /* CONFIG_MOVABLE_NODE */
+
 /*
  * pgdat resizing functions
  */
diff --git a/mm/memblock.c b/mm/memblock.c
index 9bdebfb..6241129 100644
--- a/mm/memblock.c
+++ b/mm/memblock.c
@@ -708,6 +708,11 @@ int __init_memblock memblock_mark_hotplug(phys_addr_t base, phys_addr_t size)
 	return 0;
 }
 
+bool __init_memblock memblock_is_hotpluggable(struct memblock_region *region)
+{
+	return region->flags & MEMBLOCK_HOTPLUG;
+}
+
 /**
  * memblock_clear_hotplug - Clear flag MEMBLOCK_HOTPLUG for a specified region.
  * @base: the base phys addr of the region
diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
index dcd819a..a635d0c 100644
--- a/mm/memory_hotplug.c
+++ b/mm/memory_hotplug.c
@@ -1379,6 +1379,8 @@ check_pages_isolated(unsigned long start_pfn, unsigned long end_pfn)
 }
 
 #ifdef CONFIG_MOVABLE_NODE
+bool __initdata movable_node_enable_srat;
+
 /*
  * When CONFIG_MOVABLE_NODE, we permit offlining of a node which doesn't have
  * normal memory.
@@ -1436,6 +1438,7 @@ static int __init cmdline_parse_movablenode(char *p)
 	 * the kernel away from hotpluggable memory.
 	 */
 	memblock_set_bottom_up(true);
+	movable_node_enable_srat = true;
 #else
 	pr_warn("movablenode option not supported");
 #endif
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 0ee638f..612f0c8 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -5021,9 +5021,35 @@ static void __init find_zone_movable_pfns_for_nodes(void)
 	nodemask_t saved_node_state = node_states[N_MEMORY];
 	unsigned long totalpages = early_calculate_totalpages();
 	int usable_nodes = nodes_weight(node_states[N_MEMORY]);
+	struct memblock_type *type = &memblock.memory;
+
+	/* Need to find movable_zone earlier when movable_node is specified. */
+	find_usable_zone_for_movable();
+
+#ifdef CONFIG_MOVABLE_NODE
+	/*
+	 * If movable_node is specified, ignore kernelcore and movablecore
+	 * options.
+	 */
+	if (movable_node_enable_srat) {
+		for (i = 0; i < type->cnt; i++) {
+			if (!memblock_is_hotpluggable(&type->regions[i]))
+				continue;
+
+			nid = type->regions[i].nid;
+
+			usable_startpfn = PFN_DOWN(type->regions[i].base);
+			zone_movable_pfn[nid] = zone_movable_pfn[nid] ?
+				min(usable_startpfn, zone_movable_pfn[nid]) :
+				usable_startpfn;
+		}
+
+		goto out2;
+	}
+#endif
 
 	/*
-	 * If movablecore was specified, calculate what size of
+	 * If movablecore=nn[KMG] was specified, calculate what size of
 	 * kernelcore that corresponds so that memory usable for
 	 * any allocation type is evenly spread. If both kernelcore
 	 * and movablecore are specified, then the value of kernelcore
@@ -5049,7 +5075,6 @@ static void __init find_zone_movable_pfns_for_nodes(void)
 		goto out;
 
 	/* usable_startpfn is the lowest possible pfn ZONE_MOVABLE can be at */
-	find_usable_zone_for_movable();
 	usable_startpfn = arch_zone_lowest_possible_pfn[movable_zone];
 
 restart:
@@ -5140,6 +5165,7 @@ restart:
 	if (usable_nodes && required_kernelcore > usable_nodes)
 		goto restart;
 
+out2:
 	/* Align start of ZONE_MOVABLE on all nids to MAX_ORDER_NR_PAGES */
 	for (nid = 0; nid < MAX_NUMNODES; nid++)
 		zone_movable_pfn[nid] =
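
For reviewers who want to reason about the new CONFIG_MOVABLE_NODE branch
in isolation, here is a small userspace sketch of what it computes: for
every memblock region flagged hotpluggable, the lowest base PFN seen on
each node becomes that node's ZONE_MOVABLE start. The struct and the
example memory layout below are made up for illustration and are not part
of the patch; only the min-of-base-PFN rule mirrors the kernel code.

#include <stdio.h>

#define MAX_NODES	4
#define PAGE_SHIFT	12
#define PFN_DOWN(x)	((x) >> PAGE_SHIFT)

/* Hypothetical stand-in for struct memblock_region. */
struct region {
	unsigned long long base;	/* physical base address */
	unsigned long long size;
	int nid;			/* NUMA node id */
	int hotpluggable;		/* models MEMBLOCK_HOTPLUG */
};

int main(void)
{
	/* Made-up layout: only node 1's memory is hotpluggable. */
	struct region regions[] = {
		{ 0x000000000ULL, 0x80000000ULL, 0, 0 },
		{ 0x100000000ULL, 0x80000000ULL, 1, 1 },
		{ 0x180000000ULL, 0x80000000ULL, 1, 1 },
	};
	unsigned long long zone_movable_pfn[MAX_NODES] = { 0 };
	unsigned int i;

	/*
	 * Mirror of the movable_node branch above: the lowest base PFN
	 * among a node's hotpluggable regions becomes that node's
	 * ZONE_MOVABLE start (0 means "not set yet").
	 */
	for (i = 0; i < sizeof(regions) / sizeof(regions[0]); i++) {
		unsigned long long start_pfn;
		int nid = regions[i].nid;

		if (!regions[i].hotpluggable)
			continue;

		start_pfn = PFN_DOWN(regions[i].base);
		if (!zone_movable_pfn[nid] || start_pfn < zone_movable_pfn[nid])
			zone_movable_pfn[nid] = start_pfn;
	}

	for (i = 0; i < MAX_NODES; i++)
		if (zone_movable_pfn[i])
			printf("node %u: ZONE_MOVABLE starts at pfn 0x%llx\n",
			       i, zone_movable_pfn[i]);
	return 0;
}

With movable_node on the command line this is the only path taken;
without it, find_zone_movable_pfns_for_nodes() falls through to the
existing kernelcore/movablecore handling as before.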