From patchwork Wed Jul 4 08:38:57 2012
From: Lai Jiangshan
To: Mel Gorman
Cc: Chris Metcalf, Len Brown, Greg Kroah-Hartman, Andi Kleen,
    Julia Lawall, David Howells, Lai Jiangshan, Benjamin Herrenschmidt,
    Kay Sievers, Ingo Molnar, Paul Gortmaker, Daniel Kiper,
    Andrew Morton, Konrad Rzeszutek Wilk, Michal Hocko,
    KAMEZAWA Hiroyuki, Minchan Kim, Michal Nazarewicz,
    Marek Szyprowski, Rik van Riel, Bjorn Helgaas, Christoph Lameter,
    David Rientjes, linux-kernel@vger.kernel.org,
    linux-acpi@vger.kernel.org, linux-mm@kvack.org
Subject: [RFC PATCH 2/3 V1 resend] mm, page migrate: add MIGRATE_HOTREMOVE type
Date: Wed, 4 Jul 2012 16:38:57 +0800
Message-Id: <1341391138-9547-3-git-send-email-laijs@cn.fujitsu.com>
In-Reply-To: <1341391138-9547-1-git-send-email-laijs@cn.fujitsu.com>
References: <1341391138-9547-1-git-send-email-laijs@cn.fujitsu.com>
List-ID: linux-acpi@vger.kernel.org

MIGRATE_HOTREMOVE is a special kind of MIGRATE_MOVABLE, but it is
stable: no page of this type can be changed to another type, nor moved
to another free list. Pages in MIGRATE_HOTREMOVE pageblocks are
therefore always movable; this property is useful for hugepages,
memory hot-remove, etc.

MIGRATE_HOTREMOVE pages are used as the first candidates when movable
pages are allocated.

This patch:
1) adds a small routine, is_migrate_movable(), for movable-like types
2) adds a small routine, is_migrate_stable(), for stable types
3) fixes some comments
4) fixes get_any_page(): it may change a MIGRATE_CMA/HOTREMOVE
   pageblock to MIGRATE_MOVABLE, which could later allow those pages
   to become UNMOVABLE.

Signed-off-by: Lai Jiangshan
---
 include/linux/mmzone.h         |   34 ++++++++++++++++++++++++++++++++++
 include/linux/page-isolation.h |    2 +-
 mm/compaction.c                |    6 +++---
 mm/memory-failure.c            |    8 +++++++-
 mm/page_alloc.c                |   21 +++++++++++++--------
 mm/vmstat.c                    |    3 +++
 6 files changed, 61 insertions(+), 13 deletions(-)

diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
index 979c333..872f430 100644
--- a/include/linux/mmzone.h
+++ b/include/linux/mmzone.h
@@ -58,6 +58,15 @@ enum {
 	 */
 	MIGRATE_CMA,
 #endif
+#ifdef CONFIG_MEMORY_HOTREMOVE
+	/*
+	 * MIGRATE_HOTREMOVE migration type is designed to mimic the way
+	 * ZONE_MOVABLE works. Only movable pages can be allocated
+	 * from MIGRATE_HOTREMOVE pageblocks and page allocator never
+	 * implicitly change migration type of MIGRATE_HOTREMOVE pageblock.
+	 */
+	MIGRATE_HOTREMOVE,
+#endif
 	MIGRATE_ISOLATE,	/* can't allocate from here */
 	MIGRATE_TYPES
 };
@@ -70,6 +79,31 @@ enum {
 # define cma_wmark_pages(zone) 0
 #endif
 
+#ifdef CONFIG_MEMORY_HOTREMOVE
+#define is_migrate_hotremove(migratetype) ((migratetype) == MIGRATE_HOTREMOVE)
+#else
+#define is_migrate_hotremove(migratetype) false
+#endif
+
+/* Is it one of the movable types */
+static inline bool is_migrate_movable(int migratetype)
+{
+	return is_migrate_hotremove(migratetype) ||
+	       migratetype == MIGRATE_MOVABLE ||
+	       is_migrate_cma(migratetype);
+}
+
+/*
+ * Stable types: any page of the type can NOT be changed to
+ * the other type nor be moved to the other free list.
+ */
+static inline bool is_migrate_stable(int migratetype)
+{
+	return is_migrate_hotremove(migratetype) ||
+	       is_migrate_cma(migratetype) ||
+	       migratetype == MIGRATE_RESERVE;
+}
+
 #define for_each_migratetype_order(order, type) \
 	for (order = 0; order < MAX_ORDER; order++) \
 		for (type = 0; type < MIGRATE_TYPES; type++)
diff --git a/include/linux/page-isolation.h b/include/linux/page-isolation.h
index 3bdcab3..b1d6d92 100644
--- a/include/linux/page-isolation.h
+++ b/include/linux/page-isolation.h
@@ -15,7 +15,7 @@
 start_isolate_page_range(unsigned long start_pfn, unsigned long end_pfn,
 			 unsigned migratetype);
 /*
- * Changes MIGRATE_ISOLATE to MIGRATE_MOVABLE.
+ * Changes MIGRATE_ISOLATE to migratetype.
  * target range is [start_pfn, end_pfn)
  */
 extern int
diff --git a/mm/compaction.c b/mm/compaction.c
index 7ea259d..e8da894 100644
--- a/mm/compaction.c
+++ b/mm/compaction.c
@@ -47,7 +47,7 @@ static void map_pages(struct list_head *list)
 
 static inline bool migrate_async_suitable(int migratetype)
 {
-	return is_migrate_cma(migratetype) || migratetype == MIGRATE_MOVABLE;
+	return is_migrate_movable(migratetype);
 }
 
 /*
@@ -375,8 +375,8 @@ static bool suitable_migration_target(struct page *page)
 	if (PageBuddy(page) && page_order(page) >= pageblock_order)
 		return true;
 
-	/* If the block is MIGRATE_MOVABLE or MIGRATE_CMA, allow migration */
-	if (migrate_async_suitable(migratetype))
+	/* If the block is movable, allow migration */
+	if (is_migrate_movable(migratetype))
 		return true;
 
 	/* Otherwise skip the block */
diff --git a/mm/memory-failure.c b/mm/memory-failure.c
index ab1e714..f5e300d 100644
--- a/mm/memory-failure.c
+++ b/mm/memory-failure.c
@@ -1367,6 +1367,7 @@ static struct page *new_page(struct page *p, unsigned long private, int **x)
 static int get_any_page(struct page *p, unsigned long pfn, int flags)
 {
 	int ret;
+	int mt;
 
 	if (flags & MF_COUNT_INCREASED)
 		return 1;
@@ -1377,6 +1378,11 @@ static int get_any_page(struct page *p, unsigned long pfn, int flags)
 	 */
 	lock_memory_hotplug();
 
+	/* Don't move page of stable type to MIGRATE_MOVABLE */
+	mt = get_pageblock_migratetype(p);
+	if (!is_migrate_stable(mt))
+		mt = MIGRATE_MOVABLE;
+
 	/*
 	 * Isolate the page, so that it doesn't get reallocated if it
 	 * was free.
@@ -1404,7 +1410,7 @@ static int get_any_page(struct page *p, unsigned long pfn, int flags)
 		/* Not a free page */
 		ret = 1;
 	}
-	unset_migratetype_isolate(p, MIGRATE_MOVABLE);
+	unset_migratetype_isolate(p, mt);
 	unlock_memory_hotplug();
 	return ret;
 }
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index efc327f..7a4a03b 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -667,7 +667,7 @@ static void free_pcppages_bulk(struct zone *zone, int count,
 			page = list_entry(list->prev, struct page, lru);
 			/* must delete as __free_one_page list manipulates */
 			list_del(&page->lru);
-			/* MIGRATE_MOVABLE list may include MIGRATE_RESERVEs */
+			/* MIGRATE_MOVABLE list may include other types */
 			__free_one_page(page, zone, 0, page_private(page));
 			trace_mm_page_pcpu_drain(page, 0, page_private(page));
 		} while (--to_free && --batch_free && !list_empty(list));
@@ -1058,6 +1058,14 @@ static struct page *__rmqueue(struct zone *zone, unsigned int order,
 {
 	struct page *page;
 
+#ifdef CONFIG_MEMORY_HOTREMOVE
+	if (migratetype == MIGRATE_MOVABLE) {
+		page = __rmqueue_smallest(zone, order, MIGRATE_HOTREMOVE);
+		if (likely(page))
+			goto done;
+	}
+#endif
+
 	page = __rmqueue_smallest(zone, order, migratetype);
 
 #ifdef CONFIG_CMA
@@ -1071,6 +1079,7 @@ static struct page *__rmqueue(struct zone *zone, unsigned int order,
 	if (unlikely(!page))
 		page = __rmqueue_smallest(zone, order, MIGRATE_RESERVE);
 
+done:
 	trace_mm_page_alloc_zone_locked(page, order, migratetype);
 	return page;
 }
@@ -1105,11 +1114,7 @@ static int rmqueue_bulk(struct zone *zone, unsigned int order,
 			list_add(&page->lru, list);
 		else
 			list_add_tail(&page->lru, list);
-		if (IS_ENABLED(CONFIG_CMA)) {
-			mt = get_pageblock_migratetype(page);
-			if (!is_migrate_cma(mt) && mt != MIGRATE_ISOLATE)
-				mt = migratetype;
-		}
+		mt = get_pageblock_migratetype(page);
 		set_page_private(page, mt);
 		list = &page->lru;
 	}
@@ -1392,7 +1397,7 @@ int split_free_page(struct page *page)
 		struct page *endpage = page + (1 << order) - 1;
 		for (; page < endpage; page += pageblock_nr_pages) {
 			int mt = get_pageblock_migratetype(page);
-			if (mt != MIGRATE_ISOLATE && !is_migrate_cma(mt))
+			if (mt != MIGRATE_ISOLATE && !is_migrate_stable(mt))
 				set_pageblock_migratetype(page, MIGRATE_MOVABLE);
 		}
 
@@ -5465,7 +5470,7 @@ __count_immobile_pages(struct zone *zone, struct page *page, int count)
 	if (zone_idx(zone) == ZONE_MOVABLE)
 		return true;
 	mt = get_pageblock_migratetype(page);
-	if (mt == MIGRATE_MOVABLE || is_migrate_cma(mt))
+	if (is_migrate_movable(mt))
 		return true;
 
 	pfn = page_to_pfn(page);
diff --git a/mm/vmstat.c b/mm/vmstat.c
index 1bbbbd9..44a3b7f 100644
--- a/mm/vmstat.c
+++ b/mm/vmstat.c
@@ -616,6 +616,9 @@ static char * const migratetype_names[MIGRATE_TYPES] = {
 #ifdef CONFIG_CMA
 	"CMA",
 #endif
+#ifdef CONFIG_MEMORY_HOTREMOVE
+	"Hotremove",
+#endif
 	"Isolate",
 };
 