From patchwork Fri Aug 12 10:58:23 2011
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Marek Szyprowski
X-Patchwork-Id: 1060562
Date: Fri, 12 Aug 2011 12:58:23 +0200
From: Marek Szyprowski
Subject: [PATCH 1/9] mm: move some functions from memory_hotplug.c to
 page_isolation.c
In-reply-to: <1313146711-1767-1-git-send-email-m.szyprowski@samsung.com>
To: linux-kernel@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
 linux-media@vger.kernel.org, linux-mm@kvack.org,
 linaro-mm-sig@lists.linaro.org
Cc: Michal Nazarewicz, Marek Szyprowski, Kyungmin Park, Russell King,
 Andrew Morton, KAMEZAWA Hiroyuki, Ankita Garg, Daniel Walker,
 Mel Gorman, Arnd Bergmann, Jesse Barker, Jonathan Corbet,
 Shariq Hasnain, Chunsang Jeong
Message-id:
<1313146711-1767-2-git-send-email-m.szyprowski@samsung.com>
X-Mailer: git-send-email 1.7.5.4
References: <1313146711-1767-1-git-send-email-m.szyprowski@samsung.com>

From: KAMEZAWA Hiroyuki

Memory hotplug provides the logic for making pages in a specified pfn
range unused, so some of its core routines can be reused for other
purposes, such as allocating a very large contiguous memory block.

This patch moves some functions from mm/memory_hotplug.c to
mm/page_isolation.c, which makes it easier to add a large-allocation
function to page_isolation.c based on the memory-unplug technique.

Signed-off-by: KAMEZAWA Hiroyuki
[m.nazarewicz: reworded commit message]
Signed-off-by: Michal Nazarewicz
Signed-off-by: Kyungmin Park
[m.szyprowski: rebased and updated to Linux v3.0-rc1]
Signed-off-by: Marek Szyprowski
CC: Michal Nazarewicz
Acked-by: Arnd Bergmann
---
 include/linux/page-isolation.h |    7 +++
 mm/memory_hotplug.c            |  111 --------------------------------------
 mm/page_isolation.c            |  114 ++++++++++++++++++++++++++++++++++++++++
 3 files changed, 121 insertions(+), 111 deletions(-)

diff --git a/include/linux/page-isolation.h b/include/linux/page-isolation.h
index 051c1b1..58cdbac 100644
--- a/include/linux/page-isolation.h
+++ b/include/linux/page-isolation.h
@@ -33,5 +33,12 @@ test_pages_isolated(unsigned long start_pfn, unsigned long end_pfn);
 extern int set_migratetype_isolate(struct page *page);
 extern void unset_migratetype_isolate(struct page *page);
 
+/*
+ * For migration.
+ */
+
+int test_pages_in_a_zone(unsigned long start_pfn, unsigned long end_pfn);
+unsigned long scan_lru_pages(unsigned long start, unsigned long end);
+int do_migrate_range(unsigned long start_pfn, unsigned long end_pfn);
 #endif
diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
index 6e7d8b2..3419dd6 100644
--- a/mm/memory_hotplug.c
+++ b/mm/memory_hotplug.c
@@ -707,117 +707,6 @@ int is_mem_section_removable(unsigned long start_pfn, unsigned long nr_pages)
 }
 
 /*
- * Confirm all pages in a range [start, end) is belongs to the same zone.
- */
-static int test_pages_in_a_zone(unsigned long start_pfn, unsigned long end_pfn)
-{
-	unsigned long pfn;
-	struct zone *zone = NULL;
-	struct page *page;
-	int i;
-	for (pfn = start_pfn;
-	     pfn < end_pfn;
-	     pfn += MAX_ORDER_NR_PAGES) {
-		i = 0;
-		/* This is just a CONFIG_HOLES_IN_ZONE check.*/
-		while ((i < MAX_ORDER_NR_PAGES) && !pfn_valid_within(pfn + i))
-			i++;
-		if (i == MAX_ORDER_NR_PAGES)
-			continue;
-		page = pfn_to_page(pfn + i);
-		if (zone && page_zone(page) != zone)
-			return 0;
-		zone = page_zone(page);
-	}
-	return 1;
-}
-
-/*
- * Scanning pfn is much easier than scanning lru list.
- * Scan pfn from start to end and Find LRU page.
- */
-static unsigned long scan_lru_pages(unsigned long start, unsigned long end)
-{
-	unsigned long pfn;
-	struct page *page;
-	for (pfn = start; pfn < end; pfn++) {
-		if (pfn_valid(pfn)) {
-			page = pfn_to_page(pfn);
-			if (PageLRU(page))
-				return pfn;
-		}
-	}
-	return 0;
-}
-
-static struct page *
-hotremove_migrate_alloc(struct page *page, unsigned long private, int **x)
-{
-	/* This should be improooooved!! */
-	return alloc_page(GFP_HIGHUSER_MOVABLE);
-}
-
-#define NR_OFFLINE_AT_ONCE_PAGES	(256)
-static int
-do_migrate_range(unsigned long start_pfn, unsigned long end_pfn)
-{
-	unsigned long pfn;
-	struct page *page;
-	int move_pages = NR_OFFLINE_AT_ONCE_PAGES;
-	int not_managed = 0;
-	int ret = 0;
-	LIST_HEAD(source);
-
-	for (pfn = start_pfn; pfn < end_pfn && move_pages > 0; pfn++) {
-		if (!pfn_valid(pfn))
-			continue;
-		page = pfn_to_page(pfn);
-		if (!get_page_unless_zero(page))
-			continue;
-		/*
-		 * We can skip free pages. And we can only deal with pages on
-		 * LRU.
-		 */
-		ret = isolate_lru_page(page);
-		if (!ret) { /* Success */
-			put_page(page);
-			list_add_tail(&page->lru, &source);
-			move_pages--;
-			inc_zone_page_state(page, NR_ISOLATED_ANON +
-					    page_is_file_cache(page));
-
-		} else {
-#ifdef CONFIG_DEBUG_VM
-			printk(KERN_ALERT "removing pfn %lx from LRU failed\n",
-			       pfn);
-			dump_page(page);
-#endif
-			put_page(page);
-			/* Because we don't have big zone->lock. we should
-			   check this again here. */
-			if (page_count(page)) {
-				not_managed++;
-				ret = -EBUSY;
-				break;
-			}
-		}
-	}
-	if (!list_empty(&source)) {
-		if (not_managed) {
-			putback_lru_pages(&source);
-			goto out;
-		}
-		/* this function returns # of failed pages */
-		ret = migrate_pages(&source, hotremove_migrate_alloc, 0,
-								true, true);
-		if (ret)
-			putback_lru_pages(&source);
-	}
-out:
-	return ret;
-}
-
-/*
  * remove from free_area[] and mark all as Reserved.
  */
 static int
diff --git a/mm/page_isolation.c b/mm/page_isolation.c
index 4ae42bb..270a026 100644
--- a/mm/page_isolation.c
+++ b/mm/page_isolation.c
@@ -5,6 +5,9 @@
 #include
 #include
 #include
+#include
+#include
+#include
 #include "internal.h"
 
 static inline struct page *
@@ -139,3 +142,114 @@ int test_pages_isolated(unsigned long start_pfn, unsigned long end_pfn)
 	spin_unlock_irqrestore(&zone->lock, flags);
 	return ret ? 0 : -EBUSY;
 }
+
+
+/*
+ * Confirm all pages in a range [start, end) is belongs to the same zone.
+ */
+int test_pages_in_a_zone(unsigned long start_pfn, unsigned long end_pfn)
+{
+	unsigned long pfn;
+	struct zone *zone = NULL;
+	struct page *page;
+	int i;
+	for (pfn = start_pfn;
+	     pfn < end_pfn;
+	     pfn += MAX_ORDER_NR_PAGES) {
+		i = 0;
+		/* This is just a CONFIG_HOLES_IN_ZONE check.*/
+		while ((i < MAX_ORDER_NR_PAGES) && !pfn_valid_within(pfn + i))
+			i++;
+		if (i == MAX_ORDER_NR_PAGES)
+			continue;
+		page = pfn_to_page(pfn + i);
+		if (zone && page_zone(page) != zone)
+			return 0;
+		zone = page_zone(page);
+	}
+	return 1;
+}
+
+/*
+ * Scanning pfn is much easier than scanning lru list.
+ * Scan pfn from start to end and Find LRU page.
+ */
+unsigned long scan_lru_pages(unsigned long start, unsigned long end)
+{
+	unsigned long pfn;
+	struct page *page;
+	for (pfn = start; pfn < end; pfn++) {
+		if (pfn_valid(pfn)) {
+			page = pfn_to_page(pfn);
+			if (PageLRU(page))
+				return pfn;
+		}
+	}
+	return 0;
+}
+
+struct page *
+hotremove_migrate_alloc(struct page *page, unsigned long private, int **x)
+{
+	/* This should be improooooved!! */
+	return alloc_page(GFP_HIGHUSER_MOVABLE);
+}
+
+#define NR_OFFLINE_AT_ONCE_PAGES	(256)
+int do_migrate_range(unsigned long start_pfn, unsigned long end_pfn)
+{
+	unsigned long pfn;
+	struct page *page;
+	int move_pages = NR_OFFLINE_AT_ONCE_PAGES;
+	int not_managed = 0;
+	int ret = 0;
+	LIST_HEAD(source);
+
+	for (pfn = start_pfn; pfn < end_pfn && move_pages > 0; pfn++) {
+		if (!pfn_valid(pfn))
+			continue;
+		page = pfn_to_page(pfn);
+		if (!get_page_unless_zero(page))
+			continue;
+		/*
+		 * We can skip free pages. And we can only deal with pages on
+		 * LRU.
+		 */
+		ret = isolate_lru_page(page);
+		if (!ret) { /* Success */
+			put_page(page);
+			list_add_tail(&page->lru, &source);
+			move_pages--;
+			inc_zone_page_state(page, NR_ISOLATED_ANON +
+					    page_is_file_cache(page));
+
+		} else {
+#ifdef CONFIG_DEBUG_VM
+			printk(KERN_ALERT "removing pfn %lx from LRU failed\n",
+			       pfn);
+			dump_page(page);
+#endif
+			put_page(page);
+			/* Because we don't have big zone->lock. we should
+			   check this again here. */
+			if (page_count(page)) {
+				not_managed++;
+				ret = -EBUSY;
+				break;
+			}
+		}
+	}
+	if (!list_empty(&source)) {
+		if (not_managed) {
+			putback_lru_pages(&source);
+			goto out;
+		}
+		/* this function returns # of failed pages */
+		ret = migrate_pages(&source, hotremove_migrate_alloc, 0,
+								true, true);
+		if (ret)
+			putback_lru_pages(&source);
+	}
+out:
+	return ret;
+}