From patchwork Mon May 8 07:11:51 2023
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Kefeng Wang
X-Patchwork-Id: 13234124
From: Kefeng Wang <wangkefeng.wang@huawei.com>
To: Andrew Morton, Mike Rapoport
Cc: David Hildenbrand, Oscar Salvador, "Rafael J. Wysocki",
    Pavel Machek, Len Brown, Luis Chamberlain, Kees Cook, Iurii Zaikin,
    Kefeng Wang
Subject: [PATCH 03/12] mm: page_alloc: move set_zone_contiguous() into mm_init.c
Date: Mon, 8 May 2023 15:11:51 +0800
Message-ID: <20230508071200.123962-4-wangkefeng.wang@huawei.com>
X-Mailer: git-send-email 2.35.3
In-Reply-To: <20230508071200.123962-1-wangkefeng.wang@huawei.com>
References: <20230508071200.123962-1-wangkefeng.wang@huawei.com>

set_zone_contiguous() is only used during mm init and memory hotplug, and
clear_zone_contiguous() is only used during hotplug; move them out of
page_alloc.c and into more appropriate files: set_zone_contiguous() into
mm/mm_init.c, and clear_zone_contiguous() into mm/internal.h as a static
inline.
Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
---
 include/linux/memory_hotplug.h |  3 --
 mm/internal.h                  |  7 +++
 mm/mm_init.c                   | 74 +++++++++++++++++++++++++++++++
 mm/page_alloc.c                | 79 ----------------------------------
 4 files changed, 81 insertions(+), 82 deletions(-)

diff --git a/include/linux/memory_hotplug.h b/include/linux/memory_hotplug.h
index 9fcbf5706595..04bc286eed42 100644
--- a/include/linux/memory_hotplug.h
+++ b/include/linux/memory_hotplug.h
@@ -326,9 +326,6 @@ static inline int remove_memory(u64 start, u64 size)
 static inline void __remove_memory(u64 start, u64 size) {}
 #endif /* CONFIG_MEMORY_HOTREMOVE */
 
-extern void set_zone_contiguous(struct zone *zone);
-extern void clear_zone_contiguous(struct zone *zone);
-
 #ifdef CONFIG_MEMORY_HOTPLUG
 extern void __ref free_area_init_core_hotplug(struct pglist_data *pgdat);
 extern int __add_memory(int nid, u64 start, u64 size, mhp_t mhp_flags);
diff --git a/mm/internal.h b/mm/internal.h
index e28442c0858a..9482862b28cc 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -371,6 +371,13 @@ static inline struct page *pageblock_pfn_to_page(unsigned long start_pfn,
 	return __pageblock_pfn_to_page(start_pfn, end_pfn, zone);
 }
 
+void set_zone_contiguous(struct zone *zone);
+
+static inline void clear_zone_contiguous(struct zone *zone)
+{
+	zone->contiguous = false;
+}
+
 extern int __isolate_free_page(struct page *page, unsigned int order);
 extern void __putback_isolated_page(struct page *page, unsigned int order,
 				    int mt);
diff --git a/mm/mm_init.c b/mm/mm_init.c
index 15201887f8e0..1f30b9e16577 100644
--- a/mm/mm_init.c
+++ b/mm/mm_init.c
@@ -2330,6 +2330,80 @@ void __init init_cma_reserved_pageblock(struct page *page)
 }
 #endif
 
+/*
+ * Check that the whole (or subset of) a pageblock given by the interval of
+ * [start_pfn, end_pfn) is valid and within the same zone, before scanning it
+ * with the migration of free compaction scanner.
+ *
+ * Return struct page pointer of start_pfn, or NULL if checks were not passed.
+ *
+ * It's possible on some configurations to have a setup like node0 node1 node0
+ * i.e. it's possible that all pages within a zones range of pages do not
+ * belong to a single zone. We assume that a border between node0 and node1
+ * can occur within a single pageblock, but not a node0 node1 node0
+ * interleaving within a single pageblock. It is therefore sufficient to check
+ * the first and last page of a pageblock and avoid checking each individual
+ * page in a pageblock.
+ *
+ * Note: the function may return non-NULL struct page even for a page block
+ * which contains a memory hole (i.e. there is no physical memory for a subset
+ * of the pfn range). For example, if the pageblock order is MAX_ORDER, which
+ * will fall into 2 sub-sections, and the end pfn of the pageblock may be hole
+ * even though the start pfn is online and valid. This should be safe most of
+ * the time because struct pages are still initialized via init_unavailable_range()
+ * and pfn walkers shouldn't touch any physical memory range for which they do
+ * not recognize any specific metadata in struct pages.
+ */
+struct page *__pageblock_pfn_to_page(unsigned long start_pfn,
+				     unsigned long end_pfn, struct zone *zone)
+{
+	struct page *start_page;
+	struct page *end_page;
+
+	/* end_pfn is one past the range we are checking */
+	end_pfn--;
+
+	if (!pfn_valid(end_pfn))
+		return NULL;
+
+	start_page = pfn_to_online_page(start_pfn);
+	if (!start_page)
+		return NULL;
+
+	if (page_zone(start_page) != zone)
+		return NULL;
+
+	end_page = pfn_to_page(end_pfn);
+
+	/* This gives a shorter code than deriving page_zone(end_page) */
+	if (page_zone_id(start_page) != page_zone_id(end_page))
+		return NULL;
+
+	return start_page;
+}
+
+void set_zone_contiguous(struct zone *zone)
+{
+	unsigned long block_start_pfn = zone->zone_start_pfn;
+	unsigned long block_end_pfn;
+
+	block_end_pfn = pageblock_end_pfn(block_start_pfn);
+	for (; block_start_pfn < zone_end_pfn(zone);
+	     block_start_pfn = block_end_pfn,
+	     block_end_pfn += pageblock_nr_pages) {
+
+		block_end_pfn = min(block_end_pfn, zone_end_pfn(zone));
+
+		if (!__pageblock_pfn_to_page(block_start_pfn,
+					     block_end_pfn, zone))
+			return;
+		cond_resched();
+	}
+
+	/* We confirm that there is no hole */
+	zone->contiguous = true;
+}
+
 void __init page_alloc_init_late(void)
 {
 	struct zone *zone;
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 4f094ba7c8fb..fe7c1ee5becd 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -1480,85 +1480,6 @@ void __free_pages_core(struct page *page, unsigned int order)
 	__free_pages_ok(page, order, FPI_TO_TAIL);
 }
 
-/*
- * Check that the whole (or subset of) a pageblock given by the interval of
- * [start_pfn, end_pfn) is valid and within the same zone, before scanning it
- * with the migration of free compaction scanner.
- *
- * Return struct page pointer of start_pfn, or NULL if checks were not passed.
- *
- * It's possible on some configurations to have a setup like node0 node1 node0
- * i.e. it's possible that all pages within a zones range of pages do not
- * belong to a single zone. We assume that a border between node0 and node1
- * can occur within a single pageblock, but not a node0 node1 node0
- * interleaving within a single pageblock. It is therefore sufficient to check
- * the first and last page of a pageblock and avoid checking each individual
- * page in a pageblock.
- *
- * Note: the function may return non-NULL struct page even for a page block
- * which contains a memory hole (i.e. there is no physical memory for a subset
- * of the pfn range). For example, if the pageblock order is MAX_ORDER, which
- * will fall into 2 sub-sections, and the end pfn of the pageblock may be hole
- * even though the start pfn is online and valid. This should be safe most of
- * the time because struct pages are still initialized via init_unavailable_range()
- * and pfn walkers shouldn't touch any physical memory range for which they do
- * not recognize any specific metadata in struct pages.
- */
-struct page *__pageblock_pfn_to_page(unsigned long start_pfn,
-				     unsigned long end_pfn, struct zone *zone)
-{
-	struct page *start_page;
-	struct page *end_page;
-
-	/* end_pfn is one past the range we are checking */
-	end_pfn--;
-
-	if (!pfn_valid(end_pfn))
-		return NULL;
-
-	start_page = pfn_to_online_page(start_pfn);
-	if (!start_page)
-		return NULL;
-
-	if (page_zone(start_page) != zone)
-		return NULL;
-
-	end_page = pfn_to_page(end_pfn);
-
-	/* This gives a shorter code than deriving page_zone(end_page) */
-	if (page_zone_id(start_page) != page_zone_id(end_page))
-		return NULL;
-
-	return start_page;
-}
-
-void set_zone_contiguous(struct zone *zone)
-{
-	unsigned long block_start_pfn = zone->zone_start_pfn;
-	unsigned long block_end_pfn;
-
-	block_end_pfn = pageblock_end_pfn(block_start_pfn);
-	for (; block_start_pfn < zone_end_pfn(zone);
-	     block_start_pfn = block_end_pfn,
-	     block_end_pfn += pageblock_nr_pages) {
-
-		block_end_pfn = min(block_end_pfn, zone_end_pfn(zone));
-
-		if (!__pageblock_pfn_to_page(block_start_pfn,
-					     block_end_pfn, zone))
-			return;
-		cond_resched();
-	}
-
-	/* We confirm that there is no hole */
-	zone->contiguous = true;
-}
-
-void clear_zone_contiguous(struct zone *zone)
-{
-	zone->contiguous = false;
-}
-
 /*
  * The order of subdivision here is critical for the IO subsystem.
  * Please do not alter this order without good reasons and regression
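
As background for reviewers, the pageblock walk that set_zone_contiguous()
performs is easy to model outside the kernel. Below is a minimal userspace
sketch of the same first-and-last-pfn check, assuming one simulated memory
hole; sim_zone, sim_pfn_valid(), sim_pageblock_ok() and
SIM_PAGEBLOCK_NR_PAGES are hypothetical stand-ins for struct zone,
pfn_valid(), __pageblock_pfn_to_page() and pageblock_nr_pages, not kernel
APIs.

/*
 * Userspace model of the walk in set_zone_contiguous(). Every name here
 * is a made-up stand-in; only the align/clamp/early-return shape mirrors
 * the moved kernel code above.
 */
#include <stdbool.h>
#include <stdio.h>

#define SIM_PAGEBLOCK_NR_PAGES 512UL	/* e.g. 2MB blocks with 4KB pages */

struct sim_zone {
	unsigned long start_pfn;
	unsigned long end_pfn;		/* one past the last pfn */
	unsigned long hole_start;	/* simulated hole: */
	unsigned long hole_end;		/* [hole_start, hole_end) */
	bool contiguous;
};

/* Stand-in for pfn_valid(): false inside the simulated hole. */
static bool sim_pfn_valid(const struct sim_zone *z, unsigned long pfn)
{
	return pfn < z->hole_start || pfn >= z->hole_end;
}

/*
 * Stand-in for __pageblock_pfn_to_page(): like the kernel, only the
 * first and last pfn of the block are checked, not every page.
 */
static bool sim_pageblock_ok(const struct sim_zone *z,
			     unsigned long start_pfn, unsigned long end_pfn)
{
	return sim_pfn_valid(z, start_pfn) && sim_pfn_valid(z, end_pfn - 1);
}

static void sim_set_zone_contiguous(struct sim_zone *z)
{
	unsigned long block_start_pfn = z->start_pfn;
	unsigned long block_end_pfn;

	/* Round the first block end up to a pageblock boundary. */
	block_end_pfn = (block_start_pfn + SIM_PAGEBLOCK_NR_PAGES) &
			~(SIM_PAGEBLOCK_NR_PAGES - 1);

	for (; block_start_pfn < z->end_pfn;
	     block_start_pfn = block_end_pfn,
	     block_end_pfn += SIM_PAGEBLOCK_NR_PAGES) {
		unsigned long end = block_end_pfn;

		if (end > z->end_pfn)	/* clamp the last, partial block */
			end = z->end_pfn;
		if (!sim_pageblock_ok(z, block_start_pfn, end))
			return;		/* hole found: stay non-contiguous */
	}
	z->contiguous = true;		/* no pageblock failed the check */
}

int main(void)
{
	/* A 16K-page zone with a hole punched into its third pageblock. */
	struct sim_zone z = {
		.start_pfn = 0x1000, .end_pfn = 0x5000,
		.hole_start = 0x1400, .hole_end = 0x1600,
	};

	sim_set_zone_contiguous(&z);
	printf("contiguous: %s\n", z.contiguous ? "yes" : "no"); /* no */

	z.hole_start = z.hole_end = 0;	/* remove the hole and retry */
	z.contiguous = false;
	sim_set_zone_contiguous(&z);
	printf("contiguous: %s\n", z.contiguous ? "yes" : "no"); /* yes */
	return 0;
}

Because only the first and last pfn of each block are tested (mirroring the
node0/node1 assumption in the moved comment), a hole strictly interior to a
pageblock would go unnoticed in this model as well.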