| Message ID | 20230508071200.123962-4-wangkefeng.wang@huawei.com (mailing list archive) |
|---|---|
| State | New |
| Series | mm: page_alloc: misc cleanup and refactor |
Kefeng Wang <wangkefeng.wang@huawei.com> writes:

> set_zone_contiguous() is only used in mm init/hotplug, and
> clear_zone_contiguous() is only used in hotplug; move them from
> page_alloc.c to more appropriate files.
>
> Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
> ---
>  include/linux/memory_hotplug.h |  3 --
>  mm/internal.h                  |  7 +++
>  mm/mm_init.c                   | 74 +++++++++++++++++++++++++++++++
>  mm/page_alloc.c                | 79 ----------------------------------
>  4 files changed, 81 insertions(+), 82 deletions(-)
>
> diff --git a/include/linux/memory_hotplug.h b/include/linux/memory_hotplug.h
> index 9fcbf5706595..04bc286eed42 100644
> --- a/include/linux/memory_hotplug.h
> +++ b/include/linux/memory_hotplug.h
> @@ -326,9 +326,6 @@ static inline int remove_memory(u64 start, u64 size)
>  static inline void __remove_memory(u64 start, u64 size) {}
>  #endif /* CONFIG_MEMORY_HOTREMOVE */
>
> -extern void set_zone_contiguous(struct zone *zone);
> -extern void clear_zone_contiguous(struct zone *zone);
> -
>  #ifdef CONFIG_MEMORY_HOTPLUG
>  extern void __ref free_area_init_core_hotplug(struct pglist_data *pgdat);
>  extern int __add_memory(int nid, u64 start, u64 size, mhp_t mhp_flags);
> diff --git a/mm/internal.h b/mm/internal.h
> index e28442c0858a..9482862b28cc 100644
> --- a/mm/internal.h
> +++ b/mm/internal.h
> @@ -371,6 +371,13 @@ static inline struct page *pageblock_pfn_to_page(unsigned long start_pfn,
>  	return __pageblock_pfn_to_page(start_pfn, end_pfn, zone);
>  }
>
> +void set_zone_contiguous(struct zone *zone);
> +
> +static inline void clear_zone_contiguous(struct zone *zone)
> +{
> +	zone->contiguous = false;
> +}
> +
>  extern int __isolate_free_page(struct page *page, unsigned int order);
>  extern void __putback_isolated_page(struct page *page, unsigned int order,
>  			int mt);
> diff --git a/mm/mm_init.c b/mm/mm_init.c
> index 15201887f8e0..1f30b9e16577 100644
> --- a/mm/mm_init.c
> +++ b/mm/mm_init.c
> @@ -2330,6 +2330,80 @@ void __init init_cma_reserved_pageblock(struct page *page)
>  }
>  #endif
>
> +/*
> + * Check that the whole (or subset of) a pageblock given by the interval of
> + * [start_pfn, end_pfn) is valid and within the same zone, before scanning it
> + * with the migration of free compaction scanner.
> + *
> + * Return struct page pointer of start_pfn, or NULL if checks were not passed.
> + *
> + * It's possible on some configurations to have a setup like node0 node1 node0
> + * i.e. it's possible that all pages within a zones range of pages do not
> + * belong to a single zone. We assume that a border between node0 and node1
> + * can occur within a single pageblock, but not a node0 node1 node0
> + * interleaving within a single pageblock. It is therefore sufficient to check
> + * the first and last page of a pageblock and avoid checking each individual
> + * page in a pageblock.
> + *
> + * Note: the function may return non-NULL struct page even for a page block
> + * which contains a memory hole (i.e. there is no physical memory for a subset
> + * of the pfn range). For example, if the pageblock order is MAX_ORDER, which
> + * will fall into 2 sub-sections, and the end pfn of the pageblock may be hole
> + * even though the start pfn is online and valid. This should be safe most of
> + * the time because struct pages are still initialized via init_unavailable_range()
> + * and pfn walkers shouldn't touch any physical memory range for which they do
> + * not recognize any specific metadata in struct pages.
> + */
> +struct page *__pageblock_pfn_to_page(unsigned long start_pfn,
> +		unsigned long end_pfn, struct zone *zone)

__pageblock_pfn_to_page() is also called by compaction code (e.g.,
isolate_freepages_range() -> pageblock_pfn_to_page() ->
__pageblock_pfn_to_page()).

So, it is used not only by initialization and hotplug?

Best Regards,
Huang, Ying

> +{
> +	struct page *start_page;
> +	struct page *end_page;
> +
> +	/* end_pfn is one past the range we are checking */
> +	end_pfn--;
> +
> +	if (!pfn_valid(end_pfn))
> +		return NULL;
> +
> +	start_page = pfn_to_online_page(start_pfn);
> +	if (!start_page)
> +		return NULL;
> +
> +	if (page_zone(start_page) != zone)
> +		return NULL;
> +
> +	end_page = pfn_to_page(end_pfn);
> +
> +	/* This gives a shorter code than deriving page_zone(end_page) */
> +	if (page_zone_id(start_page) != page_zone_id(end_page))
> +		return NULL;
> +
> +	return start_page;
> +}
> +
> +void set_zone_contiguous(struct zone *zone)
> +{
> +	unsigned long block_start_pfn = zone->zone_start_pfn;
> +	unsigned long block_end_pfn;
> +
> +	block_end_pfn = pageblock_end_pfn(block_start_pfn);
> +	for (; block_start_pfn < zone_end_pfn(zone);
> +			block_start_pfn = block_end_pfn,
> +			block_end_pfn += pageblock_nr_pages) {
> +
> +		block_end_pfn = min(block_end_pfn, zone_end_pfn(zone));
> +
> +		if (!__pageblock_pfn_to_page(block_start_pfn,
> +					     block_end_pfn, zone))
> +			return;
> +		cond_resched();
> +	}
> +
> +	/* We confirm that there is no hole */
> +	zone->contiguous = true;
> +}
> +
>  void __init page_alloc_init_late(void)
>  {
>  	struct zone *zone;
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index 4f094ba7c8fb..fe7c1ee5becd 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -1480,85 +1480,6 @@ void __free_pages_core(struct page *page, unsigned int order)
>  	__free_pages_ok(page, order, FPI_TO_TAIL);
>  }
>
> -/*
> - * Check that the whole (or subset of) a pageblock given by the interval of
> - * [start_pfn, end_pfn) is valid and within the same zone, before scanning it
> - * with the migration of free compaction scanner.
> - *
> - * Return struct page pointer of start_pfn, or NULL if checks were not passed.
> - *
> - * It's possible on some configurations to have a setup like node0 node1 node0
> - * i.e. it's possible that all pages within a zones range of pages do not
> - * belong to a single zone. We assume that a border between node0 and node1
> - * can occur within a single pageblock, but not a node0 node1 node0
> - * interleaving within a single pageblock. It is therefore sufficient to check
> - * the first and last page of a pageblock and avoid checking each individual
> - * page in a pageblock.
> - *
> - * Note: the function may return non-NULL struct page even for a page block
> - * which contains a memory hole (i.e. there is no physical memory for a subset
> - * of the pfn range). For example, if the pageblock order is MAX_ORDER, which
> - * will fall into 2 sub-sections, and the end pfn of the pageblock may be hole
> - * even though the start pfn is online and valid. This should be safe most of
> - * the time because struct pages are still initialized via init_unavailable_range()
> - * and pfn walkers shouldn't touch any physical memory range for which they do
> - * not recognize any specific metadata in struct pages.
> - */
> -struct page *__pageblock_pfn_to_page(unsigned long start_pfn,
> -		unsigned long end_pfn, struct zone *zone)
> -{
> -	struct page *start_page;
> -	struct page *end_page;
> -
> -	/* end_pfn is one past the range we are checking */
> -	end_pfn--;
> -
> -	if (!pfn_valid(end_pfn))
> -		return NULL;
> -
> -	start_page = pfn_to_online_page(start_pfn);
> -	if (!start_page)
> -		return NULL;
> -
> -	if (page_zone(start_page) != zone)
> -		return NULL;
> -
> -	end_page = pfn_to_page(end_pfn);
> -
> -	/* This gives a shorter code than deriving page_zone(end_page) */
> -	if (page_zone_id(start_page) != page_zone_id(end_page))
> -		return NULL;
> -
> -	return start_page;
> -}
> -
> -void set_zone_contiguous(struct zone *zone)
> -{
> -	unsigned long block_start_pfn = zone->zone_start_pfn;
> -	unsigned long block_end_pfn;
> -
> -	block_end_pfn = pageblock_end_pfn(block_start_pfn);
> -	for (; block_start_pfn < zone_end_pfn(zone);
> -			block_start_pfn = block_end_pfn,
> -			block_end_pfn += pageblock_nr_pages) {
> -
> -		block_end_pfn = min(block_end_pfn, zone_end_pfn(zone));
> -
> -		if (!__pageblock_pfn_to_page(block_start_pfn,
> -					     block_end_pfn, zone))
> -			return;
> -		cond_resched();
> -	}
> -
> -	/* We confirm that there is no hole */
> -	zone->contiguous = true;
> -}
> -
> -void clear_zone_contiguous(struct zone *zone)
> -{
> -	zone->contiguous = false;
> -}
> -
>  /*
>   * The order of subdivision here is critical for the IO subsystem.
>   * Please do not alter this order without good reasons and regression
On 2023/5/8 15:12, Huang, Ying wrote:
> Kefeng Wang <wangkefeng.wang@huawei.com> writes:
>
>> set_zone_contiguous() is only used in mm init/hotplug, and
>> clear_zone_contiguous() is only used in hotplug; move them from
>> page_alloc.c to more appropriate files.
>>
>> Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
>> ---
>>  include/linux/memory_hotplug.h |  3 --
>>  mm/internal.h                  |  7 +++
>>  mm/mm_init.c                   | 74 +++++++++++++++++++++++++++++++
>>  mm/page_alloc.c                | 79 ----------------------------------
>>  4 files changed, 81 insertions(+), 82 deletions(-)
>> ...
>>
>> +/*
>> + * Check that the whole (or subset of) a pageblock given by the interval of
>> + * [start_pfn, end_pfn) is valid and within the same zone, before scanning it
>> + * with the migration of free compaction scanner.
>> + *
>> + * Return struct page pointer of start_pfn, or NULL if checks were not passed.
>> + *
>> + * It's possible on some configurations to have a setup like node0 node1 node0
>> + * i.e. it's possible that all pages within a zones range of pages do not
>> + * belong to a single zone. We assume that a border between node0 and node1
>> + * can occur within a single pageblock, but not a node0 node1 node0
>> + * interleaving within a single pageblock. It is therefore sufficient to check
>> + * the first and last page of a pageblock and avoid checking each individual
>> + * page in a pageblock.
>> + *
>> + * Note: the function may return non-NULL struct page even for a page block
>> + * which contains a memory hole (i.e. there is no physical memory for a subset
>> + * of the pfn range). For example, if the pageblock order is MAX_ORDER, which
>> + * will fall into 2 sub-sections, and the end pfn of the pageblock may be hole
>> + * even though the start pfn is online and valid. This should be safe most of
>> + * the time because struct pages are still initialized via init_unavailable_range()
>> + * and pfn walkers shouldn't touch any physical memory range for which they do
>> + * not recognize any specific metadata in struct pages.
>> + */
>> +struct page *__pageblock_pfn_to_page(unsigned long start_pfn,
>> +		unsigned long end_pfn, struct zone *zone)
>
> __pageblock_pfn_to_page() is also called by compaction code (e.g.,
> isolate_freepages_range() -> pageblock_pfn_to_page() ->
> __pageblock_pfn_to_page()).
>
> So, it is used not only by initialization and hotplug?

I should drop the move of this function, thanks for your reminder.

> Best Regards,
> Huang, Ying
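For readers following the call path Huang Ying cites: pageblock_pfn_to_page() is the inline wrapper in mm/internal.h whose closing lines appear in the hunk context above. Below is a minimal sketch of that wrapper, reconstructed from the hunk context; treat the exact body as an assumption rather than verbatim upstream code.

```c
/*
 * Sketch of the mm/internal.h wrapper, reconstructed from the hunk
 * context above; illustrative rather than verbatim.
 */
static inline struct page *pageblock_pfn_to_page(unsigned long start_pfn,
		unsigned long end_pfn, struct zone *zone)
{
	/*
	 * Once set_zone_contiguous() has proven the zone hole-free,
	 * compaction can skip the per-pageblock validation entirely.
	 */
	if (zone->contiguous)
		return pfn_to_page(start_pfn);

	/* Otherwise fall back to the full first/last-page checks. */
	return __pageblock_pfn_to_page(start_pfn, end_pfn, zone);
}
```

This fast path is what ties __pageblock_pfn_to_page() to compaction at runtime, and it is why only the set/clear pair moves while the checking function stays put.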
```diff
diff --git a/include/linux/memory_hotplug.h b/include/linux/memory_hotplug.h
index 9fcbf5706595..04bc286eed42 100644
--- a/include/linux/memory_hotplug.h
+++ b/include/linux/memory_hotplug.h
@@ -326,9 +326,6 @@ static inline int remove_memory(u64 start, u64 size)
 static inline void __remove_memory(u64 start, u64 size) {}
 #endif /* CONFIG_MEMORY_HOTREMOVE */
 
-extern void set_zone_contiguous(struct zone *zone);
-extern void clear_zone_contiguous(struct zone *zone);
-
 #ifdef CONFIG_MEMORY_HOTPLUG
 extern void __ref free_area_init_core_hotplug(struct pglist_data *pgdat);
 extern int __add_memory(int nid, u64 start, u64 size, mhp_t mhp_flags);
diff --git a/mm/internal.h b/mm/internal.h
index e28442c0858a..9482862b28cc 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -371,6 +371,13 @@ static inline struct page *pageblock_pfn_to_page(unsigned long start_pfn,
 	return __pageblock_pfn_to_page(start_pfn, end_pfn, zone);
 }
 
+void set_zone_contiguous(struct zone *zone);
+
+static inline void clear_zone_contiguous(struct zone *zone)
+{
+	zone->contiguous = false;
+}
+
 extern int __isolate_free_page(struct page *page, unsigned int order);
 extern void __putback_isolated_page(struct page *page, unsigned int order,
 			int mt);
diff --git a/mm/mm_init.c b/mm/mm_init.c
index 15201887f8e0..1f30b9e16577 100644
--- a/mm/mm_init.c
+++ b/mm/mm_init.c
@@ -2330,6 +2330,80 @@ void __init init_cma_reserved_pageblock(struct page *page)
 }
 #endif
 
+/*
+ * Check that the whole (or subset of) a pageblock given by the interval of
+ * [start_pfn, end_pfn) is valid and within the same zone, before scanning it
+ * with the migration of free compaction scanner.
+ *
+ * Return struct page pointer of start_pfn, or NULL if checks were not passed.
+ *
+ * It's possible on some configurations to have a setup like node0 node1 node0
+ * i.e. it's possible that all pages within a zones range of pages do not
+ * belong to a single zone. We assume that a border between node0 and node1
+ * can occur within a single pageblock, but not a node0 node1 node0
+ * interleaving within a single pageblock. It is therefore sufficient to check
+ * the first and last page of a pageblock and avoid checking each individual
+ * page in a pageblock.
+ *
+ * Note: the function may return non-NULL struct page even for a page block
+ * which contains a memory hole (i.e. there is no physical memory for a subset
+ * of the pfn range). For example, if the pageblock order is MAX_ORDER, which
+ * will fall into 2 sub-sections, and the end pfn of the pageblock may be hole
+ * even though the start pfn is online and valid. This should be safe most of
+ * the time because struct pages are still initialized via init_unavailable_range()
+ * and pfn walkers shouldn't touch any physical memory range for which they do
+ * not recognize any specific metadata in struct pages.
+ */
+struct page *__pageblock_pfn_to_page(unsigned long start_pfn,
+		unsigned long end_pfn, struct zone *zone)
+{
+	struct page *start_page;
+	struct page *end_page;
+
+	/* end_pfn is one past the range we are checking */
+	end_pfn--;
+
+	if (!pfn_valid(end_pfn))
+		return NULL;
+
+	start_page = pfn_to_online_page(start_pfn);
+	if (!start_page)
+		return NULL;
+
+	if (page_zone(start_page) != zone)
+		return NULL;
+
+	end_page = pfn_to_page(end_pfn);
+
+	/* This gives a shorter code than deriving page_zone(end_page) */
+	if (page_zone_id(start_page) != page_zone_id(end_page))
+		return NULL;
+
+	return start_page;
+}
+
+void set_zone_contiguous(struct zone *zone)
+{
+	unsigned long block_start_pfn = zone->zone_start_pfn;
+	unsigned long block_end_pfn;
+
+	block_end_pfn = pageblock_end_pfn(block_start_pfn);
+	for (; block_start_pfn < zone_end_pfn(zone);
+			block_start_pfn = block_end_pfn,
+			block_end_pfn += pageblock_nr_pages) {
+
+		block_end_pfn = min(block_end_pfn, zone_end_pfn(zone));
+
+		if (!__pageblock_pfn_to_page(block_start_pfn,
+					     block_end_pfn, zone))
+			return;
+		cond_resched();
+	}
+
+	/* We confirm that there is no hole */
+	zone->contiguous = true;
+}
+
 void __init page_alloc_init_late(void)
 {
 	struct zone *zone;
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 4f094ba7c8fb..fe7c1ee5becd 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -1480,85 +1480,6 @@ void __free_pages_core(struct page *page, unsigned int order)
 	__free_pages_ok(page, order, FPI_TO_TAIL);
 }
 
-/*
- * Check that the whole (or subset of) a pageblock given by the interval of
- * [start_pfn, end_pfn) is valid and within the same zone, before scanning it
- * with the migration of free compaction scanner.
- *
- * Return struct page pointer of start_pfn, or NULL if checks were not passed.
- *
- * It's possible on some configurations to have a setup like node0 node1 node0
- * i.e. it's possible that all pages within a zones range of pages do not
- * belong to a single zone. We assume that a border between node0 and node1
- * can occur within a single pageblock, but not a node0 node1 node0
- * interleaving within a single pageblock. It is therefore sufficient to check
- * the first and last page of a pageblock and avoid checking each individual
- * page in a pageblock.
- *
- * Note: the function may return non-NULL struct page even for a page block
- * which contains a memory hole (i.e. there is no physical memory for a subset
- * of the pfn range). For example, if the pageblock order is MAX_ORDER, which
- * will fall into 2 sub-sections, and the end pfn of the pageblock may be hole
- * even though the start pfn is online and valid. This should be safe most of
- * the time because struct pages are still initialized via init_unavailable_range()
- * and pfn walkers shouldn't touch any physical memory range for which they do
- * not recognize any specific metadata in struct pages.
- */
-struct page *__pageblock_pfn_to_page(unsigned long start_pfn,
-		unsigned long end_pfn, struct zone *zone)
-{
-	struct page *start_page;
-	struct page *end_page;
-
-	/* end_pfn is one past the range we are checking */
-	end_pfn--;
-
-	if (!pfn_valid(end_pfn))
-		return NULL;
-
-	start_page = pfn_to_online_page(start_pfn);
-	if (!start_page)
-		return NULL;
-
-	if (page_zone(start_page) != zone)
-		return NULL;
-
-	end_page = pfn_to_page(end_pfn);
-
-	/* This gives a shorter code than deriving page_zone(end_page) */
-	if (page_zone_id(start_page) != page_zone_id(end_page))
-		return NULL;
-
-	return start_page;
-}
-
-void set_zone_contiguous(struct zone *zone)
-{
-	unsigned long block_start_pfn = zone->zone_start_pfn;
-	unsigned long block_end_pfn;
-
-	block_end_pfn = pageblock_end_pfn(block_start_pfn);
-	for (; block_start_pfn < zone_end_pfn(zone);
-			block_start_pfn = block_end_pfn,
-			block_end_pfn += pageblock_nr_pages) {
-
-		block_end_pfn = min(block_end_pfn, zone_end_pfn(zone));
-
-		if (!__pageblock_pfn_to_page(block_start_pfn,
-					     block_end_pfn, zone))
-			return;
-		cond_resched();
-	}
-
-	/* We confirm that there is no hole */
-	zone->contiguous = true;
-}
-
-void clear_zone_contiguous(struct zone *zone)
-{
-	zone->contiguous = false;
-}
-
 /*
  * The order of subdivision here is critical for the IO subsystem.
  * Please do not alter this order without good reasons and regression
```
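The set/clear pair that does move is used by memory hotplug in a bracket pattern: the cached flag is dropped before a zone's span changes and recomputed afterwards. A simplified sketch of that pattern follows; the function name is hypothetical and this is not verbatim mm/memory_hotplug.c.

```c
/*
 * Hypothetical helper illustrating the hotplug bracket pattern;
 * the real callers live in mm/memory_hotplug.c.
 */
static void sketch_resize_zone_span(struct zone *zone,
		unsigned long start_pfn, unsigned long nr_pages)
{
	/* zone->contiguous is stale while the span is in flux. */
	clear_zone_contiguous(zone);

	/* ... grow or shrink zone->zone_start_pfn / zone->spanned_pages ... */

	/* Rescan all pageblocks; re-set the flag only if no hole remains. */
	set_zone_contiguous(zone);
}
```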
set_zone_contiguous() is only used in mm init/hotplug, and
clear_zone_contiguous() is only used in hotplug; move them from
page_alloc.c to more appropriate files.

Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
---
 include/linux/memory_hotplug.h |  3 --
 mm/internal.h                  |  7 +++
 mm/mm_init.c                   | 74 +++++++++++++++++++++++++++++++
 mm/page_alloc.c                | 79 ----------------------------------
 4 files changed, 81 insertions(+), 82 deletions(-)
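On the init side, the move places set_zone_contiguous() directly above its boot-time caller: upstream, page_alloc_init_late() ends by walking every populated zone once deferred initialization has finished. A sketch of that tail for context (body elided; this caller is shown for orientation and is not part of this patch):

```c
void __init page_alloc_init_late(void)
{
	struct zone *zone;

	/* ... deferred struct-page init, memblock freeing, etc. ... */

	/* Compute zone->contiguous once boot-time initialization is done. */
	for_each_populated_zone(zone)
		set_zone_contiguous(zone);
}
```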