Message ID | 20240325145646.1044760-4-bhe@redhat.com
---|---
State | New
Series | mm/mm_init.c: refactor free_area_init_core()
Hi Baoquan,

On Mon, Mar 25, 2024 at 10:56:43PM +0800, Baoquan He wrote:
> This is a preparation to calculate nr_kernel_pages and nr_all_pages,
> both of which will be used later in alloc_large_system_hash().
>
> nr_all_pages counts up all free but not reserved memory in memblock
> allocator, including HIGHMEM memory. While nr_kernel_pages counts up
> all free but not reserved low memory in memblock allocator, excluding
> HIGHMEM memory.

Sorry I've missed this in the previous review, but I think this patch and
the patch "remove unneeded calc_memmap_size()" can be merged into "remove
meaningless calculation of zone->managed_pages in free_area_init_core()"
with an appropriate update of the commit message.

With the current patch splitting there will be a compilation warning about
an unused function for this and the next patch.

> Signed-off-by: Baoquan He <bhe@redhat.com>
> ---
>  mm/mm_init.c | 24 ++++++++++++++++++++++++
>  1 file changed, 24 insertions(+)
>
> diff --git a/mm/mm_init.c b/mm/mm_init.c
> index 153fb2dc666f..c57a7fc97a16 100644
> --- a/mm/mm_init.c
> +++ b/mm/mm_init.c
> @@ -1264,6 +1264,30 @@ static void __init reset_memoryless_node_totalpages(struct pglist_data *pgdat)
>  	pr_debug("On node %d totalpages: 0\n", pgdat->node_id);
>  }
>
> +static void __init calc_nr_kernel_pages(void)
> +{
> +	unsigned long start_pfn, end_pfn;
> +	phys_addr_t start_addr, end_addr;
> +	u64 u;
> +#ifdef CONFIG_HIGHMEM
> +	unsigned long high_zone_low = arch_zone_lowest_possible_pfn[ZONE_HIGHMEM];
> +#endif
> +
> +	for_each_free_mem_range(u, NUMA_NO_NODE, MEMBLOCK_NONE, &start_addr, &end_addr, NULL) {
> +		start_pfn = PFN_UP(start_addr);
> +		end_pfn = PFN_DOWN(end_addr);
> +
> +		if (start_pfn < end_pfn) {
> +			nr_all_pages += end_pfn - start_pfn;
> +#ifdef CONFIG_HIGHMEM
> +			start_pfn = clamp(start_pfn, 0, high_zone_low);
> +			end_pfn = clamp(end_pfn, 0, high_zone_low);
> +#endif
> +			nr_kernel_pages += end_pfn - start_pfn;
> +		}
> +	}
> +}
> +
>  static void __init calculate_node_totalpages(struct pglist_data *pgdat,
> 					     unsigned long node_start_pfn,
> 					     unsigned long node_end_pfn)
> --
> 2.41.0
>
On 03/26/24 at 08:57am, Mike Rapoport wrote:
> Hi Baoquan,
>
> On Mon, Mar 25, 2024 at 10:56:43PM +0800, Baoquan He wrote:
> > This is a preparation to calculate nr_kernel_pages and nr_all_pages,
> > both of which will be used later in alloc_large_system_hash().
> >
> > nr_all_pages counts up all free but not reserved memory in memblock
> > allocator, including HIGHMEM memory. While nr_kernel_pages counts up
> > all free but not reserved low memory in memblock allocator, excluding
> > HIGHMEM memory.
>
> Sorry I've missed this in the previous review, but I think this patch and
> the patch "remove unneeded calc_memmap_size()" can be merged into "remove
> meaningless calculation of zone->managed_pages in free_area_init_core()"
> with an appropriate update of the commit message.
>
> With the current patch splitting there will be a compilation warning about
> an unused function for this and the next patch.

Thanks for the careful checking. We need to keep the series bisectable so
that the build is never broken and people can spot the criminal commit,
that's for sure. Do we need to care about a compile warning from an
intermediate patch in a series, though? I'm not sure about that. I always
suggest that people separate out this kind of newly added function into a
standalone patch for easier review and later checking, and I saw a lot of
commits like this by searching with 'git log --oneline | grep helper'.

>
> > Signed-off-by: Baoquan He <bhe@redhat.com>
> > ---
> >  mm/mm_init.c | 24 ++++++++++++++++++++++++
> >  1 file changed, 24 insertions(+)
> >
> > diff --git a/mm/mm_init.c b/mm/mm_init.c
> > index 153fb2dc666f..c57a7fc97a16 100644
> > --- a/mm/mm_init.c
> > +++ b/mm/mm_init.c
> > @@ -1264,6 +1264,30 @@ static void __init reset_memoryless_node_totalpages(struct pglist_data *pgdat)
> >  	pr_debug("On node %d totalpages: 0\n", pgdat->node_id);
> >  }
> >
> > +static void __init calc_nr_kernel_pages(void)
> > +{
> > +	unsigned long start_pfn, end_pfn;
> > +	phys_addr_t start_addr, end_addr;
> > +	u64 u;
> > +#ifdef CONFIG_HIGHMEM
> > +	unsigned long high_zone_low = arch_zone_lowest_possible_pfn[ZONE_HIGHMEM];
> > +#endif
> > +
> > +	for_each_free_mem_range(u, NUMA_NO_NODE, MEMBLOCK_NONE, &start_addr, &end_addr, NULL) {
> > +		start_pfn = PFN_UP(start_addr);
> > +		end_pfn = PFN_DOWN(end_addr);
> > +
> > +		if (start_pfn < end_pfn) {
> > +			nr_all_pages += end_pfn - start_pfn;
> > +#ifdef CONFIG_HIGHMEM
> > +			start_pfn = clamp(start_pfn, 0, high_zone_low);
> > +			end_pfn = clamp(end_pfn, 0, high_zone_low);
> > +#endif
> > +			nr_kernel_pages += end_pfn - start_pfn;
> > +		}
> > +	}
> > +}
> > +
> >  static void __init calculate_node_totalpages(struct pglist_data *pgdat,
> > 						unsigned long node_start_pfn,
> > 						unsigned long node_end_pfn)
> > --
> > 2.41.0
> >
>
> --
> Sincerely yours,
> Mike.
>
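As a side note on the intermediate-patch warning discussed above (an editorial sketch, not part of the posted series): the kernel's __maybe_unused annotation, which maps to the GCC "unused" attribute, is one common way to keep a new helper in its own patch without tripping -Wunused-function until the caller arrives in a later patch. A minimal userspace demonstration of the same attribute, with a hypothetical file name demo.c:

/* demo.c - build and run with: gcc -Wall -Wunused-function demo.c && ./a.out */
#include <stdio.h>

/*
 * Without the attribute, -Wunused-function would warn here because nothing
 * calls the helper yet, the same situation as the intermediate kernel patch.
 * The annotation would be dropped by the follow-up patch that adds the caller.
 */
static __attribute__((unused)) void calc_nr_kernel_pages(void)
{
	/* counting logic would live here */
}

int main(void)
{
	puts("helper defined but not yet called; no warning is emitted");
	return 0;
}

Whether that is preferable to simply squashing the helper into the patch that uses it, as suggested above, is a matter of taste; the sketch only shows that the warning itself is easy to avoid.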
On Mon, Mar 25, 2024 at 10:56:43PM +0800, Baoquan He wrote:
> This is a preparation to calculate nr_kernel_pages and nr_all_pages,
> both of which will be used later in alloc_large_system_hash().
>
> nr_all_pages counts up all free but not reserved memory in memblock
> allocator, including HIGHMEM memory. While nr_kernel_pages counts up
> all free but not reserved low memory in memblock allocator, excluding
> HIGHMEM memory.
>
> Signed-off-by: Baoquan He <bhe@redhat.com>

Reviewed-by: Mike Rapoport (IBM) <rppt@kernel.org>

> ---
>  mm/mm_init.c | 24 ++++++++++++++++++++++++
>  1 file changed, 24 insertions(+)
>
> diff --git a/mm/mm_init.c b/mm/mm_init.c
> index 153fb2dc666f..c57a7fc97a16 100644
> --- a/mm/mm_init.c
> +++ b/mm/mm_init.c
> @@ -1264,6 +1264,30 @@ static void __init reset_memoryless_node_totalpages(struct pglist_data *pgdat)
>  	pr_debug("On node %d totalpages: 0\n", pgdat->node_id);
>  }
>
> +static void __init calc_nr_kernel_pages(void)
> +{
> +	unsigned long start_pfn, end_pfn;
> +	phys_addr_t start_addr, end_addr;
> +	u64 u;
> +#ifdef CONFIG_HIGHMEM
> +	unsigned long high_zone_low = arch_zone_lowest_possible_pfn[ZONE_HIGHMEM];
> +#endif
> +
> +	for_each_free_mem_range(u, NUMA_NO_NODE, MEMBLOCK_NONE, &start_addr, &end_addr, NULL) {
> +		start_pfn = PFN_UP(start_addr);
> +		end_pfn = PFN_DOWN(end_addr);
> +
> +		if (start_pfn < end_pfn) {
> +			nr_all_pages += end_pfn - start_pfn;
> +#ifdef CONFIG_HIGHMEM
> +			start_pfn = clamp(start_pfn, 0, high_zone_low);
> +			end_pfn = clamp(end_pfn, 0, high_zone_low);
> +#endif
> +			nr_kernel_pages += end_pfn - start_pfn;
> +		}
> +	}
> +}
> +
>  static void __init calculate_node_totalpages(struct pglist_data *pgdat,
> 					     unsigned long node_start_pfn,
> 					     unsigned long node_end_pfn)
> --
> 2.41.0
>
diff --git a/mm/mm_init.c b/mm/mm_init.c
index 153fb2dc666f..c57a7fc97a16 100644
--- a/mm/mm_init.c
+++ b/mm/mm_init.c
@@ -1264,6 +1264,30 @@ static void __init reset_memoryless_node_totalpages(struct pglist_data *pgdat)
 	pr_debug("On node %d totalpages: 0\n", pgdat->node_id);
 }
 
+static void __init calc_nr_kernel_pages(void)
+{
+	unsigned long start_pfn, end_pfn;
+	phys_addr_t start_addr, end_addr;
+	u64 u;
+#ifdef CONFIG_HIGHMEM
+	unsigned long high_zone_low = arch_zone_lowest_possible_pfn[ZONE_HIGHMEM];
+#endif
+
+	for_each_free_mem_range(u, NUMA_NO_NODE, MEMBLOCK_NONE, &start_addr, &end_addr, NULL) {
+		start_pfn = PFN_UP(start_addr);
+		end_pfn = PFN_DOWN(end_addr);
+
+		if (start_pfn < end_pfn) {
+			nr_all_pages += end_pfn - start_pfn;
+#ifdef CONFIG_HIGHMEM
+			start_pfn = clamp(start_pfn, 0, high_zone_low);
+			end_pfn = clamp(end_pfn, 0, high_zone_low);
+#endif
+			nr_kernel_pages += end_pfn - start_pfn;
+		}
+	}
+}
+
 static void __init calculate_node_totalpages(struct pglist_data *pgdat,
 					     unsigned long node_start_pfn,
 					     unsigned long node_end_pfn)
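To make the arithmetic in calc_nr_kernel_pages() concrete, here is a standalone userspace sketch of the same accounting (illustration only, not kernel code), using made-up numbers: 4 KiB pages, a ZONE_HIGHMEM boundary at PFN 0x38000 (896 MiB), and a single free memblock range from 512 MiB to 1 GiB that straddles that boundary. The whole range is credited to nr_all_pages, but only the part below the boundary is credited to nr_kernel_pages.

/* highmem_split.c - build and run with: gcc -Wall highmem_split.c && ./a.out */
#include <stdio.h>

#define PAGE_SHIFT	12	/* assume 4 KiB pages for the example */
#define PFN_UP(x)	(((x) + (1ULL << PAGE_SHIFT) - 1) >> PAGE_SHIFT)
#define PFN_DOWN(x)	((x) >> PAGE_SHIFT)

static unsigned long clamp_ul(unsigned long v, unsigned long lo, unsigned long hi)
{
	return v < lo ? lo : (v > hi ? hi : v);
}

int main(void)
{
	unsigned long nr_all_pages = 0, nr_kernel_pages = 0;
	unsigned long high_zone_low = 0x38000;	/* hypothetical ZONE_HIGHMEM start (896 MiB) */
	/* one hypothetical free range: 512 MiB .. 1 GiB, crossing the boundary */
	unsigned long long start_addr = 512ULL << 20, end_addr = 1024ULL << 20;

	unsigned long start_pfn = (unsigned long)PFN_UP(start_addr);
	unsigned long end_pfn = (unsigned long)PFN_DOWN(end_addr);

	if (start_pfn < end_pfn) {
		nr_all_pages += end_pfn - start_pfn;		/* whole free range */
		start_pfn = clamp_ul(start_pfn, 0, high_zone_low);
		end_pfn = clamp_ul(end_pfn, 0, high_zone_low);
		nr_kernel_pages += end_pfn - start_pfn;		/* low-memory part only */
	}

	printf("nr_all_pages    = %lu pages (%lu MiB)\n", nr_all_pages, nr_all_pages >> 8);
	printf("nr_kernel_pages = %lu pages (%lu MiB)\n", nr_kernel_pages, nr_kernel_pages >> 8);
	return 0;
}

Running it prints 512 MiB for nr_all_pages and 384 MiB for nr_kernel_pages, which is exactly the split the patch description promises for a free range that crosses the HIGHMEM boundary.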
This is a preparation to calculate nr_kernel_pages and nr_all_pages,
both of which will be used later in alloc_large_system_hash().

nr_all_pages counts up all free but not reserved memory in memblock
allocator, including HIGHMEM memory. While nr_kernel_pages counts up
all free but not reserved low memory in memblock allocator, excluding
HIGHMEM memory.

Signed-off-by: Baoquan He <bhe@redhat.com>
---
 mm/mm_init.c | 24 ++++++++++++++++++++++++
 1 file changed, 24 insertions(+)
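For context on the consumer named in the commit message: alloc_large_system_hash() sizes the large boot-time hash tables (dentry, inode, and others) in proportion to available memory when no explicit entry count is given, and the counters computed here feed that calculation. The following is a deliberately simplified, hypothetical model of that kind of memory-proportional sizing, not the actual mm/ implementation; the function name, the scale values, and the rounding rule are illustrative assumptions only.

/* hash_sizing.c - build and run with: gcc -Wall hash_sizing.c && ./a.out */
#include <stdio.h>

#define PAGE_SHIFT	12	/* assume 4 KiB pages for the example */

/* Roughly "one hash entry per 2^scale bytes of counted memory". */
static unsigned long entries_from_pages(unsigned long pages, unsigned int scale)
{
	unsigned long long bytes = (unsigned long long)pages << PAGE_SHIFT;
	unsigned long entries = (unsigned long)(bytes >> scale);
	unsigned long pow2 = 1;

	/* round down to a power of two so the hash mask stays cheap to compute */
	while (pow2 * 2 <= entries)
		pow2 *= 2;
	return entries ? pow2 : 1;
}

int main(void)
{
	/* hypothetical nr_kernel_pages: 896 MiB of low memory in 4 KiB pages */
	unsigned long nr_kernel_pages = 229376;

	printf("scale 14: %lu entries\n", entries_from_pages(nr_kernel_pages, 14));
	printf("scale 13: %lu entries\n", entries_from_pages(nr_kernel_pages, 13));
	return 0;
}

This is presumably why the commit message distinguishes the two counters: tables allocated this way live in the kernel's directly mapped low memory, so sizing them from nr_kernel_pages rather than from a HIGHMEM-inclusive total avoids overshooting on CONFIG_HIGHMEM systems.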