Message ID | 20191018105606.3249-2-mgorman@techsingularity.net (mailing list archive)
---|---
State | New, archived
Series | Recalculate per-cpu page allocator batch and high limits after deferred meminit
On Fri, 18 Oct, at 11:56:04AM, Mel Gorman wrote:

> Both the percpu_pagelist_fraction sysctl handler and memory hotplug
> have a common requirement of updating the pcpu page allocation batch
> and high values. Split the relevant helper to share common code.
>
> No functional change.
>
> Signed-off-by: Mel Gorman <mgorman@techsingularity.net>
> ---
>  mm/page_alloc.c | 23 ++++++++++++-----------
>  1 file changed, 12 insertions(+), 11 deletions(-)

Tested-by: Matt Fleming <matt@codeblueprint.co.uk>
On Fri 18-10-19 11:56:04, Mel Gorman wrote:

> Both the percpu_pagelist_fraction sysctl handler and memory hotplug
> have a common requirement of updating the pcpu page allocation batch
> and high values. Split the relevant helper to share common code.
>
> No functional change.
>
> Signed-off-by: Mel Gorman <mgorman@techsingularity.net>

Acked-by: Michal Hocko <mhocko@suse.com>

> ---
>  mm/page_alloc.c | 23 ++++++++++++-----------
>  1 file changed, 12 insertions(+), 11 deletions(-)
>
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index c0b2e0306720..cafe568d36f6 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -7983,6 +7983,15 @@ int lowmem_reserve_ratio_sysctl_handler(struct ctl_table *table, int write,
>  	return 0;
>  }
>
> +static void __zone_pcp_update(struct zone *zone)
> +{
> +	unsigned int cpu;
> +
> +	for_each_possible_cpu(cpu)
> +		pageset_set_high_and_batch(zone,
> +				per_cpu_ptr(zone->pageset, cpu));
> +}
> +
>  /*
>   * percpu_pagelist_fraction - changes the pcp->high for each zone on each
>   * cpu. It is the fraction of total pages in each zone that a hot per cpu
> @@ -8014,13 +8023,8 @@ int percpu_pagelist_fraction_sysctl_handler(struct ctl_table *table, int write,
>  	if (percpu_pagelist_fraction == old_percpu_pagelist_fraction)
>  		goto out;
>
> -	for_each_populated_zone(zone) {
> -		unsigned int cpu;
> -
> -		for_each_possible_cpu(cpu)
> -			pageset_set_high_and_batch(zone,
> -					per_cpu_ptr(zone->pageset, cpu));
> -	}
> +	for_each_populated_zone(zone)
> +		__zone_pcp_update(zone);
>  out:
>  	mutex_unlock(&pcp_batch_high_lock);
>  	return ret;
> @@ -8519,11 +8523,8 @@ void free_contig_range(unsigned long pfn, unsigned int nr_pages)
>   */
>  void __meminit zone_pcp_update(struct zone *zone)
>  {
> -	unsigned cpu;
>  	mutex_lock(&pcp_batch_high_lock);
> -	for_each_possible_cpu(cpu)
> -		pageset_set_high_and_batch(zone,
> -				per_cpu_ptr(zone->pageset, cpu));
> +	__zone_pcp_update(zone);
>  	mutex_unlock(&pcp_batch_high_lock);
>  }
>  #endif
> --
> 2.16.4
>
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index c0b2e0306720..cafe568d36f6 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -7983,6 +7983,15 @@ int lowmem_reserve_ratio_sysctl_handler(struct ctl_table *table, int write,
 	return 0;
 }

+static void __zone_pcp_update(struct zone *zone)
+{
+	unsigned int cpu;
+
+	for_each_possible_cpu(cpu)
+		pageset_set_high_and_batch(zone,
+				per_cpu_ptr(zone->pageset, cpu));
+}
+
 /*
  * percpu_pagelist_fraction - changes the pcp->high for each zone on each
  * cpu. It is the fraction of total pages in each zone that a hot per cpu
@@ -8014,13 +8023,8 @@ int percpu_pagelist_fraction_sysctl_handler(struct ctl_table *table, int write,
 	if (percpu_pagelist_fraction == old_percpu_pagelist_fraction)
 		goto out;

-	for_each_populated_zone(zone) {
-		unsigned int cpu;
-
-		for_each_possible_cpu(cpu)
-			pageset_set_high_and_batch(zone,
-					per_cpu_ptr(zone->pageset, cpu));
-	}
+	for_each_populated_zone(zone)
+		__zone_pcp_update(zone);
 out:
 	mutex_unlock(&pcp_batch_high_lock);
 	return ret;
@@ -8519,11 +8523,8 @@ void free_contig_range(unsigned long pfn, unsigned int nr_pages)
  */
 void __meminit zone_pcp_update(struct zone *zone)
 {
-	unsigned cpu;
 	mutex_lock(&pcp_batch_high_lock);
-	for_each_possible_cpu(cpu)
-		pageset_set_high_and_batch(zone,
-				per_cpu_ptr(zone->pageset, cpu));
+	__zone_pcp_update(zone);
 	mutex_unlock(&pcp_batch_high_lock);
 }
 #endif
Both the percpu_pagelist_fraction sysctl handler and memory hotplug
have a common requirement of updating the pcpu page allocation batch
and high values. Split the relevant helper to share common code.

No functional change.

Signed-off-by: Mel Gorman <mgorman@techsingularity.net>
---
 mm/page_alloc.c | 23 ++++++++++++-----------
 1 file changed, 12 insertions(+), 11 deletions(-)