| Message ID | 20200218224422.3407-1-richardw.yang@linux.intel.com (mailing list archive) |
|---|---|
| State | New, archived |
| Series | [v4] mm/vmscan.c: remove cpu online notification for now |
On Wed, 19 Feb 2020 06:44:22 +0800 Wei Yang <richardw.yang@linux.intel.com> wrote:

> kswapd kernel thread starts either with a CPU affinity set to the full
> cpu mask of its target node or without any affinity at all if the node
> is CPUless. There is a cpu hotplug callback (kswapd_cpu_online) that
> implements an elaborate way to update this mask when a cpu is onlined.
>
> It is not really clear whether there is any actual benefit from this
> scheme. Completely CPU-less NUMA nodes rarely gain a new CPU during
> runtime.

This is the case across all platforms, all architectures, all users for
the next N years?  I'm surprised that we know this with sufficient
confidence.  Can you explain how you came to make this assertion?

> Drop the code for that reason. If there is a real usecase then
> we can resurrect and simplify the code.
On Wed 19-02-20 12:08:10, Andrew Morton wrote:
> On Wed, 19 Feb 2020 06:44:22 +0800 Wei Yang <richardw.yang@linux.intel.com> wrote:
>
> > kswapd kernel thread starts either with a CPU affinity set to the full
> > cpu mask of its target node or without any affinity at all if the node
> > is CPUless. There is a cpu hotplug callback (kswapd_cpu_online) that
> > implements an elaborate way to update this mask when a cpu is onlined.
> >
> > It is not really clear whether there is any actual benefit from this
> > scheme. Completely CPU-less NUMA nodes rarely gain a new CPU during
> > runtime.
>
> This is the case across all platforms, all architectures, all users for
> the next N years?  I'm surprised that we know this with sufficient
> confidence.  Can you explain how you came to make this assertion?

CPUless NUMA nodes are quite rare - mostly ppc with crippled LPARs.
I am not aware of those nodes dynamically getting CPUs later in the
runtime. Maybe they do, but we would like to learn about that. A
missing cpu mask is not going to cause any fatal problems anyway.

As the changelog states, the callback can be reintroduced with a sign of
testing and usecase description. I prefer we drop this code in the
meantime as the benefit is not really clear or testable.

> > Drop the code for that reason. If there is a real usecase then
> > we can resurrect and simplify the code.
On Wed, Feb 19, 2020 at 11:52 PM Michal Hocko <mhocko@kernel.org> wrote:
>
> On Wed 19-02-20 12:08:10, Andrew Morton wrote:
> > On Wed, 19 Feb 2020 06:44:22 +0800 Wei Yang <richardw.yang@linux.intel.com> wrote:
> >
> > > kswapd kernel thread starts either with a CPU affinity set to the full
> > > cpu mask of its target node or without any affinity at all if the node
> > > is CPUless. There is a cpu hotplug callback (kswapd_cpu_online) that
> > > implements an elaborate way to update this mask when a cpu is onlined.
> > >
> > > It is not really clear whether there is any actual benefit from this
> > > scheme. Completely CPU-less NUMA nodes rarely gain a new CPU during
> > > runtime.
> >
> > This is the case across all platforms, all architectures, all users for
> > the next N years? I'm surprised that we know this with sufficient
> > confidence. Can you explain how you came to make this assertion?
>
> CPUless NUMA nodes are quite rare - mostly ppc with crippled LPARs.
> I am not aware of those nodes dynamically getting CPUs later in the
> runtime. Maybe they do, but we would like to learn about that. A
> missing cpu mask is not going to cause any fatal problems anyway.

Persistent memory nodes are CPUless nodes. But I don't think they would
get any CPU online later in the runtime.

> As the changelog states, the callback can be reintroduced with a sign of
> testing and usecase description. I prefer we drop this code in the
> meantime as the benefit is not really clear or testable.
>
> > > Drop the code for that reason. If there is a real usecase then
> > > we can resurrect and simplify the code.
>
> --
> Michal Hocko
> SUSE Labs
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 665f33258cd7..a4fdf3dc8887 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -4023,27 +4023,6 @@ unsigned long shrink_all_memory(unsigned long nr_to_reclaim)
 }
 #endif /* CONFIG_HIBERNATION */
 
-/* It's optimal to keep kswapds on the same CPUs as their memory, but
-   not required for correctness. So if the last cpu in a node goes
-   away, we get changed to run anywhere: as the first one comes back,
-   restore their cpu bindings. */
-static int kswapd_cpu_online(unsigned int cpu)
-{
-	int nid;
-
-	for_each_node_state(nid, N_MEMORY) {
-		pg_data_t *pgdat = NODE_DATA(nid);
-		const struct cpumask *mask;
-
-		mask = cpumask_of_node(pgdat->node_id);
-
-		if (cpumask_any_and(cpu_online_mask, mask) < nr_cpu_ids)
-			/* One of our CPUs online: restore mask */
-			set_cpus_allowed_ptr(pgdat->kswapd, mask);
-	}
-	return 0;
-}
-
 /*
  * This kswapd start function will be called by init and node-hot-add.
  * On node-hot-add, kswapd will moved to proper cpus if cpus are hot-added.
@@ -4083,15 +4062,11 @@ void kswapd_stop(int nid)
 
 static int __init kswapd_init(void)
 {
-	int nid, ret;
+	int nid;
 
 	swap_setup();
 	for_each_node_state(nid, N_MEMORY)
 		kswapd_run(nid);
-	ret = cpuhp_setup_state_nocalls(CPUHP_AP_ONLINE_DYN,
-					"mm/vmscan:online", kswapd_cpu_online,
-					NULL);
-	WARN_ON(ret < 0);
 	return 0;
 }
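For readers skimming the patch, the behavior of the deleted callback can be modeled outside the kernel as a toy simulation. This is a sketch only, not kernel code: the node layout and dictionary names are hypothetical, and the two comments map each step back to the kernel helpers the diff removes.

```python
# Toy model of the removed kswapd_cpu_online() callback: each NUMA node
# has a fixed CPU mask, and when a CPU comes online, every node whose
# mask intersects the online set gets its kswapd affinity restored to
# the full node mask.

node_cpumask = {0: {0, 1}, 1: {2, 3}, 2: set()}   # node 2 is CPU-less

online_cpus = set()
# Affinity starts unrestricted (modeled as empty) until a node CPU onlines.
kswapd_affinity = {nid: set() for nid in node_cpumask}

def cpu_online(cpu):
    """Mirror of kswapd_cpu_online(): restore each node's kswapd mask
    once any CPU of that node is online."""
    online_cpus.add(cpu)
    for nid, mask in node_cpumask.items():
        if online_cpus & mask:                 # cpumask_any_and() check
            kswapd_affinity[nid] = set(mask)   # set_cpus_allowed_ptr()

cpu_online(0)
print(kswapd_affinity)  # node 0 restored to {0, 1}; nodes 1 and 2 untouched
```

Note that for a CPU-less node like node 2, the restore branch can never take effect, which is the situation the thread argues makes the elaborate mask update unnecessary in practice.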