From patchwork Thu Nov 11 07:48:34 2021
X-Patchwork-Submitter: Baolin Wang <baolin.wang@linux.alibaba.com>
X-Patchwork-Id: 12614371
From: Baolin Wang <baolin.wang@linux.alibaba.com>
To: akpm@linux-foundation.org, ying.huang@intel.com,
 dave.hansen@linux.intel.com
Cc: ziy@nvidia.com, osalvador@suse.de, shy828301@gmail.com,
 baolin.wang@linux.alibaba.com, zhongjiang-ali@linux.alibaba.com,
 xlpang@linux.alibaba.com, linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: [PATCH v2 1/2] mm: migrate: Support multiple target nodes demotion
Date: Thu, 11 Nov 2021 15:48:34 +0800
X-Mailer: git-send-email 1.8.3.1

We have some machines with multiple memory types, as shown below: one
fast (DRAM) memory node and two slow (persistent memory) memory nodes.
available: 3 nodes (0-2)
node 0 cpus: 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15
node 0 size: 62153 MB
node 0 free: 55135 MB
node 1 cpus:
node 1 size: 127007 MB
node 1 free: 126930 MB
node 2 cpus:
node 2 size: 126968 MB
node 2 free: 126878 MB
node distances:
node   0   1   2
  0:  10  20  20
  1:  20  10  20
  2:  20  20  10

According to the current node demotion policy, when node 0 fills up its
memory is migrated to node 1, and when node 1 fills up its memory is
migrated to node 2: node 0 -> node 1 -> node 2 -> stop.

This is not an efficient or suitable migration route for a machine with
multiple slow memory nodes. The distance from node 0 to node 1 equals
the distance from node 0 to node 2, and migration between the slow
memory nodes consumes a great deal of persistent memory bandwidth, which
hurts the whole system's performance. For this case we can instead treat
slow memory nodes 1 and 2 as one slow memory region, and migrate memory
from node 0 to both node 1 and node 2 when node 0 fills up.

This patch changes the node_demotion data structure to support multiple
target nodes, and establishes the migration path to multiple target
nodes while validating that their node distance is the best one.

Signed-off-by: Baolin Wang <baolin.wang@linux.alibaba.com>
---
 mm/migrate.c | 138 +++++++++++++++++++++++++++++++++++++++++++----------------
 1 file changed, 102 insertions(+), 36 deletions(-)

diff --git a/mm/migrate.c b/mm/migrate.c
index cf25b00..126e9e6 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -50,6 +50,7 @@
 #include
 #include
 #include
+#include

 #include

@@ -1119,12 +1120,25 @@ static int __unmap_and_move(struct page *page, struct page *newpage,
  *
  * This is represented in the node_demotion[] like this:
  *
- *      {  1, // Node 0 migrates to 1
- *         2, // Node 1 migrates to 2
- *        -1, // Node 2 does not migrate
- *         4, // Node 3 migrates to 4
- *         5, // Node 4 migrates to 5
- *        -1} // Node 5 does not migrate
+ *      { nr=1, nodes[0]=1 }, // Node 0 migrates to 1
+ *      { nr=1, nodes[0]=2 }, // Node 1 migrates to 2
+ *      { nr=0, nodes[0]=-1 }, // Node 2 does not migrate
+ *      { nr=1, nodes[0]=4 }, // Node 3 migrates to 4
+ *      { nr=1, nodes[0]=5 }, // Node 4 migrates to 5
+ *      { nr=0, nodes[0]=-1 }, // Node 5 does not migrate
+ *
+ * Moreover, some systems may have multiple slow memory nodes.
+ * Suppose a system has one socket with 3 memory nodes: node 0
+ * is fast memory, nodes 1/2 are both slow memory, and the
+ * distance from the fast node to each slow node is the same.
+ * The migration path should then be:
+ *
+ *      0 -> 1/2 -> stop
+ *
+ * This is represented in the node_demotion[] like this:
+ *      { nr=2, {nodes[0]=1, nodes[1]=2} }, // Node 0 migrates to node 1 and node 2
+ *      { nr=0, nodes[0]=-1, }, // Node 1 does not migrate
+ *      { nr=0, nodes[0]=-1, }, // Node 2 does not migrate
  */

 /*
@@ -1135,8 +1149,13 @@ static int __unmap_and_move(struct page *page, struct page *newpage,
  * must be held over all reads to ensure that no cycles are
  * observed.
  */
-static int node_demotion[MAX_NUMNODES] __read_mostly =
-        {[0 ...  MAX_NUMNODES - 1] = NUMA_NO_NODE};
+#define DEFAULT_DEMOTION_TARGET_NODES 15
+struct demotion_nodes {
+        unsigned short nr;
+        short nodes[DEFAULT_DEMOTION_TARGET_NODES];
+};
+
+static struct demotion_nodes node_demotion[MAX_NUMNODES] __read_mostly;

 /**
  * next_demotion_node() - Get the next node in the demotion path
@@ -1149,6 +1168,8 @@ static int __unmap_and_move(struct page *page, struct page *newpage,
  */
 int next_demotion_node(int node)
 {
+        struct demotion_nodes *nd = &node_demotion[node];
+        unsigned short target_nr, index;
         int target;

         /*
@@ -1161,9 +1182,25 @@ int next_demotion_node(int node)
          * node_demotion[] reads need to be consistent.
          */
         rcu_read_lock();
-        target = READ_ONCE(node_demotion[node]);
-        rcu_read_unlock();
+        target_nr = READ_ONCE(nd->nr);
+
+        if (target_nr == 0) {
+                target = NUMA_NO_NODE;
+                goto out;
+        } else if (target_nr == 1) {
+                index = 0;
+        } else {
+                /*
+                 * If there are multiple target nodes, just select one
+                 * target node randomly.
+                 */
+                index = get_random_int() % target_nr;
+        }
+
+        target = READ_ONCE(nd->nodes[index]);
+out:
+        rcu_read_unlock();

         return target;
 }
@@ -2974,10 +3011,13 @@ void migrate_vma_finalize(struct migrate_vma *migrate)
 /* Disable reclaim-based migration. */
 static void __disable_all_migrate_targets(void)
 {
-        int node;
+        int node, i;

-        for_each_online_node(node)
-                node_demotion[node] = NUMA_NO_NODE;
+        for_each_online_node(node) {
+                node_demotion[node].nr = 0;
+                for (i = 0; i < DEFAULT_DEMOTION_TARGET_NODES; i++)
+                        node_demotion[node].nodes[i] = NUMA_NO_NODE;
+        }
 }

 static void disable_all_migrate_targets(void)
@@ -3004,26 +3044,35 @@ static void disable_all_migrate_targets(void)
  * Failing here is OK.  It might just indicate
  * being at the end of a chain.
  */
-static int establish_migrate_target(int node, nodemask_t *used)
+static int establish_migrate_target(int node, nodemask_t *used,
+                                    int best_distance)
 {
-        int migration_target;
+        int migration_target, index, val;
+        struct demotion_nodes *nd = &node_demotion[node];
+
+        migration_target = find_next_best_node(node, used);
+        if (migration_target == NUMA_NO_NODE)
+                return NUMA_NO_NODE;

         /*
-         * Can not set a migration target on a
-         * node with it already set.
-         *
-         * No need for READ_ONCE() here since this
-         * in the write path for node_demotion[].
-         * This should be the only thread writing.
+         * If this node already has a migration target, that target was
+         * chosen at the best distance. Still check whether this node
+         * can also be demoted to other target nodes that share the
+         * same best distance.
          */
-        if (node_demotion[node] != NUMA_NO_NODE)
-                return NUMA_NO_NODE;
+        if (best_distance != -1) {
+                val = node_distance(node, migration_target);
+                if (val > best_distance)
+                        return NUMA_NO_NODE;
+        }

-        migration_target = find_next_best_node(node, used);
-        if (migration_target == NUMA_NO_NODE)
+        index = nd->nr;
+        if (WARN_ONCE(index >= DEFAULT_DEMOTION_TARGET_NODES,
+                      "Exceeds maximum demotion target nodes\n"))
                 return NUMA_NO_NODE;

-        node_demotion[node] = migration_target;
+        nd->nodes[index] = migration_target;
+        nd->nr++;

         return migration_target;
 }
@@ -3039,7 +3088,9 @@ static int establish_migrate_target(int node, nodemask_t *used)
  *
  * The difference here is that cycles must be avoided.  If
  * node0 migrates to node1, then neither node1, nor anything
- * node1 migrates to can migrate to node0.
+ * node1 migrates to can migrate to node0. Also, one node can
+ * migrate to multiple nodes if the target nodes all have the
+ * same best distance from the source node.
  *
  * This function can run simultaneously with readers of
  * node_demotion[].  However, it can not run simultaneously
@@ -3051,7 +3102,7 @@ static void __set_migration_target_nodes(void)
         nodemask_t next_pass    = NODE_MASK_NONE;
         nodemask_t this_pass    = NODE_MASK_NONE;
         nodemask_t used_targets = NODE_MASK_NONE;
-        int node;
+        int node, best_distance;

         /*
          * Avoid any oddities like cycles that could occur
@@ -3080,18 +3131,33 @@
          * multiple source nodes to share a destination.
          */
         nodes_or(used_targets, used_targets, this_pass);
-        for_each_node_mask(node, this_pass) {
-                int target_node = establish_migrate_target(node, &used_targets);
-
-                if (target_node == NUMA_NO_NODE)
-                        continue;
+        for_each_node_mask(node, this_pass) {
+                best_distance = -1;

                 /*
-                 * Visit targets from this pass in the next pass.
-                 * Eventually, every node will have been part of
-                 * a pass, and will become set in 'used_targets'.
+                 * Try to set up the migration path for the node; there
+                 * can be multiple target nodes, so loop to find all of
+                 * the targets that share the best node distance.
                  */
-                node_set(target_node, next_pass);
+                do {
+                        int target_node =
+                                establish_migrate_target(node, &used_targets,
+                                                         best_distance);
+
+                        if (target_node == NUMA_NO_NODE)
+                                break;
+
+                        if (best_distance == -1)
+                                best_distance = node_distance(node, target_node);
+
+                        /*
+                         * Visit targets from this pass in the next pass.
+                         * Eventually, every node will have been part of
+                         * a pass, and will become set in 'used_targets'.
+                         */
+                        node_set(target_node, next_pass);
+                } while (1);
         }

         /*
          * 'next_pass' contains nodes which became migration
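The following is a small standalone C sketch, not part of the patch, to
make the new node_demotion[] layout concrete: each source node records
its set of equal-distance demotion targets, and one target is picked at
random per demotion, mirroring the get_random_int() % nr selection in
next_demotion_node() above. The names demotion_entry, demotion_table and
pick_demotion_target are illustrative only and do not exist in the
kernel.

/*
 * Userspace illustration (not kernel code) of the multi-target
 * demotion table: node 0 (fast) can demote to nodes 1 and 2 (slow),
 * while nodes 1 and 2 are terminal. One target is chosen at random
 * so demoted pages spread across the equal-distance slow nodes.
 */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define NO_NODE     (-1)
#define MAX_TARGETS 15

struct demotion_entry {
        unsigned short nr;              /* number of valid targets */
        short nodes[MAX_TARGETS];       /* equal-distance target nodes */
};

static struct demotion_entry demotion_table[] = {
        { .nr = 2, .nodes = { 1, 2 } },         /* node 0 -> node 1 or 2 */
        { .nr = 0, .nodes = { NO_NODE } },      /* node 1 does not demote */
        { .nr = 0, .nodes = { NO_NODE } },      /* node 2 does not demote */
};

static int pick_demotion_target(int node)
{
        struct demotion_entry *e = &demotion_table[node];

        if (e->nr == 0)
                return NO_NODE;          /* terminal node */
        if (e->nr == 1)
                return e->nodes[0];      /* single target, no randomness */
        return e->nodes[rand() % e->nr]; /* spread across equal targets */
}

int main(void)
{
        int counts[3] = { 0 };
        int i;

        srand((unsigned int)time(NULL));
        for (i = 0; i < 10000; i++)
                counts[pick_demotion_target(0)]++;

        printf("node 1 picked %d times, node 2 picked %d times\n",
               counts[1], counts[2]);
        printf("node 1 demotes to %d (expect -1)\n", pick_demotion_target(1));
        return 0;
}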
From patchwork Thu Nov 11 07:48:35 2021
X-Patchwork-Submitter: Baolin Wang <baolin.wang@linux.alibaba.com>
X-Patchwork-Id: 12614373
From: Baolin Wang <baolin.wang@linux.alibaba.com>
To: akpm@linux-foundation.org, ying.huang@intel.com,
 dave.hansen@linux.intel.com
Cc: ziy@nvidia.com, osalvador@suse.de, shy828301@gmail.com,
 baolin.wang@linux.alibaba.com, zhongjiang-ali@linux.alibaba.com,
 xlpang@linux.alibaba.com, linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: [PATCH v2 2/2] mm: migrate: Allocate the node_demotion structure
 dynamically
Date: Thu, 11 Nov 2021 15:48:35 +0800
X-Mailer: git-send-email 1.8.3.1

In the worst case (MAX_NUMNODES=1024) the node_demotion structure
consumes 32K bytes, which is too large, so change to allocating
node_demotion dynamically at initialization time. Also allocate the
target demotion nodes array dynamically, selecting a suitable size
according to MAX_NUMNODES.

Signed-off-by: Baolin Wang <baolin.wang@linux.alibaba.com>
Reported-by: kernel test robot
---
 mm/migrate.c | 38 +++++++++++++++++++++++++++++---------
 1 file changed, 29 insertions(+), 9 deletions(-)

diff --git a/mm/migrate.c b/mm/migrate.c
index 126e9e6..0145b38 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -1152,10 +1152,11 @@ static int __unmap_and_move(struct page *page, struct page *newpage,
 #define DEFAULT_DEMOTION_TARGET_NODES 15
 struct demotion_nodes {
         unsigned short nr;
-        short nodes[DEFAULT_DEMOTION_TARGET_NODES];
+        short nodes[];
 };

-static struct demotion_nodes node_demotion[MAX_NUMNODES] __read_mostly;
+static struct demotion_nodes *node_demotion[MAX_NUMNODES] __read_mostly;
+static unsigned short target_nodes_max;

 /**
  * next_demotion_node() - Get the next node in the demotion path
@@ -1168,10 +1169,13 @@ struct demotion_nodes {
  */
 int next_demotion_node(int node)
 {
-        struct demotion_nodes *nd = &node_demotion[node];
+        struct demotion_nodes *nd = node_demotion[node];
         unsigned short target_nr, index;
         int target;

+        if (!nd)
+                return NUMA_NO_NODE;
+
         /*
          * node_demotion[] is updated without excluding this
          * function from running.  RCU doesn't provide any
@@ -3014,9 +3018,9 @@ static void __disable_all_migrate_targets(void)
         int node, i;

         for_each_online_node(node) {
-                node_demotion[node].nr = 0;
-                for (i = 0; i < DEFAULT_DEMOTION_TARGET_NODES; i++)
-                        node_demotion[node].nodes[i] = NUMA_NO_NODE;
+                node_demotion[node]->nr = 0;
+                for (i = 0; i < target_nodes_max; i++)
+                        node_demotion[node]->nodes[i] = NUMA_NO_NODE;
         }
 }

@@ -3048,7 +3052,10 @@ static int establish_migrate_target(int node, nodemask_t *used,
                                     int best_distance)
 {
         int migration_target, index, val;
-        struct demotion_nodes *nd = &node_demotion[node];
+        struct demotion_nodes *nd = node_demotion[node];
+
+        if (WARN_ONCE(!nd, "Can not set up migration path for node:%d\n", node))
+                return NUMA_NO_NODE;

         migration_target = find_next_best_node(node, used);
         if (migration_target == NUMA_NO_NODE)
@@ -3067,7 +3074,7 @@ static int establish_migrate_target(int node, nodemask_t *used,
         }

         index = nd->nr;
-        if (WARN_ONCE(index >= DEFAULT_DEMOTION_TARGET_NODES,
+        if (WARN_ONCE(index >= target_nodes_max,
                       "Exceeds maximum demotion target nodes\n"))
                 return NUMA_NO_NODE;

@@ -3256,7 +3263,20 @@ static int migration_offline_cpu(unsigned int cpu)

 static int __init migrate_on_reclaim_init(void)
 {
-        int ret;
+        struct demotion_nodes *nd;
+        int ret, node;
+
+        /* Keep the maximum number of target demotion nodes below MAX_NUMNODES. */
+        target_nodes_max = min_t(unsigned short, DEFAULT_DEMOTION_TARGET_NODES,
+                                 MAX_NUMNODES - 1);
+
+        for_each_node(node) {
+                nd = kmalloc(struct_size(nd, nodes, target_nodes_max),
+                             GFP_KERNEL);
+                if (!nd)
+                        continue;
+
+                node_demotion[node] = nd;
+        }

         ret = cpuhp_setup_state_nocalls(CPUHP_MM_DEMOTION_DEAD, "mm/demotion:offline",
                                         NULL, migration_offline_cpu);
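For reference, here is a minimal standalone C sketch, not kernel code,
of the allocation pattern this patch switches to: a structure with a
flexible array member sized at run time, the way migrate_on_reclaim_init()
now does with kmalloc(struct_size(nd, nodes, target_nodes_max), GFP_KERNEL).
SIZEOF_FLEX below is a userspace stand-in for the kernel's struct_size()
helper, and the demo values are illustrative only.

/*
 * Userspace illustration of a runtime-sized flexible array member.
 * SIZEOF_FLEX(ptr, member, count) approximates the kernel's
 * struct_size(): size of the struct plus count elements of member.
 */
#include <stdio.h>
#include <stdlib.h>

#define NO_NODE (-1)

struct demotion_nodes {
        unsigned short nr;
        short nodes[];                  /* flexible array member */
};

#define SIZEOF_FLEX(ptr, member, count) \
        (sizeof(*(ptr)) + sizeof((ptr)->member[0]) * (count))

int main(void)
{
        unsigned short target_nodes_max = 15;   /* demo capacity per node */
        struct demotion_nodes *nd;
        int i;

        /* sizeof() does not evaluate nd, so using it before assignment is fine. */
        nd = malloc(SIZEOF_FLEX(nd, nodes, target_nodes_max));
        if (!nd)
                return 1;

        nd->nr = 0;
        for (i = 0; i < target_nodes_max; i++)
                nd->nodes[i] = NO_NODE;

        /* Record one demotion target, as establish_migrate_target() does. */
        nd->nodes[nd->nr++] = 1;

        printf("nr=%u first target=%d\n", (unsigned int)nd->nr, nd->nodes[0]);
        free(nd);
        return 0;
}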