From patchwork Tue Feb 25 14:00:40 2025
X-Patchwork-Submitter: Chen Yu
X-Patchwork-Id: 13990065
From: Chen Yu <yu.c.chen@intel.com>
To: Ingo Molnar, Peter Zijlstra, Juri Lelli, Vincent Guittot, Andrew Morton
Cc: Rik van Riel, Mel Gorman, Johannes Weiner, Michal Hocko,
    Roman Gushchin, Shakeel Butt, Muchun Song, "Liam R. Howlett",
    Lorenzo Stoakes, "Huang, Ying", Tim Chen, Aubrey Li, Michael Wang,
    Kaiyang Zhao, David Rientjes, Raghavendra K T,
    cgroups@vger.kernel.org, linux-mm@kvack.org,
    linux-kernel@vger.kernel.org, Chen Yu
Subject: [RFC PATCH 3/3] sched/numa: Allow interleaved memory allocation for numa balance
Date: Tue, 25 Feb 2025 22:00:40 +0800
MPOL_INTERLEAVE is used to allocate pages interleaved across different
NUMA nodes to make the best use of memory bandwidth. Under
MPOL_INTERLEAVE mode, NUMA balance page migration does not occur,
because each page is already in its designated place. Similarly, NUMA
balance task migration does not occur either: mpol_misplaced() returns
NUMA_NO_NODE, which instructs do_numa_page() to skip both page and
task migration.

However, there is a scenario in the production environment where NUMA
balance could benefit MPOL_INTERLEAVE.
This typical scenario involves tasks within cgroup g_A being bound to
two SNC (Sub-NUMA Cluster) nodes via cpuset, with their pages
allocated only on these two SNC nodes in an interleaved manner using
MPOL_INTERLEAVE. This setup allows g_A to achieve good resource
isolation while effectively utilizing the memory bandwidth of the two
SNC nodes. However, it is possible that tasks t1 and t2 in g_A could
experience remote access patterns:

     Node 0          Node 1
     t1              t1.page
     t2.page         t2

Ideally, a NUMA balance task swap would be beneficial:

     Node 0          Node 1
     t2              t1.page
     t2.page         t1

In other words, NUMA balancing can help swap t1 and t2 to improve NUMA
locality without migrating pages, thereby still honoring the
MPOL_INTERLEAVE policy.

To enable NUMA balancing to manage MPOL_INTERLEAVE, add MPOL_F_MOF to
the MPOL_INTERLEAVE policy if the user has requested it via
MPOL_F_NUMA_BALANCING (similar to MPOL_BIND). In summary, pages will
not be migrated for MPOL_INTERLEAVE, but tasks will be migrated to
their preferred nodes.

Tested on a system with 4 nodes, 40 Cores (80 CPUs) per node, using
autonumabench NUMA01_THREADLOCAL, with some minor changes to support
MPOL_INTERLEAVE:

    p = mmap(NULL, SIZE, PROT_READ | PROT_WRITE,
             MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    set_mempolicy(MPOL_INTERLEAVE | MPOL_F_NUMA_BALANCING,
                  &nodemask_global, max_nodes);
    ...
    /* each thread accesses 4K of data every 8K, */
    /* 1 thread should access the pages on 1 node. */
No obvious score difference was observed, but noticed some NUMA
balance task migration:

                                    baseline_nocg_interleave  nb_nocg_interleave
Min       syst-NUMA01_THREADLOCAL    7156.34 (   0.00%)        7267.28 (  -1.55%)
Min       elsp-NUMA01_THREADLOCAL      90.73 (   0.00%)          90.88 (  -0.17%)
Amean     syst-NUMA01_THREADLOCAL    7156.34 (   0.00%)        7267.28 (  -1.55%)
Amean     elsp-NUMA01_THREADLOCAL      90.73 (   0.00%)          90.88 (  -0.17%)
Stddev    syst-NUMA01_THREADLOCAL       0.00 (   0.00%)           0.00 (   0.00%)
Stddev    elsp-NUMA01_THREADLOCAL       0.00 (   0.00%)           0.00 (   0.00%)
CoeffVar  syst-NUMA01_THREADLOCAL       0.00 (   0.00%)           0.00 (   0.00%)
CoeffVar  elsp-NUMA01_THREADLOCAL       0.00 (   0.00%)           0.00 (   0.00%)
Max       syst-NUMA01_THREADLOCAL    7156.34 (   0.00%)        7267.28 (  -1.55%)
Max       elsp-NUMA01_THREADLOCAL      90.73 (   0.00%)          90.88 (  -0.17%)
BAmean-50 syst-NUMA01_THREADLOCAL    7156.34 (   0.00%)        7267.28 (  -1.55%)
BAmean-50 elsp-NUMA01_THREADLOCAL      90.73 (   0.00%)          90.88 (  -0.17%)
BAmean-95 syst-NUMA01_THREADLOCAL    7156.34 (   0.00%)        7267.28 (  -1.55%)
BAmean-95 elsp-NUMA01_THREADLOCAL      90.73 (   0.00%)          90.88 (  -0.17%)
BAmean-99 syst-NUMA01_THREADLOCAL    7156.34 (   0.00%)        7267.28 (  -1.55%)
BAmean-99 elsp-NUMA01_THREADLOCAL      90.73 (   0.00%)          90.88 (  -0.17%)

Delta of /sys/fs/cgroup/mytest/memory.stat during the test:

numa_pages_migrated: 0
numa_pte_updates: 9156154
numa_hint_faults: 8659673
numa_task_migrated: 282   <--- introduced in previous patch
numa_task_swaped: 114     <--- introduced in previous patch

More tests to come.
Suggested-by: Aubrey Li
Signed-off-by: Chen Yu
---
 include/linux/numa.h           | 1 +
 include/uapi/linux/mempolicy.h | 1 +
 mm/memory.c                    | 2 +-
 mm/mempolicy.c                 | 7 +++++++
 4 files changed, 10 insertions(+), 1 deletion(-)

diff --git a/include/linux/numa.h b/include/linux/numa.h
index 3567e40329eb..6c3f2d839c76 100644
--- a/include/linux/numa.h
+++ b/include/linux/numa.h
@@ -14,6 +14,7 @@
 #define	NUMA_NO_NODE	(-1)
 #define	NUMA_NO_MEMBLK	(-1)
+#define	NUMA_TASK_MIG	(1)
 
 static inline bool numa_valid_node(int nid)
 {
diff --git a/include/uapi/linux/mempolicy.h b/include/uapi/linux/mempolicy.h
index 1f9bb10d1a47..2081365612ac 100644
--- a/include/uapi/linux/mempolicy.h
+++ b/include/uapi/linux/mempolicy.h
@@ -64,6 +64,7 @@ enum {
 #define MPOL_F_SHARED	(1 << 0)	/* identify shared policies */
 #define MPOL_F_MOF	(1 << 3)	/* this policy wants migrate on fault */
 #define MPOL_F_MORON	(1 << 4)	/* Migrate On protnone Reference On Node */
+#define MPOL_F_MOFT	(1 << 5)	/* allow task but no page migrate on fault */
 
 /*
  * These bit locations are exposed in the vm.zone_reclaim_mode sysctl
diff --git a/mm/memory.c b/mm/memory.c
index 539c0f7c6d54..4013bbcbf40f 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -5683,7 +5683,7 @@ static vm_fault_t do_numa_page(struct vm_fault *vmf)
 	target_nid = numa_migrate_check(folio, vmf, vmf->address, &flags,
 					writable, &last_cpupid);
-	if (target_nid == NUMA_NO_NODE)
+	if (target_nid == NUMA_NO_NODE || target_nid == NUMA_TASK_MIG)
 		goto out_map;
 	if (migrate_misplaced_folio_prepare(folio, vma, target_nid)) {
 		flags |= TNF_MIGRATE_FAIL;
diff --git a/mm/mempolicy.c b/mm/mempolicy.c
index bbaadbeeb291..0b88601ec22d 100644
--- a/mm/mempolicy.c
+++ b/mm/mempolicy.c
@@ -1510,6 +1510,8 @@ static inline int sanitize_mpol_flags(int *mode, unsigned short *flags)
 	if (*flags & MPOL_F_NUMA_BALANCING) {
 		if (*mode == MPOL_BIND || *mode == MPOL_PREFERRED_MANY)
 			*flags |= (MPOL_F_MOF | MPOL_F_MORON);
+		else if (*mode == MPOL_INTERLEAVE)
+			*flags |= (MPOL_F_MOF | MPOL_F_MOFT);
 		else
 			return -EINVAL;
 	}
@@ -2779,6 +2781,11 @@ int mpol_misplaced(struct folio *folio, struct vm_fault *vmf,
 	if (!(pol->flags & MPOL_F_MOF))
 		goto out;
 
+	if (pol->flags & MPOL_F_MOFT) {
+		ret = NUMA_TASK_MIG;
+		goto out;
+	}
+
 	switch (pol->mode) {
 	case MPOL_INTERLEAVE:
 		polnid = interleave_nid(pol, ilx);